dorm.contrib.prometheus¶
Stdlib-only Prometheus text-exposition exporter — no
prometheus_client dependency. Connects to post_query to emit:
| Metric | Type | Labels |
|---|---|---|
| dorm_queries_total | counter | vendor, outcome |
| dorm_query_duration_seconds | histogram | vendor |
| dorm_pool_size | gauge | alias |
| dorm_pool_in_use | gauge | alias |
| dorm_cache_hits_total | counter | alias |
| dorm_cache_misses_total | counter | alias |
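To make the table concrete, here is a minimal sketch of how one of these counters is rendered in the Prometheus text-exposition format (version 0.0.4). The HELP text and sample values are made up for illustration; only the metric and label names come from the table above.

```python
# Sketch: rendering a labeled counter in Prometheus text-exposition
# format. The HELP string and sample values here are illustrative,
# not what dorm.contrib.prometheus actually emits.
def render_counter(name, help_text, samples):
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} counter"]
    for labels, value in samples:
        # Labels are sorted for a stable, canonical ordering.
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines)

payload = render_counter(
    "dorm_queries_total",
    "Total queries executed.",
    [({"vendor": "sqlite", "outcome": "ok"}, 42)],
)
print(payload)
```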
Quick start (FastAPI / any ASGI)¶
```python
from fastapi import FastAPI
from fastapi.responses import PlainTextResponse

from dorm.contrib.prometheus import install, metrics_response

app = FastAPI()

@app.on_event("startup")
def startup():
    install()  # connect counters / histograms to dorm signals

@app.get("/metrics")
def metrics():
    return PlainTextResponse(
        metrics_response(),
        media_type="text/plain; version=0.0.4",
    )
```
API¶
dorm.contrib.prometheus.install() -> None¶
Connect the receiver. Idempotent.
Called explicitly so projects that don't expose metrics pay no overhead from the post-query timing fan-out.
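The idempotent-connect behavior described above can be sketched as a module-level guard flag. Note this is an illustration of the pattern, not dorm's actual internals: the real install() and uninstall() take no arguments, while the `connect_receiver` / `disconnect_receiver` callables here are stand-ins so the sketch is self-contained.

```python
# Hedged sketch of an idempotent install()/uninstall() pair: a
# module-level flag means a second install() is a no-op. The callables
# are stand-ins for the real signal hookup, which dorm handles itself.
_installed = False

def install(connect_receiver):
    global _installed
    if _installed:           # already connected -> no-op
        return
    connect_receiver()       # hook the post_query receiver exactly once
    _installed = True

def uninstall(disconnect_receiver):
    global _installed
    if _installed:
        disconnect_receiver()
        _installed = False
```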
dorm.contrib.prometheus.uninstall() -> None¶
Disconnect the receiver and reset every counter / histogram.
dorm.contrib.prometheus.metrics_response() -> str¶
Return the Prometheus text-exposition payload as a str.
Wrap it in your framework's Response helper; the canonical
content type is text/plain; version=0.0.4.
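For a stack without FastAPI, a plain stdlib http.server endpoint works too. The handler below takes the payload-producing callable as a parameter so the sketch is self-contained; in a real app you would pass metrics_response from dorm.contrib.prometheus.

```python
# Minimal stdlib-only /metrics endpoint. `metrics_fn` stands in for
# dorm.contrib.prometheus.metrics_response so this sketch runs alone.
from http.server import BaseHTTPRequestHandler

def make_handler(metrics_fn):
    class MetricsHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/metrics":
                body = metrics_fn().encode("utf-8")
                self.send_response(200)
                # Canonical content type for the text-exposition format.
                self.send_header("Content-Type", "text/plain; version=0.0.4")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()

        def log_message(self, *args):
            pass  # keep scrape traffic out of stderr

    return MetricsHandler
```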
dorm.contrib.prometheus.record_cache_hit(alias: str = 'default') -> None¶
Optional helper for cache backends — call it from your custom
BaseCache.get when the read is a hit. RedisCache / LocMemCache
don't call this automatically yet, so apps that don't care pay
zero overhead.
dorm.contrib.prometheus.record_cache_miss(alias: str = 'default') -> None¶
Counterpart to record_cache_hit — call it when the read misses.
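One way to wire both helpers into a cache's get() is a thin wrapper. The wrapper takes the two recorder callables as parameters (pass the real record_cache_hit / record_cache_miss); the `_MISS` sentinel is an assumption about how your backend distinguishes a miss from a stored None.

```python
# Hedged sketch: an instrumented cache wrapper. `record_hit` and
# `record_miss` would be dorm.contrib.prometheus.record_cache_hit /
# record_cache_miss; the _MISS sentinel separates "absent" from a
# cached None and is an assumption, not dorm's BaseCache contract.
_MISS = object()

class InstrumentedCache:
    def __init__(self, backend, record_hit, record_miss, alias="default"):
        self._backend = backend          # any mapping-like cache store
        self._record_hit = record_hit
        self._record_miss = record_miss
        self._alias = alias

    def get(self, key, default=None):
        value = self._backend.get(key, _MISS)
        if value is _MISS:
            self._record_miss(self._alias)   # counted as a cache miss
            return default
        self._record_hit(self._alias)        # counted as a cache hit
        return value
```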
Histogram buckets¶
Fixed layout (1 ms → 5 s):
0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0.
Apps that need richer / configurable buckets should swap to
prometheus_client and translate from the same dorm signals.
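To illustrate the fixed layout, here is a small sketch of how an observed query duration maps onto these bucket bounds. Prometheus histogram buckets are cumulative with inclusive upper bounds (le), and an implicit +Inf bucket catches everything above 5 s; the `bucket_for` helper is illustrative, not part of the module's API.

```python
# Sketch: classifying an observed duration against the documented
# fixed bucket layout. Upper bounds are inclusive (Prometheus `le`);
# anything above the largest bound falls into the implicit +Inf bucket.
import bisect

BUCKETS = [0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0]

def bucket_for(duration_s):
    """Return the smallest bucket bound containing duration_s, or '+Inf'."""
    i = bisect.bisect_left(BUCKETS, duration_s)
    return BUCKETS[i] if i < len(BUCKETS) else "+Inf"
```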