dorm.cache

Pluggable result-cache layer for queryset and single-row reads. Payloads are pickle wrapped in an HMAC-SHA256 signature, so a writable Redis isn't an RCE vector; a per-model invalidation version closes the classic read-then-write race.

Backends

dorm.cache.BaseCache

Minimal cache contract every backend implements.

All methods accept string keys and serialised bytes values; the queryset layer takes care of (de)serialising rows so backends don't have to know about model classes.

default_timeout: int property

Fallback TTL used by qs.cache() callers that don't pass an explicit timeout. Backends override by setting self._default_timeout from the TTL settings key.

delete_pattern(pattern: str) -> int

Bulk-evict keys matching a glob pattern (e.g. "qs:books:*"). Returns the number of keys removed. Used by signal-driven invalidation to drop every cached queryset for a model in one call.
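A toy in-memory backend satisfying this contract might look like the following sketch (the class name, TTL handling, and `set`/`get` shapes here are illustrative, not dorm's actual implementation):

```python
import fnmatch
import time


class DictCache:
    """Sketch of a BaseCache-style backend: string keys, bytes values."""

    def __init__(self, default_timeout=300):
        self._default_timeout = default_timeout
        self._store = {}  # key -> (expires_at, value)

    @property
    def default_timeout(self):
        return self._default_timeout

    def set(self, key, value, timeout=None):
        ttl = self.default_timeout if timeout is None else timeout
        self._store[key] = (time.monotonic() + ttl, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None or entry[0] < time.monotonic():
            self._store.pop(key, None)  # expired entries are evicted lazily
            return None
        return entry[1]

    def delete_pattern(self, pattern):
        doomed = [k for k in self._store if fnmatch.fnmatchcase(k, pattern)]
        for k in doomed:
            del self._store[k]
        return len(doomed)
```

Note that `delete_pattern` returns the eviction count, which is what signal-driven invalidation relies on.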

dorm.cache.redis.RedisCache

Bases: BaseCache

Redis-backed cache with sync + async clients.

Settings::

CACHES = {
    "default": {
        "BACKEND": "dorm.cache.redis.RedisCache",
        "LOCATION": "redis://localhost:6379/0",
        "OPTIONS": {"socket_timeout": 1.0},
        "TTL": 300,
    },
}

LOCATION accepts any URL format redis-py understands — redis://, rediss:// (TLS), unix://. OPTIONS are passed through to Redis.from_url. TTL is the default expiry applied by qs.cache() when no per-call timeout is given.

Connection pooling: redis-py maintains an internal connection pool per Redis instance, so a single :class:`RedisCache` is enough for the whole process. The async client gets its own pool — sync and async clients can't share one, because blocking and awaitable socket I/O don't mix.

delete_pattern(pattern: str) -> int

Walk Redis with SCAN (non-blocking) and unlink every matching key. Used by signal-driven invalidation to drop every cached queryset for a model in one call.
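The SCAN-then-UNLINK loop can be sketched as a helper over any redis-py-style client (the function name and batch size are assumptions; `scan_iter` and `unlink` are the real redis-py calls):

```python
def scan_delete(client, pattern, batch=500):
    """Bulk-evict keys matching `pattern` without blocking Redis.

    SCAN walks the keyspace incrementally (no KEYS stall), and UNLINK
    frees memory asynchronously on the server, unlike blocking DEL.
    Returns the number of keys removed.
    """
    removed = 0
    buf = []
    for key in client.scan_iter(match=pattern, count=batch):
        buf.append(key)
        if len(buf) >= batch:
            removed += client.unlink(*buf)
            buf.clear()
    if buf:  # flush the final partial batch
        removed += client.unlink(*buf)
    return removed
```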

close() -> None

Release the sync client's connection pool. Called by :func:`dorm.cache.reset_caches` and tests that swap configs mid-suite.

dorm.cache.locmem.LocMemCache

Bases: BaseCache

Thread-safe LRU with a secondary prefix index.

The primary store is an OrderedDict (LRU). A secondary defaultdict(set) indexes keys by their first-colon prefix — every dorm cache key follows the namespace:specifics shape, so delete_pattern("dormqs:app.User:*") finds matches in O(matches) instead of scanning the whole store in O(n). Patterns that aren't a literal prefix followed by a glob fall back to the full fnmatch scan.
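A minimal sketch of the LRU-plus-prefix-index idea (class and method names are illustrative, and locking is omitted for brevity — the real backend is thread-safe):

```python
import fnmatch
from collections import OrderedDict, defaultdict


class PrefixLRU:
    """Sketch: LRU store with a secondary first-colon prefix index."""

    def __init__(self, maxsize=512):
        self._maxsize = maxsize
        self._data = OrderedDict()          # key -> value, LRU order
        self._by_prefix = defaultdict(set)  # "dormqs" -> {"dormqs:...", ...}

    @staticmethod
    def _prefix(key):
        return key.split(":", 1)[0]

    def set(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        self._by_prefix[self._prefix(key)].add(key)
        while len(self._data) > self._maxsize:
            old, _ = self._data.popitem(last=False)  # evict LRU entry
            self._by_prefix[self._prefix(old)].discard(old)

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as recently used
        return self._data[key]

    def delete_pattern(self, pattern):
        head, sep, _ = pattern.partition(":")
        # Fast path: literal prefix, glob only after the first colon.
        if sep and not any(c in head for c in "*?["):
            candidates = self._by_prefix.get(head, set())
        else:
            candidates = self._data.keys()  # fall back to full scan
        doomed = [k for k in list(candidates) if fnmatch.fnmatchcase(k, pattern)]
        for k in doomed:
            del self._data[k]
            self._by_prefix[self._prefix(k)].discard(k)
        return len(doomed)
```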

clear() -> None

Drop every entry. Test-helper — not part of the :class:`BaseCache` contract.

RedisCache is the production default (multi-worker, multi-host). LocMemCache is the in-process LRU — useful for tests, single-process scripts, or as a cheap layer in front of Redis.

Helpers

dorm.cache.get_cache(alias: str = 'default') -> BaseCache

Return (constructing on first use) the cache backend for alias.

Reads settings.CACHES and instantiates the BACKEND class with the alias's configuration. Result is memoised in this module so subsequent get_cache(alias) calls reuse the same client (Redis connection pool, in-memory dict, etc.).
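The memoise-on-first-use pattern can be sketched like this (the `caches` parameter stands in for settings.CACHES, and passing the alias config straight to the backend constructor is an assumption):

```python
import importlib

_instances = {}  # alias -> constructed backend


def get_cache(alias="default", caches=None):
    """Return (constructing on first use) the backend for `alias`."""
    if alias in _instances:
        return _instances[alias]
    conf = caches[alias]
    # "pkg.mod.Class" -> import pkg.mod, grab Class.
    module_path, _, cls_name = conf["BACKEND"].rpartition(".")
    backend_cls = getattr(importlib.import_module(module_path), cls_name)
    _instances[alias] = backend_cls(conf)
    return _instances[alias]


def reset_caches():
    """Drop memoised instances so the next get_cache() re-reads config."""
    _instances.clear()
```

Memoisation is what makes repeated `get_cache()` calls share one Redis connection pool instead of opening a new one per call.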

dorm.cache.reset_caches() -> None

Drop every memoised backend instance.

Called by :func:`dorm.configure` when the CACHES setting changes; tests can also call it directly to force a re-read.

dorm.cache.model_cache_namespace(model: Any) -> str

Build the cache-key prefix shared by every queryset that targets model. Signal-driven invalidation calls delete_pattern(f"{namespace}:*") after a save / delete so a stale row can't survive a write.

dorm.cache.model_cache_version(model: Any) -> int

Return the current invalidation version for model.

Used by :class:`QuerySet`'s cache layer to namespace keys per write epoch. Bumped via :func:`bump_model_cache_version` on every post_save / post_delete.

dorm.cache.bump_model_cache_version(model: Any) -> int

Increment the model's cache version. Returns the new value.

Called by the auto-invalidation signal handler immediately before it issues delete_pattern so any racing _cache_store_* call lands its bytes under a key that no subsequent read will ask for.
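The bump-before-evict ordering can be sketched as follows (the `invalidate` handler and the module-level version dict are illustrative stand-ins for dorm's signal machinery):

```python
_versions = {}  # model -> current invalidation version


def model_cache_version(model):
    return _versions.get(model, 1)


def bump_model_cache_version(model):
    _versions[model] = model_cache_version(model) + 1
    return _versions[model]


def invalidate(model, cache, namespace):
    # Order matters: bump first, then evict. A writer racing the
    # eviction stores its bytes under the *old* version's key, which no
    # read issued after the bump will compute — the stale entry is
    # unreachable and simply ages out via TTL.
    bump_model_cache_version(model)
    return cache.delete_pattern(f"{namespace}:*")
```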

dorm.cache.sign_payload(payload: bytes) -> bytes

Wrap payload with an HMAC-SHA256 signature header.

The signed envelope looks like:

``b"dormsig1:<hex64>:<payload>"``

Verification on load checks the prefix + digest before handing payload to :func:`pickle.loads`.

dorm.cache.verify_payload(blob: bytes) -> bytes | None

Strip + verify the signature header from blob.

Returns the inner payload bytes when the signature matches, None otherwise (the caller treats that as a cache miss and falls through to the database). Invalid / unsigned blobs are rejected by default; set settings.CACHE_INSECURE_PICKLE = True to disable verification for legacy caches.

QuerySet integration

QuerySet.cache(timeout=…) opts a single queryset into result caching. Manager.cache_get(pk=…) / cache_get_many(pks=[…]) read individual rows through the cache before the DB.

# Queryset cache — N+1 friendly read of a hot listing.
top = Article.objects.filter(published=True).order_by("-rank")[:20].cache(60)

# Row cache — hot single-instance reads.
user = User.objects.cache_get(pk=42, timeout=300)
users = User.objects.cache_get_many(pks=[1, 2, 3])

A cache miss or backend outage falls through silently — the row from the database is the source of truth.

See Result cache (Redis) for the full configuration / invalidation story.