dorm.contrib.idempotency

Idempotency-key primitive — replay-safe retries.

See Idempotency keys for recipes.

API

dorm.contrib.idempotency.IdempotencyRecord

Bases: Model

Abstract base for the idempotency table. Subclass with a concrete Meta.db_table (and any extra columns your app needs) and run makemigrations.

Columns:

  • key — caller-supplied identifier (the HTTP header value); uniqueness is the contract.
  • response — JSON blob of the cached payload.
  • status_code — optional integer the caller can use to mirror an HTTP status. Free-form.
  • created_at — timestamp of when the key was first recorded; useful for TTL purges.
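
These columns map onto a small side table. As a sketch of the equivalent schema — using sqlite3 purely for illustration (your real table is created by the migration, and the table name `payments_idempotency` is an arbitrary example, not part of the API):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE payments_idempotency (      -- name comes from your Meta.db_table
        key         TEXT PRIMARY KEY,        -- caller-supplied; uniqueness is the contract
        response    TEXT,                    -- JSON blob of the cached payload
        status_code INTEGER,                 -- optional mirrored HTTP status, free-form
        created_at  TIMESTAMP DEFAULT CURRENT_TIMESTAMP  -- enables TTL purges
    )
""")
```

The PRIMARY KEY on key is what makes duplicate submissions detectable at the database level rather than in application code.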

dorm.contrib.idempotency.idempotency_key(key: str, *, model: Type[IdempotencyRecord], using: str = 'default')

Acquire an idempotency-key context.

The block runs inside an atomic() transaction so the side-table row and any business writes commit (or roll back) together.
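
The commit-together guarantee is the same one any single database transaction gives you. A minimal sketch of the idea with sqlite3 (standing in for atomic(); the table and column names are illustrative only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE idp (key TEXT PRIMARY KEY)")
conn.execute("CREATE TABLE payments (id INTEGER PRIMARY KEY, amount INTEGER)")

try:
    with conn:  # one transaction: both inserts commit or roll back together
        conn.execute("INSERT INTO idp VALUES ('charge-42')")        # side-table row
        conn.execute("INSERT INTO payments (amount) VALUES (100)")  # business write
        raise RuntimeError("boom")  # simulated failure mid-block
except RuntimeError:
    pass  # the rollback removed BOTH rows, so a retry starts clean
```

Because the side-table row rolls back with the business write, a failed first attempt never leaves behind a key that would wrongly short-circuit the retry.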

Important — replay handling: on a replay hit the body of the with block STILL executes. Always branch on ctx.replay before running side-effecting code, otherwise the replay re-runs the work:

with idempotency_key(key, model=IdpEntry) as ctx:
    if ctx.replay:
        return ctx.cached_response          # ← short-circuit
    result = process_payment(...)            # only on first call
    ctx.store(result, status_code=201)
    return result

Concurrency: a tiny race window exists between the lookup and the eventual store(). Two simultaneous requests with the same key will both see replay=False, both run the work, and one will fail with an IntegrityError on commit (the unique constraint). The losing transaction's surrounding atomic() rolls back its work; the caller can retry and pick up the cached row. For higher-throughput needs, wrap the block in a select_for_update(skip_locked=True) row-level lock on the key.
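
The losing-writer behaviour can be demonstrated without dorm at all. A sketch with sqlite3, where two simulated concurrent requests both miss the lookup and the second insert trips the unique constraint (key and payload values are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE idp (key TEXT PRIMARY KEY, response TEXT)")

def lookup(key):
    row = conn.execute("SELECT response FROM idp WHERE key = ?", (key,)).fetchone()
    return row[0] if row else None

# Both "requests" run the lookup before either has stored a row:
assert lookup("charge-42") is None  # request A sees replay=False
assert lookup("charge-42") is None  # request B sees replay=False

conn.execute("INSERT INTO idp VALUES ('charge-42', '{\"ok\": true}')")      # A wins
try:
    conn.execute("INSERT INTO idp VALUES ('charge-42', '{\"ok\": true}')")  # B loses
except sqlite3.IntegrityError:
    cached = lookup("charge-42")  # B retries and picks up A's committed row
```

This is exactly the window the paragraph above describes: the unique constraint, not the lookup, is what ultimately arbitrates the race.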

dorm.contrib.idempotency.purge_expired(model: Type[IdempotencyRecord], *, older_than_seconds: int, using: str = 'default') -> int

Delete idempotency rows older than older_than_seconds.

Returns the number of rows deleted. Wire this into a periodic job (cron / Celery beat / APScheduler) to keep the table bounded — idempotency rows are only useful for the brief window a client might retry; keeping them forever just wastes disk.
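
Under the hood this amounts to a timestamp-bounded DELETE. A sketch of the equivalent query, again with sqlite3 for illustration and a hypothetical 24-hour TTL:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE idp (key TEXT PRIMARY KEY, created_at REAL)")
now = time.time()
conn.execute("INSERT INTO idp VALUES ('stale', ?)", (now - 2 * 86400,))  # two days old
conn.execute("INSERT INTO idp VALUES ('fresh', ?)", (now,))

older_than_seconds = 86400  # 24-hour retention window
cur = conn.execute(
    "DELETE FROM idp WHERE created_at < ?", (now - older_than_seconds,)
)
print(cur.rowcount)  # number of rows deleted
```

Pick older_than_seconds comfortably above your clients' maximum retry horizon; purging a key too early turns a late retry back into a duplicate execution.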