Getting started¶
A 10-minute tour from "I haven't installed it" to "I've inserted and queried real rows". No FastAPI, no async — just the basics. For the async / FastAPI flavor, jump to the Tutorial.
1. Install¶
- PostgreSQL: pip install "djanorm[postgresql]"
- MySQL / MariaDB (3.1+): pip install "djanorm[mysql]" (pure-Python pymysql + aiomysql, no C toolchain)
- S3 file uploads: pip install "djanorm[s3]" (works with AWS S3, MinIO, Cloudflare R2, Backblaze B2)
2. Scaffold a project¶
This creates:
```
.
├── blog/
│   ├── __init__.py
│   └── models.py      # starter User model
└── settings.py        # commented-out DB and STORAGES blocks
```
The generated settings.py includes commented templates for both
SQLite/PostgreSQL and the file-storage STORAGES (local disk, AWS S3,
and S3-compatible MinIO). Uncomment whichever ones you need.
3. Configure the database¶
Open settings.py and uncomment the SQLite section:
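The exact contents depend on the generated file, but judging from the PostgreSQL example later on this page, the uncommented block will look roughly like this sketch (the "sqlite" ENGINE string and the db.sqlite3 file name are assumptions, not confirmed values):

```python
DATABASES = {
    "default": {
        "ENGINE": "sqlite",       # assumed engine name, mirroring "postgresql" below
        "NAME": "db.sqlite3",     # hypothetical database file in the project root
    }
}
```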
dorm autodiscovers any sibling directory that has __init__.py +
models.py, so you don't need an INSTALLED_APPS list for the simple
case.
4. Define your models¶
Edit blog/models.py:
```python
import dorm


class Author(dorm.Model):
    name = dorm.CharField(max_length=100)
    email = dorm.EmailField(unique=True)
    bio = dorm.TextField(null=True, blank=True)

    class Meta:
        ordering = ["name"]


class Post(dorm.Model):
    title = dorm.CharField(max_length=200)
    body = dorm.TextField()
    author = dorm.ForeignKey(Author, on_delete=dorm.CASCADE, related_name="posts")
    published = dorm.BooleanField(default=False)
    created_at = dorm.DateTimeField(auto_now_add=True)
```
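The on_delete=dorm.CASCADE option matches the semantics of SQL's ON DELETE CASCADE: deleting an Author also removes that author's posts. A minimal stdlib sqlite3 sketch of that behavior (the table and column names here are illustrative, not dorm's generated schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE post (
        id INTEGER PRIMARY KEY,
        title TEXT,
        author_id INTEGER REFERENCES author(id) ON DELETE CASCADE
    );
    INSERT INTO author (name) VALUES ('Alice');
    INSERT INTO post (title, author_id) VALUES ('Hello', 1);
""")
conn.execute("DELETE FROM author WHERE id = 1")
print(conn.execute("SELECT COUNT(*) FROM post").fetchone()[0])  # 0: the post cascaded away
```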
5. Create and apply migrations¶
You should see:
```
Detecting changes for 'blog'...
Created migration: blog/migrations/0001_initial.py
Applying blog.0001_initial... OK
```
6. Insert and query¶
Drop into the dorm shell — it pre-imports your models and runs IPython if available:
```python
>>> alice = Author.objects.create(name="Alice", email="alice@example.com")
>>> Post.objects.create(title="Hello", body="World", author=alice, published=True)
<Post: pk=1>
>>> Author.objects.count()
1
>>> for p in Post.objects.filter(published=True).select_related("author"):
...     print(p.author.name, "—", p.title)
Alice — Hello
>>> # F expressions, Q objects, aggregates — all here
>>> from dorm import F, Q, Count
>>> Author.objects.annotate(post_count=Count("posts")).values_list("name", "post_count")
[('Alice', 1)]
```
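That last annotate(post_count=Count("posts")) call compiles to a LEFT JOIN plus GROUP BY under the hood. A stdlib sqlite3 sketch of the roughly equivalent SQL (table and column names are illustrative, not dorm's actual generated schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE post (id INTEGER PRIMARY KEY, title TEXT, author_id INTEGER);
    INSERT INTO author (name) VALUES ('Alice');
    INSERT INTO post (title, author_id) VALUES ('Hello', 1);
""")
# LEFT JOIN keeps authors with zero posts; GROUP BY collapses to one row per author.
rows = conn.execute("""
    SELECT author.name, COUNT(post.id)
    FROM author
    LEFT JOIN post ON post.author_id = author.id
    GROUP BY author.id, author.name
""").fetchall()
print(rows)  # [('Alice', 1)]
```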
7. Switch to PostgreSQL¶
When you're ready to leave SQLite, all you need to change is settings.py:
```python
DATABASES = {
    "default": {
        "ENGINE": "postgresql",
        "NAME": "blog",
        "USER": "postgres",
        "PASSWORD": "secret",
        "HOST": "localhost",
        "PORT": 5432,
    }
}
```
Re-run dorm migrate against the empty PG database. Your code, models,
and queries stay identical.
8. MySQL / MariaDB (3.1+)¶
Install the extra and point at the MySQL service:
```python
DATABASES = {
    "default": {
        "ENGINE": "mysql",  # or "mariadb"
        "NAME": "blog",
        "USER": "root",
        "PASSWORD": "secret",
        "HOST": "localhost",
        "PORT": 3306,
    }
}
```
Caveats:

- DDL is not transactional on MySQL — wrapping ALTER TABLE in atomic() won't roll it back.
- RETURNING works on MariaDB 10.5+ but not on MySQL; the insert path uses cursor.lastrowid for autoincrement PKs.
- The wrapper forces ANSI_QUOTES mode so dorm's double-quoted identifiers parse the same as on PostgreSQL / SQLite.
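The ANSI_QUOTES point matters because MySQL, by default, treats double quotes as string delimiters, while SQLite and PostgreSQL treat them as identifier quotes per the SQL standard. A quick stdlib sqlite3 illustration of the identifier-quoting style in question (the schema is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Double quotes delimit identifiers here (ANSI SQL) — exactly the behavior
# that ANSI_QUOTES mode enables on MySQL / MariaDB.
conn.execute('CREATE TABLE "author" ("name" TEXT)')
conn.execute('INSERT INTO "author" ("name") VALUES (?)', ("Alice",))
print(conn.execute('SELECT "name" FROM "author"').fetchone())  # ('Alice',)
```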
9. DuckDB for embedded analytics (4.0+)¶
When the workload is analytical (dashboards, local ETL, ML feature stores) instead of OLTP, the DuckDB backend runs columnar vectorised queries in-process, no server.
```python
DATABASES = {
    "default": {
        "ENGINE": "duckdb",
        "NAME": "analytics.duckdb",  # file on disk; ":memory:" works too
    }
}
```
Your code stays identical — Author.objects.filter(...),
bulk_create, aggregations. Caveat: DuckDB has no SAVEPOINT, so
nested atomic() blocks degrade to no-op boundaries; the outer
rollback discards everything. See DuckDB for details.
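For contrast, nested atomic() on backends that support it maps to SQL SAVEPOINTs, the mechanism DuckDB lacks. A stdlib sqlite3 sketch of what a nested rollback normally does (the mapping of atomic() to these statements is the standard Django-style convention, stated here as an assumption about dorm's internals):

```python
import sqlite3

# isolation_level=None puts sqlite3 in autocommit mode so we can issue
# BEGIN / SAVEPOINT / ROLLBACK TO ourselves.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (1)")   # outer atomic() block
conn.execute("SAVEPOINT nested")           # inner atomic() opens a savepoint
conn.execute("INSERT INTO t VALUES (2)")
conn.execute("ROLLBACK TO nested")         # inner block fails: undo only its work
conn.execute("COMMIT")                     # outer block still commits row 1
print(conn.execute("SELECT x FROM t").fetchall())  # [(1,)]
```

Without savepoints, that "undo only the inner work" step is impossible, which is why a failure inside a nested block on DuckDB can only be handled by rolling back the whole outer transaction.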
For serious OLTP keep PostgreSQL — DuckDB shines on bulk vectorised reads, not on concurrent writes.
What next?¶
- Models & fields — every field type and their options
- Querying — filter, exclude, Q, F, aggregations
- Async patterns — acreate, aiterator, aatomic
- Tutorial — wire it up with FastAPI
- What's new in 4.0 — overview of the 4.0 features
- DuckDB — embedded OLAP backend
- File uploads with FileField — local disk by default, switch to S3 / MinIO / R2 with a STORAGES change