aiosqlitepool is a high-performance connection pool for asyncio SQLite applications. By managing a pool of reusable database connections, it eliminates connection overhead and delivers significant performance gains.
Important: aiosqlitepool is not a SQLite database driver.
It's a performance-boosting layer that works with an asyncio driver like aiosqlite, not as a replacement for it.
aiosqlitepool in three points:
- Eliminates connection overhead: It avoids repeated database connection setup (syscalls, memory allocation) and teardown (syscalls, deallocation) by reusing long-lived connections.
- Faster queries via "hot" cache: Long-lived connections keep SQLite's in-memory page cache "hot." This serves frequently requested data directly from memory, speeding up repetitive queries and reducing I/O operations.
- Maximizes concurrent throughput: Allows your application to process significantly more database queries per second under heavy load.
- Installation
- Usage
- Configuration
- How it works
- Do you need a connection pool with SQLite?
- Benchmarks
- Compatibility
- License
aiosqlitepool requires the aiosqlite driver to be installed as a peer dependency.
Install with your preferred package manager:
pip

```bash
pip install aiosqlite aiosqlitepool
```

uv

```bash
uv add aiosqlite aiosqlitepool
```

Poetry

```bash
poetry add aiosqlite aiosqlitepool
```

You must provide a `connection_factory` - an async function that creates and returns a database connection:
```python
import asyncio

import aiosqlite
from aiosqlitepool import SQLiteConnectionPool


async def connection_factory():
    return await aiosqlite.connect("basic.db")


async def main():
    pool = SQLiteConnectionPool(connection_factory)

    async with pool.connection() as conn:
        await conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT)")
        await conn.execute("INSERT INTO users VALUES (?)", ("Alice",))
        # You must handle transaction management yourself
        await conn.commit()

        cursor = await conn.execute("SELECT name FROM users")
        row = await cursor.fetchone()
        print(f"Found user: {row[0]}")

    await pool.close()


if __name__ == "__main__":
    asyncio.run(main())
```

Note: The pool manages connections, not transactions. You're responsible for calling `commit()` or `rollback()` as needed. The pool ensures connections are safely reused but doesn't interfere with your transaction logic.
The pool can be used as an async context manager for automatic cleanup:
```python
async def main():
    async with SQLiteConnectionPool(connection_factory) as pool:
        async with pool.connection() as conn:
            # Do database work
            pass
    # Pool is automatically closed
```

For high-performance applications, configure your connection factory with optimized SQLite pragmas.
```python
import aiosqlite
from aiosqlitepool import SQLiteConnectionPool


async def sqlite_connection() -> aiosqlite.Connection:
    # Connect to your database
    conn = await aiosqlite.connect("your_database.db")

    # Apply high-performance pragmas
    await conn.execute("PRAGMA journal_mode = WAL")
    await conn.execute("PRAGMA synchronous = NORMAL")
    await conn.execute("PRAGMA cache_size = 10000")
    await conn.execute("PRAGMA temp_store = MEMORY")
    await conn.execute("PRAGMA foreign_keys = ON")
    await conn.execute("PRAGMA mmap_size = 268435456")

    return conn


async def main():
    # Initialize the connection pool with your high-performance connection factory
    pool = SQLiteConnectionPool(
        connection_factory=sqlite_connection,
    )

    # Use the pool
    async with pool.connection() as conn:
        # Your database operations here
        # cursor = await conn.execute("SELECT ...")
        # rows = await cursor.fetchall()
        pass

    # Clean up
    await pool.close()
```

- `PRAGMA journal_mode = WAL` - Writes go to a separate WAL file while reads continue from the main database, so multiple readers can work simultaneously with one writer.
- `PRAGMA synchronous = NORMAL` - SQLite syncs to disk at critical moments, but not after every write. Roughly 2-3x faster writes than FULL synchronization.
- `PRAGMA cache_size = 10000` - Keeps 10,000 database pages (~40MB) in memory, so frequently accessed data is served from RAM, not disk.
- `PRAGMA temp_store = MEMORY` - Stores temporary tables, indexes, and sorting operations in RAM, eliminating disk I/O for temporary operations.
- `PRAGMA foreign_keys = ON` - Enforces foreign key constraints automatically, preventing data corruption and reducing application-level checks.
- `PRAGMA mmap_size = 268435456` - Maps up to 256MB of the database file directly into process memory, reducing system calls and speeding up access to large databases.
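Pragmas apply per connection, so it can be worth verifying that your factory really set them. A minimal sketch that reads two of them back through the pool (assuming the `pool` from the example above):

```python
async def check_pragmas(pool) -> None:
    async with pool.connection() as conn:
        cursor = await conn.execute("PRAGMA journal_mode")
        mode = await cursor.fetchone()
        print(f"journal_mode: {mode[0]}")  # expected: "wal"

        cursor = await conn.execute("PRAGMA synchronous")
        sync = await cursor.fetchone()
        print(f"synchronous: {sync[0]}")  # expected: 1 (NORMAL)
```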
This section demonstrates an effective pattern for integrating aiosqlitepool with FastAPI applications.
The pattern addresses three key requirements:
- Lifecycle management: The pool is created during application startup and gracefully closed on shutdown using FastAPI's `lifespan` context manager
- Global access: The pool is stored in the application's state, making it accessible to all route handlers
- Dependency injection: A reusable dependency function provides clean access to pooled connections with automatic resource management
```python
from contextlib import asynccontextmanager
from typing import Any, AsyncGenerator

import aiosqlite
from aiosqlitepool import SQLiteConnectionPool
from fastapi import (
    Depends,
    FastAPI,
    HTTPException,
    Request,
)


async def sqlite_connection() -> aiosqlite.Connection:
    """A factory for creating new connections."""
    conn = await aiosqlite.connect("app.db")
    # Return rows as mappings so route handlers can convert them to dicts
    conn.row_factory = aiosqlite.Row
    return conn


@asynccontextmanager
async def lifespan(app: FastAPI):
    """
    Manage the connection pool's lifecycle.
    The pool is created when the application starts and gracefully closed when it stops.
    """
    db_pool = SQLiteConnectionPool(connection_factory=sqlite_connection, pool_size=10)
    app.state.db_pool = db_pool
    yield
    await db_pool.close()


app = FastAPI(lifespan=lifespan)


async def get_db_connection(
    request: Request,
) -> AsyncGenerator[aiosqlite.Connection, None]:
    """
    A dependency that provides a connection from the pool.
    It accesses the pool from the application state.
    """
    db_pool = request.app.state.db_pool
    async with db_pool.connection() as conn:
        yield conn


@app.get("/users/{user_id}")
async def get_user(
    user_id: int, db_conn: aiosqlite.Connection = Depends(get_db_connection)
) -> dict[str, Any]:
    cursor = await db_conn.execute("SELECT * FROM users WHERE id = ?", (user_id,))
    user = await cursor.fetchone()
    if not user:
        raise HTTPException(status_code=404, detail="User not found")
    return dict(user)
```

SQLiteConnectionPool accepts these parameters:
Required parameters
- `connection_factory` - An async function that creates and returns a new database connection. This function will be called whenever the pool needs to create a new connection.

Optional parameters
- `pool_size` (int) - Maximum number of connections to maintain in the pool (default: `5`)
- `acquisition_timeout` (int) - Maximum seconds to wait for an available connection (default: `30`)
- `idle_timeout` (int) - Maximum seconds a connection can remain idle before replacement (default: `86400`, i.e. 24 hours)
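For completeness, here is the constructor with every knob set explicitly; the values are illustrative, not recommendations (see the suggested configurations below):

```python
pool = SQLiteConnectionPool(
    connection_factory=sqlite_connection,  # the async factory shown earlier
    pool_size=10,            # keep at most 10 connections alive
    acquisition_timeout=30,  # give up if no connection frees up within 30s
    idle_timeout=3600,       # replace any connection idle for over an hour
)
```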
Recommended configurations

Most web applications work well with these settings:

```python
pool = SQLiteConnectionPool(
    connection_factory,
    pool_size=10,
    acquisition_timeout=30,
)
```

For read-heavy workloads like analytics or reporting:

```python
pool = SQLiteConnectionPool(
    connection_factory,
    pool_size=20,
    acquisition_timeout=15,
)
```

For write-heavy workloads:

```python
pool = SQLiteConnectionPool(
    connection_factory,
    pool_size=5,
    acquisition_timeout=60,
)
```

The pool automatically:
- Creates connections on-demand up to the pool size limit
- Reuses idle connections to avoid creation overhead
- Performs health checks to detect broken connections
- Rolls back any uncommitted transactions when connections are returned
- Replaces connections that have been idle too long
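The rollback-on-return behavior is worth seeing concretely: anything left uncommitted when a connection goes back to the pool is discarded. A small sketch of the expected outcome, reusing the `users` table from the usage example:

```python
async def demo_auto_rollback(pool) -> None:
    async with pool.connection() as conn:
        # Insert without committing; the pool rolls this back on release
        await conn.execute("INSERT INTO users VALUES (?)", ("Ghost",))

    async with pool.connection() as conn:
        cursor = await conn.execute(
            "SELECT COUNT(*) FROM users WHERE name = ?", ("Ghost",)
        )
        count = await cursor.fetchone()
        print(count[0])  # expected: 0 - the uncommitted insert was discarded
```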
For server-based databases like PostgreSQL or MySQL, the answer is always yes. Connecting over a network is slow and expensive. A connection pool is a critical pattern to minimize latency and manage server resources.
But SQLite is different. It's an embedded, in-process database. There's no network, no sockets, just a file on disk. The overhead of creating a connection is measured in microseconds, not even milliseconds.
So, is a connection pool just needless complexity?
The primary challenge with SQLite in a concurrent environment (like an asyncio web application) is not connection time, but write contention. SQLite uses a database-level lock for writes. When multiple asynchronous tasks try to write to the database simultaneously through their own separate connections, they will collide. This contention leads to a cascade of SQLITE_BUSY or SQLITE_LOCKED errors.
A connection pool mitigates this by capping the number of live connections and making tasks wait for a free one, so database access is throttled instead of degenerating into lock collisions. That said, most applications will never encounter these errors at all; they surface only under sustained heavy load.
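As an illustration, here is how a burst of concurrent writes funnels through one shared pool; the `events` table and workload size are hypothetical:

```python
import asyncio


async def record_event(pool, event_id: int) -> None:
    async with pool.connection() as conn:
        await conn.execute("INSERT INTO events (id) VALUES (?)", (event_id,))
        await conn.commit()


async def run_workload(pool) -> None:
    # 100 concurrent tasks, but at most pool_size connections are open at
    # once, which caps how many writers can contend for the database lock
    await asyncio.gather(*(record_event(pool, i) for i in range(100)))
```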
Here's a quick checklist. Use aiosqlitepool if:
- Your service handles steady web traffic: Your application serves more than 5-10 requests per second from the database.
- Your workers process high-throughput jobs: Your background workers run more than ~30 queries per second.
- Your application requires predictable low latency: Your service operates under a tight performance budget (e.g., p99 latency < 50ms).
- You aim for a minimal footprint: You design your applications to be resource-efficient, knowing that reducing CPU and I/O load contributes to leaner, more sustainable infrastructure.
You don't need aiosqlitepool if your application is:
- A short-lived script that runs a few queries and exits.
- A very low-traffic service with fewer than a few requests per minute.
Database setup:
- 10,000 users
- 100,000 posts
- 5,000,000 comments
- 10,000,000 likes
100,000 complex read operations across 200 workers
Each operation executes a multi-table JOIN query with aggregations:
- JOINs across `posts`, `users`, `comments`, and `likes` tables
- COUNT aggregations for comments and likes per post
- GROUP BY operations with multiple fields
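The exact benchmark query isn't reproduced here, but a query of the described shape might look like this (column names are hypothetical):

```python
BENCHMARK_QUERY = """
SELECT
    p.id,
    p.title,
    u.name AS author,
    COUNT(DISTINCT c.id) AS comment_count,
    COUNT(DISTINCT l.id) AS like_count
FROM posts AS p
JOIN users AS u ON u.id = p.user_id
LEFT JOIN comments AS c ON c.post_id = p.id
LEFT JOIN likes AS l ON l.post_id = p.id
GROUP BY p.id, p.title, u.name
LIMIT 50
"""


async def read_worker(pool) -> None:
    # One benchmark operation: run the JOIN/aggregation and drain the results
    async with pool.connection() as conn:
        cursor = await conn.execute(BENCHMARK_QUERY)
        await cursor.fetchall()
```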
| Metric | Without Pool | With Pool | Improvement |
|---|---|---|---|
| Throughput | 3,059 ops/s | 6,069 ops/s | +98% |
| Average latency | 65.3ms | 20.5ms | -69% |
| Median latency | 63.7ms | 8.1ms | -87% |
| P90 latency | 71.4ms | 8.7ms | -88% |
| P99 latency | 87.2ms | 11.2ms | -87% |
10,000 simple SELECT operations across 5 concurrent workers
| Approach | Throughput | Avg Latency | Median | P90 | P99 | Total Time |
|---|---|---|---|---|---|---|
| Open/close per query | 4,121 ops/s | 1.21ms | 1.20ms | 1.42ms | 1.63ms | 2.4s |
| Connection pool | 12,370 ops/s | 0.40ms | 0.40ms | 0.43ms | 0.48ms | 0.8s |
| Improvement | +200% | -67% | -67% | -70% | -70% | -67% |
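For context, the two approaches compared above look roughly like the following sketch (query and table are illustrative; the actual benchmark harness is not reproduced here):

```python
import aiosqlite


async def query_with_fresh_connection(db_path: str):
    # "Open/close per query": pay connection setup/teardown cost every time
    conn = await aiosqlite.connect(db_path)
    try:
        cursor = await conn.execute("SELECT id FROM users LIMIT 1")
        return await cursor.fetchone()
    finally:
        await conn.close()


async def query_with_pool(pool):
    # "Connection pool": reuse a long-lived connection with a hot page cache
    async with pool.connection() as conn:
        cursor = await conn.execute("SELECT id FROM users LIMIT 1")
        return await cursor.fetchone()
```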
aiosqlitepool is designed to be a flexible pooling layer that works with different asyncio SQLite drivers.
To be compatible, a connection object from a driver must implement the following three async methods:
```python
class Connection:
    async def execute(self, *args, **kwargs):
        ...

    async def rollback(self) -> None:
        ...

    async def close(self) -> None:
        ...
```

Note on commit: The `commit` method is intentionally not part of the protocol. Transaction management is considered the responsibility of the application developer, not the pool. aiosqlitepool never commits on your behalf.
The following libraries are tested and confirmed to work out-of-the-box with aiosqlitepool:
- aiosqlite
If you are using another asyncio SQLite library that follows the protocol, it should work seamlessly. Just pass your driver's connection function to the connection_factory.
If you encounter an issue with a specific driver, please let us know by opening a GitHub issue.
aiosqlitepool is available under the MIT License.