OxiPool is a connection pooler for OxiDB — think PgBouncer, but purpose-built for OxiDB. It sits between your application and one or more OxiDB server instances, multiplexing client connections over a smaller pool of backend connections. Written in Rust with Tokio, OxiPool is a single 675-line binary that handles connection pooling, read/write routing, transaction pinning, automatic failover, and client rate limiting.
Why Use a Connection Pooler?
Without a pooler, every client opens a direct TCP connection to the database server. This works fine for a handful of clients, but becomes a problem at scale:
- Connection overhead — each TCP connection consumes memory and file descriptors on the server. With 1,000 clients, that's 1,000 persistent connections.
- Connection storms — when your application restarts or scales up, hundreds of connections hit the server simultaneously.
- No read scaling — without routing, all queries go to a single server, even if you have read replicas.
- No failover — if the server crashes, every client loses its connection with no automatic recovery.
OxiPool solves all of these. Your application connects to OxiPool (which accepts up to 1,000 clients by default), and OxiPool manages a small pool of backend connections (default 10) to the actual database server.
Architecture
                ┌─────────────────────────────────┐
                │             OxiPool             │
   Clients      │             (:4445)             │      Backends
                │                                 │
 ┌───────┐      │  ┌───────────────────────┐      │     ┌─────────┐
 │ App 1 │─────▶│  │   Master Pool (10)    │──────│────▶│ Master  │
 ├───────┤      │  │    ┌──┬──┬──┬──┬──┐   │      │     │  :4444  │
 │ App 2 │─────▶│  │    └──┴──┴──┴──┴──┘   │      │     └─────────┘
 ├───────┤      │  └───────────────────────┘      │
 │ App 3 │─────▶│  ┌───────────────────────┐      │     ┌─────────┐
 ├───────┤      │  │   Replica Pool (10)   │──────│────▶│Replica 1│
 │  ...  │─────▶│  │    ┌──┬──┬──┬──┬──┐   │      │     │  :4444  │
 ├───────┤      │  │    └──┴──┴──┴──┴──┘   │──────│──┐  └─────────┘
 │App 1K │─────▶│  └───────────────────────┘      │  │  ┌─────────┐
 └───────┘      └─────────────────────────────────┘  └─▶│Replica 2│
                                                        │  :4444  │
                                                        └─────────┘
1,000 clients ──▶ 10+10 backend connections
Configuration
OxiPool is configured entirely via environment variables:
| Variable | Default | Description |
|---|---|---|
| OXIPOOL_LISTEN | 127.0.0.1:4445 | Address OxiPool listens on |
| OXIPOOL_MASTER | 127.0.0.1:4444 | Master OxiDB server address |
| OXIPOOL_REPLICAS | (empty) | Comma-separated replica addresses |
| OXIPOOL_MASTER_SIZE | 10 | Backend connections to master |
| OXIPOOL_REPLICA_SIZE | 10 | Backend connections per replica |
| OXIPOOL_MAX_CLIENTS | 1000 | Maximum concurrent client connections |
| OXIPOOL_CONNECT_TIMEOUT | 5s | Timeout for backend TCP connections |
| OXIPOOL_STATS_INTERVAL | 60s | Periodic stats logging interval (0 = disabled) |
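As a sketch of how this environment-driven configuration can be consumed, here is a minimal Python reading of the variables in the table. The env helper and variable parsing are illustrative, not OxiPool's actual Rust code; only the variable names and defaults come from the table above.

```python
import os

def env(name, default):
    """Read an OXIPOOL_* variable, falling back to the documented default."""
    return os.environ.get(name, default)

# Defaults mirror the configuration table above.
listen = env("OXIPOOL_LISTEN", "127.0.0.1:4445")
master = env("OXIPOOL_MASTER", "127.0.0.1:4444")
replicas = [r for r in env("OXIPOOL_REPLICAS", "").split(",") if r]
master_size = int(env("OXIPOOL_MASTER_SIZE", "10"))
replica_size = int(env("OXIPOOL_REPLICA_SIZE", "10"))
max_clients = int(env("OXIPOOL_MAX_CLIENTS", "1000"))

print(listen, master, replicas, master_size, replica_size, max_clients)
```

With no variables set, this yields the documented defaults: one master at 127.0.0.1:4444, no replicas, pools of 10, and a 1,000-client cap.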
Basic Setup
# Start OxiDB server
oxidb-server
# Start OxiPool pointing at the server
OXIPOOL_MASTER=127.0.0.1:4444 \
OXIPOOL_LISTEN=0.0.0.0:4445 \
OXIPOOL_MASTER_SIZE=20 \
oxipool
With Read Replicas
OXIPOOL_MASTER=db-master:4444 \
OXIPOOL_REPLICAS=db-replica1:4444,db-replica2:4444 \
OXIPOOL_MASTER_SIZE=10 \
OXIPOOL_REPLICA_SIZE=10 \
OXIPOOL_MAX_CLIENTS=2000 \
oxipool
Your application connects to OxiPool instead of the server directly. No client-side changes needed — OxiPool speaks the same protocol:
from oxidb import OxiDbClient
# Connect to OxiPool instead of OxiDB directly
db = OxiDbClient("127.0.0.1", 4445) # OxiPool port
db.insert("users", {"name": "Alice"}) # routed to master
users = db.find("users", {}) # routed to replica
Read/Write Routing
OxiPool inspects every request and classifies it as a read or write to route it to the correct backend:
| Route | Commands | Destination |
|---|---|---|
| Write | insert, insert_many, update, update_one, delete, delete_one, create_collection, drop_collection, index ops, compact, blob ops | Master |
| Read | find, find_one, count, aggregate, list_collections, text_search, vector_search, ping | Replica (round-robin) |
| Transaction | begin_tx, commit_tx, rollback_tx | Master (pinned) |
SQL routing: For sql commands, OxiPool parses the SQL query text. SELECT, SHOW, DESCRIBE, and EXPLAIN are classified as reads; everything else (INSERT, UPDATE, DELETE, CREATE) goes to master.
Binary fallback: If the payload is binary (OxiWire protocol, not UTF-8), OxiPool defaults to routing to master. This is the safe default — writes always succeed, and reads on master are correct (just not load-balanced).
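The three routing rules above (command table, SQL keyword sniffing, binary fallback) can be captured in one classifier function. This is an illustrative Python model, not OxiPool's Rust implementation; in particular, the cmd and query field names are assumptions about the JSON request shape.

```python
import json

# Command sets taken from the routing table above.
READ_CMDS = {"find", "find_one", "count", "aggregate", "list_collections",
             "text_search", "vector_search", "ping"}
TX_CMDS = {"begin_tx", "commit_tx", "rollback_tx"}
SQL_READ_KEYWORDS = {"SELECT", "SHOW", "DESCRIBE", "EXPLAIN"}

def classify(payload: bytes) -> str:
    """Return 'read', 'write', or 'tx' for a request payload."""
    try:
        req = json.loads(payload)
    except ValueError:
        return "write"          # binary (OxiWire) payload: safe fallback to master
    if not isinstance(req, dict):
        return "write"
    cmd = req.get("cmd", "")
    if cmd in TX_CMDS:
        return "tx"             # transaction control: master, pinned
    if cmd == "sql":
        words = req.get("query", "").split()
        first = words[0].upper() if words else ""
        return "read" if first in SQL_READ_KEYWORDS else "write"
    if cmd in READ_CMDS:
        return "read"           # round-robin across replicas
    return "write"              # everything else (and unknowns) goes to master
```

Note that the fallback direction matters: misrouting a read to the master is merely unbalanced, while misrouting a write to a replica would be incorrect, so anything unrecognized defaults to "write".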
Replica Round-Robin
When multiple replicas are configured, read requests are distributed via round-robin using an atomic counter. Each read picks the next replica in sequence:
# With 2 replicas:
# Request 1 → replica1
# Request 2 → replica2
# Request 3 → replica1
# Request 4 → replica2
# ...
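The pattern above can be modelled with a monotonically increasing counter taken modulo the replica count. In Rust this is a lock-free atomic; the Python sketch below uses a lock as a stand-in, and the RoundRobin class is illustrative:

```python
import itertools
import threading

class RoundRobin:
    """Pick replicas in sequence; the lock stands in for Rust's atomic counter."""
    def __init__(self, replicas):
        self.replicas = replicas
        self.counter = itertools.count()
        self.lock = threading.Lock()

    def next(self):
        with self.lock:
            # Counter never resets; modulo maps it onto the replica list.
            return self.replicas[next(self.counter) % len(self.replicas)]

rr = RoundRobin(["replica1", "replica2"])
picks = [rr.next() for _ in range(4)]
print(picks)  # ['replica1', 'replica2', 'replica1', 'replica2']
```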
Transaction Pinning
Transactions require special handling. When a client begins a transaction, all subsequent operations — both reads and writes — must go to the same backend connection until the transaction ends. OxiPool handles this with connection pinning:
- begin_tx — OxiPool acquires a backend connection from the master pool and pins it to the client. The connection is removed from the pool.
- During transaction — all commands from this client bypass the pool and go directly through the pinned connection. Even reads go to master (not replicas) to ensure snapshot consistency.
- commit_tx / rollback_tx — the transaction ends, and the pinned connection is returned to the master pool for reuse by other clients.
# Client perspective — transparent pinning
db = OxiDbClient("127.0.0.1", 4445)
with db.transaction() as tx:
# begin_tx → OxiPool pins a master connection
tx.insert("orders", {"item": "widget", "qty": 5})
tx.update_one("inventory", {"item": "widget"}, {"$inc": {"qty": -5}})
order = tx.find_one("orders", {"item": "widget"}) # reads go to pinned master
# commit_tx → pin released, connection back to pool
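On the pooler side, pinning amounts to a map from client to a connection checked out of the master pool. The following is a simplified Python model of that bookkeeping (OxiPool's actual Rust data structures differ; names here are illustrative):

```python
class PinningPool:
    """Model of transaction pinning: begin_tx checks a connection out of the
    pool and pins it to the client; commit/rollback returns it."""
    def __init__(self, connections):
        self.free = list(connections)   # available master connections
        self.pinned = {}                # client_id -> pinned connection

    def route(self, client_id, cmd):
        if cmd == "begin_tx":
            conn = self.free.pop()          # removed from pool, pinned to client
            self.pinned[client_id] = conn
            return conn
        if client_id in self.pinned:
            conn = self.pinned[client_id]   # all in-tx commands use the pin
            if cmd in ("commit_tx", "rollback_tx"):
                self.free.append(self.pinned.pop(client_id))  # unpin, reuse
            return conn
        return self.free[-1]                # non-tx command: any pooled conn

pool = PinningPool(["conn-a", "conn-b"])
c = pool.route(1, "begin_tx")
assert pool.route(1, "find") == c       # even reads stay on the pinned conn
pool.route(1, "commit_tx")
assert pool.pinned == {}                # pin released back to the pool
```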
Auto-Rollback on Disconnect
If a client disconnects while a transaction is active (crash, network failure, timeout), OxiPool automatically sends a rollback_tx command to the backend before returning the connection to the pool. This prevents:
- Phantom locks from uncommitted transactions
- Connection pool starvation from pinned-but-abandoned connections
- Data corruption from half-committed operations
# What happens when a client crashes mid-transaction:
# 1. Client TCP connection drops
# 2. OxiPool detects the disconnect
# 3. OxiPool sends {"cmd":"rollback_tx"} to the backend
# 4. Backend rolls back all pending changes
# 5. Connection returned to master pool
# → No leaked connections, no phantom data
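The crash-recovery sequence above maps naturally onto a try/finally around the client session: whatever ends the command stream, a leftover pin is rolled back and reclaimed. A self-contained sketch (the serve_client function and list-based pool are stand-ins, not OxiPool's real code):

```python
def serve_client(backend_pool, pinned, client_id, commands):
    """Process a client's command stream; guarantee cleanup on disconnect.
    `pinned` maps client_id -> backend connection held by an open transaction."""
    try:
        for cmd in commands:
            if cmd == "begin_tx":
                pinned[client_id] = backend_pool.pop()
            elif cmd in ("commit_tx", "rollback_tx"):
                backend_pool.append(pinned.pop(client_id))
    finally:
        if client_id in pinned:             # stream ended mid-transaction
            conn = pinned.pop(client_id)
            # Real pooler: send {"cmd":"rollback_tx"} on conn before reuse.
            backend_pool.append(conn)

pool, pinned = ["conn-a"], {}
serve_client(pool, pinned, 1, ["begin_tx", "insert"])  # client "crashes": no commit
assert pinned == {} and pool == ["conn-a"]             # connection reclaimed
```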
Connection Pool Management
Each pool (master pool + one per replica) manages a fixed number of backend connections:
- Semaphore-based concurrency control — a Tokio semaphore limits the number of concurrent in-flight requests to the pool size. If all connections are busy, additional requests wait for a permit.
- Connection reuse — after a request completes, the backend connection is returned to the pool for the next client to use.
- Eager connection — all backend connections are established at startup, so the pool is fully warmed before accepting clients.
- TCP nodelay — all connections (client and backend) use TCP_NODELAY for minimal latency.
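The semaphore-plus-reuse pattern can be sketched with asyncio in place of Tokio (a simplified model; the BackendPool class and its fake I/O are illustrative, not OxiPool's implementation):

```python
import asyncio

class BackendPool:
    """Fixed-size pool: a semaphore caps in-flight requests at pool size."""
    def __init__(self, connections):
        self.conns = list(connections)                # eagerly established
        self.sem = asyncio.Semaphore(len(self.conns))

    async def request(self, payload):
        async with self.sem:                          # wait for a free permit
            conn = self.conns.pop()                   # take a connection
            try:
                await asyncio.sleep(0)                # stand-in for backend I/O
                return f"{conn}:{payload}"
            finally:
                self.conns.append(conn)               # return it for reuse

async def main():
    pool = BackendPool([f"conn{i}" for i in range(2)])
    # 5 concurrent requests share 2 backend connections.
    return await asyncio.gather(*(pool.request(i) for i in range(5)))

results = asyncio.run(main())
print(len(results))
```

When all permits are held, additional callers queue on the semaphore rather than opening new backend connections, which is exactly the fan-in the architecture diagram shows.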
Automatic Reconnection
When a backend connection fails (server restart, network issue), OxiPool:
- Discards the dead connection
- Returns an error to the affected client
- Spawns an async background task to establish a replacement connection
- The background task retries with exponential backoff (500ms → 1s → 2s → ... → 10s cap) until successful
- Once reconnected, the new connection joins the pool transparently
This means the pool is self-healing. Even if the master server goes down and comes back, OxiPool rebuilds its connections without manual intervention.
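The documented backoff schedule (500ms doubling to a 10s cap) can be written as a small generator; the function name is illustrative:

```python
import itertools

def backoff_delays(base_ms=500, cap_ms=10_000):
    """Yield reconnect delays in ms: 500ms doubling up to a 10s cap."""
    delay = base_ms
    while True:
        yield delay
        delay = min(delay * 2, cap_ms)

delays = list(itertools.islice(backoff_delays(), 7))
print(delays)  # [500, 1000, 2000, 4000, 8000, 10000, 10000]
```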
Client Rate Limiting
OxiPool enforces a maximum client count via OXIPOOL_MAX_CLIENTS (default 1,000). When the limit is reached:
- New incoming TCP connections are immediately dropped
- A log message is printed: "max clients reached, dropping connection"
- Existing clients are unaffected
- As soon as a client disconnects, its slot becomes available
This protects the pooler and backend from being overwhelmed by too many concurrent clients.
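Conceptually this is a counter checked on accept and decremented on disconnect. A minimal single-threaded model (the ClientGate class is illustrative):

```python
class ClientGate:
    """Admit at most max_clients concurrent clients; drop the rest."""
    def __init__(self, max_clients=1000):
        self.max_clients = max_clients
        self.active = 0

    def try_admit(self):
        if self.active >= self.max_clients:
            return False        # real pooler: log and drop the TCP socket
        self.active += 1
        return True

    def release(self):
        self.active -= 1        # slot freed as soon as a client disconnects

gate = ClientGate(max_clients=2)
assert gate.try_admit() and gate.try_admit()
assert not gate.try_admit()     # at the limit: new connection dropped
gate.release()
assert gate.try_admit()         # slot reclaimed immediately
```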
Protocol Transparency
OxiPool uses the same length-prefixed frame protocol as OxiDB (4-byte little-endian length + payload, max 16 MiB). This means:
- JSON protocol works transparently — OxiPool reads the JSON to classify commands
- OxiWire binary protocol works transparently — same frame format, forwarded verbatim. Binary payloads default to write routing (safe fallback).
- No client library changes needed — just point your client at OxiPool's port instead of the server's port
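The frame format itself (4-byte little-endian length prefix, payload verbatim, 16 MiB cap) is simple enough to show directly. A Python sketch using the struct module, with an in-memory stream standing in for the socket:

```python
import io
import struct

MAX_FRAME = 16 * 1024 * 1024   # 16 MiB cap, per the protocol description

def write_frame(stream, payload: bytes):
    """4-byte little-endian length prefix, then the payload verbatim."""
    stream.write(struct.pack("<I", len(payload)) + payload)

def read_frame(stream) -> bytes:
    (length,) = struct.unpack("<I", stream.read(4))
    if length > MAX_FRAME:
        raise ValueError("frame too large")
    return stream.read(length)

buf = io.BytesIO()
write_frame(buf, b'{"cmd":"ping"}')
buf.seek(0)
assert read_frame(buf) == b'{"cmd":"ping"}'
```

Because the prefix says nothing about the payload encoding, OxiPool can forward JSON and OxiWire frames identically; only the routing classification peeks inside.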
Monitoring & Stats
OxiPool logs periodic statistics to stderr (configurable via OXIPOOL_STATS_INTERVAL):
[stats] reqs=15234 rps=254 master=8102 replica=7132 clients=47 tx=3
master_pool=8/10 replica_pool=9/10 errs=0
| Metric | Description |
|---|---|
| reqs | Total requests processed |
| rps | Requests per second (since last stats log) |
| master | Requests routed to master |
| replica | Requests routed to replicas |
| clients | Currently connected clients |
| tx | Currently active transactions (pinned connections) |
| master_pool | Available / total master connections |
| replica_pool | Available / total replica connections |
| errs | Backend errors (triggering reconnect) |
Error Handling
When a backend error occurs, OxiPool returns a JSON error frame to the client:
{"ok": false, "error": "oxipool: backend connection failed"}
The client receives a clear error and can retry. Meanwhile, OxiPool is already reconnecting the failed backend in the background. No manual intervention required.
Production Deployment
A typical production setup with Docker Compose:
services:
oxidb-master:
image: oxidb-server
environment:
OXIDB_ADDR: "0.0.0.0:4444"
OXIDB_DATA: "/data"
volumes:
- master_data:/data
oxidb-replica:
image: oxidb-server
environment:
OXIDB_ADDR: "0.0.0.0:4444"
OXIDB_DATA: "/data"
volumes:
- replica_data:/data
oxipool:
image: oxipool
ports:
- "4445:4445"
environment:
OXIPOOL_LISTEN: "0.0.0.0:4445"
OXIPOOL_MASTER: "oxidb-master:4444"
OXIPOOL_REPLICAS: "oxidb-replica:4444"
OXIPOOL_MASTER_SIZE: "20"
OXIPOOL_REPLICA_SIZE: "20"
OXIPOOL_MAX_CLIENTS: "2000"
app:
image: my-app
environment:
OXIDB_HOST: oxipool # connect to pooler, not server
OXIDB_PORT: 4445
Tested to the Limit
OxiPool is backed by 10 dedicated test suites with over 5,000 lines of test code:
- ACID compliance — transaction persistence, rollback on disconnect, isolation
- Transaction pinning — 12 sub-tests covering commit, rollback, concurrent isolation, auto-rollback, pool saturation
- Failover — kill server during ops, rapid server flapping, full pool rebuild
- Connection leak detection — 1,000 rapid cycles, abandoned connections and transactions, FD stability
- Max clients enforcement — at/over limit, slot reclaim, 3x burst, 1,000 churn cycles
- Saturation — slow/fast query fairness, aggregation storms, recovery timing
- Injection attacks — 15 attack vectors (SQL injection, NoSQL operator injection, oversized payloads, malformed JSON)
- 1M document stress test — bulk insert, aggregation, memory leak detection
- Replica lag simulation — stale reads, eventual consistency, read-your-writes via pinning
- OxiWire protocol — binary protocol through the pooler, mixed JSON+binary clients
OxiPool is 675 lines of Rust. It manages connection pooling, read/write routing, transaction pinning, automatic failover, client rate limiting, and protocol-transparent proxying — all in a single async binary powered by Tokio.