Building a database is one thing. Proving it works under every condition is another. OxiDB ships with 21 dedicated test suites totaling over 15,000 lines of test code, written in Go, C#/.NET, Python, and Rust. This article walks through every test suite — what it tests, how it works, and why it matters.
## Test Suite Overview
| Category | Suites | Languages | Focus |
|---|---|---|---|
| ACID Compliance | 2 | Go, C# | Atomicity, Consistency, Isolation, Durability |
| OxiPool Connection Pooler | 8 | Go | Pooling, failover, leaks, saturation, injection |
| OxiWire Protocol | 2 | Go, C# | Binary protocol correctness and performance |
| Comparison Benchmarks | 5 | Go, C#, Shell | OxiDB vs PostgreSQL, MongoDB, SQLite |
| Cluster & Replication | 3 | Python | Raft consensus, failover, replica lag |
| EF Core Integration | 2 | C# | .NET Entity Framework Core provider |
## 1. ACID Compliance Tests
### acid-go — OxiPool ACID Verification (331 lines)
Tests ACID properties through the OxiPool connection pooler:
- Committed transaction persistence — data written in a committed transaction survives and is visible to subsequent reads
- Client disconnect auto-rollback — if a client disconnects mid-transaction, all pending writes are automatically rolled back
- OxiPool crash mid-transaction — if the pooler crashes during an active transaction, the backend server rolls back
- Concurrent transaction isolation — two simultaneous transactions see isolated snapshots
- Data persistence after restart — committed data survives full OxiPool restart
```go
// Go — Test: committed tx data persists through OxiPool
func testCommittedTxPersists(c *Client) {
	c.Send("begin_tx") // start transaction
	c.Send("insert", "accts", map[string]any{"user": "alice", "balance": 1000})
	c.Send("commit_tx") // commit

	// New connection — data must be visible
	c2 := connect()
	result := c2.Send("find_one", "accts", map[string]any{"user": "alice"})
	assert(result["balance"] == 1000) // ✓ persisted
}
```
### acid-dotnet — Full ACID Suite in C# (606 lines)
The most comprehensive ACID test suite, with 18 individual tests organized by ACID property:
**Atomicity (A1–A5):**
- A1: Multi-insert rollback on failure — if any insert fails, none are committed
- A2: Partial update rollback — updates to multiple documents roll back together
- A3: Mixed operation atomicity — insert + update + delete all-or-nothing
- A4: Cross-collection atomicity — operations spanning multiple collections commit or roll back as a unit
- A5: Large batch atomicity — 100 inserts in a single transaction
**Consistency (C1–C5):**
- C1: Unique constraint enforcement after commit
- C2: Index consistency — indexes reflect committed state after rollback
- C3: Count consistency — document counts are accurate post-transaction
- C4: Concurrent counter increments — final value matches total increments
- C5: Referential consistency across collections
**Isolation (I1–I5):**
- I1: Dirty read prevention — uncommitted data is invisible to other transactions
- I2: Non-repeatable read prevention
- I3: Phantom read handling
- I4: Write skew detection via OCC version conflicts
- I5: Concurrent transaction ordering
**Durability (D1–D4):**
- D1: Data survives clean shutdown
- D2: Data survives crash (kill -9)
- D3: WAL replay after incomplete write
- D4: Transaction log recovery
**Stress Tests (S1–S3):**
- S1–S3: Bank transfer simulation with OCC retry loops, concurrent transfers across accounts, balance conservation verification
```csharp
// C# — Test A4: Cross-collection atomicity
var tx = client.BeginTransaction();
tx.Insert("orders", new { id = 1, total = 99.99 });
tx.Insert("inventory", new { product = "widget", qty = -1 });
tx.Rollback();

// Neither collection should have the documents
Assert.Null(client.FindOne("orders", new { id = 1 }));
Assert.Null(client.FindOne("inventory", new { product = "widget" }));
```
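The S1–S3 bank-transfer stress tests revolve around an OCC retry loop: read versioned state, attempt a commit, and retry on a version conflict. A minimal self-contained sketch of that pattern, with an in-memory versioned store standing in for the real client API (which this article doesn't show):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

type account struct {
	balance int
	version int
}

type store struct {
	mu    sync.Mutex
	accts map[string]account
}

var errConflict = errors.New("version conflict")

func (s *store) transfer(from, to string, amt int) error {
	s.mu.Lock()
	a, b := s.accts[from], s.accts[to]
	s.mu.Unlock()

	// (application work happens here, outside any lock)

	s.mu.Lock()
	defer s.mu.Unlock()
	// OCC check: commit only if neither account changed since the read.
	if s.accts[from].version != a.version || s.accts[to].version != b.version {
		return errConflict
	}
	s.accts[from] = account{a.balance - amt, a.version + 1}
	s.accts[to] = account{b.balance + amt, b.version + 1}
	return nil
}

// transferWithRetry is the OCC retry loop: re-read and re-attempt on conflict.
func (s *store) transferWithRetry(from, to string, amt int) {
	for errors.Is(s.transfer(from, to, amt), errConflict) {
	}
}

func main() {
	s := &store{accts: map[string]account{"alice": {1000, 0}, "bob": {1000, 0}}}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ { // 100 concurrent 1-unit transfers
		wg.Add(1)
		go func() { defer wg.Done(); s.transferWithRetry("alice", "bob", 1) }()
	}
	wg.Wait()
	total := s.accts["alice"].balance + s.accts["bob"].balance
	fmt.Println(total, s.accts["bob"].balance) // 2000 1100: balances conserved
}
```

Balance conservation holds because a conflicting transfer never half-applies: it either bumps both versions under the commit check or retries from a fresh read.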
## 2. OxiPool Connection Pooler Tests
OxiPool is OxiDB's connection pooler (similar to PgBouncer for PostgreSQL). Eight dedicated test suites verify every aspect of pool management.
### stress-go — Mixed Workload Stress Test (475 lines)
The core stress test that validates pool behavior under heavy load:
- 100 concurrent workers, each performing 100 operations
- Pool size: 10 — so 100 workers share 10 backend connections
- Mixed operations: insert (25%), find (20%), findOne (15%), update (12%), delete (8%), count (8%), SQL (6%), aggregate (4%), transaction (2%)
- Tracks latency statistics: min, max, avg, p50, p95, p99
- Reports memory usage before and after
```go
// Go — Stress test operation distribution
ops := []struct {
	name   string
	weight int
}{
	{"insert", 25}, {"find", 20}, {"findOne", 15},
	{"update", 12}, {"delete", 8}, {"count", 8},
	{"sql", 6}, {"aggregate", 4}, {"transaction", 2},
}
// 100 workers × 100 ops = 10,000 total operations through pool_size=10
```
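One common way to turn that weight table into an operation mix is a cumulative-threshold pick over a roll in [0, 100). This is a sketch of the idea, not necessarily how stress-go implements it:

```go
package main

import (
	"fmt"
	"math/rand"
)

var ops = []struct {
	name   string
	weight int // weights sum to 100
}{
	{"insert", 25}, {"find", 20}, {"findOne", 15},
	{"update", 12}, {"delete", 8}, {"count", 8},
	{"sql", 6}, {"aggregate", 4}, {"transaction", 2},
}

// pickOp walks the cumulative distribution: a roll of 0..24 lands on
// insert, 25..44 on find, and so on down to 98..99 on transaction.
func pickOp(roll int) string {
	for _, op := range ops {
		if roll < op.weight {
			return op.name
		}
		roll -= op.weight
	}
	return ops[len(ops)-1].name
}

func main() {
	fmt.Println(pickOp(0), pickOp(25), pickOp(99)) // insert find transaction
	fmt.Println(pickOp(rand.Intn(100)))            // one weighted random draw
}
```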
### failover-go — Backend Failover (565 lines)
Tests OxiPool's behavior when backend servers go down:
- Kill server during operation — pool detects dead connection and reconnects
- Kill server during transaction — transaction is cleanly aborted, client gets error
- Server restart — pool auto-reconnects to restarted backend
- Rapid server flapping — server up/down/up/down in quick succession
- Kill under sustained load — 5 workers running continuously, server killed and restarted mid-stream
- All pool connections die — entire pool rebuilds from scratch
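Client-side recovery from a killed backend usually takes the shape of a capped exponential-backoff reconnect loop. A self-contained sketch, where the `dial` stub simulates a server that comes back on the fourth attempt (OxiPool's actual reconnect logic is not shown in this article):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// attempts makes the dial stub fail three times before succeeding,
// simulating a backend that is restarting.
var attempts = 0

func dial() error {
	attempts++
	if attempts < 4 {
		return errors.New("connection refused")
	}
	return nil
}

// reconnect retries with capped exponential backoff and reports which
// attempt finally succeeded.
func reconnect(maxTries int) (int, error) {
	backoff := 5 * time.Millisecond
	for try := 1; try <= maxTries; try++ {
		if dial() == nil {
			return try, nil
		}
		time.Sleep(backoff)
		if backoff < 500*time.Millisecond {
			backoff *= 2 // double the wait, up to a cap
		}
	}
	return maxTries, errors.New("backend did not come back")
}

func main() {
	try, err := reconnect(10)
	fmt.Println(try, err) // 4 <nil>: recovered on the fourth attempt
}
```

The cap matters for the "rapid flapping" case: without it, a few failed rounds would push the wait so high that the pool misses the window when the server is briefly up.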
### leak-go — Connection Leak Detection (422 lines)
Verifies that connections are properly released back to the pool:
- 1,000 rapid connect/disconnect cycles — file descriptor count must remain stable
- 100 concurrent clients × 50 cycles each — no leaked connections
- 200 abandoned connections — clients that connect but never cleanly close
- 50 abandoned transactions — clients that begin_tx but never commit/rollback
- Server FD stability — file descriptor count on the backend server stays bounded
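On Linux, descriptor-stability checks like these can be done by counting entries under `/proc/<pid>/fd`. A sketch of the client-side half, assuming a Linux host:

```go
package main

import (
	"fmt"
	"os"
)

// openFDs counts the file descriptors this process holds by listing
// /proc/self/fd (Linux-only; a server's count can be watched the same
// way via /proc/<pid>/fd).
func openFDs() (int, error) {
	entries, err := os.ReadDir("/proc/self/fd")
	if err != nil {
		return 0, err
	}
	return len(entries), nil
}

func main() {
	before, err := openFDs()
	if err != nil {
		fmt.Println("not on Linux:", err)
		return
	}
	// ... run the connect/disconnect cycles under test here ...
	after, _ := openFDs()
	fmt.Printf("fds before=%d after=%d\n", before, after)
	if after > before+5 { // small tolerance for runtime-owned descriptors
		fmt.Println("possible descriptor leak")
	}
}
```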
### max-clients-go — Client Limit Enforcement (580 lines)
Tests the `OXIPOOL_MAX_CLIENTS` setting (set to 10 with `pool_size=5`):
- Under limit — connections accepted normally
- At limit — exactly 10 clients can connect simultaneously
- Over limit — 11th client is rejected with proper error
- Slot reclaim — after a client disconnects, a new client can take its slot
- 3× burst — 30 clients try to connect when limit is 10
- 1,000 churn cycles — rapid connect/disconnect near the limit boundary
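A max-clients cap like this is commonly modeled as a counting semaphore with a non-blocking acquire, so the over-limit client is rejected immediately rather than queued. A sketch of that shape (not OxiPool's actual implementation):

```go
package main

import "fmt"

// limiter is a counting semaphore built on a buffered channel; a
// simplified model of a max-clients cap.
type limiter struct{ slots chan struct{} }

func newLimiter(max int) *limiter {
	return &limiter{slots: make(chan struct{}, max)}
}

// tryAcquire admits a client only if a slot is free; it never blocks,
// so the over-limit client is rejected immediately.
func (l *limiter) tryAcquire() bool {
	select {
	case l.slots <- struct{}{}:
		return true
	default:
		return false
	}
}

func (l *limiter) release() { <-l.slots }

func main() {
	lim := newLimiter(10)
	admitted := 0
	for i := 0; i < 11; i++ { // 11 clients race for 10 slots
		if lim.tryAcquire() {
			admitted++
		}
	}
	fmt.Println(admitted) // 10: the 11th client is rejected
	lim.release()                 // one client disconnects...
	fmt.Println(lim.tryAcquire()) // ...and a new client reclaims the slot
}
```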
### saturation-go — Pool Saturation Behavior (675 lines)
Tests what happens when all pool connections are busy:
- Baseline latency measurement with 50K seeded documents
- Pool saturation with slow queries — all 5 pool slots busy with expensive aggregations
- Fast vs. slow query fairness — 5 workers running slow queries + 10 workers running fast queries simultaneously. Verifies fast queries still complete within SLA
- Transaction holds under saturation — long-held transactions don't starve other operations
- Aggregation pipeline storm — 20 workers each running 5 heavy aggregations
- Recovery after saturation — pool returns to normal latency once load decreases
### tx-pinning-go — Transaction Pinning (863 lines)
When a client begins a transaction, OxiPool must "pin" that client to a specific backend connection for the entire transaction lifetime. This suite has 12 sub-tests:
- Commit persists & rollback discards
- Pinned reads see master (not stale replica) data
- Concurrent transactions get separate pins (full isolation)
- Auto-rollback on client disconnect mid-transaction
- Pin released after commit/rollback (connection returned to pool)
- Multiple sequential transactions on same client
- Concurrent transactions saturating the pool
- Non-transactional operations interleaved with pinned transaction
- Nested begin_tx correctly fails
- Commit/rollback without begin correctly fails
```go
// Go — Verify pin released after commit
c.Send("begin_tx")
c.Send("insert", "test", map[string]any{"key": "value"})
c.Send("commit_tx")

// Pin should be released — pool slot available for others
// Another client can now use that backend connection
c2 := pool.Acquire()
c2.Send("find", "test", map[string]any{}) // ✓ works, pool not exhausted
```
### injection-go — Security Injection Tests (900 lines)
Validates that OxiDB and OxiPool are immune to injection attacks. 15 attack vectors:
| # | Attack | Expected Result |
|---|---|---|
| 1 | SQL tautology: `' OR '1'='1` | Treated as literal string |
| 2 | UNION SELECT injection | Rejected / treated as literal |
| 3 | Multi-statement: `; DROP TABLE users` | No side effects |
| 4 | Stacked queries | Rejected |
| 5 | Comment bypass: `--` and `/* */` | Treated as literal |
| 6 | Quote escaping | Properly escaped |
| 7 | NoSQL operator injection: `{"$gt": ""}` in value | Treated as literal object |
| 8 | `$where` code execution attempt | Rejected |
| 9 | Regex DoS (catastrophic backtracking) | Bounded execution |
| 10 | Nested object bomb (deep nesting) | Rejected at parse |
| 11 | OxiPool routing bypass | No routing leak |
| 12 | Oversized payload (17 MB) | Rejected (16 MiB limit) |
| 13 | Malformed JSON | Parse error, no crash |
| 14 | Null bytes in strings | Handled safely |
| 15 | Command field injection | Unknown command error |
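The property behind attack #7 is worth spelling out: operator-shaped input arriving in a value position must stay data. A matcher gets this for free if it compares user-supplied values with deep equality and never inspects them for operator keys. An illustrative sketch, not OxiDB's actual matcher:

```go
package main

import (
	"fmt"
	"reflect"
)

// matchLiteral treats every user-supplied filter value as opaque data:
// it compares with deep equality and never interprets the value as an
// operator, so a map like {"$gt": ""} stays a literal object.
func matchLiteral(doc, filter map[string]any) bool {
	for k, want := range filter {
		if !reflect.DeepEqual(doc[k], want) {
			return false
		}
	}
	return true
}

func main() {
	doc := map[string]any{"user": "alice", "role": "user"}

	// Attacker hopes {"$gt": ""} is parsed as "role greater than anything".
	evil := map[string]any{"role": map[string]any{"$gt": ""}}
	fmt.Println(matchLiteral(doc, evil)) // false: compared as a literal, no match

	fmt.Println(matchLiteral(doc, map[string]any{"role": "user"})) // true
}
```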
### memory-1m-go — 1 Million Document Memory Test (701 lines)
Tests OxiDB's memory behavior with large datasets through OxiPool:
- Bulk insert 1M documents using 10 workers, batches of 1,000
- Count verification — exactly 1,000,000 documents
- Find with limit — returns correct subset
- Aggregation on 1M docs — group by status/region, multi-stage pipelines
- Compaction memory check — memory after compaction ≤ memory before
- Drop + memory leak check — after dropping the collection, memory returns to baseline
## 3. OxiWire Binary Protocol Tests
OxiWire is OxiDB's binary wire protocol (alternative to JSON-over-TCP). It uses MessagePack-style encoding for lower overhead.
### oxiwire-go — Protocol Correctness & Performance (1,017 lines)
- Codec roundtrip — encode/decode all types (int, float, string, bool, null, array, map, nested)
- Full CRUD via OxiWire (insert, find, update, delete)
- Through OxiPool — same operations via the connection pooler
- JSON vs OxiWire correctness — both protocols return identical results
- Mixed protocol clients — 5 JSON + 5 OxiWire workers simultaneously, verifying no cross-contamination
- Large document (10KB+) — roundtrip integrity check
- Bulk throughput — 10,000 inserts via JSON vs OxiWire, measuring speed difference
- Find fast path — OxiWire-optimized find returning binary-encoded results
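The codec-roundtrip tests check one property: `decode(encode(v)) == v` for every supported type. A toy tag-prefixed codec in the spirit of MessagePack-style framing illustrates it; the real OxiWire layout is not documented in this article, so the tags and widths below are invented:

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

// Invented framing: one type tag, then a fixed-width or length-prefixed
// payload. MessagePack proper packs small values more compactly.
const (
	tagInt = 0x01 // tag + 8-byte big-endian integer
	tagStr = 0x02 // tag + 4-byte length + UTF-8 bytes
)

func encode(v any) []byte {
	switch x := v.(type) {
	case int64:
		buf := make([]byte, 9)
		buf[0] = tagInt
		binary.BigEndian.PutUint64(buf[1:], uint64(x))
		return buf
	case string:
		buf := make([]byte, 5+len(x))
		buf[0] = tagStr
		binary.BigEndian.PutUint32(buf[1:], uint32(len(x)))
		copy(buf[5:], x)
		return buf
	}
	return nil
}

func decode(b []byte) (any, error) {
	switch b[0] {
	case tagInt:
		return int64(binary.BigEndian.Uint64(b[1:9])), nil
	case tagStr:
		n := binary.BigEndian.Uint32(b[1:5])
		return string(b[5 : 5+n]), nil
	}
	return nil, errors.New("unknown tag")
}

func main() {
	for _, v := range []any{int64(-42), "héllo"} {
		got, _ := decode(encode(v))
		fmt.Println(got == v) // roundtrip must be lossless
	}
}
```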
### oxiwire-dotnet — .NET Protocol Test (185 lines)
- Ping, insert/find with type roundtrips
- Range query, sort+limit, update, delete
- Transaction commit and rollback
- Error handling for invalid operations
- Negative numbers, null values
- Index operations and unique constraints
- Aggregation pipeline
- JSON vs OxiWire performance comparison — measured per-operation latency
## 4. Comparison Benchmarks
### comparison-postgresql — OxiDB vs PostgreSQL 16 JSONB (1,332 lines)
Head-to-head benchmark with 100,000 documents using OxiWire protocol:
- Bulk insert — OxiDB pipelined insert vs PostgreSQL batch insert
- 8 query patterns: exact match, compound filter, range query, OR conditions, string contains, not-equal, sorted with limit, boolean filter
- Indexed queries — same 8 patterns with indexes created on both sides
- Count queries — filtered count performance
- Aggregation — GROUP BY equivalent on both databases
- Resource usage — Docker container disk and memory consumption comparison
- Generates styled HTML and JSON reports
### comparison-mongodb — OxiDB vs MongoDB 7 (1,476 lines)
Same methodology as PostgreSQL comparison but against MongoDB 7:
- 100,000 documents with pipelined OxiWire inserts
- 8 query patterns, indexed queries, count queries
- Aggregation comparison: group by department, top 5 cities, match+group pipelines
- Resource usage: disk space, memory footprint
- Wire protocol comparison (OxiWire MsgPack vs MongoDB Wire Protocol)
### comparison-blog — Blog Workload Benchmark (1,620 lines)
The most realistic benchmark — simulates an actual blog application with OxiDB vs PostgreSQL 16:
- 60 detailed blog posts with full HTML content, seeded into both databases
- 9 query patterns: slug lookup, homepage query, category filter, tag search, featured posts, recent posts, top-viewed posts, full-text search, category aggregation
- 1,000 concurrent TCP connections (per-goroutine, no connection pooling to stress raw throughput)
- 1,000 concurrent readers performing 5 different query types
- 1,000 mixed operations: 800 reads + 100 view increments + 100 comment inserts
- 1,000 connection storm — rapid connect/query/disconnect
### comparison-efcore — EF Core Comparison (1,134 lines)
Benchmarks OxiDB's Entity Framework Core provider against PostgreSQL's Npgsql provider:
- CRUD operations via EF Core LINQ
- Query patterns: exact match, compound, range, OR, Contains, not-equal, sorted, boolean, skip/take
- Aggregation via LINQ GroupBy
- Bulk operations: 10,000 document batch
- Transactions via EF Core SaveChanges
- Concurrent access patterns
### comparison-sqlite — Embedded Benchmark
Embedded-mode OxiDB vs SQLite with 100,000 documents. Best-of-3 runs. Uses the Rust benchmark binary directly.
## 5. Cluster & Replication Tests
### cluster — Raft Consensus Tests (3 files, 1,302 lines total)
Tests OxiDB's Raft-based clustering with Docker Compose setups:
**`test_cluster.py`** (4-node cluster):
- Cluster formation — 4 nodes discover each other and elect a leader
- HAProxy leader routing — writes go to leader, reads distribute
- Insert + replication — data written to leader appears on followers
- Leader kill + failover — kill the leader, verify new election and continued operation
- Post-failover writes — new writes succeed after failover
**`test_3node.py`** (3-node cluster):
- Same patterns with per-node consistency verification
- Bulk inserts: 10,000 documents before failover + 5,000 after
- Per-node data verification — every node has the correct data
**`bench_3node.py`** (3-node performance benchmark):
- 200,000 document bulk insert with replication
- Replication convergence timing
- Leader vs follower vs HAProxy read performance
- Index creation across cluster
- Aggregation pipeline on cluster
- Mixed workload: 10 threads, 10 seconds sustained
- Leader failover during benchmark
### replica-lag-go — Replica Lag Simulation (738 lines)
Simulates master/replica architecture with configurable replication delay:
- Read/write routing verification (writes → master, reads → replica)
- Stale read detection — reads from replica may see old data during lag
- Eventual consistency — after lag period, replica catches up
- Lag percentile measurement over 100 documents
- Read-your-writes via transaction pinning
- 500-document burst catchup timing
## 6. EF Core Integration Tests
### efcore-embedded-test — Embedded Mode (110 lines)
Validates the .NET EF Core provider in embedded mode (`UseOxiDbEmbedded("./test_data")`):
- CRUD with Products and Customers entities
- LINQ queries: `Where(p => p.Category == "electronics")`, price filters
- Update and delete operations
### efcore-tcp-test — TCP Mode (132 lines)
Same tests but using `UseOxiDb(host, port)` with optional authentication — verifying that the EF Core provider works identically in both embedded and TCP modes.
## Running the Tests
Most test suites are self-contained and can be run independently. Go tests use a standard `go run`:
```bash
# Stress test with OxiPool
cd tests/stress-go && go run main.go

# ACID compliance (C#)
cd tests/acid-dotnet && dotnet run

# Cluster tests (Python, requires Docker)
cd tests/cluster && python test_3node.py

# Injection security tests
cd tests/injection-go && go run main.go

# 1M document memory test
cd tests/memory-1m-go && go run main.go
```
## What This All Means
These 21 test suites cover every layer of the OxiDB stack:
- Correctness — ACID compliance, transaction isolation, data integrity
- Security — 15 injection attack vectors, all neutralized
- Reliability — failover, crash recovery, connection leak prevention
- Performance — head-to-head benchmarks against PostgreSQL, MongoDB, and SQLite
- Scalability — 1M document tests, 100-worker concurrency, pool saturation
- Protocol — OxiWire binary protocol correctness and cross-protocol compatibility
- Integration — EF Core provider in both embedded and TCP modes
- Clustering — Raft consensus, leader election, replication, failover
A database without comprehensive tests is just a JSON file with extra steps. OxiDB's 15,000+ lines of test code exist so you don't have to wonder if it works — you can verify it yourself.