OxiMem is OxiDB's built-in in-memory data store — a Redis-compatible layer that speaks the RESP protocol and now also MQTT v3.1.1. In v0.19.3, OxiMem got a complete rewrite: native data structures replace the SQL-backed approach, pipeline throughput exceeds 3 million ops/s, and the server can double as an MQTT message broker with zero additional infrastructure.
Why OxiMem?
Many applications need both a document database and a cache/message layer. Traditionally, that means running OxiDB plus Redis plus an MQTT broker — three services to deploy, monitor, and maintain. OxiMem collapses all three into a single binary:
- Cache — GET/SET/EXPIRE with sub-millisecond latency
- Data structures — hashes, lists, sets, sorted sets
- Pub/Sub — RESP SUBSCRIBE/PUBLISH for real-time messaging
- MQTT broker — IoT devices publish on MQTT, apps subscribe via RESP (or vice versa)
- Document database — the full OxiDB query engine on the same port
Native Data Structures
Previous versions stored OxiMem data as OxiDB documents (JSON in the _kv, _hash, _list, _set collections). This was clever — you could query cache data with SQL — but added serialization overhead on every operation.
v0.19.3 introduces a native in-memory store using Rust's standard library data structures directly:
```rust
struct OxiMemStore {
    strings: RwLock<HashMap<String, KvEntry>>,
    hashes: RwLock<HashMap<String, HashMap<String, String>>>,
    lists: RwLock<HashMap<String, VecDeque<String>>>,
    sets: RwLock<HashMap<String, HashSet<String>>>,
    sorted_sets: RwLock<HashMap<String, SortedSet>>,
    pubsub: Mutex<HashMap<String, Vec<Sender>>>,
}
```
Each data type has its own RwLock, so a SET operation doesn't block LPUSH or ZADD. The optional SQL mirroring mode (OXIDB_OXIMEM_SQL=true) is still available for applications that need queryable cache data.
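To see why per-type locks matter, here is a minimal two-type store built on std::sync::RwLock. The Store struct and its method names are illustrative, not OxiMem's actual API; the point is only the lock granularity:

```rust
use std::collections::{HashMap, VecDeque};
use std::sync::RwLock;

// Each data type is guarded by its own lock, so writers to `strings`
// never contend with writers to `lists`.
struct Store {
    strings: RwLock<HashMap<String, String>>,
    lists: RwLock<HashMap<String, VecDeque<String>>>,
}

impl Store {
    fn set(&self, key: &str, value: &str) {
        // Takes only the `strings` write lock; LPUSH traffic is unaffected.
        self.strings.write().unwrap().insert(key.into(), value.into());
    }

    fn lpush(&self, key: &str, value: &str) {
        // Takes only the `lists` write lock; SET traffic is unaffected.
        self.lists
            .write()
            .unwrap()
            .entry(key.into())
            .or_default()
            .push_front(value.into());
    }
}

fn main() {
    let store = Store {
        strings: RwLock::new(HashMap::new()),
        lists: RwLock::new(HashMap::new()),
    };
    store.set("greeting", "hello");
    store.lpush("queue", "job-1");
    println!("{:?}", store.strings.read().unwrap().get("greeting"));
}
```

A production store might shard each map further or use a lock crate without poisoning; the sketch only demonstrates that operations on different types take different locks.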
Pipeline Optimization: 3M+ ops/s
OxiMem achieves over 3 million operations per second on pipelined workloads through two key optimizations:
1. Lock Coalescing
When a pipeline of 16 SET commands arrives, instead of acquiring and releasing the strings lock 16 times, OxiMem acquires it once and executes all 16 in a single critical section:
```rust
// Before: 16 lock acquisitions
for cmd in pipeline {
    let mut map = store.strings.write(); // lock
    map.insert(cmd.key, cmd.value);
}                                        // unlock, 16 times

// After: 1 lock acquisition
let mut map = store.strings.write();     // lock once
for cmd in pipeline {
    map.insert(cmd.key, cmd.value);
}                                        // unlock once
```
2. Deferred Flush
Instead of flushing the TCP write buffer after every response, OxiMem only flushes when the read buffer is empty — meaning all available commands have been processed. This batches write syscalls, reducing kernel transitions by up to 16x for pipelined workloads.
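The flush policy can be sketched over a line-based reader. Real OxiMem parses RESP frames, and `drain_pipeline` plus the fixed `+OK` response are stand-ins invented for this example:

```rust
use std::io::{BufRead, Write};

// Process every complete command sitting in the read buffer, writing
// responses into the (buffered) writer, and flush exactly once at the
// end -- when there is nothing left to read.
fn drain_pipeline<R: BufRead, W: Write>(reader: &mut R, writer: &mut W) -> std::io::Result<usize> {
    let mut handled = 0;
    let mut line = String::new();
    loop {
        line.clear();
        if reader.read_line(&mut line)? == 0 {
            break; // read buffer empty: all available commands processed
        }
        // Buffer the response; do NOT flush per command.
        writer.write_all(b"+OK\r\n")?;
        handled += 1;
    }
    // One flush batches all responses into (ideally) a single write syscall.
    writer.flush()?;
    Ok(handled)
}

fn main() -> std::io::Result<()> {
    let input = b"SET a 1\nSET b 2\nSET c 3\n";
    let mut reader = std::io::BufReader::new(&input[..]);
    let mut out = Vec::new();
    {
        let mut writer = std::io::BufWriter::new(&mut out);
        let n = drain_pipeline(&mut reader, &mut writer)?;
        println!("handled {n} commands");
    }
    println!("wrote {} bytes in one flush", out.len());
    Ok(())
}
```

For a pipeline of 16 commands this turns 16 potential flushes into one, which is where the "up to 16x" reduction in kernel transitions comes from.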
3. Single-Command Fast Path
For non-pipelined commands (the common case), OxiMem skips the Vec allocation and pipeline machinery entirely, dispatching directly to the command handler.
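A hedged sketch of what such a dispatch split might look like. `Parsed`, `Command`, and the canned responses are invented for illustration; only the single-vs-batch shape reflects the description above:

```rust
// Illustrative command set; real OxiMem dispatches parsed RESP frames.
enum Command {
    Set(String, String),
    Get(String),
}

enum Parsed {
    Single(Command),        // fast path: exactly one complete command
    Pipeline(Vec<Command>), // batch path: only this variant pays for a Vec
}

fn dispatch(cmd: Command) -> &'static str {
    match cmd {
        Command::Set(_, _) => "+OK\r\n",
        Command::Get(_) => "$-1\r\n", // nil reply, for the sketch
    }
}

fn handle(parsed: Parsed, out: &mut String) {
    match parsed {
        // Single command: straight to the handler, no batch allocation.
        Parsed::Single(cmd) => out.push_str(dispatch(cmd)),
        Parsed::Pipeline(cmds) => {
            for cmd in cmds {
                out.push_str(dispatch(cmd));
            }
        }
    }
}

fn main() {
    let mut out = String::new();
    handle(Parsed::Single(Command::Set("k".into(), "v".into())), &mut out);
    print!("{out}");
}
```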
Benchmark Results
| Operation | OxiMem ops/s | Mode |
|---|---|---|
| SET (single) | 240,000 | Single command |
| GET (single) | 250,000 | Single command |
| SET (pipeline P=16) | 3,200,000 | Pipeline |
| HSET (pipeline P=16) | 3,100,000 | Pipeline |
| LPUSH (pipeline P=16) | 3,100,000 | Pipeline |
Sorted Sets
Sorted sets are one of Redis's most powerful data structures, and OxiMem now supports them fully. Each sorted set uses a dual data structure:
- HashMap<String, f64> — O(1) score lookups by member
- BTreeSet<(Score, String)> — O(log n) sorted iteration and range queries
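The Score wrapper needed to put f64 keys into a BTreeSet can be built on f64::total_cmp, which defines a total order even for NaN. A sketch of the dual structure under that assumption (field and method names are guesses, not OxiMem's actual code):

```rust
use std::cmp::Ordering;
use std::collections::{BTreeSet, HashMap};

// f64 is not Ord (NaN breaks the total order), so wrap it and delegate
// comparison to total_cmp.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Score(f64);

impl Eq for Score {}
impl Ord for Score {
    fn cmp(&self, other: &Self) -> Ordering {
        self.0.total_cmp(&other.0)
    }
}
impl PartialOrd for Score {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

#[derive(Default)]
struct SortedSet {
    by_member: HashMap<String, f64>,     // O(1) score lookup (ZSCORE)
    by_score: BTreeSet<(Score, String)>, // O(log n) ordered ops (ZRANGE, ZRANK)
}

impl SortedSet {
    fn zadd(&mut self, score: f64, member: &str) {
        // Keep both structures in sync: drop the old (score, member)
        // pair before inserting the new one.
        if let Some(old) = self.by_member.insert(member.into(), score) {
            self.by_score.remove(&(Score(old), member.into()));
        }
        self.by_score.insert((Score(score), member.into()));
    }

    fn zrank(&self, member: &str) -> Option<usize> {
        let score = *self.by_member.get(member)?;
        // Rank = number of entries ordered strictly before this one.
        Some(self.by_score.range(..(Score(score), member.to_string())).count())
    }
}

fn main() {
    let mut zs = SortedSet::default();
    zs.zadd(100.0, "alice");
    zs.zadd(85.0, "bob");
    zs.zadd(92.0, "carol");
    println!("rank(alice) = {:?}", zs.zrank("alice"));
}
```

Counting the range on every ZRANK is O(n); an implementation that needs O(log n) ranks would use an order-statistic tree or skip list instead, but the two-structure sync logic is the same.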
Supported commands:
```
# Add members with scores
ZADD leaderboard 100 "alice" 85 "bob" 92 "carol"

# Get rank (0-based, lowest score first)
ZRANK leaderboard "alice"          # 2

# Get top players (highest scores)
ZREVRANGE leaderboard 0 2 WITHSCORES
# 1) "alice" 2) "100" 3) "carol" 4) "92" 5) "bob" 6) "85"

# Range by score
ZRANGEBYSCORE leaderboard 90 100 WITHSCORES
# 1) "carol" 2) "92" 3) "alice" 4) "100"

# Atomic increment
ZINCRBY leaderboard 15 "bob"       # "100"

# Pop minimum/maximum
ZPOPMIN leaderboard 1
ZPOPMAX leaderboard 1
```
Pub/Sub: RESP + MQTT Unified
OxiMem's pub/sub system uses mpsc channels internally. When a client subscribes to a topic, a new channel receiver is created. When anyone publishes to that topic — whether via RESP or MQTT — the message is broadcast to all receivers:
```
# Terminal 1: RESP subscriber
redis-cli -p 6380
> SUBSCRIBE sensors/temperature
Reading messages...

# Terminal 2: MQTT publisher (mosquitto_pub)
mosquitto_pub -h 127.0.0.1 -p 1883 -t sensors/temperature -m "22.5"

# Terminal 1 receives:
1) "message"
2) "sensors/temperature"
3) "22.5"
```
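Under the hood, the mechanism described above — one mpsc sender per subscriber, broadcast on publish regardless of the originating protocol — can be sketched like this (types and names are illustrative):

```rust
use std::collections::HashMap;
use std::sync::mpsc::{channel, Receiver, Sender};

// Topic registry: every subscriber owns a Receiver; publishing clones
// the message into each live channel.
#[derive(Default)]
struct PubSub {
    topics: HashMap<String, Vec<Sender<String>>>,
}

impl PubSub {
    fn subscribe(&mut self, topic: &str) -> Receiver<String> {
        let (tx, rx) = channel();
        self.topics.entry(topic.into()).or_default().push(tx);
        rx
    }

    // Returns how many subscribers the message reached. The caller may
    // be a RESP PUBLISH or an MQTT PUBLISH; the fan-out is identical.
    fn publish(&mut self, topic: &str, msg: &str) -> usize {
        match self.topics.get_mut(topic) {
            Some(subs) => {
                // Prune senders whose receiver has disconnected.
                subs.retain(|tx| tx.send(msg.to_string()).is_ok());
                subs.len()
            }
            None => 0,
        }
    }
}

fn main() {
    let mut ps = PubSub::default();
    let rx = ps.subscribe("sensors/temperature");
    let delivered = ps.publish("sensors/temperature", "22.5");
    println!("delivered to {delivered} subscriber(s): {}", rx.recv().unwrap());
}
```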
This cross-protocol interop means IoT devices can publish via MQTT while your application backend subscribes via redis-cli, a Redis client library, or another MQTT client. The topics share a single namespace.
MQTT v3.1.1 Protocol
OxiDB now includes a full MQTT v3.1.1 broker. Enable it with OXIDB_MQTT_PORT=1883 (or any port). The broker supports:
- CONNECT/CONNACK — client handshake with client ID
- PUBLISH — QoS 0 (fire-and-forget) and QoS 1 (acknowledged)
- SUBSCRIBE/SUBACK — topic subscriptions with granted QoS
- UNSUBSCRIBE/UNSUBACK — clean topic removal
- PINGREQ/PINGRESP — keep-alive heartbeats
- DISCONNECT — clean session termination
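For a feel of what the broker parses on the wire, here is a minimal MQTT v3.1.1 CONNECT packet built by hand following the spec's byte layout (clean session, no will, no credentials). This is a client-side sketch, not OxiDB code:

```rust
// Minimal MQTT v3.1.1 CONNECT packet for a given client ID.
fn connect_packet(client_id: &str, keep_alive: u16) -> Vec<u8> {
    let mut body = Vec::new();
    // Variable header.
    body.extend_from_slice(&[0x00, 0x04]); // protocol name length
    body.extend_from_slice(b"MQTT");       // protocol name
    body.push(0x04);                       // protocol level 4 = v3.1.1
    body.push(0x02);                       // connect flags: clean session
    body.extend_from_slice(&keep_alive.to_be_bytes());
    // Payload: length-prefixed client identifier.
    body.extend_from_slice(&(client_id.len() as u16).to_be_bytes());
    body.extend_from_slice(client_id.as_bytes());

    let mut packet = vec![0x10];   // fixed header: packet type CONNECT
    packet.push(body.len() as u8); // remaining length (one byte while < 128)
    packet.extend(body);
    packet
}

fn main() {
    let pkt = connect_packet("oxi-test", 60);
    println!("{} bytes, type byte 0x{:02x}", pkt.len(), pkt[0]);
}
```

The broker answers with CONNACK (0x20); the "remaining length" field uses the spec's variable-length encoding for bodies of 128 bytes or more, which the one-byte shortcut above deliberately skips.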
MQTT-Only Mode
For pure messaging workloads, set OXIDB_MODE=mqtt to run OxiDB exclusively as an MQTT broker. The main TCP listener is skipped entirely:
```shell
OXIDB_MODE=mqtt OXIDB_MQTT_PORT=1883 oxidb-server
```
This gives you a lightweight, single-binary MQTT broker with no external dependencies.
Command Logging
Set OXIDB_LOG_COMMANDS=true to log every OxiMem and MQTT command with its response to stderr:
```
[oximem] << SET mykey hello
[oximem] >> +OK
[oximem] << GET mykey
[oximem] >> $5 hello
[mqtt]   << PUBLISH topic="sensors/temp" msg="22.5"
[mqtt]   >> PUBLISH topic="sensors/temp" len=4
```
Useful for debugging, auditing, and understanding traffic patterns.
Configuration
| Variable | Default | Description |
|---|---|---|
| OXIDB_OXIMEM_PORT | 6380 | RESP protocol port (redis-cli compatible) |
| OXIDB_MQTT_PORT | 1883 | MQTT v3.1.1 broker port |
| OXIDB_OXIMEM_SQL | false | Mirror data to OxiDB collections (queryable via SQL) |
| OXIDB_LOG_COMMANDS | false | Log all OxiMem/MQTT commands to stderr |
| OXIDB_MODE | (normal) | Set to mqtt for MQTT-only broker mode |
Testing MQTT
You can test MQTT with any MQTT client. Using mosquitto command-line tools:
```shell
# Start OxiDB with MQTT enabled
OXIDB_MQTT_PORT=1883 OXIDB_LOG_COMMANDS=true oxidb-server

# Subscribe (in one terminal)
mosquitto_sub -h 127.0.0.1 -p 1883 -t "test/topic"

# Publish (in another terminal)
mosquitto_pub -h 127.0.0.1 -p 1883 -t "test/topic" -m "Hello MQTT"
```
Or with Python's paho-mqtt:
```python
import paho.mqtt.client as mqtt

client = mqtt.Client()
# Register the message callback before the network loop starts.
client.on_message = lambda c, u, msg: print(f"{msg.topic}: {msg.payload}")
client.connect("127.0.0.1", 1883)
client.subscribe("sensors/#")
client.loop_forever()
```
OxiMem turns OxiDB into a Swiss Army knife: document database, key-value cache, sorted set engine, pub/sub hub, and MQTT broker — all in a single binary, all sharing the same data, all at millions of operations per second.
Discussion (3 comments)
Cross-protocol pub/sub is huge for us. Our sensors publish via MQTT but our dashboard subscribes via WebSocket backed by a Redis client. Having both in one server eliminates Mosquitto from our stack entirely.
3.2M SET ops/s on pipeline is impressive. The lock coalescing approach makes total sense - you're amortizing the lock overhead across the batch. How does it handle mixed-type pipelines (SET + LPUSH + ZADD)?
The Score wrapper with total_cmp for BTreeSet is a clean solution. NaN ordering in f64 is such a common footgun. Glad to see it handled properly.