CAD037: KV Database

Overview

The KV Database is a replicated key-value store built on the Data Lattice. It provides Redis-like data structure operations with CRDT merge semantics, cryptographic signing of replicas, and automatic network replication via Lattice Nodes.

Each KV Database is a named, multi-writer store where independent nodes maintain signed replicas. Replicas converge through lattice merge without coordination, enabling offline-first distributed applications with rich data types.

Motivation

Decentralised applications frequently need shared mutable state beyond what a global blockchain provides. Common requirements include:

  • Session data, caches, and configuration that must be shared across nodes
  • Counters, sets, and sorted sets that multiple writers update concurrently
  • Per-user or per-organisation databases with cryptographic ownership
  • Offline-capable writes that merge when connectivity is restored

Traditional distributed databases solve these problems with consensus protocols, leader election, or conflict resolution callbacks. The KV Database instead uses the mathematical properties of lattice merge to guarantee convergence without coordination.

Design Goals

  • Provide a familiar key-value API (GET, SET, DEL, HSET, SADD, INCR, etc.)
  • Support multiple data types with type-appropriate CRDT merge strategies
  • Enable per-database, per-node signed replicas for authentication
  • Integrate with the standard lattice ROOT structure for network replication
  • Maintain compatibility with the Lattice Cursor system

Specification

Lattice Path

The KV Database occupies the :kv path in the standard lattice ROOT:

ROOT {
  :data → DataLattice
  :fs   → OwnerLattice → MapLattice → DLFSLattice
  :kv   → OwnerLattice → MapLattice → KVStoreLattice
}

The full path to a specific replica is:

:kv / <owner-key> → Signed({<db-name> → {key → KVEntry, ...}, ...})

Where:

  • owner-key — the owner identity (see Owner Types below)
  • Signed(...) — the owner's signed map of database names to KV store states
  • db-name — a string database name, scoped per owner

This structure means each owner has their own namespace of databases within their signed state, matching the :fs pattern. Different owners can independently create databases with the same name without conflict.
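
For example, a single owner's signed state may contain several independently named databases (the database names here are illustrative):

:kv / <owner-key> → Signed({
  "sessions" → {"session:abc" → KVEntry, ...},
  "config"   → {"timeout"     → KVEntry, ...}
})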

Owner Types

The OwnerLattice supports multiple owner identity types, each with its own verification scheme. See CAD038: Lattice Authentication for the full specification.

| Owner Type | Key Format | Verification | Use Case |
|---|---|---|---|
| Public Key | 32-byte Ed25519 public key | Direct equality with signer | Individual nodes, simple deployments |
| Convex Address | Address (#0, #1337, etc.) | Account lookup for authorised keys | Organisations, multi-key accounts |
| DID Identifier | String ("did:key:...", "did:convex:...") | DID resolution | Cross-system identity, standards compliance |

All three types ultimately verify against the Ed25519 public key embedded in the signed data. The OwnerLattice performs verification at O(delta) cost during merge — only entries that differ between the local and incoming maps are checked. See CAD038 for the two-layer verification model (owner authorisation + signature validity).

Lattice Composition

OwnerLattice                  ← per-owner merge with auth (CAD038)
└── SignedLattice             ← Ed25519 signature verification
    └── MapLattice            ← per-database-name merge
        └── KVStoreLattice
            └── per-key merge
                └── per-type merge (LWW, structural, or PN-counter)

OwnerLattice at the top level merges per-owner, verifying that the signer is authorised for the owner identity and that Ed25519 signatures are valid before accepting values (CAD038).

MapLattice inside the signed value merges per-database-name, allowing each owner to maintain multiple named databases.

KVStoreLattice merges per-key using type-specific merge strategies for each KV entry.
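
To make the composition concrete, the following sketch shows the recurring pattern: a map-shaped lattice layer merges per-key and delegates conflicting values to an inner lattice. The Lattice interface and MapLatticeSketch names are illustrative only, not the reference API (see Classes below).

import java.util.HashMap;
import java.util.Map;

// Hypothetical minimal lattice interface, for illustration only.
interface Lattice<T> {
    T merge(T own, T incoming);
}

// A map lattice merges per-key, delegating conflicting values to an
// inner lattice. The same pattern appears at the MapLattice and
// KVStoreLattice levels.
final class MapLatticeSketch<K, V> implements Lattice<Map<K, V>> {
    private final Lattice<V> inner;

    MapLatticeSketch(Lattice<V> inner) {
        this.inner = inner;
    }

    @Override
    public Map<K, V> merge(Map<K, V> own, Map<K, V> incoming) {
        Map<K, V> result = new HashMap<>(own);
        for (Map.Entry<K, V> e : incoming.entrySet()) {
            V mine = result.get(e.getKey());
            result.put(e.getKey(),
                    mine == null ? e.getValue() : inner.merge(mine, e.getValue()));
        }
        return result;
    }
}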

KV Entries

Each owner signs a map of {db-name → {key → KVEntry, ...}}, where each key in a KV store maps to a KV Entry, a positional vector:

[value, type, utime, expire]

| Index | Field | Type | Description |
|---|---|---|---|
| 0 | value | any | The stored value (structure depends on type) |
| 1 | type | integer | Type tag (see Data Types below) |
| 2 | utime | integer | Last modification timestamp (epoch millis) |
| 3 | expire | integer / nil | Expiry timestamp (nil = no expiry) |
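
As an illustration, a plain Value entry can be constructed directly as a positional vector using convex-core data types (the reference implementation wraps this in the KVEntry utility class; import paths are assumed from convex-core conventions):

import convex.core.data.ACell;
import convex.core.data.AVector;
import convex.core.data.Strings;
import convex.core.data.Vectors;
import convex.core.data.prim.CVMLong;

// [value, type, utime, expire] for a simple string value
AVector<ACell> entry = Vectors.of(
    Strings.create("Alice"),        // value
    CVMLong.create(0),              // type tag 0 = Value
    CVMLong.create(1700000000000L), // utime (epoch millis)
    null);                          // expire (nil = no expiry)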

Tombstones

A tombstone is an entry with nil value and nil type. The timestamp is preserved.

Tombstones are required for lattice-compatible deletes: since lattice values can only grow monotonically, a delete is represented as a tombstone that wins over older live entries during merge.

Implementations SHOULD support garbage collection of expired entries and old tombstones.
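
For example, deleting a key at time t = 1700000000000 produces the entry:

[nil, nil, 1700000000000, nil]

which wins over any live entry with an older utime during merge.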

Data Types

| Type Tag | Name | Value Structure | Merge Strategy |
|---|---|---|---|
| 0 | Value | Any value | LWW by timestamp |
| 1 | Hash | {field → [value, timestamp], ...} | Per-field LWW |
| 2 | Set | {member-hash → [member, addTime, removeTime], ...} | Max timestamps per member |
| 3 | Sorted Set | {member-hash → [member, score, addTime, removeTime], ...} | Max timestamps; score from latest add |
| 4 | List | Vector of values | LWW by timestamp |
| 5 | Counter | {replica-id → [positive, negative], ...} | PN-Counter (max per replica per column) |

Merge Semantics

KV entry merge follows these rules, evaluated in order:

  1. Equal entries — return own (identity)
  2. One side nil — return the other (with foreign value check)
  3. Same type, mergeable (hash, set, sorted set, counter) — structural merge with max timestamp
  4. Otherwise (value, list, or type conflict) — newer timestamp wins (LWW)
  5. Tombstone vs live — newer timestamp wins

These rules satisfy the lattice properties:

  • Commutative: merge(a, b) = merge(b, a)
  • Associative: merge(merge(a, b), c) = merge(a, merge(b, c))
  • Idempotent: merge(a, a) = a

Value/List Merge (LWW)

The entry with the greater timestamp wins. Equal timestamps favour the first operand for determinism.
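
A minimal sketch of this rule over plain Java arrays (the reference implementation operates on AVector<ACell> entries via KVEntryLattice):

// LWW merge of two entries [value, type, utime, expire].
static Object[] mergeLWW(Object[] own, Object[] other) {
    long ownTime = (Long) own[2];   // utime at index 2
    long otherTime = (Long) other[2];
    // Strictly newer timestamp wins; ties keep the first operand,
    // which makes the merge deterministic and idempotent.
    return otherTime > ownTime ? other : own;
}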

Hash Merge

Each field is independently merged by LWW on its per-field timestamp. Fields present in only one side are included. Field tombstones (nil value with timestamp) propagate deletes.
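
A sketch of per-field merge over plain Java maps, where each field record is [value, timestamp] (illustrative types only):

import java.util.HashMap;
import java.util.Map;

// Per-field LWW merge for Hash entries.
static Map<String, Object[]> mergeHash(Map<String, Object[]> own,
                                       Map<String, Object[]> other) {
    Map<String, Object[]> result = new HashMap<>(own);
    for (Map.Entry<String, Object[]> e : other.entrySet()) {
        Object[] mine = result.get(e.getKey());
        // Fields present on only one side are included as-is;
        // otherwise the field with the newer timestamp wins.
        if (mine == null || (Long) e.getValue()[1] > (Long) mine[1]) {
            result.put(e.getKey(), e.getValue());
        }
    }
    return result;
}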

Set Merge

Each member is tracked with add and remove timestamps. A member is present when addTime > removeTime. Merge takes the maximum of each timestamp independently, ensuring adds and removes from different replicas combine correctly (OR-Set semantics).
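
A sketch of the per-member rule, using plain arrays for the [member, addTime, removeTime] record:

// Merge one member record from each side by taking the max of each
// timestamp independently (OR-Set semantics).
static Object[] mergeMember(Object[] own, Object[] other) {
    long addTime    = Math.max((Long) own[1], (Long) other[1]);
    long removeTime = Math.max((Long) own[2], (Long) other[2]);
    return new Object[] { own[0], addTime, removeTime };
}

// A member is present when its add timestamp is newer than its
// remove timestamp.
static boolean isPresent(Object[] record) {
    return (Long) record[1] > (Long) record[2];
}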

Counter Merge (PN-Counter)

Each replica maintains independent positive and negative accumulators identified by a replica ID. Merge takes the maximum of each accumulator per replica. The counter value is sum(positive) - sum(negative) across all replicas.

Replica "node-0": [positive=3, negative=1]
Replica "node-1": [positive=5, negative=0]
Counter value = (3 + 5) - (1 + 0) = 7
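
The same logic as a sketch over plain Java maps keyed by replica ID (illustrative types only):

import java.util.HashMap;
import java.util.Map;

// PN-Counter merge: per replica, take the max of the positive and
// negative accumulators independently.
static Map<String, long[]> mergeCounter(Map<String, long[]> own,
                                        Map<String, long[]> other) {
    Map<String, long[]> result = new HashMap<>(own);
    for (Map.Entry<String, long[]> e : other.entrySet()) {
        long[] mine = result.get(e.getKey());
        if (mine == null) {
            result.put(e.getKey(), e.getValue());
        } else {
            result.put(e.getKey(), new long[] {
                    Math.max(mine[0], e.getValue()[0]),     // positive
                    Math.max(mine[1], e.getValue()[1]) });  // negative
        }
    }
    return result;
}

// Counter value = sum(positive) - sum(negative) across all replicas.
static long counterValue(Map<String, long[]> state) {
    long total = 0;
    for (long[] pn : state.values()) total += pn[0] - pn[1];
    return total;
}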

Sorted Set Merge

Combines set membership semantics with scores. Each member tracks add/remove timestamps and a score. The score from the entry with the latest add timestamp is used. Membership follows the same rule as sets.
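
A sketch of the per-member rule over the [member, score, addTime, removeTime] record (illustrative types only):

// Membership timestamps merge as for sets; the score comes from
// whichever side has the later add timestamp.
static Object[] mergeZMember(Object[] own, Object[] other) {
    long ownAdd = (Long) own[2];
    long otherAdd = (Long) other[2];
    Object score = otherAdd > ownAdd ? other[1] : own[1];
    return new Object[] {
            own[0],                                   // member
            score,                                    // score from latest add
            Math.max(ownAdd, otherAdd),               // addTime
            Math.max((Long) own[3], (Long) other[3])  // removeTime
    };
}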

TTL and Expiry

Entries MAY have an expiry timestamp at position 3.

  • nil means no expiry
  • An integer value is the absolute epoch millis at which the entry expires

Implementations SHOULD check expiry on read and return nil for expired entries. Implementations SHOULD provide a garbage collection operation to remove expired entries.
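
A sketch of the read-side check over a plain entry array [value, type, utime, expire]:

// An entry whose expire time has passed reads as nil (null here).
static Object readLive(Object[] entry, long nowMillis) {
    Object expire = entry[3];
    if (expire != null && (Long) expire <= nowMillis) {
        return null; // expired: treated as absent
    }
    return entry[0];
}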

Ownership and Authentication

Each owner's data is signed with an Ed25519 key pair and verified during lattice merge by the OwnerLattice (CAD038). The signed state per owner is:

Signed({db-name → {key → KVEntry, ...}, ...})

The signed value is a map of database names to KV store states, allowing each owner to maintain multiple databases under a single signed envelope.

Verification during merge provides:

  • Authentication — only authorised signers can produce accepted values for an owner
  • Integrity — any tampering invalidates the signature
  • Flexible ownership — owners may be public keys, Convex addresses, or DID identifiers
  • Multi-key support — address and DID owners may authorise multiple signing keys (e.g. organisational accounts)

Replication Model

The KV Database uses a merge-on-write replication model:

  1. Each node maintains its own signed replica (a map of database names to KV stores)
  2. The node publishes its replica into the lattice at :kv (the OwnerLattice level)
  3. Lattice Nodes (CAD036) automatically propagate signed replicas to peers
  4. On the receiving side, the OwnerLattice merge combines signed entries from all owners
  5. The application reads the merged owner map and absorbs remote data into the local KV store, extracting the specific database by name from each owner's signed map
┌────────────┐          ┌────────────┐          ┌────────────┐
│   Node A   │          │   Node B   │          │   Node C   │
│            │          │            │          │            │
│ KVDatabase │          │ KVDatabase │          │ KVDatabase │
│   key-a    │          │   key-b    │          │   key-c    │
│            │          │            │          │            │
│  export()  │          │  export()  │          │  export()  │
│     ↓      │          │     ↓      │          │     ↓      │
│   :kv/A    │◄────────►│   :kv/B    │◄────────►│   :kv/C    │
│            │ Lattice  │            │ Lattice  │            │
│            │  Merge   │            │  Merge   │            │
└────────────┘          └────────────┘          └────────────┘
       │                       │                       │
       └───────────────────────┼───────────────────────┘

All converge to the same owner map:
  { A → signed({db → storeA}),
    B → signed({db → storeB}),
    C → signed({db → storeC}) }

Selective Merge

Applications MAY filter which replicas to merge using a predicate on the owner identity, as sketched after the list below. This enables:

  • Trusting only known peers
  • Ignoring revoked or untrusted keys
  • Implementing access control lists
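
A minimal sketch of such a filter, assuming the application holds a set of trusted owner keys and filters the owner map before merging (filterOwners and trustedOwners are hypothetical application-level names, not part of the reference API):

import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// Keep only replicas from trusted owner keys before absorbing the
// merged owner map into the local KV store.
static <K, V> Map<K, V> filterOwners(Map<K, V> ownerMap, Set<K> trustedOwners) {
    return ownerMap.entrySet().stream()
            .filter(e -> trustedOwners.contains(e.getKey()))
            .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
}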

Authentication

Lattice merge MUST validate both owner authorisation and cryptographic signatures before accepting incoming values (CAD038). Entries where the signer is not authorised for the claimed owner, or where the signature is invalid, MUST be rejected.

Operations

Core KV

| Operation | Description |
|---|---|
| get(key) | Get value for key (nil if absent or expired) |
| set(key, value) | Set value |
| set(key, value, ttl) | Set with TTL in milliseconds |
| del(key) | Delete key (creates tombstone) |
| exists(key) | Check if key exists and is not expired |
| keys() | Return all live keys |
| type(key) | Return type name of key's value |
| expire(key, ttl) | Set expiry on existing key |
| ttl(key) | Get remaining TTL (-1 = no expiry, -2 = not found) |

Hash

| Operation | Description |
|---|---|
| hset(key, field, value) | Set hash field |
| hget(key, field) | Get hash field value |
| hdel(key, field) | Delete hash field |
| hexists(key, field) | Check if hash field exists |
| hgetall(key) | Get all fields and values |
| hlen(key) | Get number of fields |

Set

| Operation | Description |
|---|---|
| sadd(key, members...) | Add members to set |
| srem(key, members...) | Remove members from set |
| sismember(key, member) | Check membership |
| smembers(key) | Get all members |
| scard(key) | Get set cardinality |

Counter (PN-Counter)

| Operation | Description |
|---|---|
| incr(key) | Increment by 1 |
| incrby(key, amount) | Increment by amount |
| decr(key) | Decrement by 1 |
| decrby(key, amount) | Decrement by amount |

Counter operations require a replica ID to identify the calling node. Each replica maintains independent accumulators.

Sorted Set

| Operation | Description |
|---|---|
| zadd(key, score, member) | Add member with score |
| zrem(key, members...) | Remove members |
| zscore(key, member) | Get member's score |
| zrange(key, start, stop) | Get members by score range |
| zcard(key) | Get cardinality |

List

| Operation | Description |
|---|---|
| lpush(key, values...) | Prepend values |
| rpush(key, values...) | Append values |
| lpop(key) | Remove and return first element |
| rpop(key) | Remove and return last element |
| lrange(key, start, stop) | Get range of elements |
| llen(key) | Get list length |

Lists use LWW merge on the entire list. They are not CRDT-friendly for concurrent modification; applications requiring concurrent list operations SHOULD use sets or sorted sets instead.

Maintenance

| Operation | Description |
|---|---|
| gc() | Remove expired entries and old tombstones |

Reference Implementation

A reference implementation is provided in the Convex convex-core and convex-peer modules (Java).

KV entries use the AVector<ACell> type for positional vectors, CVMLong for integer fields, and Index<AString, ...> for sorted string-keyed maps. Owner keys are represented as AccountKey, Address, or AString instances in the ACell hierarchy. After deserialisation, owner keys may appear as raw ABlob instances; the implementation resolves these to AccountKey via OwnerLattice.resolveKey().

Classes

| Specification Concept | Java Class | Package |
|---|---|---|
| KV Store Lattice | KVStoreLattice | convex.lattice.kv |
| KV Entry utilities | KVEntry | convex.lattice.kv |
| KV Entry merge | KVEntryLattice | convex.lattice.kv |
| KV API facade | LatticeKV | convex.lattice.kv |
| Database wrapper | KVDatabase | convex.lattice.kv |
| Hash operations | KVHash | convex.lattice.kv |
| Set operations | KVSet | convex.lattice.kv |
| Counter operations | KVCounter | convex.lattice.kv |
| Sorted set operations | KVSortedSet | convex.lattice.kv |
| List operations | KVList | convex.lattice.kv |
| Index lattice (generic) | IndexLattice | convex.lattice.generic |

Example: Local KV Operations

// Create a KV database with signing key
AKeyPair keyPair = AKeyPair.generate();
KVDatabase db = KVDatabase.create("mydb", keyPair, "node-1");

// Value operations
db.kv().set("user:alice", Strings.create("Alice"));
db.kv().set("config:timeout", CVMLong.create(30000));

// Hash operations
db.kv().hset("user:1", "name", Strings.create("Alice"));
db.kv().hset("user:1", "email", Strings.create("alice@example.com"));

// Set operations
db.kv().sadd("tags", Strings.create("alpha"), Strings.create("beta"));
boolean isMember = db.kv().sismember("tags", Strings.create("alpha"));

// Counter (PN-Counter with replica ID)
db.kv().incr("page-views");
db.kv().incrby("page-views", 10);
long views = db.kv().incrby("page-views", 0); // read current value

// TTL
db.kv().set("session:abc", Strings.create("data"), 3600000); // 1 hour TTL
long remaining = db.kv().ttl("session:abc");

Example: Multi-Node Replication

// Create two nodes with different keys
AKeyPair keyA = AKeyPair.generate();
AKeyPair keyB = AKeyPair.generate();

KVDatabase dbA = KVDatabase.create("shared", keyA, "node-a");
KVDatabase dbB = KVDatabase.create("shared", keyB, "node-b");

// Each writes different data
dbA.kv().set("from-a", Strings.create("hello"));
dbA.kv().incr("counter");

dbB.kv().set("from-b", Strings.create("world"));
dbB.kv().incr("counter");

// Exchange signed replicas
dbA.mergeReplicas(dbB.exportReplica());
dbB.mergeReplicas(dbA.exportReplica());

// Both now see all data
dbA.kv().get("from-b"); // "world"
dbA.kv().incrby("counter", 0); // 2 (PN-counter merged)

Example: Network Replication via Lattice Nodes

// Create NodeServers with Lattice.ROOT
NodeServer<?> server1 = new NodeServer<>(Lattice.ROOT, store1, 19800);
NodeServer<?> server2 = new NodeServer<>(Lattice.ROOT, store2, 19801);
server1.launch();
server2.launch();

// Connect peers
server1.addPeer(ConvexRemote.connect(addr(19801)));
server2.addPeer(ConvexRemote.connect(addr(19800)));

// Create KV databases and write data
KVDatabase db1 = KVDatabase.create("shared", key1, "node-1");
db1.kv().set("key", Strings.create("value"));

// Publish signed replica to lattice at :kv (OwnerLattice level)
// exportReplica returns {ownerKey → signed({dbName → kvStore})}
AHashMap<ACell, ACell> replica = (AHashMap) db1.exportReplica();
server1.updateLocalPath(replica, Keywords.KV);

// Sync — LatticePropagator broadcasts automatically
server1.sync();

// Read merged owner map from lattice at :kv
AHashMap<?,?> ownerMap = (AHashMap) server1.getCursor().get(Keywords.KV);
db1.mergeReplicas(ownerMap);

See Also