# Cursor Resume & Backpressure
## Cursor Resume
### How It Works
Every message carries a `seq`, a monotonically increasing sequence number. On reconnection, pass the last received `seq` via the `resume_from` query parameter:

```
ws://localhost:8443/v1/ws?resume_from=12345
```
### Ring Buffer
| Property | Value |
|---|---|
| Capacity | 100,000 entries |
| Entry type | Pre-serialized JSON strings |
| Eviction | FIFO (oldest entries removed when full) |
| Memory | ~20-100 MB depending on average event size |
| Persistence | None — in-memory only, lost on server restart |
Important details:

- One entry = one broadcast message (an `Events` batch, a `TPS` update, a `Lifecycle` update, etc.)
- At ~5,000 events/second, the buffer holds ~20 seconds of history
- At ~1,000 events/second, the buffer holds ~100 seconds
- Entries are pre-serialized; the server replays the exact same JSON bytes, making replay zero-cost
- Replay entries are subscription-independent: if you reconnect with a different subscription, you still receive old entries that were in the buffer (they were serialized for the broadcast, not per-client)
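The buffer semantics above can be sketched as a fixed-capacity FIFO of pre-serialized strings keyed by `seq`. This is a minimal illustration, not the server's actual implementation; the class and method names are invented:

```javascript
// Sketch of the resume ring buffer: fixed capacity, FIFO eviction,
// entries stored as pre-serialized JSON strings in ascending seq order.
class ResumeBuffer {
  constructor(capacity) {
    this.capacity = capacity;
    this.entries = []; // { seq, json }
  }

  push(seq, json) {
    this.entries.push({ seq, json });
    if (this.entries.length > this.capacity) this.entries.shift(); // evict oldest
  }

  // Replay every entry with seq > resumeFrom. Returns null when the cursor
  // is stale (older than the oldest buffered seq) or the buffer is empty,
  // signalling that the caller should fall back to a snapshot.
  replayFrom(resumeFrom) {
    if (this.entries.length === 0 || resumeFrom < this.entries[0].seq) return null;
    return this.entries.filter((e) => e.seq > resumeFrom).map((e) => e.json);
  }
}
```

Because entries are stored as already-serialized strings, replay is just copying bytes back out, which is why the doc describes it as effectively zero-cost.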
### Resume vs Snapshot
| Condition | Behavior |
|---|---|
| `resume_from` specified and `seq` within buffer range | Resume: replay all messages with `seq > resume_from` |
| `resume_from` specified but `seq` is stale | Snapshot: current state after the first `Subscribe` |
| `resume_from` specified, buffer empty | Snapshot |
| `resume_from` not specified | Snapshot (fresh connect) |
The server does not send an explicit Resume frame. You can determine the mode by whether replay messages arrive before `Hello`.
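A small helper can infer the mode from the frame order. This is a sketch; the function name and return values are invented:

```javascript
// Infer resume vs snapshot mode: if any seq-bearing message arrives
// before Hello, the server is replaying buffered history (resume mode).
function makeModeDetector() {
  let sawReplay = false;
  let mode = null; // becomes "resume" or "snapshot" once Hello arrives
  return (msg) => {
    if (mode) return mode;
    if (msg.Hello) {
      mode = sawReplay ? "resume" : "snapshot";
    } else if (msg.seq > 0) {
      sawReplay = true; // a data frame before Hello means replay is in progress
    }
    return mode;
  };
}
```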
### Checking the Window
```shell
curl http://localhost:8443/v1/status
# {"oldest_seqno": 4321, "newest_seqno": 54321, ...}
```
If `your_seq >= oldest_seqno`, resume will succeed. Otherwise, snapshot.
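This check can be folded into the reconnect path. A sketch, assuming the `oldest_seqno` field from the status example above and a hypothetical `WS_URL` constant:

```javascript
// Decide whether a resume attempt can succeed given the buffer window
// reported by /v1/status.
function canResume(status, lastSeq) {
  return lastSeq >= status.oldest_seqno;
}

// Hypothetical usage with fetch (Node 18+ / browsers):
// const status = await (await fetch("http://localhost:8443/v1/status")).json();
// const url = canResume(status, lastSeq)
//   ? `${WS_URL}?resume_from=${lastSeq}`
//   : WS_URL; // stale cursor: take the snapshot path deliberately
```

Checking up front avoids the silent fallback described under "Edge Cases": you know before connecting whether you will get replay or a snapshot.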
### Deduplication
During resume, there may be brief overlap with the live stream. The client must deduplicate by `seq`:

```javascript
let lastSeq = 0;
ws.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  if (msg.seq <= lastSeq) return; // skip duplicate
  lastSeq = msg.seq;
  // process...
};
```
### Server Restart
- `seq` continues from the event ring position (it does not reset to 0)
- The resume buffer is empty (in-memory only)
- All clients will receive a snapshot on reconnection
- `seq` monotonicity is preserved across restarts
## Backpressure
### Architecture
Each WebSocket client has a bounded channel with a capacity of 4,096 messages. If the client cannot keep up, messages are dropped.
### Drop Policy
| Condition | Action |
|---|---|
| Channel has space | Message is queued |
| Channel is full | Message is dropped, drop counter +1 |
| Every 1,000 drops | Warning frame sent to client (best-effort) |
| 10,000 drops | Client is disconnected |
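The policy in the table can be sketched as a bounded queue with drop accounting. The capacity and thresholds come from the tables above; the `onWarn`/`onDisconnect` hooks and factory name are hypothetical stand-ins for the server's internals:

```javascript
// Per-client send queue: bounded capacity, best-effort warning every
// warnEvery drops, forced disconnect once dropLimit is reached.
function makeClientQueue({ capacity = 4096, warnEvery = 1000, dropLimit = 10000,
                           onWarn, onDisconnect } = {}) {
  const queue = [];
  let dropped = 0;
  return {
    offer(msg) {
      if (queue.length < capacity) {
        queue.push(msg);
        return true;
      }
      dropped += 1;
      if (dropped % warnEvery === 0 && onWarn) onWarn(dropped); // best-effort Warning frame
      if (dropped >= dropLimit && onDisconnect) onDisconnect(dropped);
      return false;
    },
    queue,
    get dropped() { return dropped; },
  };
}
```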
### Warning Frame
```json
{"seq": 0, "Warning": {"type": "backpressure", "dropped": 1000, "drop_limit": 10000}}
```
### Detecting Drops
Gaps in `seq` indicate dropped messages:

```javascript
let lastSeq = 0;
ws.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  if (lastSeq > 0 && msg.seq > lastSeq + 1) {
    console.warn(`Gap: ${lastSeq} → ${msg.seq}`);
  }
  lastSeq = msg.seq;
};
```
### Recommendations
- Process quickly: do not perform heavy operations in the message handler; offload work to a queue
- Watch for `Warning`: `type: "backpressure"` means you are falling behind
- Subscribe narrowly: request only the event types you need, and use filters
- Use `resume_from`: on disconnect, reconnect with the last received `seq`
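The first recommendation, offloading work out of the message handler, might look like the following sketch. The `makeWorkQueue` factory and the batch size are invented for illustration:

```javascript
// The WebSocket handler only enqueues; a separate drain step runs the slow
// work in batches, yielding back to the event loop between batches so the
// socket read path never stalls.
function makeWorkQueue(process, batchSize = 100) {
  const pending = [];
  let scheduled = false;

  function drain() {
    scheduled = false;
    const batch = pending.splice(0, batchSize);
    for (const msg of batch) process(msg); // slow business logic runs here
    if (pending.length > 0) schedule();    // keep draining until empty
  }

  function schedule() {
    if (!scheduled) {
      scheduled = true;
      setTimeout(drain, 0); // yield to I/O between batches
    }
  }

  return {
    enqueue(msg) { pending.push(msg); schedule(); },
    drainNow: drain, // synchronous flush, handy for shutdown or tests
  };
}

// Hypothetical usage:
// const q = makeWorkQueue(handleMessage);
// ws.onmessage = (event) => q.enqueue(JSON.parse(event.data));
```

Keeping `onmessage` down to a parse-and-enqueue makes it far less likely that the server-side channel fills and starts dropping.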
## Full Client Pattern: Connect → Subscribe → Resume → Handle Errors
```javascript
const WS_URL = "ws://localhost:8443/v1/ws";
let lastSeq = 0;
let ws;

function connect() {
  const url = lastSeq > 0
    ? `${WS_URL}?resume_from=${lastSeq}`
    : WS_URL;
  ws = new WebSocket(url);

  ws.onopen = () => {
    // Subscribe immediately — server waits for this before sending Hello
    ws.send(JSON.stringify({ subscribe: ["BlockStart", "TPS", "Lifecycle"] }));

    // Send periodic pings to stay alive (server checks liveness, doesn't send Ping)
    const pingInterval = setInterval(() => {
      if (ws.readyState === WebSocket.OPEN) {
        ws.send(""); // empty message counts as activity
      } else {
        clearInterval(pingInterval);
      }
    }, 25000); // every 25s (timeout is 60s)
  };

  ws.onmessage = (event) => {
    const msg = JSON.parse(event.data);

    // Skip control frames for cursor tracking
    if (msg.seq > 0) {
      // Deduplicate (possible during resume overlap)
      if (msg.seq <= lastSeq) return;
      lastSeq = msg.seq;
    }

    // Detect backpressure
    if (msg.Warning) {
      console.warn(`Backpressure: ${msg.Warning.dropped}/${msg.Warning.drop_limit} drops`);
      return;
    }

    // Handle Hello
    if (msg.Hello) {
      console.log(`Connected: v${msg.Hello.server_version}, chain ${msg.Hello.chain_id}`);
      return;
    }

    // Handle errors (subscription not changed)
    if (msg.Error) {
      console.error(`Subscribe error: ${msg.Error.type} — ${msg.Error.message}`);
      return;
    }

    // Process events
    if (msg.Events) {
      for (const e of msg.Events) {
        processEvent(e); // your business logic — keep this fast!
      }
    }
    if (msg.TPS !== undefined) processTPS(msg.TPS);
    if (msg.Lifecycle) processLifecycle(msg.Lifecycle);
  };

  ws.onclose = () => {
    console.log("Disconnected, reconnecting in 3s...");
    setTimeout(connect, 3000); // reconnect with lastSeq for resume
  };

  ws.onerror = (err) => {
    console.error("WebSocket error:", err);
    ws.close();
  };
}

connect();
```
## Edge Cases
### Resume with stale cursor
If your `resume_from` is older than the ring buffer's `oldest_seqno`, the server silently falls back to snapshot mode. You will receive `Hello` plus current state, but no indication of missed events. To detect this, check `/v1/status` before reconnecting:

```shell
curl http://localhost:8443/v1/status | jq '.oldest_seqno'
```
### Server restart
`seq` continues from the event ring position (it does not reset), but the resume buffer is empty. All reconnecting clients receive snapshot mode. `seq` monotonicity is preserved; only the replay buffer is lost.
### Subscription change on reconnect
The ring buffer stores pre-serialized messages independently of subscriptions. If you reconnect with `?resume_from=` but send a different subscription, you will still receive replayed messages from the old subscription's perspective (they were already serialized). After the replay completes, only events matching your new subscription will arrive.