Client-side DO mental model

There are two directional flows you should keep straight:

  • Outbound: local collection mutation -> RPC call -> DO writes state.
  • Inbound: DO emits event -> subscription handler -> local collection updates.

The Durable Object is the source of truth. Local collections are optimistic and sync through events.

In both the inventory and currency collections, the pattern is the same:

  1. A collection mutation happens (insert, update, or delete).
  2. createMutationHandlers resolves the owning DO client by inventoryId.
  3. The mutation metadata includes an action describing the RPC call to make.
  4. The RPC method is called on the DO client.

This keeps UI logic simple: state changes are declared once, and the RPC call is derived from metadata.
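The four steps above can be sketched in miniature. This is a hedged illustration, not the real createMutationHandlers API: the action union, the client registry, and handleMutation are all hypothetical names standing in for the actual plumbing.

```typescript
// Step 2: a hypothetical registry resolving the owning DO client by inventoryId.
// Steps 3-4: the mutation metadata carries an action, and the RPC call is
// derived from it rather than hand-written per mutation.
type InventoryAction =
  | { type: 'updateEntry'; payload: { entryId: string; name: string } }
  | { type: 'transferEntries'; payload: { entryIds: string[] } };

interface DoClient {
  updateEntry(p: { entryId: string; name: string }): Promise<void>;
  transferEntries(p: { entryIds: string[] }): Promise<void>;
}

const clients = new Map<string, DoClient>();
const calls: string[] = [];
clients.set('inv-1', {
  async updateEntry(p) { calls.push(`updateEntry:${p.entryId}`); },
  async transferEntries(p) { calls.push(`transferEntries:${p.entryIds.join(',')}`); },
});

async function handleMutation(inventoryId: string, metadata: { action?: InventoryAction }) {
  const client = clients.get(inventoryId);  // resolve the owning DO client
  const action = metadata.action;
  if (!client || !action) return;
  switch (action.type) {                    // derive the RPC call from metadata
    case 'updateEntry':
      await client.updateEntry(action.payload);
      break;
    case 'transferEntries':
      await client.transferEntries(action.payload);
      break;
  }
}

void handleMutation('inv-1', { action: { type: 'updateEntry', payload: { entryId: 'e1', name: 'Rope' } } });
```

The switch is the only place that knows which RPC method each action maps to; collection code just declares the action.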

The DO uses createRpcServer to emit events when RPC methods finish. In the client:

  1. createSubscriptionHandlers binds RPC events to collection updates.
  2. Each RPC event handler inserts, updates, or deletes local records.
  3. Transfers update multiple inventories by mutating multiple records.

Examples:

  • updateCurrency merges new currency totals into a single record keyed by inventoryId.
  • transferCurrency reduces the source record and increases the target record.
  • transferEntry updates an entry’s inventoryId and sortOrder in place.

The client never needs to poll because the DO broadcasts events for every write.
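The currency examples above can be sketched with an in-memory stand-in for the local collection. The record shape and handler names here are assumptions for illustration; the real createSubscriptionHandlers wiring differs.

```typescript
// Local collection stand-in: records keyed by inventoryId.
interface CurrencyRecord { inventoryId: string; gold: number }

const currency = new Map<string, CurrencyRecord>();
currency.set('inv-1', { inventoryId: 'inv-1', gold: 100 });
currency.set('inv-2', { inventoryId: 'inv-2', gold: 10 });

// updateCurrency: merge new totals into the single record for that inventory.
function onUpdateCurrency(event: { inventoryId: string; gold: number }) {
  const existing = currency.get(event.inventoryId);
  if (!existing) return;
  currency.set(event.inventoryId, { ...existing, gold: event.gold });
}

// transferCurrency: reduce the source record and increase the target record.
function onTransferCurrency(event: { fromInventoryId: string; toInventoryId: string; amount: number }) {
  const from = currency.get(event.fromInventoryId);
  const to = currency.get(event.toInventoryId);
  if (!from || !to) return;
  currency.set(from.inventoryId, { ...from, gold: from.gold - event.amount });
  currency.set(to.inventoryId, { ...to, gold: to.gold + event.amount });
}

onTransferCurrency({ fromInventoryId: 'inv-1', toInventoryId: 'inv-2', amount: 25 });
// inv-1 is now 75, inv-2 is now 35
```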

One important detail: events are only emitted when a mutation flows through the RPC server (WebSocket request or callAndEmit). Direct storage writes that do not pass through the RPC server will not broadcast events to subscribers.
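A toy model makes the distinction concrete. RpcServerSketch and its callAndEmit below are illustrative stand-ins, not the real createRpcServer API:

```typescript
// Only writes routed through the RPC server's dispatch notify subscribers;
// a direct storage write is silent.
type Subscriber = (event: { method: string; result: unknown }) => void;

class RpcServerSketch {
  private subscribers: Subscriber[] = [];
  constructor(private storage: Map<string, unknown>) {}
  subscribe(fn: Subscriber) { this.subscribers.push(fn); }

  // Mutation routed through the RPC server: write, then broadcast.
  callAndEmit(method: string, key: string, value: unknown) {
    this.storage.set(key, value);        // the write
    for (const fn of this.subscribers) {
      fn({ method, result: value });     // the broadcast
    }
  }
}

const storage = new Map<string, unknown>();
const server = new RpcServerSketch(storage);
const received: string[] = [];
server.subscribe((e) => received.push(e.method));

server.callAndEmit('updateEntry', 'entry:e1', { entryId: 'e1' }); // broadcasts
storage.set('entry:e2', { entryId: 'e2' });                       // silent: no event
```

Both writes land in storage, but only the first one reaches subscribers, which is exactly why direct storage writes leave other clients stale.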

The inventory collection keeps both sides minimal because the outbound RPC call and the inbound event share the same method name.

Outbound (mutation -> RPC):

update: async ({ client, metadata }) => {
  const action = (metadata as { action?: InventoryAction })?.action;
  if (action?.type === 'updateEntry') {
    await client.client.updateEntry(action.payload);
  }
}

Inbound (event -> collection update):

updateEntry: {
  update: async ({ payload, collection }) => {
    const entry = payload.result;
    if (!entry) {
      return;
    }
    collection.update(entry.entryId, (draft) => ({
      ...draft,
      ...entry,
    }));
  },
}

Because the outbound call and inbound event share the same method name, you can reason about every mutation as a round trip with a single source of truth (the DO).

The transfer flow is similarly straightforward: the outbound call passes the action through, and the inbound event updates the local records that moved between inventories.

Outbound (mutation -> RPC):

update: async ({ client, metadata }) => {
  const action = (metadata as { action?: InventoryAction })?.action;
  if (action?.type === 'transferEntries') {
    await client.client.transferEntries(action.payload);
  }
}

Inbound (event -> collection update):

transferEntries: {
  update: async ({ payload, collection }) => {
    const result = payload.result ?? payload.args[0];
    if (!result) {
      return;
    }
    result.results.forEach((entryResult) => {
      const entry = collection.get(entryResult.entryId);
      if (!entry) {
        return;
      }
      collection.update(entryResult.entryId, (draft) => {
        draft.inventoryId = entryResult.toInventoryId;
        draft.sortOrder = entryResult.sortOrder;
        draft.updatedAt = new Date().toISOString();
      });
    });
  },
}
Why this model holds together:

  • TanStack DB keeps a local, reactive cache so UI updates are immediate.
  • Durable Objects remain authoritative, and WebSocket events reconcile every client.
  • Shared RPC method names keep outbound actions and inbound events aligned.
  • Reconnect + resubscribe behavior means your local cache recovers without manual refetching.
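The reconnect + resubscribe point can be sketched as follows. The class and message shapes below are assumptions about how a client-side transport might replay subscriptions; they are not the real DO client API:

```typescript
// The client remembers which RPC methods it subscribed to and replays
// those subscriptions whenever the socket reopens, so the local cache
// catches up without manual refetching.
class ResubscribingClient {
  private subscriptions = new Set<string>();
  public sent: string[] = [];

  subscribe(method: string) {
    this.subscriptions.add(method);
    this.sent.push(`subscribe:${method}`);
  }

  // Called by the transport layer whenever the WebSocket reconnects.
  onReconnect() {
    for (const method of this.subscriptions) {
      this.sent.push(`subscribe:${method}`);
    }
  }
}

const wsClient = new ResubscribingClient();
wsClient.subscribe('updateEntry');
wsClient.subscribe('transferEntries');
wsClient.onReconnect(); // replays both subscriptions
```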

Both collection files treat metadata.source === 'bootstrap' specially. This prevents a loop:

  1. Client seeds local state from a server snapshot.
  2. That insert/update would normally call the DO again.
  3. The bootstrap metadata short-circuits outbound RPC calls.

Treat it as a one-way import flag so local hydration does not re-trigger writes.
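A minimal sketch of the short-circuit, with an illustrative handler shape (the real mutation-handler signature differs):

```typescript
// The outbound handler checks metadata.source before deriving an RPC call,
// so hydration inserts never hit the DO.
const rpcCalls: string[] = [];

async function onInsert(metadata: { source?: string; action?: { type: string } }) {
  if (metadata.source === 'bootstrap') {
    return; // one-way import: seed local state without calling the DO
  }
  if (metadata.action) {
    rpcCalls.push(metadata.action.type); // stand-in for the real RPC call
  }
}

void onInsert({ source: 'bootstrap', action: { type: 'updateEntry' } }); // skipped
void onInsert({ action: { type: 'updateEntry' } });                      // sent
```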

Mermaid sequence diagram:

sequenceDiagram
  participant UI
  participant Collection as InventoryCollection
  participant Handlers as Mutation handlers
  participant Client as DO client
  participant OtherClient as Other client
  participant InventoryDO as Inventory DO

  UI->>Collection: collection.update(...)
  Collection->>Handlers: createMutationHandlers
  Handlers->>Client: updateEntry(payload)
  Client->>InventoryDO: WS message: updateEntry(...)
  InventoryDO-->>Client: WS message: response
  InventoryDO-->>OtherClient: WS message: event (updateEntry)
  Client-->>Collection: update local record
  OtherClient-->>OtherClient: update local record
  Collection-->>UI: re-render

The two flows share the same method names (updateEntry, transferCurrency, etc.), which keeps reasoning simple: outbound actions mirror inbound events.

Advanced: two-phase transfers with a coordinator DO

Transfers cross two inventories, so a single DO cannot update both sides atomically. The pattern here is:

  • A coordinator DO (Hoarder) owns the transfer record and step progression.
  • Inventory DOs perform the actual state changes (add/remove entries).
  • Inventory DOs emit RPC events after their local mutation; the coordinator does not emit events.

Mermaid sequence diagram:

sequenceDiagram
  participant UI
  participant Collection as InventoryCollection
  participant Client as DO client
  participant OtherClient as Other client
  participant SourceInventoryDO as Source Inventory DO
  participant HoarderDO as Hoarder DO
  participant TargetInventoryDO as Target Inventory DO

  UI->>Collection: collection.update(...)
  Collection->>Client: transferEntries(payload)
  Client->>SourceInventoryDO: WS message: transferEntries(...)
  SourceInventoryDO->>HoarderDO: transferEntry(transferId,...)
  HoarderDO->>TargetInventoryDO: addEntry(entryPayload)
  HoarderDO->>SourceInventoryDO: removeEntry(entryId)
  SourceInventoryDO-->>Client: WS message: event (transferEntries result)
  SourceInventoryDO-->>OtherClient: WS message: event (transferEntries result)
  Client-->>Collection: update local records
  OtherClient-->>OtherClient: update local records
  Collection-->>UI: re-render

Inventory DO delegates to Hoarder and returns a payload that becomes the event data:

const result = await hoarderStub.transferEntry({
  transferId,
  campaignId,
  fromCharacterId,
  toCharacterId,
  fromInventoryId,
  entryId,
  targetInventoryId: toInventoryId,
});
if (result.status !== 'completed') {
  throw new Error(result.reason ?? 'Transfer failed');
}
return {
  transferId,
  fromInventoryId,
  toInventoryId,
  entryId,
  sortOrder: entry.sortOrder,
};

The Hoarder DO keeps a durable transfer record with explicit phases, so retries are safe:

// Phases are shown linearly for brevity; record.status is assumed to be
// refreshed between steps (or the method re-runs on retry).
if (record.status === 'pending' || record.status === 'failed') {
  await targetStub.addEntry({ entryId: entryPayload.entryId, ... });
  this.updateTransfer(transferId, { status: 'target_added' });
}
if (record.status === 'target_added') {
  await sourceStub.removeEntry({ entryId: entryPayload.entryId });
  this.updateTransfer(transferId, { status: 'source_removed' });
}
this.updateTransfer(transferId, { status: 'completed' });

The client only sees a single transferEntries call and a matching event from the Inventory DO, but the coordinator ensures the multi-step change is restart-safe and idempotent without broadcasting.

Takeaways:

  • Use DOs as the source of truth; collections are a cache that stays in sync through events.
  • Keep outbound RPC calls derived from mutation metadata to avoid duplicated logic.
  • Make inbound subscription handlers idempotent so repeated events are safe.
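The idempotency point can be demonstrated with a small sketch (the record shape is assumed): applying the same event twice leaves the collection in the same state, so duplicated or replayed events after a reconnect are harmless.

```typescript
interface Entry { entryId: string; inventoryId: string; sortOrder: number }

const entries = new Map<string, Entry>();
entries.set('e1', { entryId: 'e1', inventoryId: 'inv-1', sortOrder: 0 });

function onTransferEntry(event: { entryId: string; toInventoryId: string; sortOrder: number }) {
  const entry = entries.get(event.entryId);
  if (!entry) return; // unknown record: ignore rather than fail
  // Set absolute values (not increments), so reapplying is a no-op.
  entries.set(event.entryId, {
    ...entry,
    inventoryId: event.toInventoryId,
    sortOrder: event.sortOrder,
  });
}

const transferEvent = { entryId: 'e1', toInventoryId: 'inv-2', sortOrder: 3 };
onTransferEntry(transferEvent);
onTransferEntry(transferEvent); // duplicate delivery: same final state
```

Writing handlers in terms of absolute target values rather than deltas is what makes the duplicate safe; an increment-based handler would double-apply.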