Introduction
I have spent the last eight years maintaining a real-time collaboration engine. Not for a text editor. Not for a 2D canvas. For a structured project workspace where the same JSONB tree renders as a Kanban board, a Gantt chart, a Mind Map, and four other views — simultaneously, for every connected user.
The question I get asked most at engineering meetups is always the same: "OT or CRDT?"
The answer in 2026 is the same answer it was in 2018: it depends on your architecture. But the landscape has shifted. CRDTs have matured dramatically. Eg-walker bridged OT and CRDT performance. And a new variable has entered the equation that nobody anticipated — AI agents editing documents at 1,000+ words per minute alongside human collaborators typing at 40 WPM.
TL;DR: OT and CRDT are both viable for multiplayer apps in 2026. OT wins for centralized, server-authoritative architectures with structured data and deterministic conflict resolution. CRDTs win for offline-first, peer-to-peer, and local-first apps. The new wildcard is AI agent velocity — agents generate operations 25-100x faster than humans, stressing both algorithms in ways neither was designed for. Try Taskade free →
This post compares OT and CRDT across eight dimensions, explains the AI velocity mismatch problem, and gives you a decision framework for choosing the right algorithm for your multiplayer app.

What Is Operational Transform (OT)?
Operational Transform is a concurrency control algorithm introduced by Ellis and Gibbs in their 1989 paper on groupware concurrency control (the GROVE editor). The core idea is simple: when two users make concurrent edits, the system transforms one operation against the other so both clients converge to the same state. In the server-based variant this post focuses on, a central server does the transforming.
Here is the classic example. Alice inserts "X" at position 3. Bob deletes the character at position 1. If we apply both operations naively, the document corrupts. OT transforms Bob's delete so Alice's insert position adjusts correctly.
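The position adjustment in that example can be sketched in a few lines of TypeScript. This is an illustrative toy (types and names invented for this post), not any particular library's API:

```typescript
type Insert = { kind: "insert"; pos: number; text: string };
type Delete = { kind: "delete"; pos: number }; // deletes one character

// Toy transform: adjust an insert's position to account for a
// concurrent single-character delete that the server ordered first.
function transformInsertAgainstDelete(ins: Insert, del: Delete): Insert {
  // The deletion removed a character before the insert position,
  // so everything after it shifted left by one.
  if (del.pos < ins.pos) return { ...ins, pos: ins.pos - 1 };
  // Deletion at or after the insert position: no adjustment needed.
  return ins;
}
```

With Bob's delete at position 1 applied first, Alice's insert at position 3 becomes an insert at position 2, and both documents converge.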
The key architectural requirement: OT needs a server. The server maintains a linear revision history and transforms incoming operations against any that arrived first. This is why Google Docs and Taskade use OT: they already have centralized servers. (Figma's multiplayer is also server-authoritative, though by its own account it uses a simpler per-property scheme rather than full OT.)
OT satisfies two convergence properties:
- CP1 (Convergence Property 1): If two clients start from the same state and apply the same set of operations (possibly in different orders), they converge to the same final state.
- TP1 (Transformation Property 1): For concurrent operations A and B, applying A then transform(B, A) yields the same state as applying B then transform(A, B). Pairwise transforms preserve each user's intention.
For centralized architectures, CP1 and TP1 are sufficient. You do not need CP2 or TP2 (which handle three-way and n-way concurrent transforms). This is a massive simplification that CRDT advocates rarely mention.
The V=AXY Client State Model
Most production OT systems use a three-part client state model inspired by Etherpad's easysync:
- A (Acknowledged): The last server-confirmed state. Both client and server agree on this.
- X (Submitted): Changes sent to the server but not yet acknowledged. Like a text message sent but awaiting the delivery receipt.
- Y (Local): Changes applied locally but not yet submitted. The user sees these immediately — zero perceived latency.
When the server acknowledges X, it merges into A. If concurrent operations arrived while X was in flight, Y gets transformed against them. The user never waits.
This model is why real-time collaboration feels instant even on high-latency connections. Your keystrokes apply immediately to Y. The server catches up asynchronously.
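A minimal shape for that state machine might look like the following sketch. The names are invented for illustration; a real engine also transforms X and Y against concurrent remote operations before merging:

```typescript
// Illustrative V=AXY client state for a text document.
type Op = { pos: number; text: string }; // simplified: inserts only

interface ClientState {
  acknowledged: string; // A: last server-confirmed state
  submitted: Op[];      // X: sent, awaiting server ack
  local: Op[];          // Y: applied locally, not yet sent
}

function applyOps(base: string, ops: Op[]): string {
  return ops.reduce(
    (doc, op) => doc.slice(0, op.pos) + op.text + doc.slice(op.pos),
    base
  );
}

// What the user sees: V = A + X + Y, rendered immediately.
function view(state: ClientState): string {
  return applyOps(applyOps(state.acknowledged, state.submitted), state.local);
}

// On server ack, X folds into A and the next batch of Y is submitted.
function onAck(state: ClientState): ClientState {
  return {
    acknowledged: applyOps(state.acknowledged, state.submitted),
    submitted: state.local,
    local: [],
  };
}
```

Note that `view` never waits on the network: the rendered document always includes the unacknowledged X and Y layers.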
What Are CRDTs?
Conflict-free Replicated Data Types are data structures that guarantee convergence without a central coordinator. Instead of transforming operations on a server, CRDTs encode merge rules into the data structure itself. Every replica can accept edits independently, and when replicas sync, they merge deterministically.
The two main families:
- State-based CRDTs (CvRDTs): Replicas exchange full state and merge via a join-semilattice. Simple but bandwidth-heavy.
- Operation-based CRDTs (CmRDTs): Replicas exchange operations. More efficient but requires reliable causal delivery.
For text editing, the most common CRDT approaches are:
- RGA (Replicated Growable Array): Used by Automerge. Each character gets a unique ID (site ID + logical timestamp). Deletions create tombstones rather than removing elements.
- YATA (Yet Another Transformation Approach): Used by Yjs. Optimized for sequential inserts with a linked list structure.
- Fugue: A 2023 design from Weidner et al. that achieves optimal interleaving behavior.
The defining advantage of CRDTs: no server required. Two peers can edit offline for hours, sync when reconnected, and converge to the same state without any central authority. This is transformative for local-first software, peer-to-peer apps, and edge computing.
The defining cost: metadata overhead. Every character or element carries a unique ID, a causal ordering structure, and tombstones for deletions. Per TinyMCE's research, this adds 16-32 bytes per character for text CRDTs. For a 50,000-word document (roughly 300,000 characters), that is 5-10 MB of metadata on top of the content itself.
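To make the overhead concrete, here is a rough sketch of what a single character costs in an RGA-style text CRDT. The field sizes are illustrative estimates, not measurements of any specific library:

```typescript
// Illustrative RGA-style character entry. Every character carries
// identity and ordering metadata in addition to its content.
interface CharEntry {
  siteId: number;             // which replica created it (~4 bytes)
  counter: number;            // logical timestamp at that replica (~4-8 bytes)
  parentId: [number, number]; // ID of the preceding entry (~8-16 bytes)
  deleted: boolean;           // tombstone flag: deletions never free the entry
  char: string;               // the actual content (1+ byte)
}

// Rough metadata estimate for a document, assuming ~24 bytes of
// overhead per character (mid-range of the 16-32 byte figure).
function metadataBytes(chars: number, perChar = 24): number {
  return chars * perChar;
}
```

At 16 bytes per character, the 2 million characters in the 10,000-node workspace example below work out to 32 MB of metadata before the content itself.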
OT vs CRDT: Head-to-Head Comparison
Here is how OT and CRDT compare across the eight dimensions that matter most for production multiplayer software:
| Dimension | OT | CRDT |
|---|---|---|
| Architecture | Centralized, server-authoritative | Decentralized, peer-to-peer capable |
| Consistency model | Strong (server-ordered) | Eventual (merge semantics) |
| Conflict resolution | Deterministic server transform | CRDT merge rules (last-writer-wins, sets, counters) |
| Memory overhead | Revision number + current state | Tombstones + unique IDs + vector clocks per element |
| Offline support | Limited (buffer and replay) | Native (designed for disconnected editing) |
| Implementation complexity | Transform functions per operation pair (O(n^2) pairs) | Data structure design + garbage collection |
| Latency | One round-trip to server per batch | Zero (local-first apply, sync async) |
| Best fit | SaaS with backend, structured data, typed fields | Local-first apps, P2P, offline-heavy workflows |
Neither algorithm is universally better. They solve different problems. OT assumes you have a server and optimizes for that constraint. CRDTs assume you might not have a server and pay the metadata cost for that freedom.
Memory Overhead: The Numbers
This is the dimension most teams underestimate. Let me put concrete numbers on it.
For a Taskade project with 10,000 nodes (a mid-size workspace), each node containing ~200 characters of text plus metadata:
- OT storage: Current state (~2 MB for nodes + text) + 1 revision counter (8 bytes). Total: ~2 MB.
- CRDT storage: Current state (~2 MB) + 16-32 bytes per character of tombstone/ID overhead (32-64 MB for 2M characters) + vector clocks. Total: ~34-66 MB.
That is a 17-33x difference in memory footprint. For a single document. Multiply by thousands of active documents in a workspace, and the operational cost diverges dramatically.
CRDTs can garbage collect tombstones when all replicas have seen the deletion, but this requires coordination — which partially negates the "no coordination needed" benefit.
The Architecture Decision: Centralized vs Decentralized
The OT-vs-CRDT debate is really an architecture debate. The algorithm follows from your infrastructure decision.
If you have a server (and most SaaS products do), OT gives you deterministic conflict resolution with minimal overhead. You satisfy CP1/TP1 and you are done. No tombstones. No vector clocks. No garbage collection coordination.
If you do not have a server — or your users need to work offline for extended periods, or you are building peer-to-peer software — CRDTs are the natural choice. The metadata cost is the price of decentralization, and it is worth paying when your users genuinely need it.
The mistake I see most often: teams choosing CRDTs for a centralized SaaS app because CRDTs feel more "modern." CRDTs solve a problem you do not have (decentralized consensus), and you pay for it in memory, complexity, and garbage collection logistics.
Why Centralized OT Is Simpler Than You Think
The academic OT literature is intimidating. Papers discuss CP2, TP2, n-way transforms, and the Jupiter protocol. Here is what you actually need for a production centralized system:
- CP1: All clients converge when they apply the same operations. The server ensures this by maintaining a single operation order.
- TP1: Pairwise transform correctness. If operations A and B are concurrent, applying A then transform(B, A) yields the same final state as applying B then transform(A, B).
That is it. CP2 (three-way convergence) is only needed for decentralized systems where operations can arrive in arbitrary causal order. If you have a server, you have a total order. CP2 is irrelevant.
This is the simplification that made our real-time collaboration engine tractable. Eight operation types, each with a pairwise transform against every other operation type: 8 x 8 = 64 transform pairs. Manageable. Testable. Debuggable at 2 AM when something breaks.
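The dispatch structure for those 64 pairs can be as plain as a map keyed by the two operation type names. A hypothetical sketch, not Taskade's actual code (payloads elided; the type names mirror the eight operations described in the next section):

```typescript
type OpType =
  | "ins-node" | "del-node"
  | "set-node-completed" | "set-node-collapsed" | "set-node-format"
  | "ins-node-children" | "del-node-children"
  | "apply-node-text";

interface Op { type: OpType; nodeId: string; payload?: unknown }

// One hand-written transform per ordered pair: 8 x 8 = 64 entries.
// Returning null means "drop the operation" (e.g. editing a deleted node).
type Transform = (op: Op, against: Op) => Op | null;
const transforms = new Map<string, Transform>();

function register(a: OpType, b: OpType, fn: Transform): void {
  transforms.set(`${a}|${b}`, fn);
}

function transform(op: Op, against: Op): Op | null {
  const fn = transforms.get(`${op.type}|${against.type}`);
  if (!fn) throw new Error(`missing transform: ${op.type} vs ${against.type}`);
  return fn(op, against);
}

// Example entry: a text edit against a delete of the same node is dropped.
register("apply-node-text", "del-node", (op, against) =>
  op.nodeId === against.nodeId ? null : op
);
```

A missing map entry throws, which turns a forgotten transform pair into a loud failure instead of silent divergence.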
OT for Structured Data: The Hard Problem Nobody Talks About
Most OT and CRDT blog posts focus on flat text. Insert character. Delete character. That is the solved problem.
The unsolved problem — the one I have been working on for eight years — is OT for tree-structured data with typed fields that renders across multiple views.
At Taskade, every document is a JSONB tree of nodes. Each node contains rich text (Quill Delta), metadata, custom fields (dates, statuses, priorities, assignees), and child references. The same tree renders across 7 project views — List, Board, Calendar, Table, Mind Map, Gantt, and Org Chart.
When a user drags a card from "In Progress" to "Done" in Board view, that single action must:
- Update the status field on the node (set-node-format)
- Potentially reorder children (del-node-children + ins-node-children)
- Propagate to the Gantt view (task completion affects timeline)
- Propagate to the Calendar view (if the done date changes)
- Propagate to the Mind Map view (visual state updates)
- Broadcast to every connected client across all views
And if another user simultaneously drags the same card to a different column? The server must transform these concurrent operations deterministically. Last-writer-wins on the status field. The second operation gets transformed against the first.
This is why OT wins for structured data. The server can enforce semantic rules: "status changes are last-writer-wins, but reparenting requires parent-child consistency checks." CRDTs encode merge rules into the data structure, but those rules must be generic — a CRDT does not know that moving a Kanban card should also update the Gantt timeline.
The Eight Operation Types
Every interaction across all seven views maps to a vocabulary of eight operations:
| Operation | What It Does | Example Interaction |
|---|---|---|
| ins-node | Insert a new node into the tree | Create a new task |
| del-node | Remove a node from the tree | Delete a task |
| set-node-completed | Toggle completion state | Check off a to-do item |
| set-node-collapsed | Toggle collapse/expand | Collapse a section in List view |
| set-node-format | Change metadata or custom fields | Drag a Kanban card, change a date, set priority |
| ins-node-children | Add children to a node | Create subtasks |
| del-node-children | Remove children from a node | Remove subtasks |
| apply-node-text | Apply Quill Delta to rich text | Type content into a task |
Eight operations. Every single user interaction across seven views decomposes into one or more of these eight primitives. A Kanban drag is del-node-children + ins-node-children + set-node-format. A Gantt bar drag is set-node-format on the date fields. Keeping the operation set small means the transform matrix is tractable: 64 pairwise transforms, each hand-written and tested.
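As a sketch, that Board-view drag might decompose like this (shapes invented for illustration, not Taskade's wire format):

```typescript
// Hypothetical decomposition of "drag a task from one column to another"
// into three of the eight primitives. In this sketch the destination
// column's title doubles as the status value.
function kanbanDragOps(taskId: string, fromCol: string, toCol: string, index: number) {
  return [
    { type: "del-node-children", nodeId: fromCol, children: [taskId] },
    { type: "ins-node-children", nodeId: toCol, children: [taskId], index },
    { type: "set-node-format", nodeId: taskId, field: "status", value: toCol },
  ];
}
```

One user gesture, three primitive operations, and each of the three transforms independently against concurrent edits.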
Conflict Resolution: Three Real Scenarios
Theory is useful. Production scenarios are more useful. Here are three conflicts we handle daily.
Scenario 1: Simultaneous Kanban Drag
Alice drags a task to the "Done" column. Bob drags the same task to "In Progress." Both operations hit the server within milliseconds.
- Alice's set-node-format(status="Done") arrives first.
- Bob's set-node-format(status="In Progress") arrives second.
- The server transforms Bob's operation against Alice's. Same field, same node: last-writer-wins in server order, so Bob's change stands.
- Alice sees her card snap to "In Progress." Jarring, but deterministic. Both clients converge.
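The determinism in Scenario 1 comes entirely from the server's total order. As a sketch (set operations over one node's field map; invented for illustration, not Taskade's actual code), every client that replays the same server-ordered list converges on the same winner:

```typescript
interface SetNodeFormat {
  nodeId: string;
  field: string;
  value: string;
}

// Replay server-ordered set operations over one node's field map.
// Because a set simply overwrites, the transform of one set against
// another is the identity: the later arrival wins deterministically.
function applyInServerOrder(
  initial: Record<string, string>,
  ops: SetNodeFormat[]
): Record<string, string> {
  const fields = { ...initial };
  for (const op of ops) fields[op.field] = op.value;
  return fields;
}
```

Alice's client and Bob's client each replay the same two operations in server order and land on the same status.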
Scenario 2: Edit While Reparenting
Alice types in a task's description. Bob moves that task to a different project section. These operations touch different fields — text content vs. parent reference.
Both operations apply without conflict. Alice's text appears in the reparented location. The OT engine recognizes that apply-node-text and del-node-children + ins-node-children are independent and can coexist.
Scenario 3: Delete While Editing
Alice deletes a task. Bob is typing in that task. The server processes the delete first.
Bob's apply-node-text targets a node that no longer exists. The operation is dropped. Bob sees the task disappear and his unsaved text is lost.
This is an honest trade-off. We could tombstone deleted nodes and preserve pending edits, but that adds CRDT-like overhead and complicates the deletion model. We chose simplicity. Eight years of production data shows this conflict is rare — and when it happens, the user re-types a few words.
The AI Agent Velocity Mismatch
Here is the variable that was not in any OT or CRDT paper before 2024: AI agents editing documents alongside humans.
A human types at 40 words per minute. An AI agent generates text at 1,000-4,000 WPM. That is a 25-100x velocity mismatch hitting the same collaboration engine.
When we first connected AI to our OT engine in late 2022, the results were illuminating. An AI writer generating a project outline produced hundreds of operations per second — each one triggering a transform, a broadcast, a database write, and a render update on every connected client. Human cursors stuttered. The server's transform queue grew faster than it drained.
This problem affects both OT and CRDT:
- OT: The server becomes a bottleneck. Every AI-generated operation must be transformed against concurrent human operations and broadcast. At 100 ops/second from the AI plus 1 op/second from each human, the transform queue grows linearly with AI throughput.
- CRDT: The metadata overhead compounds. Each AI-generated character carries its unique ID and causal ordering. A 2,000-word AI response creates ~12,000 CRDT entries in seconds, each with 16-32 bytes of metadata. That is 192-384 KB of metadata for a single AI reply.
How We Handle It
Our approach — and this applies regardless of whether you use OT or CRDT — is operation batching at the agent layer:
- Batch AI operations: Instead of emitting one operation per character, the AI agent batches output into paragraph-level changesets. A 500-word paragraph becomes a single ins-node + apply-node-text compound operation, not 2,500 individual character inserts.
- Rate-limit agent broadcasts: AI operations propagate to human clients at a human-readable cadence (every 200ms), even if the AI generates faster internally.
- Separate transform priority: Human operations get priority in the transform queue. If the server is busy transforming AI batches, human ops jump the queue to preserve responsiveness.
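A minimal sketch of the batching idea, with flushing left to the caller so a 200ms setInterval (or any cadence) can drive it. The class name and op shape are invented for illustration:

```typescript
// Illustrative batcher: coalesces an AI agent's token-level output into
// one compound changeset per flush, instead of one op per character.
class AgentOpBatcher {
  private buffer = "";

  constructor(
    private nodeId: string,
    private send: (op: { type: string; nodeId: string; text: string }) => void
  ) {}

  // Called for every token the model emits, potentially hundreds/sec.
  push(token: string): void {
    this.buffer += token;
  }

  // Called on a human-readable cadence, e.g. every 200ms via setInterval.
  flush(): void {
    if (!this.buffer) return;
    this.send({ type: "apply-node-text", nodeId: this.nodeId, text: this.buffer });
    this.buffer = "";
  }
}
```

A 500-token burst between two flushes becomes a single broadcast rather than 500, so the transform queue and every connected client see one operation instead of hundreds.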
This is the kind of practical engineering that does not show up in algorithm comparisons. OT and CRDT are both viable foundations, but the integration layer determines the user experience when AI agents and humans collaborate in the same document.
Framework and Library Comparison (2026)
If you are building multiplayer today, here are the leading options:
| Framework | Algorithm | Language | Notable Users | Best For |
|---|---|---|---|---|
| Yjs | CRDT (YATA) | TypeScript | Notion, Jupyter, Cargo | Text-heavy collaborative editors |
| Automerge | CRDT (RGA) | Rust + JS bindings | Ink & Switch, Muse | Local-first research apps |
| Diamond Types | CRDT | Rust | — | Performance-critical CRDT systems |
| Eg-walker | OT/CRDT hybrid | Research | — | Academic benchmark (Gentle & Kleppmann 2024) |
| ShareDB | OT (JSON0) | Node.js | Various startups | Quick MVP with JSON OT |
| Liveblocks | CRDT (hosted) | TypeScript SDK | Various SaaS | Managed collaboration infra |
| PartyKit | CRDT (Yjs-backed) | TypeScript | — | Edge-deployed real-time sync |
| Custom OT | OT | Varies | Google Docs, Figma, Taskade | Teams needing full control |
A few observations:
Yjs dominates the CRDT ecosystem. It is battle-tested, well-documented, and integrates with most editors (ProseMirror, TipTap, CodeMirror, Monaco). If you want CRDTs, start with Yjs.
Eg-walker is the most interesting academic work. Gentle and Kleppmann showed in 2024 that you can build a CRDT that matches OT's performance characteristics by replaying operations in causal order — essentially an OT algorithm wearing a CRDT interface. It validates the intuition that the algorithms are converging.
Custom OT is still the choice for complex data models. Google Docs and Taskade built custom OT engines, and Figma built a custom server-authoritative sync engine (by its own account a simpler scheme than full OT), because off-the-shelf solutions could not handle their specific data structures (rich text with formatting, 2D canvas objects, JSONB trees with typed fields). If your data model is more complex than plain text, expect to build custom.
Taskade's Approach: OT for the Entire Workspace
At Taskade, we chose OT in 2017 before CRDTs were trendy. Eight years later, I would make the same decision.
Our architecture is centralized. Our data model is a typed JSONB tree. Our conflict resolution needs are deterministic (status fields, date fields, completion states). CRDTs would give us offline-first capability we do not need (our users are almost always online) while costing us 17-33x more memory per document and significantly more implementation complexity for typed field resolution.
| Aspect | Our Approach |
|---|---|
| Algorithm | Custom OT, inspired by Etherpad easysync |
| Client state model | V=AXY (Acknowledged, Submitted, Local) |
| Operation types | 8 typed operations for tree manipulation |
| Transport | WebSocket for real-time delivery |
| Convergence | CP1 + TP1 (sufficient for centralized) |
| Rich text | Quill Delta OT (separate domain from tree OT) |
| Views from one tree | 7 — List, Board, Calendar, Table, Mind Map, Gantt, Org Chart |
| AI agent integration | Batched operations, rate-limited broadcast, priority queue |
The AI agent system connects through the same OT engine. When an AI agent builds a project outline, generates subtasks, or edits descriptions, those operations flow through the same transform pipeline as human keystrokes. The agent is just another client in the V=AXY model — with much faster Y generation.
You can see this in action in the Community Gallery where users share AI-generated apps built on top of this collaboration infrastructure. Every one of those apps renders multiple views from a single OT-synced tree.
Pricing and Access
Taskade Genesis starts at $6/month (Starter, annual billing), with Pro at $16/month for up to 10 users and Business at $40/month for unlimited seats. Real-time collaboration and AI agents are available across all plans. See the pricing page for full plan details.
Transport Architecture: Why Three Systems
A common question when comparing OT and CRDT implementations: how do you actually move operations between clients?
For OT, you need three things:
- Real-time delivery: Operations must reach the server and return to clients within milliseconds. WebSockets handle this.
- Multi-instance fanout: If your server runs multiple instances behind a load balancer, operations arriving at instance A must reach clients connected to instance B. A pub/sub layer handles this.
- Durable side effects: Operations trigger secondary effects — search index updates, notification delivery, automation workflows, activity feeds. These need ordering guarantees and durability. A message queue handles this.
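Wired together, the three layers reduce to one fan-out step per accepted operation. The interfaces below are in-process stand-ins for illustration; in production they would be a WebSocket server, a pub/sub system such as Redis, and a durable queue:

```typescript
// Illustrative stand-ins for the three transport layers.
interface RealtimeDelivery { broadcast(op: unknown): void }
interface InstanceFanout { publish(channel: string, op: unknown): void }
interface DurableQueue { enqueue(effect: { kind: string; op: unknown }): void }

// After the server transforms and accepts an operation, it fans out
// to all three layers.
function handleAcceptedOp(
  op: unknown,
  ws: RealtimeDelivery,
  pubsub: InstanceFanout,
  queue: DurableQueue
): void {
  ws.broadcast(op);                            // 1. clients on this instance
  pubsub.publish("doc-ops", op);               // 2. clients on other instances
  queue.enqueue({ kind: "side-effects", op }); // 3. search, notifications, automations
}
```

Only the third path needs durability; the first two optimize for latency and can tolerate a dropped message that the client recovers via resync.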
For CRDTs in a P2P architecture, the transport is simpler (any reliable channel between peers) but the sync protocol is more complex (causal ordering, state/operation merging, anti-entropy).
Neither approach is simpler end-to-end. OT has a simpler data model but a more complex transport. CRDTs have a more complex data model but a simpler transport. The total system complexity is roughly equivalent — it just lives in different layers.
Lessons from Eight Years in Production
I have maintained this OT engine through three major redesigns, the addition of five new view types, the introduction of AI editing, and millions of documents. Here is what I have learned.
1. Start Simple, Stay Simple
Our OT is not theoretically elegant. The transform functions are hand-written, case-by-case implementations for each operation pair. No generic transform framework. No algebraic formalization. Just 64 functions that handle every combination of our eight operation types.
Evan Wallace at Figma said the same thing: "It doesn't need to be a general-purpose OT library. It needs to work for your specific data model." I agree completely.
2. The Multi-View Problem Compounds
Every new view type we add must work with all eight operation types. When we went from 4 views to 7, the testing surface did not grow linearly — it grew multiplicatively. Each view has unique rendering logic that interprets the tree differently, and each must handle every operation type correctly.
Calendar view interprets set-node-format on a date field as an event move. Gantt view interprets the same operation as a timeline shift. Board view interprets a status field change as a card drag between columns. Same operation, seven interpretations, zero inconsistencies allowed.
3. Undo/Redo Is the Hardest Problem
In single-player, undo reverses the last operation. In multiplayer, "undo" means "undo MY last change, not everyone's changes." This requires tracking operation provenance — which operations belong to which user — and computing inverse transforms that only affect the current user's changes while preserving everyone else's.
We got this wrong twice before getting it right. The naive approach (invert the last operation) corrupts the document when concurrent operations have been applied between the original operation and the undo.
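The approach that finally worked can be sketched for the simplest case: invert the user's own last operation, then transform that inverse against everything applied since, so the undo removes only that user's text. The types are invented for illustration and handle only insert-undo:

```typescript
type TextOp =
  | { kind: "insert"; pos: number; text: string; author: string }
  | { kind: "delete"; pos: number; len: number; author: string };
type DeleteOp = Extract<TextOp, { kind: "delete" }>;

// Invert the user's own operation. (Inverting a delete would need the
// removed text, so this sketch handles insert-undo only.)
function invert(op: TextOp): DeleteOp {
  if (op.kind !== "insert") throw new Error("sketch handles insert undo only");
  return { kind: "delete", pos: op.pos, len: op.text.length, author: op.author };
}

// Shift the inverse past a later concurrent insert so the undo deletes
// the user's own text, not text someone else added since.
function transformDelete(del: DeleteOp, later: TextOp): DeleteOp {
  if (later.kind === "insert" && later.pos <= del.pos)
    return { ...del, pos: del.pos + later.text.length };
  return del;
}
```

If Alice inserted "hi" at position 0 and Bob then inserted "yo" at position 0, Alice's undo must delete at position 2, leaving Bob's text intact.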
4. Testing OT Is Brutally Hard
You need to test every pair of concurrent operations across every operation type. That is 8 x 8 = 64 transform pairs minimum. But each pair has multiple sub-cases: same node vs. different nodes, parent-child relationships, overlapping text ranges, identical field modifications.
In practice, our OT test suite covers ~400 scenarios. Every production bug we have ever found maps to a missing test scenario. We add the scenario, fix the transform, and the bug class disappears forever. The test suite is the real product.
5. Rich Text Is a Separate OT Domain
We do not transform rich text ourselves. Node-level operations (insert, delete, reparent, set fields) use our custom OT. Text within nodes uses Quill Delta's built-in OT. These are two independent transform domains that compose cleanly: tree operations transform at the node level, text operations transform at the character level within a single node.
This separation is critical. Mixing tree transforms with text transforms would create a combinatorial explosion of edge cases. Keep them separate. Let Quill Delta handle text. Handle the tree yourself.
6. Convergence Bugs Are Data Corruption
In most software, a bug means a wrong UI state, a crash, or a bad response. In OT, a convergence bug means every client sees a different document. And they do not know it. The documents silently diverge, and users discover the inconsistency minutes or hours later when they compare screens.
This is why I agree with Jake Nations's framing: you cannot ship OT code you do not understand. A transform bug corrupts every document on the platform. There is no AI that can debug convergence at 2 AM. You need to understand every transform.
When to Choose OT
Choose OT when:
- You have a centralized server and do not need peer-to-peer sync
- Your data model has typed fields that need deterministic conflict resolution (statuses, dates, priorities)
- You want minimal memory overhead and can trade offline capability for it
- You are building a SaaS product where users are almost always connected
- You need to integrate AI agents that generate operations at high velocity (the server can rate-limit and batch)
- Your data is tree-structured (task hierarchies, outlines, org charts) rather than flat text
Production examples: Google Docs, Figma, Taskade
When to Choose CRDTs
Choose CRDTs when:
- Users need to work offline for extended periods (hours or days) and sync later
- You are building peer-to-peer software without a central server
- Your data model is primarily plain text or simple key-value pairs
- You want edge deployment where each node operates independently
- You are building a local-first app where data lives on the user's device
- Your conflict resolution can use generic merge semantics (last-writer-wins, sets, counters) without domain-specific rules
Production examples: Notion (Yjs-based), Obsidian (local-first), Automerge-based apps from Ink & Switch
The 2026 Landscape: Convergence of OT and CRDT
The most interesting development in the last two years is the convergence of OT and CRDT as distinct paradigms.
Eg-walker (Gentle and Kleppmann, EuroSys 2024) showed that you can build a system that is technically a CRDT but replays operations in a way that produces OT-equivalent transforms. The performance characteristics match OT for common editing patterns while maintaining CRDT's decentralized guarantees.
Matthew Weidner's work on collaborative text editing without CRDTs or OT pushes further — arguing that the dichotomy itself is a false choice, and that hybrid approaches can take the best properties of both.
Meanwhile, practical production systems continue to use straightforward OT or CRDT implementations. Google Docs has not migrated to CRDTs. Figma has not migrated to CRDTs. Notion uses Yjs but augments it with server-side validation. The academic frontier and the production frontier are on different timelines.
My prediction: by 2028, the OT-vs-CRDT debate will be as relevant as the REST-vs-SOAP debate is today. The abstractions will merge into general-purpose "sync engines" that handle centralized and decentralized modes transparently. Liveblocks and PartyKit are early examples of this convergence in the tooling layer.
But we are not there yet. In 2026, if you are building a centralized SaaS product with structured data and AI agents, OT remains the pragmatic choice. If you are building a local-first app for offline-heavy workflows, CRDTs are the natural fit. Choose the algorithm that matches your architecture, not the one that matches the Hacker News zeitgeist.
Getting Started: Build Multiplayer with Taskade Genesis
If you want to experience a production OT system without building one from scratch, Taskade Genesis lets you build live apps, dashboards, and project management workflows with real-time collaboration across all 7 views. AI agents can edit, create, and automate tasks alongside human collaborators — all powered by the OT engine described in this post.
Every app in the Community Gallery runs on this infrastructure. Templates, AI agents, and automation workflows are all built on the same JSONB tree, synced via the same OT pipeline, rendered across the same 7 views.
Start building at taskade.com/create. Plans start at $6/month (Starter) with Pro at $16/month for teams up to 10.
Further Reading
If you want to go deeper on OT, CRDTs, and the AI multiplayer problem:
- Figma: How Figma's Multiplayer Technology Works — the canonical OT post for 2D canvas
- Figma: Realtime Editing of Ordered Sequences — fractional indexing deep dive
- Eg-walker (Gentle & Kleppmann, 2024) — OT/CRDT hybrid with benchmark data
- Matthew Weidner: Collaborative Text Editing without CRDTs or OT — post-paradigm thinking
- Liveblocks: How Figma, Linear, and Google Docs Work — sync engine comparison
- TinyMCE: OT vs CRDT — CRDT memory overhead data
- Hex: A Pragmatic Approach to Live Collaboration — another OT shop's perspective
- Google Wave retrospective — why the first mainstream OT product failed
- Multiplayer software history — from games to collaboration tools
- Real-time multiplayer indicators — presence systems that layer on top of OT
- Visual collaboration guide — Mind Maps, Kanban, and outlines in practice
Frequently Asked Questions
What is the difference between OT and CRDT?
Operational Transform (OT) uses a central server to transform concurrent operations into a consistent order. Conflict-free Replicated Data Types (CRDTs) encode merge rules into the data structure itself so replicas converge without coordination. OT requires a server but gives deterministic conflict resolution. CRDTs work peer-to-peer but carry higher memory overhead from tombstones and vector clocks.
Which is better for real-time collaboration, OT or CRDT?
It depends on your architecture. OT is better for centralized, server-authoritative apps like Google Docs or Taskade where you want deterministic conflict resolution and low memory overhead. CRDTs are better for offline-first and peer-to-peer apps like local-first note-taking tools where users need to work without a server.
Why did Google Docs choose OT instead of CRDT?
Google Docs uses OT because it runs a centralized server model. With a server as the single source of truth, OT only needs to satisfy CP1 and TP1 convergence properties, which is simpler than the full decentralized convergence that CRDTs solve. The server orders all operations, eliminating the need for tombstones, vector clocks, or conflict-free merge semantics.
Do CRDTs use more memory than OT?
Yes. CRDTs store metadata per character or per element — typically 16 to 32 bytes of overhead per character for text CRDTs (tombstones, unique IDs, vector clocks). OT stores only the current document state plus a revision number. For a 10,000-node project, a CRDT can use 10 to 50 times more memory than OT depending on edit history.
Can CRDTs work offline while OT cannot?
CRDTs are designed for offline and peer-to-peer use cases. Each replica carries enough metadata to merge independently when reconnected. OT traditionally requires a server to transform operations, making pure offline use harder. However, modern OT systems buffer local changes and replay them on reconnection, providing a practical offline experience for most SaaS collaboration tools.
What is the AI agent velocity problem in multiplayer documents?
AI agents generate text at 1,000 to 4,000 words per minute — 25 to 100 times faster than a human typist at 40 WPM. This velocity mismatch floods OT servers with operations and can starve human cursors of bandwidth. Both OT and CRDT systems must implement operation batching, rate limiting, or dedicated agent channels to handle AI-speed edits without degrading the human editing experience.
What are the main CRDT frameworks available in 2026?
The leading CRDT frameworks in 2026 are Yjs (most popular, used by Notion and Jupyter), Automerge (academic roots from Kleppmann's research), and Diamond Types (Rust-based, high performance). Eg-walker from Gentle and Kleppmann is a 2024 hybrid that bridges OT and CRDT performance characteristics. Liveblocks and PartyKit offer hosted CRDT infrastructure as a service.
Is OT outdated in 2026?
No. OT powers Google Docs, Figma, Taskade, and most production collaboration software in 2026. The narrative that CRDTs replaced OT is misleading. CRDTs solved a different problem — decentralized sync — that most SaaS products do not need. OT remains the pragmatic choice for centralized, server-authoritative architectures where deterministic conflict resolution and low memory overhead matter.
How does Taskade handle real-time collaboration across 7 project views?
Taskade stores every document as a JSONB node tree. The same tree renders across 7 views — List, Board, Calendar, Table, Mind Map, Gantt, and Org Chart. OT operations transform against concurrent edits on the server, then broadcast to all clients. A card drag in Board view updates the Gantt timeline and Mind Map branch simultaneously because all views read from the same synced tree.
Should I use OT or CRDT for my new multiplayer app?
Use OT if you have a centralized server, need deterministic conflict resolution, and want minimal memory overhead. Use CRDTs if you need offline-first, peer-to-peer sync, or your users work disconnected for extended periods. For most SaaS apps with a backend, OT is simpler to implement and cheaper to operate. For local-first desktop apps, CRDTs are the natural fit.