TL;DR: On December 9, 1968, Doug Engelbart spent 90 minutes showing the future — mouse, hypertext, real-time collaboration, shared screens, dynamic links, augmented intellect. Every tool since has shipped a fragment. Google Docs got collaboration. Figma got multiplayer design. The web got hypertext. ChatGPT got the AI part. Nobody shipped the whole thing. Taskade Genesis is the synthesis. Try it free →
What Was the Mother of All Demos? (And Why It Broke Time)
On December 9, 1968, a 43-year-old engineer named Douglas Engelbart walked onto a stage at the Fall Joint Computer Conference in San Francisco and spent 90 minutes demonstrating a computer system that would not be fully re-assembled for 58 years. The demo earned its nickname — "The Mother of All Demos" — because almost every computing interface that followed is a fragment of what he showed that afternoon.
He showed the audience a mouse. They had never seen a mouse.
He showed them hypertext — clickable links between documents. The web wouldn't exist for 21 more years.
He showed them real-time collaboration — his colleague Bill Paxton, 30 miles away in Menlo Park, editing the same document on a shared screen while they talked over video. Google Docs wouldn't ship for 38 more years. Zoom wouldn't exist for 43.
He showed them structured outlines, dynamic file linking, keyset input, networked video conferencing, shared cursors, and a philosophy.
The philosophy is the part everyone forgot.
What Engelbart Was Actually Building
Six years earlier, in 1962, Engelbart had written a manifesto that almost nobody read and almost nobody quotes today. It was called "Augmenting Human Intellect: A Conceptual Framework." The thesis was simple and incendiary:
Computers should not replace human thinking. They should make human thinking radically more powerful through a coevolution of tools, workflows, language, and training.
Engelbart called this the H-LAM/T system — Human using Language, Artifacts, and Methodology, in which he is Trained. The machine was one component of four. Strip any component and the system collapses.
This was a direct rejection of the AI dream his contemporaries were chasing. While Marvin Minsky and John McCarthy at MIT were trying to build machines that thought like humans, Engelbart was building machines that helped humans think better than they could on their own.
Two tracks. Same decade. Same country. Parallel visions that would not converge for 60 years.
The Bronx Science piece covered one side of this fork. This is the other.
The 90 Minutes That Contained Everything
Let's break down what Engelbart actually showed on that December afternoon. Most histories mention the mouse and move on. That misses the point. The demo was a system, and the system was the argument.
| Feature demoed in 1968 | Year it reached mainstream | Gap |
|---|---|---|
| Mouse as pointing device | 1984 (Apple Macintosh) | 16 years |
| Bitmap display with windows | 1973 (Xerox Alto, internal) | 5 years |
| Hypertext links between documents | 1991 (World Wide Web) | 23 years |
| Real-time collaborative editing | 2006 (Google Docs) | 38 years |
| Shared-screen video conferencing | 2011 (Zoom founded) / 2013 (mainstream) | 43–45 years |
| Outline-based structured editing | 1984 (ThinkTank) | 16 years |
| Dynamic file linking and transclusion | Partially, still incomplete | 58+ years |
| Integrated version history | 2006 (Docs) / 2015 (Figma) | 38–47 years |
| Networked multi-user workspace | Ongoing | 58+ years |
| AI-augmented reasoning inside a document | 2023 (Notion AI, early) / 2025 (Taskade Genesis, full) | 55–57 years |
Read that table again. Each row is a multi-billion-dollar company. Apple monetized row 1. Microsoft monetized row 2. Tim Berners-Lee open-sourced row 3 and changed civilization. Google monetized row 4. Zoom monetized row 5. Dropbox and Notion monetized adjacent pieces. Figma took row 4 and won design.
Every one of them shipped a fragment.
None of them shipped the whole thing.
Why Nobody Finished the Demo
Engelbart's vision wasn't abandoned because it was wrong. It was abandoned because it was too expensive for each era's hardware, and each successor extracted the part that fit their constraints.
Xerox PARC (1970s): Took the GUI, dropped the network
Alan Kay, Butler Lampson, and the PARC team had seen the demo. They built the Alto — single-user, bitmap display, mouse, windows. Gorgeous. The Altos were networked internally — Ethernet was invented at PARC — but the network never left the lab. The rest of the world had nothing to connect to, so the collaboration layer stayed a research toy and got left on the floor.
Apple (1984): Took the GUI, dropped the collaboration
Steve Jobs famously paid Xerox for a tour of PARC in 1979 and walked out with the future. The Macintosh packaged the GUI for a single user on a single desk. No network. No collaboration. No hypertext. The personal computer era began by stripping the social layer out of Engelbart's vision entirely.
Tim Berners-Lee (1989): Took hypertext, dropped the editing
The web was hypertext without Engelbart's other half. Read-only by default. No bi-directional links. No collaborative editing. No version history. Berners-Lee's own proposal memo explicitly cited Engelbart as a reference — then shipped a deliberately simpler system because simpler was the only way to make it globally scalable in 1991. A reasonable trade. A civilization-defining one. But still a fragment.
Google Docs (2006): Took collaboration, dropped the structure
Real-time editing, finally, in the browser. Breathtaking when it worked. But Docs is a linear document, not a structured workspace. No outlining primitives. No transclusion. No agent layer. A fragment of a fragment.
Figma (2015): Took collaboration, narrowed the scope
Figma nailed multiplayer for one vertical: design. Brilliant execution. But it's a design tool. It doesn't host your projects, your tasks, your agents, or your code. The multiplayer paradigm proved itself — and then stayed trapped inside its vertical.
ChatGPT (2022): Took the intelligence, dropped everything else
A chat box is the least Engelbartian interface possible. No persistent memory. No shared workspace. No structured outlines. No collaboration. No agents coordinating across documents. Just a conversation that evaporates. The intelligence is extraordinary. The container is a Skinner box.
The 58-Year Unbundling
┌─────────────────────────────────┐
│ Engelbart's 1968 Demo │
│ (the full stack, integrated) │
└────────────────┬────────────────┘
│
┌───────────┬───────────┼───────────┬───────────┐
▼ ▼ ▼ ▼ ▼
┌───────┐ ┌───────┐ ┌───────┐ ┌───────┐ ┌───────┐
│ Xerox │ │ Apple │ │ Web │ │ Docs │ │ Figma │
│ GUI │ │ Mac │ │ HTTP │ │Collab │ │Multi- │
│ │ │ │ │ │ │ │ │player │
└───┬───┘ └───┬───┘ └───┬───┘ └───┬───┘ └───┬───┘
│ │ │ │ │
└───────────┴──────┬────┴───────────┴───────────┘
▼
┌──────────────────┐
│ ChatGPT │
│ (intelligence │
│ without │
│ a workspace) │
└─────────┬────────┘
│
▼
┌──────────────────┐
│ Taskade Genesis │
│ Re-integration: │
│ Memory + │
│ Intelligence + │
│ Execution │
└──────────────────┘
Each company took the fragment its hardware allowed.
Taskade Genesis waited until the hardware allowed all of it.
The Reason It's Finally Possible
Three technical preconditions had to exist simultaneously for Engelbart's full vision to ship as a commercial product:
- Ubiquitous networking. Unlocked 1995–2010 (consumer broadband, mobile internet).
- Sub-second collaborative state sync. Unlocked 2006–2015 (operational transform, CRDTs, WebSockets).
- Machines that reason in natural language. Unlocked 2020–2024 (transformer-scale LLMs).
Every previous generation had at most two of these. Google Docs in 2006 had #1 and #2. It could not have #3 because the transformer wouldn't be invented for another 11 years. Early expert systems in the 1980s had a primitive version of #3 but neither #1 nor #2.
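Precondition #2 is worth making concrete. Production editors use richer structures (sequence CRDTs, operational transform), but the core convergence property can be sketched with the simplest CRDT, a grow-only counter. This is an illustrative toy, not any product's implementation:

```python
# Minimal CRDT sketch: a grow-only counter. Each replica tracks increments
# per replica ID; merging takes the per-slot maximum, which is commutative,
# associative, and idempotent — so replicas converge without coordination.

class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica_id -> increments observed from that replica

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        # Per-replica max means message order and duplication don't matter.
        for rid, n in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), n)

# Two replicas edit concurrently, then sync in either order:
a, b = GCounter("a"), GCounter("b")
a.increment(3)
b.increment(2)
a.merge(b)
b.merge(a)
assert a.value() == b.value() == 5
```

Real collaborative text editing replaces the counter with a sequence CRDT or operational transform over character positions, but the guarantee is the same: concurrent edits merge deterministically without a lock.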
2024 is the first year in human history when all three exist at consumer scale, simultaneously, at prices that allow a product to exist.
This is not a coincidence of timing. This is the exact moment a 58-year-old demo becomes shippable.
What Engelbart Meant by "Augmentation"
The word "augmentation" has been so thoroughly laundered by marketing departments that it now means nothing. Every SaaS company is an "augmentation platform." Engelbart meant something specific and strange.
In his 1962 framework, augmentation was the coevolution of four things:
- Artifacts — the tools (software, displays, input devices)
- Language — the symbols and concepts the human uses to think
- Methodology — the workflows and habits built around the tools
- Training — the deliberate skill development that unlocks fluency
Most products only ship the artifact. The user is expected to figure out the language, methodology, and training themselves. This is why most "productivity" software gets abandoned in 30 days. The artifact is sold; the system is never built.
Engelbart's radical bet was that if you co-designed all four, you'd get a step-change in human capability — not a 10% improvement, but a qualitative shift like the one literacy itself caused.
Taskade Genesis is the first product I know of that's explicitly designed around all four:
- Artifact = the workspace (projects, agents, automations)
- Language = the compression keys — Projects, Agents, Automations; Memory × Intelligence mod Execution — that name the primitives so users can think with them
- Methodology = the Memory Reanimation Protocol, the Genesis loop, the templates
- Training = the community, the tutorials, the templates that teach the workflow
The H-LAM/T system, compiled into a modern product.
This is why the Taskade Genesis activation gap is real and why it matters. Users who only adopt the artifact churn. Users who adopt all four layers retain at 90%+. Engelbart predicted this in 1962.
The Genesis Loop as the Finished Demo
Here's the 1968 demo, feature by feature, re-assembled in Taskade Genesis:
| 1968 NLS capability | Genesis 2026 equivalent |
|---|---|
| Hierarchical outline editing | Projects with infinite nested structure |
| Hypertext cross-document links | @project references, typed links in .tsk files |
| Real-time multi-user editing | Live collaborative workspace (WebSocket + CRDT) |
| Shared cursors across terminals | Presence indicators + live cursors |
| Video conferencing integrated with docs | Built-in video + recording inside projects |
| Keyset input (shortcuts for power users) | Command palette + keyboard-first UX |
| Dynamic file linking | MEMORY.md + .tdx memory layer |
| NEW in 2025: AI agents reasoning inside the document | Taskade Genesis Agents across 11+ frontier models |
| NEW in 2025: Autonomous execution | Automations, 100+ integrations |
Every capability Engelbart showed in 1968 is in the product. The last two rows are the ones he couldn't show, because transformers hadn't been invented. Now they have.
This is not "AI features added to a workspace." This is the workspace Engelbart was trying to build, with the AI layer he couldn't build.
The Track That Ran Parallel for 60 Years
The Bronx Science piece traced the neural network lineage — Rosenblatt → Widrow → Hopfield → Hinton → Sutskever → transformers. Machines that learn.
Engelbart's track was the other half of post-war computing. Humans augmented.
For six decades these tracks ran independently. The AI track produced ever-more-capable models that mostly existed inside research labs and chat boxes. The augmentation track produced ever-more-refined collaborative workspaces that couldn't actually reason.
The merge point is now.
Taskade Genesis is the workspace where a reasoning machine sits inside a structured, networked, multiplayer environment — not as a chat box bolted to the side, but as a full member of the collaboration. It reads what you write. It writes what you delegate. It remembers what you've decided. It coordinates with other agents. It can be corrected, taught, and improved.
That's not a feature. That's the convergence of two 60-year research programs.
Why I Actually Built This
I didn't start Taskade because I'd read Engelbart. I started it because I was tired of switching between seven apps to ship anything.
The hosting hustle I ran out of Bronx Science taught me something boring and permanent: tools that don't hold state are hostile tools. Every support ticket I lost track of was a customer I lost. Every config I forgot to back up was a 3 AM fire. The tools I had — email, IRC, wikis, scattered servers — each held a fragment of the context. None of them held the whole thing. I had to be the integration layer, in my head, with my memory, at 17.
That's what every knowledge worker does now. They are the human integration layer between Slack and Notion and Linear and Figma and email and Cursor and ChatGPT. The fragments Engelbart's successors shipped are each excellent in isolation. Together they form an exhausting cognitive load that the user is expected to carry.
Taskade was always an attempt to stop being the integration layer. The 2019 version was a real-time collaborative outliner. The 2022 version added project structure. The 2024 version added AI agents. The 2025 Taskade Genesis release added autonomous execution. Every release was me discovering, with mild embarrassment, that Engelbart had already put that feature on a slide in 1968.
The honest way to describe Taskade is not "AI workspace." It's "the Engelbart demo, finally shipped as a consumer product, because the hardware and the models finally allow it."
What Vibe Coding Actually Is
While we're here: "vibe coding" is not a new phenomenon. It's the 2025 name for what Engelbart was already arguing for in 1962.
Vibe coding means: describe the intent, let the system handle the implementation. This is exactly what Engelbart proposed when he said tools should operate at the level of human concepts, not machine instructions. Lovable, Bolt, and v0 are vibe-coding environments for frontends. They are useful, limited, and narrow.
Taskade Genesis extends the same principle to full workspaces. Describe what the system should do. It assembles projects, agents, and automations to do it. The frontend is one output among many. The deliverable is not an app — it's a living workspace that continues to operate.
That's the Engelbart move. The point was never the tool. The point was the coevolved system that outlives any single task.
What's Still Not Finished
Being honest: Taskade Genesis has not fully shipped the 1968 demo either. The pieces still missing, in order of difficulty:
- True transclusion — Engelbart's NLS could embed a live view of one document inside another, so editing one updated the other. Partial support today via @project references. Full transclusion is a 2026 roadmap item.
- Bidirectional hypertext at scale — links that know who links to them, system-wide. Partial via the backlinks graph. Full implementation requires a graph database primitive we're still benchmarking.
- Deep methodology training — the fourth leg of H-LAM/T. We have templates and tutorials. We don't yet have the kind of embedded, progressive training Engelbart imagined. Taskade Genesis Academy is the 2026 answer.
Every one of these is an item on the roadmap. Every one of them has been on someone's roadmap for 58 years.
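The bidirectional-link idea above is simple to state even though it's hard to ship at scale. A hypothetical sketch (not Taskade's implementation): mirror every forward link into a reverse index, so any document can answer "who links to me?" with a single lookup.

```python
# Illustrative backlink index: each add_link writes two entries, one forward
# and one reverse, making every hypertext link bidirectional by construction.

from collections import defaultdict

class LinkGraph:
    def __init__(self):
        self.outgoing = defaultdict(set)  # doc -> docs it links to
        self.incoming = defaultdict(set)  # doc -> docs that link to it

    def add_link(self, src, dst):
        self.outgoing[src].add(dst)
        self.incoming[dst].add(src)  # the mirror entry is the backlink

    def remove_link(self, src, dst):
        self.outgoing[src].discard(dst)
        self.incoming[dst].discard(src)

    def backlinks(self, doc):
        return self.incoming[doc]

g = LinkGraph()
g.add_link("roadmap", "spec")
g.add_link("notes", "spec")
assert g.backlinks("spec") == {"roadmap", "notes"}
```

The hard part, and the reason this is still a roadmap item everywhere, is keeping the two indexes consistent across millions of documents and concurrent editors — which is exactly the graph-database primitive mentioned above.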
Closing the Circle
Doug Engelbart died on July 2, 2013, at age 88. He lived long enough to see Google Docs ship real-time collaboration — the feature he'd demoed 38 years earlier. He did not live to see language models, agents, or autonomous workflows. He did not live to see the synthesis.
The demo is still running. The audience is still watching. The system is still being assembled.
Taskade Genesis is our contribution to finishing it.
- Find the pattern.
- Build the system.
- Ship what Engelbart started.
The Science Behind the Synthesis
Deeper reading if you want to follow the threads:
- From Bronx Science to Taskade Genesis — The machine-intelligence track that ran parallel to Engelbart for 60 years
- The 27-Year Accident: Widrow, Hoff, and the Sigmoid That Wasn't — A parallel 58-year substitution story on the machine-intelligence side
- From VisiCalc to Spreadsheet-of-Thought — The end-user programming lineage
- Software That Runs Itself — The current thesis
- The Execution Layer — Why the chat box is the least Engelbartian interface ever shipped
- How Do LLMs Actually Work? — The reasoning layer that finally made Engelbart's AI component possible
- What Is Mechanistic Interpretability? — Looking inside the agents that now live in the workspace
- What Is Grokking in AI? — Why neural networks finally earned a seat at Engelbart's table
- What Is Intelligence? — The question both tracks were secretly answering
- Chatbots Are Demos. Agents Are Execution. — The shorter version of one part of this argument
John Xie is the founder and CEO of Taskade. He attended Bronx High School of Science, ran a video hosting business between classes, and has spent the last eight years discovering, slowly and often painfully, that Doug Engelbart already put every good product idea on a slide in 1968.
Build with Taskade Genesis: Create an AI App | Deploy AI Agents | Automate Workflows | Explore the Community
Frequently Asked Questions
What was the Mother of All Demos?
On December 9, 1968, Doug Engelbart gave a 90-minute live demonstration at the Fall Joint Computer Conference in San Francisco that introduced the computer mouse, hypertext, real-time collaborative editing, video conferencing, shared-screen workflows, and dynamic document linking — all running on a networked system called NLS (oN-Line System). The demo earned the nickname 'The Mother of All Demos' because it contained the seeds of almost every computing interface that followed over the next 58 years.
Who was Doug Engelbart and why does he matter?
Doug Engelbart (1925–2013) was an American engineer and inventor who founded the Augmentation Research Center at SRI International. His 1962 manifesto 'Augmenting Human Intellect: A Conceptual Framework' argued that computers should be tools for extending human thinking, not replacing it. He invented the mouse, pioneered hypertext and real-time collaboration, and received the Turing Award in 1997. His vision of augmented human intellect is the philosophical foundation for every modern collaborative workspace, including Taskade.
What did Engelbart's NLS system actually do?
NLS (oN-Line System) was a networked collaborative computing environment developed at SRI in the 1960s. It supported real-time multi-user editing, hierarchical outline views, hypertext links between documents, shared cursors across networked terminals, video conferencing integrated with shared screens, structured programming, and keyset plus mouse input. It was running in production at SRI in 1968 — decades before Google Docs, Zoom, or the World Wide Web existed.
Why did it take 58 years to build what Engelbart demonstrated in 1968?
Engelbart's full vision required three layers that matured at different speeds: networking (which waited for the internet to reach consumers in the 1990s), real-time collaboration (which waited for operational transform algorithms and WebSockets in the 2000s), and AI-augmented reasoning (which waited for large language models and agent architectures in the 2020s). Google Docs shipped real-time collaboration in 2006. Figma shipped it for design in 2015. Taskade Genesis is the first workspace to ship all three layers — collaboration, memory, and AI agents — in a single unified system.
How is Taskade Genesis related to Engelbart's vision?
Engelbart's core insight was that human intellect could be augmented by a coevolution of tools and workflows — not replaced by automation. Taskade Genesis operationalizes this directly: Projects form persistent memory, AI Agents extend reasoning and delegation, and Automations carry the execution load. The human stays in the loop as orchestrator, not operator. This is Engelbart's 1962 framework — humans plus machines plus structured workflows — finally compiled into a shipping product.
What is the connection between Engelbart and Rosenblatt?
Doug Engelbart and Frank Rosenblatt represent the two parallel tracks of post-war computing. Rosenblatt (Bronx Science 1946) asked: can machines learn to think? His answer was the perceptron — the ancestor of modern neural networks. Engelbart asked: can machines help humans think better? His answer was NLS — the ancestor of modern collaborative workspaces. For 60 years these tracks ran independently. Large language models and agent workspaces like Taskade Genesis are the point where they finally merge: learning machines embedded inside augmentation tools.
What is real-time collaborative editing and why does it matter for AI?
Real-time collaborative editing lets multiple users modify the same document simultaneously, with changes propagated and merged in sub-second latency. Engelbart demonstrated it in 1968. Google Docs commercialized it in 2006. It matters for AI because AI agents are effectively just another collaborator — they need to read, write, and propose changes to shared state in real time. A workspace that can't handle multi-user editing fundamentally can't host AI agents as teammates; it can only host them as one-shot tools.
Why did Xerox PARC, Apple, and the web only implement parts of Engelbart's vision?
Each successor took the fragment that fit their hardware reality. Xerox PARC (1970s) extracted the GUI and mouse, which their bitmap displays could render. Apple (1984) packaged the GUI for personal computers — a single user, no collaboration. Tim Berners-Lee (1989) took hypertext and made it globally scalable, but stripped out bi-directional linking and collaborative editing to keep the web simple. Each choice was reasonable for its era's constraints; together they left Engelbart's unified vision — augmented intellect through collaborative, networked, intelligent tools — unfinished for half a century.
What is vibe coding and how does it relate to Engelbart's vision?
Vibe coding is the practice of building software through natural-language intent rather than line-by-line programming — describing what you want and letting AI translate it into working code. It's a direct descendant of Engelbart's 1962 argument that tools should operate at the level of human intent, not machine instruction. Tools like Lovable, Bolt, and v0 are frontend-focused vibe-coding environments. Taskade Genesis extends the same principle to full workspaces: describe what the system should do, and it assembles projects, agents, and automations to do it.