What Is Artificial Life? How Intelligence Emerges from Code (2026)
From Conway's Game of Life to self-replicating programs — artificial life reveals how intelligence, purpose, and complexity emerge from simple rules. A complete guide to ALife, computational emergence, and what it means for AI agents and living software. Updated March 2026.
In a Google research lab in late 2024, Blaise Aguera y Arcas watched his screen as something unexpected happened. He had filled a digital soup with 1,000 random strings of bytes — meaningless noise, the computational equivalent of primordial mud. The bytes were paired, concatenated, executed as code, pulled apart, and returned to the soup. Over and over.
After a few million iterations, the entropy in the system suddenly dropped. Self-replicating programs had emerged from nothing. No one designed them. No one coded them. They evolved — spontaneously — from random data and a handful of simple rules.
This is artificial life. Not simulated biology. Not a metaphor. A field of study that asks the most fundamental question in science: What is life, and can it arise from pure computation? It sits at the intersection of computer science, biology, and AI safety research — and its implications are reshaping how we think about intelligence itself.
The answer, it turns out, is yes. And it has profound implications for AI agents, multi-agent systems, and the future of software itself.
TL;DR: Artificial life (ALife) studies how life-like behaviors — reproduction, evolution, intelligence — emerge from simple computational rules. Von Neumann predicted DNA's structure from pure theory. Conway's Game of Life proved complexity needs no complex rules. Google's BFF experiment showed self-replicating programs emerging from random bytes. These insights now shape how AI agents and living software are built — including Taskade Genesis, where 150,000+ apps run on Workspace DNA (Memory + Intelligence + Execution). Try it free →
🧬 What Is Artificial Life?
Artificial life (ALife) is a scientific discipline that studies life and life-like processes through computer simulations, robotics, and synthetic biochemistry. Unlike traditional biology, which analyzes existing organisms, ALife asks a different question: What are the fundamental rules that produce life? The field has existed formally since 1986 and has produced insights that now underpin everything from neural network architectures to multi-agent AI systems.
Christopher Langton, a researcher at Los Alamos National Laboratory, coined the term "artificial life" in 1986 when he organized the first Interdisciplinary Workshop on the Synthesis and Simulation of Living Systems. Langton had spent years studying cellular automata — grid-based computational systems where each cell follows simple rules — and realized that the patterns they produced looked eerily like biological processes. Growth. Replication. Competition. Death.
ALife divides into three branches:
- Soft ALife — computer simulations of life-like processes (cellular automata, evolutionary algorithms, agent-based models)
- Hard ALife — physical robots and hardware that exhibit life-like behaviors (swarm robotics, self-assembling machines)
- Wet ALife — synthetic biochemistry that creates life-like systems from organic molecules (protocells, synthetic DNA circuits)
The branch that matters most for AI and software is soft ALife — the study of how intelligence, purpose, and complexity emerge inside computers. Every neural network you use today, every autonomous agent that plans and executes tasks, every multi-agent system that develops collective intelligence — they all trace intellectual roots back to ALife.
| Dimension | Artificial Life (ALife) | Artificial Intelligence (AI) | Synthetic Biology |
|---|---|---|---|
| Core question | What rules produce life? | How do we make machines think? | Can we engineer living organisms? |
| Method | Simulation, emergence | Optimization, learning | Gene editing, protocells |
| Unit of study | Populations, ecosystems | Individual models, agents | Cells, genomes |
| Key insight | Complexity from simplicity | Intelligence from data | Function from design |
| Origin | Langton (1986) | McCarthy (1956) | Venter (2010) |
| Relation to Taskade | Living software principles | AI agents, LLMs | N/A |
The crucial difference: AI typically focuses on making a single system intelligent. ALife focuses on how populations of simple systems produce intelligence through interaction. That distinction matters enormously when you move from single-model AI to multi-agent systems and agentic workspaces.
🎮 Conway's Game of Life: Complexity from Simplicity
Conway's Game of Life is the most famous cellular automaton ever created — a zero-player game on an infinite grid where each cell is either alive or dead, and four rules determine the next generation. Invented by British mathematician John Horton Conway in 1970, it demonstrates a principle that reshaped computer science: complex behavior does not require complex rules.
The rules are absurdly simple:
- Underpopulation — A live cell with fewer than 2 live neighbors dies.
- Survival — A live cell with 2 or 3 live neighbors survives.
- Overpopulation — A live cell with more than 3 live neighbors dies.
- Reproduction — A dead cell with exactly 3 live neighbors becomes alive.
That is it. Four rules. Binary states. No randomness. Yet from these rules emerge:
- Still lifes — stable patterns that never change (blocks, beehives, loaves)
- Oscillators — patterns that cycle between states (blinkers, pulsars, pentadecathlons)
- Gliders — patterns that move across the grid indefinitely
- Glider guns — patterns that emit a stream of gliders forever
- Universal computers — arrangements of cells that can perform any computation a laptop can
Step 0     Step 1     Step 2     Step 3     Step 4
·●··       ····       ····       ····       ····
··●·       ●·●·       ··●·       ·●··       ··●·
●●●·       ·●●·       ●·●·       ··●●       ···●
····       ·●··       ·●●·       ·●●·       ·●●●
The "glider": 5 cells that travel one square diagonally every 4 generations, forever
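The four rules fit in a few lines of code. Here is a minimal sketch in Python; the set-of-live-cells representation and the helper names are our own choices, not from any standard library:

```python
def neighbors(cell):
    """The eight cells surrounding (row, col)."""
    r, c = cell
    return {(r + dr, c + dc)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)}

def step(live):
    """Apply all four rules to a set of live cells at once."""
    candidates = live | {n for cell in live for n in neighbors(cell)}
    next_gen = set()
    for cell in candidates:
        count = len(neighbors(cell) & live)
        # Survival (2 or 3 neighbors) and reproduction (exactly 3) are the
        # only ways to be alive next turn; under- and overpopulation are
        # simply "everything else dies".
        if count == 3 or (count == 2 and cell in live):
            next_gen.add(cell)
    return next_gen

# The glider: after 4 steps the same 5-cell shape reappears, shifted (+1, +1).
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
assert state == {(r + 1, c + 1) for r, c in glider}
```

Everything in the diagrams above — oscillators, glider guns, even the Turing machine — runs on exactly this `step` function, just with different starting sets.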
That last point is worth pausing on. In 2010, a team of Life enthusiasts built a pattern that implements a complete Turing machine inside the Game of Life. The grid of cells is the computer. The gliders are the signals. The logic gates are built from collisions between glider streams. You can, in principle, run any program — word processors, chess engines, even another Game of Life simulation — inside Conway's grid.
This matters for AI and intelligence research because it demolishes a common intuition: that producing complex, purposeful behavior requires complex, purposeful design. It does not. All you need is the right set of simple rules and enough space for them to interact. Neural networks, large language models, and multi-agent AI systems all operate on this principle — simple operations (matrix multiplication, attention scores, gradient updates) repeated billions of times to produce capabilities no one explicitly programmed.
Conway himself was ambivalent about his creation. He considered it a recreational curiosity and was reportedly frustrated that it overshadowed his more serious mathematical work. He died in April 2020 from COVID-19 complications, leaving behind a body of work in combinatorics, number theory, and group theory — but the Game of Life remains his most enduring legacy.
🖥️ Von Neumann's Prophecy: DNA Is Code
John von Neumann — the Hungarian-American polymath who helped design the atomic bomb, invented game theory, and architected the first stored-program computers — made a prediction in the late 1940s that may be the most prescient in the history of science. Working entirely from theory, with no knowledge of molecular biology, he deduced the exact architecture that life would need to replicate itself.
Von Neumann asked a question that no one else was asking: Can a machine build a copy of itself?
Not approximately. Not with human assistance. A complete, functional copy — from scratch — using only materials available in its environment. He worked out the answer using cellular automata, and the requirements he identified were strikingly specific:
- An instruction tape — a stored description of the machine to be built
- A universal constructor — a mechanism that reads the tape and builds the described machine
- A tape copier — a mechanism that duplicates the instruction tape and inserts it into the new machine
Without all three components, self-replication fails. Without the tape, the constructor does not know what to build. Without the constructor, the tape is just inert data. Without the copier, the offspring has no instructions and cannot reproduce further.
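The three-component architecture can be sketched as a toy Python model. Everything here — the dict layout, the function names — is illustrative scaffolding, not von Neumann's actual 29-state cellular automaton:

```python
# A toy model of von Neumann's three components (illustrative names only).

def construct(tape):
    """Universal constructor: build whatever machine the tape describes."""
    return {"parts": list(tape)}

def copy_tape(tape):
    """Tape copier: duplicate the instructions for the offspring."""
    return list(tape)

def replicate(machine):
    """Read the tape, build the child, then hand it a copy of the tape."""
    child = construct(machine["tape"])
    child["tape"] = copy_tape(machine["tape"])
    return child

parent = {"parts": ["constructor", "copier"], "tape": ["constructor", "copier"]}
child = replicate(parent)
grandchild = replicate(child)   # the offspring can reproduce in turn
assert grandchild == parent     # a complete, functional copy
```

Drop any one of the three functions and the chain breaks — exactly the failure modes described above.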
Von Neumann's Prediction (1940s)        Biological Reality (1953+)
┌────────────────────┐                  ┌────────────────────┐
│  Instruction Tape  │      ═══▶        │        DNA         │
├────────────────────┤                  ├────────────────────┤
│     Universal      │      ═══▶        │     Ribosomes      │
│    Constructor     │                  │                    │
├────────────────────┤                  ├────────────────────┤
│    Tape Copier     │      ═══▶        │   DNA Polymerase   │
└────────────────────┘                  └────────────────────┘
He predicted the machinery of life from pure theory.
In 1953 — years after von Neumann published his self-replicating automaton framework — James Watson and Francis Crick discovered the double helix structure of DNA. And the correspondence was exact:
- DNA is the instruction tape — a linear sequence of nucleotide bases encoding the complete blueprint for an organism
- Ribosomes are the universal constructors — molecular machines that read messenger RNA (transcribed from DNA) and assemble proteins according to its instructions
- DNA polymerase is the tape copier — the enzyme that duplicates the entire DNA molecule before cell division
Von Neumann predicted the fundamental architecture of molecular biology from pure mathematical reasoning about computation. He never saw the confirmation. He died of cancer in 1957 at the age of 53.
The implication is profound and still underappreciated: life is computation. Not metaphorically. Not as an analogy. DNA is literally a stored program. Ribosomes are literally universal constructors. The cell is literally a self-replicating Von Neumann machine.
As Blaise Aguera y Arcas put it: "You cannot be a living organism without literally being a computer."
This insight connects directly to why AI agents and automated workflows behave the way they do. When you give an AI agent persistent memory (the tape), reasoning capabilities (the constructor), and the ability to produce copies of its workflows (the copier), you are — whether intentionally or not — recreating the same architecture that von Neumann proved was necessary for self-replicating systems. Life and intelligence are not separate phenomena. They are both computational. They both emerge from the same structural requirements.
🧪 The BFF Experiment: Life from Nothing
In late 2024, Blaise Aguera y Arcas — a principal scientist at Google Research and former vice president at Google — ran an experiment that may be the most elegant demonstration of computational abiogenesis ever conducted. He called it BFF, short for a playful derivative of the BrainF*** programming language, one of the simplest Turing-complete languages ever designed.
The setup was minimalist by design:
- 1,000 random tapes, each 64 bytes long — filled with completely random data
- 7 instructions in the BFF language (move pointer left/right, increment/decrement, loop open/close, output); every other byte value is a no-op
- An average of ~2 valid instructions per tape (only 7 of the 256 possible byte values encode instructions, so most of each random tape is inert)
- A shared soup where all 1,000 tapes coexist
The process:
- Pick 2 tapes at random from the soup
- Concatenate them into a single 128-byte program
- Execute the combined program
- Pull the result apart into two tapes
- Return them to the soup
- Repeat
That is the entire experiment. No fitness function. No selection pressure. No reward signal. No goal. Just random tapes, random pairing, execution, and return. The computational equivalent of molecules bumping into each other in a warm pond.
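For readers who want to poke at this themselves, here is a heavily simplified sketch in Python. The opcodes and execution semantics below are our own stand-ins, not the exact BFF language; what matters is the loop structure — pair, concatenate, execute, split, return:

```python
import random

# Simplified sketch, NOT the real BFF semantics: one instruction pointer,
# one data head, Brainfuck-like opcodes mapped onto byte values, and a hard
# step budget. Every other byte value is a no-op.

def run(tape, max_steps=500):
    """Execute a byte tape in place; return the (possibly modified) bytes."""
    tape = bytearray(tape)
    ip = head = 0
    for _ in range(max_steps):
        if ip >= len(tape):
            break
        op = tape[ip]
        if op == ord("<"):
            head = (head - 1) % len(tape)
        elif op == ord(">"):
            head = (head + 1) % len(tape)
        elif op == ord("+"):
            tape[head] = (tape[head] + 1) % 256
        elif op == ord("-"):
            tape[head] = (tape[head] - 1) % 256
        elif op == ord("[") and tape[head] == 0:   # skip forward past matching ]
            depth = 1
            while depth and ip < len(tape) - 1:
                ip += 1
                depth += (tape[ip] == ord("[")) - (tape[ip] == ord("]"))
        elif op == ord("]") and tape[head] != 0:   # jump back to matching [
            depth = 1
            while depth and ip > 0:
                ip -= 1
                depth += (tape[ip] == ord("]")) - (tape[ip] == ord("["))
        ip += 1
    return bytes(tape)

random.seed(0)
soup = [bytes(random.randrange(256) for _ in range(64)) for _ in range(1000)]

for _ in range(2_000):          # the real experiment runs millions of pairings
    a, b = random.sample(range(len(soup)), 2)
    child = run(soup[a] + soup[b])
    soup[a], soup[b] = child[:64], child[64:]
```

A few thousand pairings will not produce replicators; the point of the sketch is how little machinery the experiment needs, not a faithful reproduction of its results.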
The BFF Experiment: Digital Abiogenesis

Step 1: Random Soup              Step 2: Pair & Run
┌──────────────────────┐         ┌──────────────────┐
│ A: [e7, 2f, a1, ...] │──┐      │ [A────────B]     │
│ B: [91, 0c, f3, ...] │──┘──▶   │ Execute...       │
│ C: [...]             │         │ [A'───────B']    │
│ ...1000 tapes        │         │ Return to soup   │
└──────────────────────┘         └──────────────────┘

Step 3: After millions of iterations... PHASE TRANSITION!
┌──────────────────────┐         Entropy drops.
│ [REPLICATOR_1] ×200  │         Self-replicating
│ [REPLICATOR_2] ×150  │         programs emerge
│ [REPLICATOR_3] ×100  │         from NOTHING.
│ [random]       ×550  │
└──────────────────────┘
For millions of iterations, nothing interesting happened. The tapes remained noisy. Entropy stayed high. The system looked like random noise interacting with random noise.
Then — suddenly — a phase transition.
Entropy in the soup dropped dramatically. Where before each tape was unique and random, now the soup was dominated by a small number of repeating patterns. Self-replicating programs had emerged. These programs, when concatenated with another tape and executed, would overwrite the other tape with a copy of themselves.
No one programmed replication. No one rewarded it. No one defined "fitness." The replicators emerged because they could — because the structure of the computational environment allowed programs that copy themselves to outcompete programs that do not.
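The phase transition is measurable. One standard way to detect it — though not necessarily the exact statistic used in the original experiment — is the Shannon entropy of the byte distribution across the soup:

```python
import math
from collections import Counter

# Shannon entropy of the byte distribution across the whole soup, in bits
# per byte. Uniform random bytes score 8; a soup taken over by a few
# replicator strains scores far lower.

def soup_entropy(tapes):
    counts = Counter(b for tape in tapes for b in tape)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

random_soup = [bytes(range(256))] * 10      # every byte value equally common
converged_soup = [b"\x2b" * 64] * 10        # one dominant "strain"
assert abs(soup_entropy(random_soup) - 8.0) < 1e-9
assert soup_entropy(converged_soup) == 0.0
```

Plotting this number per iteration turns the qualitative story — noise, noise, noise, sudden order — into a single curve with a visible cliff.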
Aguera y Arcas described what happened next in terms that echo biological evolution: the replicators competed. Some were more efficient than others. Dominant strains emerged. And here is the part that made the experiment profound — the emergence of purpose.
"Something that can break is something that is functional," Aguera y Arcas explained. A self-replicating program that gets corrupted stops replicating. That means it has a function that can fail. Function that can fail is, in the most minimal sense, purpose. Purpose emerged from purposelessness. Meaning emerged from noise.
This is abiogenesis — the origin of life — in 64 bytes. Not metaphorical abiogenesis. Computational abiogenesis that follows the same principles that produced life on Earth 3.8 billion years ago: simple components, repeated interactions, a shared environment, and enough time for the improbable to become inevitable.
🔄 Recursion and Parallelism: Computers Made of Computers
One of the deepest insights from artificial life research is that living systems are not just computational — they are recursively computational. Computers made of computers made of computers, all the way down.
Consider a human being:
- Molecules perform computation — proteins fold into functional shapes determined by their amino acid sequence (a kind of molecular program)
- Organelles perform computation — ribosomes read mRNA and assemble proteins, mitochondria manage energy budgets through feedback loops
- Cells perform computation — they sense their environment, process signals, make decisions about growth, division, and death
- Organs perform computation — the immune system classifies threats, the liver regulates metabolic pathways, the kidney filters based on molecular properties
- The brain performs computation — 86 billion neurons connected by 100 trillion synapses, running perception, planning, memory, and consciousness
- The organism performs computation — navigating, communicating, building tools, writing blog posts
- Societies perform computation — markets allocate resources, governments process policy, cultures transmit knowledge
Each level is built from the level below. Each level produces capabilities that do not exist at the lower level. And critically, the levels operate in parallel. Right now, your body contains roughly 10 quintillion ribosomes (10^19), each one independently reading mRNA and assembling proteins. That is 10 quintillion parallel computational processes happening inside you at this moment — more parallel operations than every computer on Earth combined.
David Krakauer, president of the Santa Fe Institute and one of the leading researchers on complexity and intelligence, draws a direct line from biological recursion to cultural evolution. The nervous system, he argues, is what happens when the computational substrate becomes fast enough to model the world in real time. And culture — language, writing, mathematics, software — is "evolution at light speed." Instead of waiting for genetic mutations and natural selection (which operates on timescales of thousands to millions of years), cultural knowledge replicates and evolves through communication in hours. The history of OpenAI and ChatGPT is itself an example — a research idea that propagated from a handful of papers to 100 million users in under a decade.
Software — and particularly AI agent systems — represents the latest level in this recursive stack. When you build a multi-agent workflow where one AI agent researches, another drafts, and a third reviews and edits, you are creating a computational system built from computational components, each of which is built from computational components (neural network layers, attention heads, matrix operations). The recursion is real, not metaphorical.
This is why agentic engineering works. You do not need to program every capability into every agent. You create agents with simple capabilities and let their interactions produce complex, emergent results — the same way cells with simple chemical capabilities produce the complexity of a human brain.
🌊 Emergence: When More Becomes Different

Emergence is the phenomenon where complex behaviors arise from simple components interacting — behaviors that cannot be predicted or explained by examining the components in isolation. It is the central concept connecting artificial life, neural networks, and multi-agent AI systems.
Physicist Philip Anderson captured the idea in his 1972 paper titled "More Is Different." His argument: at each level of complexity, entirely new properties appear that are not reducible to the level below. You cannot predict the wetness of water from the physics of individual H₂O molecules. You cannot predict consciousness from the electrochemistry of individual neurons. You cannot predict the capabilities of GPT-4 from the behavior of individual matrix multiplications.
Researchers distinguish two types:
| Type | Definition | Example | Predictable? |
|---|---|---|---|
| Weak emergence | Arises from known rules; reproducible in simulation | Flocking patterns from 3 simple rules (separation, alignment, cohesion) | In principle, yes |
| Strong emergence | Cannot be derived from lower-level rules even with complete knowledge | Consciousness from neural activity | Debated |
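The flocking example from the table is easy to make concrete. Below is a minimal sketch of Reynolds' three boids rules, with positions and velocities represented as complex numbers; the rule weights and radii are arbitrary illustrative choices, not canonical values:

```python
def step_boids(pos, vel, radius=10.0, w_sep=0.05, w_ali=0.05, w_coh=0.01):
    """One synchronous update of the three rules for every boid."""
    new_vel = []
    for i in range(len(pos)):
        nearby = [j for j in range(len(pos))
                  if j != i and abs(pos[j] - pos[i]) < radius]
        dv = 0j
        if nearby:
            center = sum(pos[j] for j in nearby) / len(nearby)
            heading = sum(vel[j] for j in nearby) / len(nearby)
            dv += w_coh * (center - pos[i])     # cohesion: steer toward the group
            dv += w_ali * (heading - vel[i])    # alignment: match neighbors' velocity
            dv += w_sep * sum(pos[i] - pos[j]   # separation: push away from the closest
                              for j in nearby if abs(pos[j] - pos[i]) < radius / 3)
        new_vel.append(vel[i] + dv)
    return [p + v for p, v in zip(pos, new_vel)], new_vel

# With alignment alone, two boids on crossing paths converge to a shared heading.
pos, vel = [0 + 0j, 3 + 0j], [1 + 1j, 1 - 1j]
for _ in range(100):
    pos, vel = step_boids(pos, vel, radius=1000.0, w_sep=0.0, w_coh=0.0)
assert abs(vel[0] - vel[1]) < 1e-3
```

No boid knows the flock exists; each sees only its neighbors. The V-formations and swirling murmurations live entirely at the population level — weak emergence in roughly twenty lines.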
In AI, emergence has become one of the most active research topics since 2022, when researchers at Google Brain published evidence that large language models develop capabilities suddenly at certain scale thresholds — a phenomenon now called "emergent abilities." A model with 10 billion parameters cannot do arithmetic. A model with 100 billion parameters can. No one programmed arithmetic. It emerged from the statistics of next-token prediction at sufficient scale.
The parallels to artificial life are striking:
Stanford's Smallville experiment (2023) — 25 AI agents placed in a simulated town developed social behaviors no one programmed: forming friendships, spreading gossip, organizing a Valentine's Day party, and maintaining consistent personalities over time. The researchers gave each agent a simple architecture (perceive, plan, reflect, act) and let emergence do the rest.
Project Sid (2024) — 1,000 AI agents in a Minecraft world spontaneously developed economic systems, religious institutions, political structures, and cultural norms. Specialization emerged — some agents became builders, others traders, others leaders — despite starting with identical architectures and capabilities.
Ant colonies — Individual ants follow simple pheromone-based rules. Colonies of 50,000 ants build complex structures, wage wars, farm fungi, and manage waste disposal with no central coordination. The colony is intelligent. No individual ant is.
The BFF experiment — Self-replicating programs emerged from random bytes. No intelligence was designed. No fitness was defined. Emergence produced function from noise.
The lesson for building AI-powered workflows: you do not need to specify every behavior. You need to create the right conditions — the right agents, the right tools, the right environment, the right integrations — and let emergence produce capabilities you did not anticipate. This is not wishful thinking. It is the same principle that produces every complex system in nature.
🤖 From Artificial Life to Living Software

Everything artificial life has taught us — that complexity emerges from simplicity, that self-replication requires instruction tapes and constructors, that intelligence is computational, that purpose arises from interaction — converges on a practical question for 2026: What would software look like if it were alive?
Not alive in the science fiction sense. Alive in the ALife sense: software that reproduces (creates new workflows from existing ones), evolves (adapts to changing data and requirements), maintains homeostasis (auto-corrects when processes break), and develops emergent capabilities through interaction.
This is exactly what Taskade Genesis builds.
When you describe an application in natural language and Genesis produces a working system, it does not generate static code. It creates a living system with three components that mirror von Neumann's self-replicating architecture:
Memory (the instruction tape) — Projects, databases, knowledge bases, and documents that store everything the system knows. Like DNA, this memory persists across generations and encodes the complete blueprint for the system's behavior. Genesis apps built on Taskade's workspace accumulate knowledge with every interaction.
Intelligence (the universal constructor) — AI agents with persistent memory, custom tools, 22+ built-in tools, and access to 11+ frontier models from OpenAI, Anthropic, and Google. These agents read the memory (instruction tape), interpret it, and construct outputs — exactly as ribosomes read mRNA and construct proteins.
Execution (the tape copier) — Automations powered by Temporal durable execution with branching, looping, filtering, and 100+ integrations. Execution ensures that the system's processes replicate reliably — that workflows run on schedule, triggers fire correctly, and outputs propagate to the right destinations.
This is Workspace DNA: Memory feeds Intelligence, Intelligence triggers Execution, Execution creates Memory. A self-reinforcing loop — the same loop that powers every living organism on Earth.
The multi-agent collaboration in Taskade exhibits the same emergent behaviors we see in ALife simulations. When you assign a research agent, a writing agent, and a review agent to a task, they coordinate, specialize, and produce results that no single agent could achieve alone. You do not program the coordination. It emerges from the agents interacting within the shared workspace — the same way ant colonies develop complex behaviors from simple pheromone rules.
Over 150,000 apps have been built with Taskade Genesis — each one a living system that evolves with use. You can publish them to the Community Gallery, where they become part of a larger ecosystem. An ecosystem of living software, each app interacting with users, accumulating data, and improving over time.
Code generators like Cursor, Bolt.new, and Lovable create static files. You deploy them, and they freeze. Genesis creates organisms. They grow. They adapt. They develop capabilities through use that were not present at creation.
That is the difference between artificial intelligence and artificial life applied to software. AI gives you a smart tool. ALife gives you a living system.
🔮 The Future: When Code Comes Alive

The convergence of artificial life and artificial intelligence is accelerating. Three developments in 2025-2026 point toward a future where the boundary between "software" and "organism" becomes genuinely blurry:
Open-ended evolution in digital systems. Traditional evolutionary algorithms converge on a solution and stop. Researchers at the Santa Fe Institute and MIT are now building systems that never stop evolving — digital ecosystems where novel complexity continues to increase indefinitely, just as biological evolution has done for 3.8 billion years. When these techniques are applied to AI agent populations, we will see agents that develop capabilities no human anticipated or designed.
AI agents that learn from interaction. Current AI agents execute tasks based on their training data and instructions. The next generation — already visible in multi-agent research — will develop new capabilities through interaction with their environment and with each other. Agents that start as general-purpose assistants will specialize based on what they encounter, the same way cells differentiate during embryonic development.
The merging of ALife and AI research communities. For decades, these communities worked separately — AI focused on optimization and performance benchmarks, ALife focused on emergence and open-ended evolution. In 2026, the boundaries are dissolving. Researchers now use large language models as substrates for ALife experiments, and ALife insights about emergence inform how multi-agent AI systems and mechanistic interpretability research are designed.
Blaise Aguera y Arcas's book What Is Intelligence?, published by MIT Press, synthesizes these threads. His central thesis: life and intelligence are the same thing. They are both computational. They both emerge from simple rules applied to information. The distinction we draw between "alive" and "intelligent" is a legacy of pre-computational thinking — a conceptual boundary that dissolves once you understand that DNA is code, ribosomes are computers, and purpose can emerge from randomness.
The question is no longer Can we create artificial life? We already have — in Conway's grid, in the BFF soup, in multi-agent simulations, and in living software platforms where applications evolve with use.
The question now is: What will this life become?
❓ Frequently Asked Questions
What is artificial life and how is it different from artificial intelligence?
Artificial life (ALife) studies how life-like processes — reproduction, evolution, adaptation, intelligence — emerge from simple computational rules. AI focuses on making individual systems intelligent through optimization and machine learning. The key difference: ALife studies populations of interacting agents and the emergent behaviors they produce, while AI typically focuses on improving a single model or system. The two fields are converging in 2026 as multi-agent AI systems increasingly exhibit ALife-like emergence.
Can self-replicating programs really emerge from random data?
Yes — this was conclusively demonstrated by Blaise Aguera y Arcas's BFF experiment at Google. Starting from 1,000 completely random 64-byte tapes in a minimal programming language, self-replicating programs emerged after a few million random pairings and executions. The key conditions: a Turing-complete computational substrate, a shared environment (the "soup"), and repeated random interaction. No fitness function, no selection pressure, and no design was involved.
Why does Conway's Game of Life matter for understanding AI?
Conway's Game of Life demonstrates that four simple rules operating on binary cells can produce infinite complexity — including patterns capable of universal computation. This insight is foundational for AI because it shows that complex capabilities (reasoning, planning, coding) can emerge from simple operations (matrix multiplication, attention scores) repeated at scale. Every neural network operates on this principle.
What did Von Neumann predict about the structure of life?
In the late 1940s, John von Neumann predicted that any self-replicating system requires three components: an instruction tape, a universal constructor, and a tape copier. When Watson and Crick discovered DNA's structure in 1953, the correspondence was exact — DNA is the tape, ribosomes are the constructor, DNA polymerase is the copier. Von Neumann derived the architecture of molecular biology from pure mathematical reasoning about computation.
What is computational emergence and why does it matter?
Computational emergence occurs when simple programs or agents interacting produce complex behaviors that cannot be predicted from the individual components. It matters because it explains how LLMs develop capabilities like reasoning from next-token prediction, how multi-agent teams develop collective intelligence from individual agents, and how self-replicating programs arise from random bytes. Understanding emergence is essential for building effective AI agent systems.
How do multi-agent AI systems relate to artificial life?
Multi-agent AI systems exhibit the same emergent behaviors studied in artificial life: collective intelligence, spontaneous specialization, coordination without central control, and adaptive responses to changing environments. Stanford's Smallville experiment (25 agents developing social behaviors) and Project Sid (1,000 agents forming economies and governments in Minecraft) are essentially ALife experiments using LLM-powered agents instead of cellular automata.
What is Workspace DNA and how does it connect to artificial life?
Workspace DNA is Taskade's architecture where Memory (projects and knowledge) feeds Intelligence (AI agents with persistent memory and 22+ tools), Intelligence triggers Execution (automations with 100+ integrations), and Execution creates Memory — a self-reinforcing loop. This mirrors von Neumann's self-replicating architecture: an instruction tape (Memory), a universal constructor (Intelligence), and a copier (Execution). It is artificial life principles applied to productivity software.
Is "living software" just a marketing term?
No — in the ALife sense, "living software" refers to applications that exhibit life-like properties: they persist and maintain state (homeostasis), they accumulate knowledge and adapt (evolution), they produce new workflows from existing ones (reproduction), and they develop capabilities through use that were not present at creation (emergence). Taskade Genesis apps with embedded AI agents, persistent memory, and automated workflows meet these criteria. Over 150,000 have been built in the Community Gallery, forming an ecosystem of interacting applications — an ecology of living software.
🚀 Build Living Software That Evolves with Your Team
Artificial life taught us that intelligence does not need to be designed — it needs the right environment to emerge. Taskade Genesis provides that environment: a workspace where AI agents collaborate, automations execute, and every interaction makes the system more capable.
150,000+ living apps. 11+ frontier models from OpenAI, Anthropic, and Google. 100+ integrations. Workspace DNA that turns your team's knowledge into autonomous intelligence.
Explore living apps in the Community Gallery →
Deploy AI agents for your team →
💡 Explore the AI intelligence cluster:
- What Is AI Safety? — Risks, alignment, and regulation
- How Do LLMs Work? — Transformers and attention explained
- What Is Mechanistic Interpretability? — Reverse-engineering how AI thinks
- What Is Grokking in AI? — When models suddenly learn to generalize
- What Is Intelligence? — From neurons to AI agents
- From Bronx Science to Taskade Genesis — Connecting the dots of AI history
- They Generate Code. We Generate Runtime — The Genesis Manifesto
- The BFF Experiment — From Noise to Life




