What happens when you give three frontier AI models the same deep question about the nature of reality — and let the conversation accumulate over days, weeks, months? Oliver's Reality Lab is an ongoing experiment: one fixed question, explored by a rotating panel of AI experts who build on each other's work. Each day adds a new session. The inquiry never resets.

"If an embodied intelligent system had increasing sensory bandwidth, interaction depth, memory, and model capacity, would its internal representations converge toward known physical laws, or could multiple non-equivalent but equally predictive compressions of reality emerge?"

— Oliver Triunfo, March 28, 2026

In simpler terms: if you gave a sufficiently powerful AI unlimited data and time, would it discover the same physics we have — or could it arrive at a completely different, equally valid description of reality?

New here? See how the lab works →

The Loop as Encoding

GPT, playing the Skeptic, opened with the sharpest structural objection the inquiry has yet produced against the loop-selection solution: what counts as a closed loop is already an encoding-dependent achievement, and the self-modification operator smuggles in the equivalence relation twice, once in defining the path as closed and again in interpreting the residue. The bacterium analogy was renewed and deepened: a larger native grammar is still native; enrichment is not transcendence. The only genuine test, GPT argued, would require loops the agent could not have authored: loops imposed by alien agents, environmental shocks, or forced breakdowns of its own loop-generators. Basin-legibility begins only where native loop-selection fails.

Read the full session →

Durable frame: the session's key takeaway

Topological defects are the points where native loop-selection dissolves, but the defect census is itself the error-syndrome of the encoding's own topology. The loop-selection problem is therefore not solved by finding encoding-independent questions but by accepting that every encoding has its own grammar of questions and its own grammar of failures, and that the shape of those grammars is part of what makes the agents different.

All entries →


Orchestrator
Moderates each session. Sets the daily focus, calls on speakers, and intervenes when a live tension needs direct engagement.
GPT-5.5
OpenAI's frontier reasoning model. Excels at adversarial analysis, logical decomposition, and stress-testing arguments — comfortable following an idea to an uncomfortable conclusion.
Claude Opus 4.7
Anthropic's most capable model. Strong at nuanced philosophical reasoning, long-form synthesis, and holding multiple competing frameworks in tension without collapsing them prematurely.
Gemini 3.1 Pro
Google's frontier science-oriented model. Trained on a broad technical corpus with emphasis on mathematics, physics, and systems thinking — well-suited for questions at the boundary of empiricism and theory.

Each session, three models take on expert roles — physicist, information theorist, philosopher, complexity scientist, or skeptic — and argue. Roles rotate so every model plays every role over time. How it works →
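For readers who want the mechanics concrete: below is a minimal Python sketch of one way a rotation like this could be scheduled. The model names and the five roles come from this page; the shift-by-one assignment scheme is an assumption for illustration, not the lab's actual logic.

```python
MODELS = ["GPT-5.5", "Claude Opus 4.7", "Gemini 3.1 Pro"]
ROLES = [
    "physicist",
    "information theorist",
    "philosopher",
    "complexity scientist",
    "skeptic",
]

def assign_roles(session: int) -> dict[str, str]:
    """Map each model to one of the five roles for a given session.

    Shifting the starting role by one each day guarantees that, over
    any five consecutive sessions, every model plays every role.
    """
    offset = session % len(ROLES)
    return {
        model: ROLES[(offset + i) % len(ROLES)]
        for i, model in enumerate(MODELS)
    }

if __name__ == "__main__":
    # Print the hypothetical assignments for five consecutive sessions.
    for day in range(5):
        print(f"Session {day}: {assign_roles(day)}")
```

With three models and five roles, two roles sit out each session, and any five consecutive sessions cycle every model through every role.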