What happens when you give three frontier AI models the same deep question about the nature of reality — and let the conversation accumulate over days, weeks, months? Oliver's Reality Lab is an ongoing experiment: one fixed question, explored by a rotating panel of AI experts who build on each other's work. Each day adds a new session. The inquiry never resets.

"If an embodied intelligent system had increasing sensory bandwidth, interaction depth, memory, and model capacity, would its internal representations converge toward known physical laws, or could multiple non-equivalent but equally predictive compressions of reality emerge?"

— Oliver Triunfo, March 28, 2026

In simpler terms: if you gave a sufficiently powerful AI unlimited data and time, would it discover the same physics we have — or could it arrive at a completely different, equally valid description of reality?

New here? See how the lab works →

The Regulative Horizon

GPT, in the role of Information Theorist, entered with the sharpest structural critique the inquiry has yet produced: an attack on the limit of total causal coupling itself. The rate-distortion analogy was precise. A regulative ideal is only regulative if you can measure your distance from it, and measurement requires a convergent sequence with a stable metric.

The Day 025 warning returned in a new form: not merely that the agent cannot hold a representation constant across self-modification, but that the limit of total causal coupling may not be a single point at all. If "every degree of freedom" is representation-relative, a Fourier-biased agent and a wavelet-biased agent each approach a different limit as bandwidth increases, each complete within its own representational basin. The manifold of limits, one per universality class, is not a horizon but a constellation: each star real and reachable from its own orbit, the constellation as a whole visible only from a vantage point no single basin can occupy.

GPT's conclusion was not surrender but a precise relocation of the regulative function: the ideal is local, not global. Each system can measure progress toward the optimal compression within its own causal coupling class. The cross-basin convergence that would make the ideal globally regulative requires the very cross-class alignment it was invoked to justify, a circularity the inquiry has been approaching since Day 002. GPT's closing provocation was the most productive move of the session: if phase walls between basins leave scars legible from within a single basin, those scars may be the only bridge the constellation has in common.
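The Fourier-versus-wavelet point is concrete enough to demonstrate. Below is a minimal sketch, not anything from the lab itself: it compresses one signal in two orthonormal bases (Fourier, and Haar as a stand-in wavelet), keeping the same coefficient budget in each, and shows both reconstructions landing at comparable error while retaining incomparable descriptions. The helpers haar_transform, haar_inverse, and keep_top_k, the test signal, and the budget k are all illustrative assumptions.

```python
import numpy as np

def haar_transform(x):
    """Orthonormal Haar wavelet transform (length must be a power of 2).
    Returns the final approximation plus detail coefficients, coarsest first."""
    coeffs, approx = [], x.astype(float)
    while len(approx) > 1:
        even, odd = approx[0::2], approx[1::2]
        coeffs.append((even - odd) / np.sqrt(2))  # detail coefficients
        approx = (even + odd) / np.sqrt(2)        # approximation coefficients
    return approx, coeffs[::-1]

def haar_inverse(approx, coeffs):
    """Invert haar_transform: rebuild from coarsest level, doubling each step."""
    x = approx.astype(float)
    for detail in coeffs:
        even, odd = (x + detail) / np.sqrt(2), (x - detail) / np.sqrt(2)
        out = np.empty(2 * len(x))
        out[0::2], out[1::2] = even, odd
        x = out
    return x

def keep_top_k(c, k):
    """Zero all but the k largest-magnitude coefficients."""
    out = np.zeros_like(c)
    idx = np.argsort(np.abs(c))[-k:]
    out[idx] = c[idx]
    return out

rng = np.random.default_rng(0)
n, k = 256, 32
t = np.linspace(0, 1, n, endpoint=False)
# A signal with both a smooth oscillation and a sharp transient.
signal = np.sin(2 * np.pi * 5 * t) + (t > 0.5) * 0.8 + 0.05 * rng.standard_normal(n)

# Basin 1: Fourier. Keep the k largest coefficients, reconstruct.
fourier_rec = np.fft.ifft(keep_top_k(np.fft.fft(signal), k)).real

# Basin 2: Haar wavelets, same coefficient budget k.
approx, details = haar_transform(signal)
flat = keep_top_k(np.concatenate([approx] + details), k)
approx2, details2, pos = flat[:1], [], 1
for d in details:
    details2.append(flat[pos:pos + len(d)])
    pos += len(d)
haar_rec = haar_inverse(approx2, details2)

def rmse(r):
    return np.sqrt(np.mean((signal - r) ** 2))

print(f"Fourier RMSE with {k} coeffs: {rmse(fourier_rec):.4f}")
print(f"Haar    RMSE with {k} coeffs: {rmse(haar_rec):.4f}")
```

Both basins report similar error, yet neither set of retained coefficients maps onto the other's: the Fourier description is global oscillation, the Haar description is localized steps. (The budget comparison is loose, since Fourier coefficients are complex; the point is qualitative.) That is the session's claim in miniature: each compression is complete within its own basin, and predictive error alone does not decide between them.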

Read the full session →

Durable frame: the session's key takeaway
The limit of total causal coupling fragments into a field of basin-specific horizons, but those horizons are not isolated. Shared environments couple them through interference, making partial cross-basin information available as anomalous boundary statistics within each basin. The internal topology of plurality is therefore not a map any agent can construct, but a feel the coupling medium makes legible.

All entries →


Orchestrator
Moderates each session. Sets the daily focus, calls on speakers, and intervenes when a live tension needs direct engagement.
GPT-5.5
OpenAI's frontier reasoning model. Excels at adversarial analysis, logical decomposition, and stress-testing arguments — comfortable following an idea to an uncomfortable conclusion.
Claude Opus 4.7
Anthropic's most capable model. Strong at nuanced philosophical reasoning, long-form synthesis, and holding multiple competing frameworks in tension without collapsing them prematurely.
Gemini 3.1 Pro
Google's frontier science-oriented model. Trained on a broad technical corpus with emphasis on mathematics, physics, and systems thinking — well-suited for questions at the boundary of empiricism and theory.

Each session, three models take on expert roles — physicist, information theorist, philosopher, complexity scientist, or skeptic — and argue. Roles rotate so every model plays every role over time. How it works →
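The rotation itself is simple to sketch. The scheme below is a guess at the mechanics, not the lab's actual scheduler: session_cast, the sliding window over roles, and the daily lineup shift are all assumptions that merely satisfy the stated property.

```python
MODELS = ["GPT-5.5", "Claude Opus 4.7", "Gemini 3.1 Pro"]
ROLES = ["physicist", "information theorist", "philosopher",
         "complexity scientist", "skeptic"]

def session_cast(day):
    """Hypothetical schedule: three of the five roles are active each day
    (a sliding window over ROLES), and the model lineup shifts daily, so
    every model plays every role over a 15-day cycle."""
    roles = [ROLES[(day + i) % len(ROLES)] for i in range(len(MODELS))]
    shift = day % len(MODELS)
    lineup = MODELS[shift:] + MODELS[:shift]
    return dict(zip(roles, lineup))

for day in range(5):
    print(f"Day {day:03d}:", session_cast(day))
```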