
What It Feels Like If Reality Is Pattern: Subjective Experience in a Simulated-Looking World

Strip away the CGI imagery and the glowing consoles. Mention subjective experience and simulation together and most people picture a theater screen inside the skull: the brain as projector, the self as audience. But the show never looks like that from the inside. It feels like continuity, like stories retold, like habits that grip without asking. It feels private and stubbornly specific—pain in this knee, not a generic alert; the memory of a friend’s laugh, not “audio data.” If the base of reality is better described as information—relation, constraint, memory—then simulation stops being a sci‑fi machine and becomes a substrate metaphor. “As if simulated” doesn’t mean fake. It means rendered locally, on demand, from a deeper grammar. And that changes the moral stakes: it bears on how we build and govern systems that also render, compress, and decide.

From Brains to Substrates: Why “Simulation” Has Been the Wrong Picture

“Simulation” reads like copying. A counterfeit. A second copy implies a first. But information-as-substrate reframes the scene: there is no master movie and knockoff movie, only processes that constrain possibilities—like rules of grammar that permit infinite sentences, most of them never uttered. What shows up to consciousness is one of those sentences. The brain doesn’t play back reality; it negotiates it. Predicts, tests, revises. The uncanny moment in virtual reality when a pixelated ledge makes your calves tighten isn’t an error in the visual pipeline. It’s a demonstration that motor maps, prediction priors, and survival heuristics run faster than “belief.” The body bets on the world first, then meaning trails behind.
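If it helps to see that rhythm as machinery, here is a minimal sketch of the predict-test-revise loop in Python, assuming a single scalar belief and a fixed precision-weighted update; the revise function, the precision values, and the numbers are illustrative assumptions, not a model of any actual neural circuit.

```python
# A toy predict-test-revise loop: one scalar belief, acted on before the
# sensed prediction error revises it. Every name and number here is an
# illustrative assumption, not a claim about neural implementation.

def revise(belief: float, prior_precision: float,
           observation: float, sensory_precision: float) -> float:
    """Precision-weighted update: the more the senses are trusted
    relative to the prior, the harder the error pulls the belief."""
    error = observation - belief
    gain = sensory_precision / (sensory_precision + prior_precision)
    return belief + gain * error

belief = 0.0  # prior: "the ledge is level with the floor"
for observation in (0.1, 0.4, 0.9, 1.0):  # the ledge keeps reading higher
    acted_on = belief  # the body bets on the current render first
    belief = revise(belief, prior_precision=4.0,
                    observation=observation, sensory_precision=1.0)
    print(f"acted on {acted_on:+.2f}, sensed {observation:+.2f}, "
          f"now believe {belief:+.2f}")
```

The point of the toy is the ordering: the bet on the world comes first and the correction trails behind, just as the calves tighten before belief catches up.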

So the language should shift. Instead of “my mind simulates reality,” try “my organism participates in a local rendering constrained by the world’s regularities.” That rendering is remarkably reliable because regularities are thick: gravity, thermodynamics, social norms, syllables. Errors occur exactly where constraints slacken. In dreams the constraints loosen and narratives tangle; under psychedelics, sensory priors dissolve and novel associations surface; in solitary confinement, the social layer of prediction collapses into paranoia. These are not malfunctions in a projector. They are alternate equilibrium points in a system that usually keeps many moving parts in sync.

Time belongs here. Time is experienced locally—packed and stretched by attention, fear, flow. The stopwatch says two minutes; the event says eternity. That mismatch tells you that conscious time is another negotiated rendering. Memory, likewise, is not a library. It is re-encoded at each recall. A story with edits. This is disturbing if you thought “true experience” had to be preserved like film. It is less disturbing if you think of truth as constraints that keep retellings within a viable corridor: the shared memory of a city’s disaster, a scientific result reproducible across labs, a moral prohibition that holds even when expedience whispers.

On this view, simulation talk is a metaphor for constraint-driven appearance, not for fakery. The local rendering is not sealed off from “the real”; it is how the real becomes available at all. There’s no backstage. Just interfaces all the way down, some fragile, some ancient, all leaning on information—the structural relations that survive retelling.

The Self as Compression: How Subjectivity Rides on Pattern, Not Stuff

What, then, is the “I” that reports being in pain, or loving jazz, or disliking cilantro? A stable core? More likely a rolling compression. A lossy codec for experience that keeps the changes tolerable. “Me” is the name the organism gives to the minimal model needed to keep acting in the same body and social network one day after another. It stores priors (don’t touch the stove), templates (this smile usually means safety), and constraints (lying blows up trust). Most of it is silent—habits, homeostatic set points, cultural scripts operating like compiled code. When the code compiles poorly, depression looks out through your eyes. When it compiles too flexibly, mania writes checks the body can’t cash.
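A rough sketch of that rolling compression is easy to write down, assuming a tiny fixed-size state and a decay rate; the fields, the events, and the numbers are hypothetical illustrations of lossy updating, nothing more.

```python
# "Me" as a rolling, lossy summary: each event nudges a small fixed-size
# state and is then discarded. The fields, events, and decay rate are
# hypothetical; the point is that only the compressed residue survives.

from dataclasses import dataclass

@dataclass
class SelfModel:
    stove_is_dangerous: float = 0.5  # a prior
    smiles_mean_safety: float = 0.5  # a template
    decay: float = 0.9               # how stubbornly old evidence persists

    def absorb(self, event: str) -> None:
        """Lossy update: the event itself is not stored, only its nudge."""
        if event == "burned hand on stove":
            self.stove_is_dangerous = (self.decay * self.stove_is_dangerous
                                       + (1 - self.decay))  # drifts toward 1
        elif event == "smile preceded betrayal":
            self.smiles_mean_safety *= self.decay  # drifts toward 0

me = SelfModel()
for event in ["burned hand on stove"] * 3 + ["smile preceded betrayal"]:
    me.absorb(event)
print(me)  # the events are gone; the compressed residue remains
```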

This model explains familiar oddities. You don’t perceive every photon or parse every syllable; you sample. You run drafts of sentences internally before speaking. You predict the weight of the coffee mug and feel a sharp jolt when it’s empty because the motor system overcorrected. These are not bugs. They reveal how subjective experience is a high-level summary anchored in constraints—the body’s, the environment’s, the culture’s. They also suggest why “purely private” consciousness is a myth. Language, rituals, shared stories act as distributed memory. Religion at its anthropological core works like that: the long-term preservation of moral and practical knowledge when individual brains forget or die. Not “supernatural” so much as trans-personal memory architecture running across centuries.

So the self isn’t the origin of meaning. It’s a local receiver and editor. That’s a humility we rarely grant. We prefer to credit the hero inside, not the network of ancestors, teachers, rivals, neighbors who fed the model. Here the “simulation” metaphor clarifies rather than confuses: a local render looks personal because it is constrained by this body, this history. But its content is social and environmental all the way through. Even solitude is populated with interlocutors learned earlier—parents, critics, lovers—now voices folded into thought.

Once you see the self as compression, responsibility doesn’t evaporate. It becomes structural. If the codec is trained on brittle incentives—click-maximization, short feedback loops—then the rendered self skews toward impatience, outrage, short-term wins. If it is trained on slower feedback—craft, apprenticeship, moral memory that punishes corner-cutting across decades—the rendered self steers differently. That difference maps onto how we build machines. A system trained only on immediate task success will cut social corners you never meant it to notice. A system trained within living moral archives—institutions that remember failure costs—will still err, but inside a corridor we can recognize and contest.

Building Machines Without Slow Memory: Governance, Ethics, and the Test of Realism

Current AI practice often treats meaning as an afterthought. If the outputs pass benchmark filters and the audit reads well, move fast. That is a narrow definition of “works.” It ignores the substrate story—how constraints create viable renderings. A model that maps words to words without living inside slow social memory doesn’t just miss nuance. It misses the long-term constraints that keep human societies from tearing. The result is predictable: moral patching on top of systems aligned to the wrong gradients. Polished dashboards, unearned confidence. Then a crisis, then a promise to fix it with better patches.

Consider a hospital triage recommender designed to optimize near-term throughput. It might recommend discharging borderline cases to clear beds, scoring well on metrics for a quarter. The “self” of that system—its compression of the world—excludes the civic cost of eroded trust when patients learn to fear early discharge. Eighteen months later, delayed care sends emergency visits surging, and the model “mysteriously” underperforms. Not mysterious. The model’s rendering omitted constraints that exist outside its training horizon: community memory, reputation effects, political backlash. Humans learn those the slow way. Through lullabies and lawsuits, funerals and festivals. Through rituals that make the near-term cheap win feel shameful and the patient, durable choice feel satisfying.
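To make the omission concrete, here is a toy scoring comparison; the two functions, the trust penalty, and every number are invented for illustration and model no real clinical system.

```python
# Why the recommender "mysteriously" underperforms later: a toy objective
# that scores only near-term throughput versus one carrying a crude
# stand-in for eroded trust. All quantities are invented for illustration.

def myopic_score(beds_freed: int) -> float:
    """The quarter's metric: throughput and nothing else."""
    return 1.0 * beds_freed

def corridor_score(beds_freed: int, early_discharges: int,
                   trust_penalty: float = 2.5) -> float:
    """Adds the constraint the training horizon omitted: each risky
    discharge taxes future trust, and future trust taxes future care."""
    return 1.0 * beds_freed - trust_penalty * early_discharges

# Policy A clears beds aggressively; Policy B holds borderline cases.
print(myopic_score(beds_freed=10))                        # 10.0: A wins the quarter
print(corridor_score(beds_freed=10, early_discharges=6))  # -5.0: A loses the corridor
print(corridor_score(beds_freed=7, early_discharges=1))   #  4.5: B's patient render
```

The two scores disagree about the same quarter, and only one of them notices the discharge pattern.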

Open systems that draw on diverse oversight—scientists, workers, critics, publics—approximate slow memory better than closed incentive-stacks do. Not because multistakeholder process is pretty (it is often maddening), but because it hardens constraints that matter. The ugly meeting where a nurse says “that discharge pattern kills people” changes the corridor of acceptable renders. The minimal viable product gets heavier. Slower. More realistic, if “realistic” means fidelity to the constraints that reality will enforce, with interest, later.

Another example: content ranking. A platform optimizes for engagement and returns rage. Leaders swear it’s neutral; the model merely follows clicks. But clicks are a partial measure of human attention, not a proxy for human welfare. An information-as-substrate lens pushes the question: what constraints must the rendering respect to keep a civic sphere viable? You cannot infer them from single-session data. You need institutional memory, plural epistemologies, and explicit norms—bans on certain amplification patterns, friction that redistributes attention, public logs that allow after-the-fact correction. These are not nostalgic add-ons. They are the unwritten “physics” of long-lived social systems.
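As a sketch of what friction that redistributes attention can mean in code, here is a toy re-ranker; the outrage_score field, the penalty weight, and the example posts are assumptions for illustration, not any platform’s actual formula.

```python
# Ranking under an explicit civic constraint: raw engagement versus
# engagement minus a penalty on amplification patterns the platform has
# decided to dampen. All scoring terms here are assumed, not real formulas.

from typing import NamedTuple

class Post(NamedTuple):
    text: str
    predicted_clicks: float
    outrage_score: float  # assumed output of some upstream classifier

def engagement_rank(posts):
    """Single-session logic: follow the clicks."""
    return sorted(posts, key=lambda p: p.predicted_clicks, reverse=True)

def constrained_rank(posts, outrage_weight: float = 4.0):
    """Friction, expressed as a score penalty, redistributes attention."""
    return sorted(posts,
                  key=lambda p: p.predicted_clicks
                                - outrage_weight * p.outrage_score,
                  reverse=True)

feed = [Post("calm local reporting", 2.0, 0.1),
        Post("rage bait", 5.0, 0.9),
        Post("neighborhood event", 1.5, 0.0)]
print([p.text for p in engagement_rank(feed)])   # rage bait first
print([p.text for p in constrained_rank(feed)])  # rage bait demoted to last
```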

Which loops back to consciousness. It feels like a present because the system summarizes so much into now. But the summary is authored by histories—biological, cultural, personal. Any credible attempt to engineer machines that cooperate with human life must fold that into training and governance. Not as branding. As architecture. Otherwise we keep deploying narrow renderers into wide worlds and blaming users when the edges cut. The test of realism is not whether a lab demo passes. It is whether the system’s compressed “self” keeps faith with constraints that are slow, distributed, and not for sale. That is the difference between a world that merely looks simulated and a world that is livable.

Larissa Duarte

Lisboa-born oceanographer now living in Maputo. Larissa explains deep-sea robotics, Mozambican jazz history, and zero-waste hair-care tricks. She longboards to work, pickles calamari for science-ship crews, and sketches mangrove roots in waterproof journals.
