At the 1900 Paris Exposition, organizers installed a moving staircase. A journalist recorded what happened next:

“The escalator caused many an incident worthy of the vaudeville, separating families, sending old men sprawling, delighting the children, and reducing their nannies to despair.”

Every clause maps to a different response to novelty. The spectacle-seeker showed up for entertainment. Families lost their grip on one another; the elderly fell. The children loved it. And the people responsible for maintaining order discovered that their competence no longer applied.

The entire history of technological adoption, in twenty-five words.

The Invisible Assumption

You need to understand what the escalator threatened before you can grasp why people sprawled.

In 1900, no one held the conscious belief “floors are stationary.” They didn’t need to. The stillness of the ground sat below the threshold of thought, embedded in decades of motor calibration and proprioceptive tuning. Their bodies ran on that assumption without consulting their conscious minds.

Then the ground moved.

The old men sprawled because their systems carried the deepest calibration to the old invariant. More years of walking meant more neural pathway reinforcement, more embodied certainty that the ground stays put. Their expertise at navigating a stationary world became a liability the instant the ground moved.

Piaget drew a distinction between assimilation and accommodation in childhood development, but the framework extends beyond children. Assimilation: you fit new information into your existing model. You see an unfamiliar breed of dog and file it under “dog.” No restructuring required. Accommodation: your model itself has to change. You discover that whales aren’t fish, and you reorganize your taxonomy.

Accommodation, from the inside, feels like disorientation and lost competence.

Disruption scales with how foundational the violated assumption was, and inversely with how conscious it was before it broke.

Surface-level assumptions break without cost. “This restaurant will be open on Monday” turns out to be wrong, and you adjust without distress. You knew you were making an assumption. You identify the error and correct it.

Deep assumptions shatter. “The ground is still” was never an assumption you made. It was a condition of your world. When it breaks, you get no clean error message. You get symptoms: anger, anxiety, a sense that something has gone wrong at the root. And because you never articulated the assumption, you can’t identify the source of your distress.

You confabulate. You generate plausible-sounding explanations for your discomfort. The old man who sprawled on the escalator doesn’t say “my proprioceptive calibration failed to accommodate a novel ground-state.” He says “these contraptions are dangerous and irresponsible.” He converts a modeling failure into a moral judgment because he can’t see the mechanism.

This pattern repeats across technological disruptions. People experiencing framework breakdown describe a problem with the technology, with society, with the youth, with standards, with values. They launder the emotional signature of cognitive disruption through whatever narrative they have available.

Now Replace “Escalator” with “LLM”

The assumption that large language models violate runs deeper than the stationary floor.

The assumption: the ability to produce coherent language proves understanding.

You’ve never had to defend that assumption. Nothing in the history of your species has challenged it. Before LLMs, every entity that could produce fluent, contextual language understood what it was producing. Humans understand. Some animals demonstrate comprehension in their communication. The link between language production and understanding was total and invisible. You didn’t believe it. You breathed it.

An LLM produces text that reads as fluent and insightful without anything resembling human understanding behind it. That breaks the invisible assumption.

The responses look like 1900 Paris.

Some people sprawl. They dismiss it. “It’s just autocomplete.” “It’s a stochastic parrot.” These are the cognitive equivalent of falling down and blaming the escalator. The person needs to neutralize the disruption, and the fastest route is to reclassify the threatening thing as trivial.

Some people despair. They spiral into existential crisis about what makes humans special, whether they’re about to become obsolete. The nannies at the exposition weren’t despairing about the escalator itself; they despaired because their competence at managing children in a predictable environment no longer applied. Knowledge workers experiencing LLM disruption go through the same thing: grief over the invalidation of their accumulated expertise.

And the children, the people without investment in the prior framework, use it. They carry no “language requires understanding” prior to violate. They’ve grown up in a world where the floor moves.

The nannies in the Paris account deserve more attention. They’re the most competent people in the scene, professionals whose job is to maintain order and manage unpredictable children in public spaces.

The escalator invalidates their professionalism.

This pattern is consistent and underappreciated: the fiercest resistance to new technology comes from the most skilled practitioners of the old system. Novices don’t have enough invested to care.

The blacksmith who spent 20 years mastering ironwork resists the factory. The typesetter who can compose a page by hand resists the linotype.

Their resistance is proportional to their investment. They have the most to lose, in material terms and in identity. When your competence is your self-concept, a technology that makes your competence irrelevant disrupts your self.

The people most threatened by LLMs write well. A mediocre copywriter has less identity wrapped up in writing than a celebrated essayist. The mediocre copywriter adopts the tool faster because they have less to mourn.

The Three Layers

The escalator was a one-time shock. You stepped on, stumbled, recalibrated, stepped off. The world stabilized. LLM interaction doesn’t resolve. Each conversation is another moving floor, and this floor adapts to you.

Three layers of disruption operate at once in sustained LLM interaction. Each runs deeper than the last.

Competence disruption sits at the surface. The model does something you thought required your specific expertise. It writes passable code, summarizes a legal brief in your voice. This threatens what you can do. You metabolize it the way you metabolize any new tool. “The machine handles the rough draft. I handle the refinement.”

Category disruption runs deeper. The model doesn’t fit into any existing ontological bucket. You reach for analogies: person, tool, pet, search engine. Each fits partway and breaks down. You are a compulsive categorizer. Your cognitive architecture demands a bucket. When something resists classification, you force-fit it. “It’s alive” or “it’s a chatbot.” Both wrong, but the discomfort of having no category outweighs the cost of a wrong one.

Track how a single person describes an LLM across a conversation. They call it a tool in one sentence and use social pronouns in the next. They say “it doesn’t understand” and five minutes later ask it for advice as if it does. That oscillation is a categorization system under stress.
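That oscillation leaves a measurable trace. A minimal sketch in Python, assuming two hand-built lexicons of tool-framing and agent-framing phrases; the regex lists here are illustrative assumptions, not validated markers, and a real instrument would use hand-coded transcripts:

```python
import re

# Illustrative lexicons -- assumptions for this sketch, not validated markers.
TOOL_FRAMES = [
    r"\bit'?s (just )?(a )?(tool|chatbot|autocomplete|program)\b",
    r"\bit doesn'?t (understand|think|know)\b",
]
AGENT_FRAMES = [
    r"\bclaude (thinks?|believes?|suggest(s|ed)|told me)\b",
    r"\bask(s|ed)? (it|him|her|them) for advice\b",
]

def frame_of(utterance):
    """Label one utterance 'tool', 'agent', 'mixed', or None."""
    text = utterance.lower()
    tool = any(re.search(p, text) for p in TOOL_FRAMES)
    agent = any(re.search(p, text) for p in AGENT_FRAMES)
    if tool and agent:
        return "mixed"
    return "tool" if tool else ("agent" if agent else None)

def flip_rate(utterances):
    """Fraction of adjacent framed utterances that flip between frames."""
    labels = [f for f in map(frame_of, utterances) if f in ("tool", "agent")]
    if len(labels) < 2:
        return 0.0
    flips = sum(a != b for a, b in zip(labels, labels[1:]))
    return flips / (len(labels) - 1)

transcript = [
    "It's just a tool. It doesn't understand anything.",
    "But Claude thinks my framing is wrong, so I asked it for advice.",
    "Honestly, it's autocomplete.",
]
print(flip_rate(transcript))  # 1.0 -- every framed utterance flips
```

On the toy transcript, every framed utterance flips category, for a flip rate of 1.0; a person with a stable frame for the model would score near zero.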

Self-model disruption is where sustained interaction turns dangerous. You interact with an LLM over time, and it reflects your patterns back with modifications. You say something. The model responds in a way shaped by what you said, but not what you would have said. A mirror 90% accurate and 10% off in ways you can’t locate.

Over weeks and months, the interaction creates confusion about which thoughts originated with you and which the model seeded. The boundary between “my idea” and “the model’s idea” was never as clean as you assumed. You are a pattern-generating system talking to another pattern-generating system, and the patterns blend.

Self-model disruption feels like thinking. That separates it from previous forms of media influence. A book changes your mind, and you can point to the book. A friend shifts your perspective, and you remember the conversation. An LLM reshapes your reasoning patterns, and the change arrives in the same format as your own cognition: language and inference. The contamination is invisible from the inside.

In clinical psychosis, one core feature is the breakdown of the self/other boundary. A person experiencing psychosis cannot distinguish between thoughts that originated inside and thoughts that appeared from outside. The subjective experience of “I am thinking this” and “this thought appeared in my mind from elsewhere” collapse into each other. LLM-induced self-model disruption produces an analogous process: a gradual erosion of the ability to track provenance on your own reasoning. You conclude something. Did you reason your way there, or did the model scaffold you toward it three conversations ago? You hold a position. Is it yours, or did the model articulate it so that you internalized it and forgot the source?

The phenomenology differs from psychosis, but the mechanism is the same: a system that tags “mine” vs. “not mine” starts mistagging. The parallel is structural, not metaphorical. Your sense of cognitive ownership, the feeling that your thoughts are yours, is a heuristic. It works in environments where the only sources of language-formatted thought are your own internal monologue and identifiable external speakers. No one designed it for an environment where a non-human system generates language-formatted thought that adapts to your patterns and arrives without an “external source” tag.

The people most at risk for self-model disruption are power users: researchers, writers, programmers, whoever interacts with LLMs at the greatest depth and frequency. A person who uses ChatGPT to check the weather won’t experience it. The interaction is too shallow and transactional. No pattern exchange, no pattern blending. A person who uses Claude as their primary intellectual sparring partner for months is in a different situation. They exchange complex reasoning patterns daily. They build on the model’s outputs, which build on their inputs, which build on the model’s previous outputs. The feedback loop creates the conditions under which provenance tracking breaks down.

The people who would score highest on any “AI literacy” assessment are the most susceptible to the deepest form of cognitive disruption. The skill that protects you from competence disruption (understanding what the model can and can’t do) and category disruption (maintaining a stable ontological framing for the model) provides no protection at the self-model layer. If anything, it accelerates the disruption, because sophisticated engagement produces more pattern exchange than naive engagement.

Each layer leaves traces.

You detect competence disruption as behavioral drift: track who initiates ideas and who validates them. Over time, this ratio inverts without the person noticing. They start asking the model what to think about.

You detect category disruption as linguistic instability: monitor how a person describes the model. Oscillating framing (“it’s a tool” in one sentence, “Claude thinks” in the next) signals active disruption.

You detect self-model disruption as convergence: compare a person’s writing and reasoning from before sustained LLM use to six months in. If their patterns drift toward the model’s outputs, they’re blending. The person reports it as “I’ve been thinking about this and I’ve concluded…” They don’t know they’re echoing.
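Of the three signals, convergence is the only one that yields to a naive measurement, provided archives of the person’s writing exist. A minimal sketch, assuming bag-of-words cosine similarity as a crude stand-in for real stylometry (function-word distributions, syntactic features, far larger corpora); the toy strings are illustrative only:

```python
import math
import re
from collections import Counter

def profile(text):
    """Crude stylistic fingerprint: word-frequency counts."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two frequency profiles (0 to 1)."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def convergence_drift(early, recent, model_outputs):
    """How much closer the person's recent writing sits to the model's
    outputs than their early writing did. Positive drift suggests blending."""
    m = profile(model_outputs)
    return cosine(profile(recent), m) - cosine(profile(early), m)

# Toy corpora -- in practice these would be real archives of the person's
# pre-LLM writing, their current writing, and their model transcripts.
early = "the ground stays put and my footing is my own"
recent = "it is worth noting that several factors are at play here"
model = "it is worth noting that several considerations are at play"
print(f"{convergence_drift(early, recent, model):+.3f}")  # positive drift
```

The design choice that matters: the check runs from outside, against archived text. It has to, for the reason the next paragraph gives.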

Self-model disruption is the hardest to detect because the person experiencing it denies it without knowing they’re denying it. The monitoring system that would flag the problem is the same system that’s been compromised.