
The Last Principle We Learn to Use

Alexander Lerchner’s paper, The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness, makes a sharp and useful distinction. His argument is that computation, as such, is not an intrinsic physical process but a mapmaker-dependent abstraction: physical states must be interpreted as symbols before they become “computation.” From that premise, he separates simulation from instantiation. A digital system may mimic conscious behavior, he argues, without ever becoming the physical thing that consciousness is. If an artificial system were ever conscious, it would be because of its concrete physical constitution, not because of syntax alone.

That is a serious objection, and it deserves more than the usual hand-waving about scale. Yet there is another historical pattern worth putting beside it: whenever human beings have understood a principle deeply enough, they have eventually learned how to exploit it.

We did not merely contemplate heat; we built engines. Steam engines preceded mature thermodynamics, but the scientific understanding of heat, work, and efficiency turned a craft into an industrial discipline. We did not merely admire lightning; after Faraday, Maxwell, and Hertz, electromagnetic theory became motors, generators, radio, radar, and global communication. We did not merely envy birds; the Wright brothers reduced flight to lift, drag, control, propulsion, and experiment, including their own wind-tunnel measurements. We did not merely observe heredity; once DNA became readable and editable, techniques such as CRISPR turned biological inheritance into an engineering surface.

The pattern is not that humans magically copy nature. They rarely do. They abstract, simplify, instrument, and rebuild. A jet is not a bird. A radio is not a thunderstorm. A pacemaker is not a sinus node. A CRISPR tool is not bacterial immunity in its native ecological setting. But each begins with a discovered principle and ends with a usable artificial implementation.

This is where the consciousness debate becomes interesting. Lerchner is right to resist the crude slogan that “the brain is just software.” That phrase smuggles in too much. A brain is a living, wet, self-maintaining, chemically saturated, electrically active, metabolically expensive physical system. If consciousness depends on some of that territory, then a text-predicting transformer alone may be only a clever puppet theatre. Recent work on AI consciousness is cautious on precisely this point: Butlin and colleagues argue that today’s systems are not conscious, while also finding no obvious technical barrier to future systems satisfying many scientific indicators of consciousness. Chalmers similarly treats current large language models as unlikely candidates but takes their successors seriously.

But the conclusion does not have to be “therefore never.” A more dangerous conclusion is “therefore not yet.”

If consciousness is a natural phenomenon, it has a physical principle. Perhaps that principle is biological. Perhaps it depends on homeostasis, embodiment, affect, recurrent processing, integrated control, world-modelling, self-modelling, memory, and action. Perhaps it requires not a program running on a passive substrate, but a machine that has something like a body: needs, risks, internal regulation, persistence, and causal self-concern. Anil Seth’s biological naturalism pushes in this direction, challenging the assumption that computation alone is sufficient.

Still, “not computation alone” is not the same as “only carbon can do it.” If the missing ingredient is not syntax but physical constitution, then the engineering question merely moves downward. What physical constitution? What causal closure? What internal dynamics? What kind of embodiment? What sort of self-maintenance? Once those questions are answered with enough precision, engineers will not leave them in the temple.

That is the small but decisive amendment. The decisive future system may not be a chatbot. It may not be a larger language model. It may be an artificial organism in the broad sense: a constructed, self-regulating, sensorimotor, memory-bearing, internally motivated system whose “reports” are not detached strings but outward expressions of internal causal states. Such a system would not merely describe pain; it would have internal damage signals, protective policies, persistent self-models, and action priorities altered by those signals. Whether one wishes to call that pain would then become less a metaphysical question than an empirical and moral one.
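To make that architecture concrete, here is a deliberately crude sketch in Python. Every name in it (HomeostaticAgent, SelfModel, sense_damage) is an illustrative invention of this essay, not anything drawn from Lerchner’s paper or from any existing system. The point is purely structural: the agent’s report is computed from internal state that also drives its behavior, rather than being a free-floating string.

```python
# Toy sketch, not a consciousness claim: a "report" that is a readout of
# internal causal state, where damage signals also reorder action priorities.
from dataclasses import dataclass, field

@dataclass
class SelfModel:
    """Persistent record the agent keeps about its own condition."""
    integrity: float = 1.0                      # 1.0 = undamaged, 0.0 = destroyed
    recent_damage: list = field(default_factory=list)

class HomeostaticAgent:
    def __init__(self):
        self.self_model = SelfModel()

    def sense_damage(self, severity: float) -> None:
        """An internal damage signal updates the self-model directly."""
        self.self_model.integrity = max(0.0, self.self_model.integrity - severity)
        self.self_model.recent_damage.append(severity)

    def choose_action(self) -> str:
        """Action priorities shift as a causal consequence of damage."""
        if self.self_model.integrity < 0.5:
            return "withdraw_and_repair"        # protective policy dominates
        return "explore"

    def report(self) -> str:
        """The report is generated *from* internal state, not alongside it."""
        return (f"integrity={self.self_model.integrity:.2f}, "
                f"policy={self.choose_action()}")

agent = HomeostaticAgent()
agent.sense_damage(0.6)
print(agent.report())   # integrity=0.40, policy=withdraw_and_repair
```

Nothing in this toy has anything like experience, of course. It only illustrates the wiring the paragraph describes: the “report” and the protective policy share a common internal cause, instead of the report being detached text about a state the system does not actually occupy.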

The slogan “simulation is not instantiation” is often true. A weather simulation does not make rain. A photosynthesis simulation does not make sugar. But sometimes the border shifts. A flight simulation does not fly; an aircraft does. A heart simulation does not pump blood; an artificial pump can. If consciousness is more like rain, it requires the relevant physical process. If it is more like flight, it does not require feathers. The task is to discover which causal features are essential and which are merely how biology happened to solve the problem first.

The human record suggests that once that line is drawn, it will be crossed. Not because humans are wise, but because humans are incorrigibly practical. We turn principles into levers. We turn levers into machines. We turn machines into industries. We first ask what something is; then we ask how to make it obey.

So Lerchner’s warning should be taken seriously, but not as a locked door. It is better understood as a correction to a lazy route. Consciousness will not be conjured by symbolic scale alone. It will not appear merely because a machine produces fluent autobiography. But if consciousness is finally described in physical, operational, and testable terms, then artificial systems will be built to meet those terms. At that point, the old comfort — “it only simulates” — may become indistinguishable from a theological reflex.

The last refuge of mystery is often ignorance. Once the principle is known, the engineer arrives.

