[Illustration: a cleric with the Anthropic logo standing inside a fragile house of playing cards, smiling nervously.]

Imported Memories, Exported Halo

For a while, Anthropic occupied the most flattering role in the AI morality play. OpenAI was the brash empire. Meta was the accelerant. xAI was the boy throwing fireworks into the orchestra pit. And Anthropic, by contrast, was cast as the thoughtful order of monks who had wandered into Silicon Valley carrying safety papers and moral seriousness in embossed folders.

It was an enviable position. Not merely profitable, but sanctifying.

Then, as so often happens in technology, the incense met an electrical fault.

In late March and early April 2026, Anthropic’s Claude Code was first embarrassed by a source leak traced to an npm packaging mistake, a sourcemap published alongside version 2.1.88, and then hit by reports of a critical vulnerability disclosed days later. Public reporting described the leak as exposing roughly half a million lines of internal code across about 1,900 files. Anthropic said the incident stemmed from human error and did not expose customer data or model weights. That distinction matters, but not nearly as much as the company would have liked.
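
The mechanics are worth a moment, because they are almost insultingly mundane. A spec v3 sourcemap that embeds sourcesContent carries the original source files verbatim inside a JSON blob, so "reading the leak" takes about a dozen lines of Node. A minimal sketch, assuming a map with embedded sources (the filename cli.js.map and the recovered/ output directory are hypothetical, not the actual artifact names):

```ts
// Recover original source files from a published sourcemap.
// Spec v3 maps pair each entry in `sources` with the full original
// file text in `sourcesContent`, when the build embeds it.
import { readFileSync, writeFileSync, mkdirSync } from "node:fs";
import { dirname, join } from "node:path";

const map = JSON.parse(readFileSync("cli.js.map", "utf8"));

(map.sources as string[]).forEach((source, i) => {
  const content: string | undefined = map.sourcesContent?.[i];
  if (content == null) return; // source not embedded; nothing to recover
  // Strip leading "../" segments so everything lands under ./recovered.
  const out = join("recovered", source.replace(/^(\.\.\/)+/, ""));
  mkdirSync(dirname(out), { recursive: true });
  writeFileSync(out, content);
  console.log(`recovered ${out}`);
});
```

No exploit, no cleverness. Just a JSON file doing exactly what its format promises.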

This is the trouble with halos. They look magnificent under studio lighting, but they are built from narrative, not steel. A normal software company can blunder and people sigh, mutter something about complexity, and move on. But when a company has been marketed, and frequently self-marketed, as the morally serious adult in the room, a routine engineering failure becomes a miniature theological crisis. It is no longer just a packaging mistake. It is a fallen sermon.

That is what made the whole episode so amusing. Anthropic had been benefiting for months from the soft, flattering belief that virtue signals and operational excellence belong to the same species. They do not. A company can publish thoughtful essays on AI safety, discuss constitutional principles in a calm baritone, and still manage to ship the equivalent of its trousers in the public release bundle.

And the timing was almost too neat to be accidental. Anthropic had also been making it easier for users to move to Claude by importing memory and prior context from other assistants. Anthropic’s own support documentation explains how users can bring over useful context rather than start from scratch, and the feature was plainly designed to lower the switching cost for people leaving rival systems, especially ChatGPT. This was the migration ritual: export your machine-mediated soul from one cathedral, carry it reverently across the street, and rehouse it in a cleaner sanctuary.

You could feel the mood around it. There was, of course, no serious census of how many people publicly announced that they were “soooo happy” to have transferred their AI lives from ChatGPT to Claude. The internet does not maintain a Ministry of Self-Satisfied Migration Metrics. But the atmosphere was unmistakable. Claude was not merely a product in those discussions. It was a lifestyle accessory for people who wanted the moral prestige of having chosen the allegedly more civilized machine.

That is why the joke writes itself. The very week you are watching people perform tiny liturgies about leaving the vulgar bazaar of ChatGPT for the candlelit abbey of Claude, the abbey accidentally leaves a side door open and misplaces part of the library.

Grok makes the contrast even sharper. xAI never sold Grok as the chapel of responsible computing. Grok was presented more like a caffeinated goblin with Wi-Fi, a product designed to be provocative, mischievous, and a little disreputable by intent. That makes Grok easier to mock in the ordinary way, but oddly harder to expose as hypocritical. When the chaos goblin behaves chaotically, nobody faints. When the self-appointed adults set fire to the curtains, everyone remembers the lectures. Grok cannot really lose its halo, because it never bothered to put one on.

That does not make Grok admirable. It makes Anthropic funnier.

What happened next was the modern internet in miniature. Security researchers and vendors quickly documented how the Claude Code leak was being weaponized as bait. Trend Micro and Zscaler both reported that threat actors moved fast, using fake repositories and malicious archives masquerading as leaked Claude Code to distribute malware. In other words, the initial mistake did not remain a local embarrassment. It became a trust lure. Within roughly a day, operational sloppiness had already been converted into a small criminal industry.

That is the part the AI priesthood never likes to discuss. In the real world, trust is not an abstract ethical glow. It is a supply-chain surface. It is a release pipeline. It is package hygiene, artifact control, signing discipline, permissions, review, rollback, and the thousand boring rituals that prevent your clever product from becoming a malware-themed punchline. The history of computing is not a history of evil defeated by virtue. It is a history of systems surviving because someone remembered to care about the boring parts.
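
To make the boring parts concrete: a release pipeline can simply refuse to publish a tarball that contains anything it should not. A hedged sketch of such a prepublish guard, assuming the JSON report format of modern npm's pack command (the blocklist and the script itself are illustrative, not a description of any vendor's actual pipeline):

```ts
// Fail the publish if the tarball would include sourcemaps or raw source.
// `npm pack --dry-run --json` lists the tarball contents without
// creating or publishing anything.
import { execFileSync } from "node:child_process";

const out = execFileSync("npm", ["pack", "--dry-run", "--json"], {
  encoding: "utf8",
});
const [report] = JSON.parse(out);

const paths: string[] = report.files.map((f: { path: string }) => f.path);
// Illustrative blocklist: maps and raw source do not belong in a
// compiled CLI's release artifact.
const leaks = paths.filter((p) => p.endsWith(".map") || p.startsWith("src/"));

if (leaks.length > 0) {
  console.error("refusing to publish; tarball would include:", leaks);
  process.exit(1);
}
console.log(`tarball clean: ${paths.length} files checked`);
```

Hook something like that into a prepublishOnly script and the trousers stay out of the bundle.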

And this is why all the tribal moralizing around chatbot brands is so tiresome. I do not need an AI company to feel holier than its rivals. I need it to behave like an operationally adult software supplier. I need fewer sermons and fewer liturgical gestures about constitutional values, and more evidence that someone in the build chain knows what should and should not be published to npm.

Anthropic’s stumble does not prove that it is uniquely bad. Quite the opposite. It proves that it is ordinary. It is a real software company, subject to the same humiliations as every other ambitious software company: haste, human error, attack surface, narrative inflation, and the old temptation to market trust before it has been fully earned. The Verge, Bloomberg, and others described exactly that kind of awkward correction after the leak surfaced and spread.

So no, the lesson is not that ChatGPT is therefore pure, or Grok is therefore vindicated, or Claude is therefore finished. The lesson is much less theatrical and much more useful. Stop asking which AI company is the “good” one in the fairy-tale sense. Start asking which one appears capable of being boringly competent when nobody is applauding.

That question lacks glamour. It will not earn anyone many likes on LinkedIn. It does not let people preen about having migrated their chat history to the ethically superior machine. But it has one enormous advantage over the halo-based model of technology criticism.

It is anchored in reality.

And reality, unlike branding, has a nasty habit of reading the sourcemap.

