Anthropic has unveiled Project Glasswing with the solemnity of a company announcing not merely a product, but a mission from a higher plane. The premise is wonderfully grand. Its unreleased model, Claude Mythos Preview, is said to be so capable at finding severe software vulnerabilities that the public cannot be trusted with it yet. Instead, selected partners will use it to hunt flaws in critical systems before the barbarians discover them. Anthropic says the model has already found thousands of serious vulnerabilities, including bugs in major operating systems, browsers, and other foundational software, and that it does not plan to make Mythos generally available for now.
One must admire the theatre. A secret model called Mythos, guarded by a select priesthood, deployed under a project named Glasswing. It sounds less like a security program than a lost chapter from a Gnostic startup gospel. Somewhere between the branding workshop and the press release, somebody clearly decided that ordinary software engineering would no longer do. We are in the age of technological eschatology now. Not “we built a useful tool.” No, this is “we alone may descend into the underworld, wrestle the zero-days from darkness, and emerge bearing salvation for civilization.”
There is only one small comic difficulty. This revelation arrives just after Anthropic made headlines for rather earthly reasons. In late March, unpublished material about Mythos itself was reportedly exposed through a publicly accessible content-management setup, which Anthropic attributed to human error in CMS configuration. Days later, Anthropic also confirmed that a Claude Code release had leaked internal source code because of what it again described as human error, a packaging issue rather than a breach.
And that is where the story ceases to be merely funny and becomes art.
It is hard to resist the image. A company appears at the city gates declaring that it possesses a dragon so powerful that only its own order can ride it responsibly. The dragon will protect the kingdom from fire. The announcement is made while smoke is still rising from the stable, because a stable hand left the side door open and another one accidentally published the dragon manual.
This does not make Project Glasswing worthless. On the contrary, the underlying idea is serious and plausible. If frontier models really can identify exploitable weaknesses faster than human teams, then defensive access for maintainers of critical infrastructure may be one of the few sane moves available. Anthropic’s official position is that Glasswing partners will use Mythos Preview to find and fix vulnerabilities in foundational systems, and that Anthropic will publish what it can about lessons learned within 90 days. That is sensible as far as it goes.
But sensible ideas are often wrapped in absurd corporate self-mythology, and this one arrives gift-wrapped.
The first absurdity is moral posture. Anthropic has long cultivated the image of the careful lab, the conscientious adult in a room full of excitable accelerationists. It prefers the language of restraint, evaluations, safeguards, responsibility, measured release. Yet every institution that speaks too often about its own virtue risks becoming a parody of virtue. Once you start presenting yourself not as one company among many, but as the custodian of civilization’s conscience, the public is entitled to notice when your own cupboard door swings open and spills the silverware into the street.
The second absurdity is the contradiction between scale and humility. Glasswing asks us to believe two things at once. First, that Mythos is already powerful enough to discover severe flaws across vast swathes of the modern software stack. Second, that the organization controlling this capability is still prone to the same old human sloppiness that afflicts every ordinary firm with a CMS, a release pipeline, and a deadline. Both claims can be true. In fact, that is precisely the problem.
The mythology of AI safety often flatters itself with the fantasy that sufficiently advanced intelligence will rise above mundane failure modes. But systems do not become holy because their benchmark numbers improve. A model scoring better on coding tasks and reasoning exams does not abolish the intern with the wrong permission setting, the engineer who ships the wrong artifact, the manager who confuses security culture with security branding. Anthropic itself frames Mythos as a model so strong that even non-experts can use it to find and exploit sophisticated vulnerabilities. That should not inspire mystical awe. It should inspire a very old-fashioned respect for operational discipline.
And there lies the real satire of the moment. The AI industry increasingly wants to be judged on cosmic terms. Its leaders speak of alignment, extinction risk, digital superintelligence, the future of civilization. Yet the world still breaks in the old ways. Misconfigured systems. Leaked assets. Packaging mistakes. Debug artifacts. Public-by-default storage. The apocalypse, when you inspect it closely, often turns out to be a DevOps problem.
Project Glasswing may do some real good. There is no reason to dismiss the possibility that elite models will become powerful defensive tools, especially for open source maintainers and overstretched security teams. Reuters reports Anthropic is extending the effort beyond launch partners to dozens of other organizations responsible for critical software infrastructure, backed by substantial usage credits and donations. That is constructive.
Still, one should resist the temptation to mistake access control for sainthood. A closed beta is not a moral philosophy. A restricted model is not a sacrament. And a company that has just suffered embarrassing exposure events should perhaps avoid sounding like the final guardian of software civilization descending from the mountain with tablets of law engraved in CUDA.
If anything, the Mythos episode teaches the opposite lesson. The future of software security will not be saved by rhetoric about responsibility alone, nor by baroque names that sound as though they were generated by an AI trained exclusively on fantasy novels, biotech startups, and prestige defense contractors. It will be saved, if at all, by the least glamorous virtues in technical life: careful defaults, hard boundaries, limited trust, reproducible processes, boring reviews, suspicion of magic, and institutional modesty.
Those virtues do not photograph well. They do not headline nicely. They do not sound like Glasswing or Mythos.
But unlike myth, they occasionally work.
