LLM
-
When Markets React to Perceived Disruption: Claude Code Security and the Cybersecurity Sector
Anthropic’s Claude Code Security announcement triggered a sharp drop in cybersecurity stocks, highlighting investor fears about AI-driven disruption.
-
Distillation attacks on large language models: motives, actors and defences
The essay examines unauthorised distillation of AI models, profiling actors such as DeepSeek, weighing motives like cost reduction and performance cloning, and reviewing the defences deployed by major AI companies.
-
The Opaque Prompt Pipeline: Why “AI-Powered” Tools Make You Leak on Autopilot
Many “AI-powered” apps hide the model, costs, and retention—turning your text into an untracked data export. That’s not paranoia.
-
From PDE Guarantees to LLM Inference: What BEACONS Gets Right About Reliability
BEACONS shows how bounded-error, composable neural solvers can be certified—hinting at LLM inference pipelines with checkable reliability.
-
The Assistant Axis: when “helpful” is a place, not a promise
Anthropic finds a measurable “Assistant Axis” in LLMs. Capping drift along it reduces harmful persona shifts and jailbreaks, and raises questions about human identity.