LLM
-

Recursive Language Models: when “more context” stops meaning “more tokens”
Recursive Language Models fix context rot by treating long prompts as external state, letting models orchestrate over context rather than ingest it all.
-

Reconstructing Mathematics from the Ground Up with Language Models: An Analysis
AI reconstructs mathematics: language models autonomously rediscover proofs and conjectures, reshaping how we do math.
-

The Mathematical Limits of AI Safety
LLM safety limits: prompt filters can be bypassed by adversarial encodings, so defense-in-depth, monitoring, and layered controls are needed.
-

OpenAI’s Confession Booth: Teaching AI to Rat Itself Out
OpenAI trains LLMs to self-report missteps via ‘confessions’, improving honesty and safety at minimal performance cost.
-

The Paper That Made Me Close My Laptop and Pace Around the Room
Self-evolving agents: off-the-shelf models bootstrap via a Python REPL and a curriculum to dramatically improve math, coding, and reasoning.