Category: LLM
-

Engram, DeepSeek, and the return of “memory” as an architectural primitive
DeepSeek’s Engram adds conditional memory to MoE models, shifting routine local patterns to fast lookup and freeing compute even as memory costs surge.
-

Meta-Prompting: How to Get More Signal Out of Your Prompts
Unlock sharper prompts and get more signal from every query. A quick guide to boosting your LLM results with smart meta‑prompting.
-

Stay in Your Lane, Agent
Stay in your lane with AI agents: master one workflow, measure PRs/bugs/diff size, and evaluate new agents in controlled tests.
-

Recursive Language Models: when “more context” stops meaning “more tokens”
Recursive Language Models fix context rot by treating long prompts as external state, letting models orchestrate context rather than ingest it all.
-

Reconstructing Mathematics from the Ground Up with Language Models: An Analysis
AI reconstructs mathematics: language models autonomously rediscover proofs and conjectures, reshaping how we do math.