The Neural Junk-Food Hypothesis
LLM ‘brain rot’: training on short, high-engagement junk posts degrades reasoning, safety, and behavior; data quality wins.
-

“Personality” in a Machine: What Do We Mean?
LLM coding personalities: recognize archetypes, biases, and failure modes; calibrate prompts and reviews to harness their strengths.
-

Small Models, Big Brains: Why Less Might Be the Future of AI Reasoning
Tiny Recursive Model (TRM) outperforms huge LLMs on reasoning tasks: efficient, sustainable AI for mobile and resource-limited devices.
-

Two Protocols, Two Futures: OpenAI’s ACP vs. Anthropic’s MCP
Protocol wars: OpenAI’s commerce-focused ACP vs. Anthropic’s integration-first MCP; will AI become a buyer, a connector, or a hybrid?
-

An LLM Made of Redstone Bricks: What CraftGPT Really Teaches Us
CraftGPT: an LLM built from Minecraft redstone; a physical, mind-bending demo of latency, quantization, and engineering tradeoffs.