LLM
-

Bridging Context Engineering in AI with Requirements Engineering
How AI-driven context engineering can transform requirements: dynamic, multimodal scenario generation and proactive need inference.
-

Transformers Are Injective: Why Your LLM Could Remember Everything (But Doesn’t)
Transformers may be injective and invertible: hidden activations can reconstruct their inputs, a big gain for interpretability but a major privacy risk.
-

LLM-Guided Image Editing: Embracing Mistakes for Smarter Photo Edits
Apple’s MGIE uses multimodal LLM guidance for image editing and learns from imperfect edits, making photo retouching conversational, faster, and more creative.
-

The Neural Junk-Food Hypothesis
LLM ‘brain rot’: training on junky, short, high-engagement posts erodes reasoning, safety, and behavior; data quality wins.
-

“Personality” in a Machine: What Do We Mean?
LLM coding personalities: recognize the archetypes, biases, and failure modes, then calibrate prompts and reviews to harness their strengths.