Tag: Transformers

  • Transformers Are Injective: Why Your LLM Could Remember Everything (But Doesn’t)
    The authors of “Language Models are Injective and Hence Invertible”, https://arxiv.org/abs/2510.15511, address a foundational question about transformer-based language models: do they lose information in the mapping from an input text sequence to their internal hidden activations? In more formal terms: is the model’s mapping injective (distinct inputs → distinct representations), and therefore potentially invertible (one…
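    To make the injectivity question concrete, here is a minimal toy sketch (not the paper's method or a real transformer): an injective map on strings can be inverted by a lookup table, while a lossy map collides on distinct inputs and destroys information.

    ```python
    def injective_map(text: str) -> tuple:
        # Toy "representation": the tuple of character codes.
        # Distinct strings always yield distinct tuples, so this map is injective.
        return tuple(ord(c) for c in text)

    def lossy_map(text: str) -> int:
        # Toy lossy "representation": the sum of character codes.
        # Distinct strings can produce the same sum, so information is lost.
        return sum(ord(c) for c in text)

    inputs = ["ab", "ba", "abc"]

    # Injective: all representations are distinct, so a lookup table inverts the map.
    reps = {injective_map(t): t for t in inputs}
    assert len(reps) == len(inputs)
    assert reps[injective_map("ab")] == "ab"

    # Lossy: "ab" and "ba" collide, so the original text cannot be recovered.
    assert lossy_map("ab") == lossy_map("ba")
    ```

    The paper's claim is the analogous (and much stronger) statement for transformer hidden states: distinct input sequences yield distinct activations, so the mapping is invertible in principle.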