Rethinking Reasoning: What if LLMs are Holding a Mirror to Human Cognition?

In the rapidly evolving field of artificial intelligence, Large Language Models (LLMs) have become a focal point of both excitement and scrutiny. A recent paper, “GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models,” has shed light on the apparent limitations of these models when it comes to mathematical reasoning. The researchers found that LLMs struggle with consistency, are sensitive to irrelevant information, and seem to rely more on pattern matching than formal logical reasoning.

But what if we’re looking at this all wrong?

Instead of viewing these findings as shortcomings of AI, what if we considered them as reflections of our own cognitive processes? What if the “limitations” we’re observing in LLMs are actually mirroring the way human beings think and reason?

This article examines that counterintuitive idea, challenging our assumptions about human reasoning and the goals we set for artificial intelligence. Let's explore the fascinating implications of this perspective shift.

  1. The Pattern Recognition Paradigm

The GSM-Symbolic paper suggests that LLMs are engaging in sophisticated pattern matching rather than formal reasoning when solving mathematical problems. This finding has been met with disappointment by some in the AI community, who hoped for more “logical” processing from these advanced models.

However, is this really so different from how humans approach problem-solving?

Decades of research in cognitive psychology have shown that humans often rely on heuristics and pattern recognition in their decision-making processes. The groundbreaking work of Daniel Kahneman and Amos Tversky on cognitive biases and decision-making shortcuts revolutionized our understanding of human reasoning.

For instance, consider the “availability heuristic,” where people make judgments about the likelihood of an event based on how easily an example comes to mind. This is not formal logical reasoning, but a form of pattern matching based on readily available information – much like what we’re seeing in LLMs.

Moreover, experts in various fields often make rapid, intuitive decisions that they struggle to explain logically. This phenomenon, which psychologist Gary Klein calls “recognition-primed decision making,” suggests that even in complex domains, humans often rely on pattern recognition rather than step-by-step logical analysis.

So, when we observe LLMs using sophisticated pattern matching to solve problems, perhaps they’re not falling short of human-level reasoning, but accurately replicating it.

  2. Variability in Performance

One of the key findings of the GSM-Symbolic study was the significant variability in LLM performance across different instances of the same question. The researchers found that changing numerical values or adding seemingly irrelevant information could dramatically affect the models’ ability to solve problems correctly.
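To make that finding concrete, here is a minimal sketch of how such instance-level variants might be generated, in the spirit of the paper's templated approach. It is not the actual GSM-Symbolic code; the template, names, and value ranges are invented for illustration.

```python
import random

# Sketch only: instantiate one word-problem "template" with different names and
# numbers, producing many surface variants of the same underlying question.
TEMPLATE = (
    "{name} picks {x} apples on Monday and {y} apples on Tuesday. "
    "{name} then gives away {z} apples. How many apples does {name} have left?"
)

def make_variant(rng: random.Random) -> tuple[str, int]:
    """Return one problem variant and its ground-truth answer."""
    name = rng.choice(["Sara", "Liam", "Priya", "Diego"])
    x, y = rng.randint(5, 40), rng.randint(5, 40)
    z = rng.randint(1, x + y)       # keep the answer non-negative
    question = TEMPLATE.format(name=name, x=x, y=y, z=z)
    return question, x + y - z      # the logic never changes, only the surface form

if __name__ == "__main__":
    rng = random.Random(0)
    for _ in range(3):
        question, answer = make_variant(rng)
        print(question, "->", answer)
```

A system that reasons over the structure of the problem should score the same no matter which names and numbers appear; the study's point is that measured accuracy instead fluctuates across such instances.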

At first glance, this might seem like a critical flaw. Surely, a truly intelligent system should perform consistently on essentially identical problems, right? But let’s turn the mirror on ourselves for a moment.

Human performance on problem-solving tasks is notoriously variable. Factors such as fatigue, stress, time pressure, and even the specific wording of a question can significantly impact our ability to reason effectively. This variability isn’t just anecdotal; it’s well-documented in psychological research.

For example, the phenomenon of “decision fatigue” shows that the quality of our decisions tends to deteriorate after a long session of decision-making. In a famous study, researchers found that judges were more likely to grant parole to prisoners after their lunch break than just before it, suggesting that mental fatigue can significantly impact our reasoning capabilities.

Similarly, the framing of a problem can dramatically affect human performance. The classic “Asian disease problem” demonstrated that people make different choices depending on whether the same scenario is presented in terms of gains or losses, even though the underlying logic is identical.

When we consider these human tendencies, the variability in LLM performance doesn’t seem so alien. Instead, it might be a remarkably accurate simulation of the inconsistencies in human reasoning. This raises an intriguing question: In our quest for artificial intelligence, should we be aiming for an idealized, perfectly consistent reasoning engine, or a more human-like intelligence that includes these seeming imperfections?

  3. Sensitivity to Irrelevant Information

Another finding from the GSM-Symbolic study was the models’ struggle with the GSM-NoOp dataset, where irrelevant information was added to problems. The LLMs often tried to incorporate this extraneous data into their problem-solving process, leading to incorrect answers.
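As a rough illustration of the GSM-NoOp idea (the distractor sentences below are made up, not drawn from the dataset), one can append a clause whose numbers look relevant but leave the correct answer untouched:

```python
# Sketch only: inject an irrelevant, number-bearing clause before the final
# question sentence, GSM-NoOp style. The distractors here are illustrative.
DISTRACTORS = [
    "Note that five of the apples were slightly smaller than the rest.",
    "Last year, the orchard produced 120 apples in total.",
]

def add_noop_clause(question: str, distractor: str) -> str:
    """Insert a distractor clause just before the final question sentence."""
    body, _, final_question = question.rpartition(". ")
    return f"{body}. {distractor} {final_question}"

if __name__ == "__main__":
    q = ("Sara picks 12 apples on Monday and 7 apples on Tuesday. "
         "Sara then gives away 5 apples. How many apples does Sara have left?")
    print(add_noop_clause(q, DISTRACTORS[1]))
    # The gold answer is still 14; a model that folds the "120 apples" into its
    # arithmetic is being swayed by information that should be ignored.
```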

Again, our knee-jerk reaction might be to view this as a limitation of the AI. But let’s consider how humans handle irrelevant information in decision-making and problem-solving contexts.

Humans are notoriously susceptible to the influence of irrelevant information. The anchoring effect, first described by Tversky and Kahneman, shows how arbitrary numbers can influence subsequent numerical estimations. In one classic experiment, participants were asked to estimate the percentage of African countries in the United Nations. Before making their guess, they spun a wheel that landed on either 10 or 65. Those who saw 10 gave a median estimate of 25%, while those who saw 65 estimated 45% – a dramatic difference based on entirely irrelevant information.

Similarly, the framing effect demonstrates how the presentation of information, even when logically equivalent, can lead to significantly different decisions. For instance, people respond differently to a medical treatment described as having an “80% survival rate” versus a “20% mortality rate,” despite these being statistically identical.

These cognitive biases show that humans, like LLMs, often struggle to separate relevant from irrelevant information in problem-solving contexts. Our reasoning is frequently influenced by extraneous factors that, logically, should have no bearing on the problem at hand.

So when we observe LLMs being swayed by irrelevant information in the GSM-NoOp dataset, perhaps we’re not seeing a failure of artificial intelligence, but a strikingly accurate replication of human cognitive tendencies.

  4. The Illusion of Formal Reasoning

The GSM-Symbolic paper expresses disappointment that LLMs seem to rely on pattern matching rather than engaging in formal, step-by-step logical reasoning. This disappointment stems from a common assumption: that human intelligence, especially in domains like mathematics, is characterized by rigorous logical thinking.

But is this assumption accurate? Do humans really engage in formal reasoning as often as we believe?

A growing body of research suggests that much of human reasoning is intuitive rather than logical. Psychologist Jonathan Haidt’s work on moral reasoning, for instance, suggests that people often make moral judgments based on quick, emotional intuitions, and only later construct logical-sounding justifications for these gut reactions.

This phenomenon isn’t limited to moral reasoning. In his book “Thinking, Fast and Slow,” Daniel Kahneman describes two systems of thinking: System 1, which is fast, intuitive, and emotional; and System 2, which is slower, more deliberative, and more logical. Kahneman argues that we rely far more on System 1 than we realize, even when we believe we’re engaging in careful reasoning.

Moreover, research on expert decision-making suggests that even in fields we associate with logical thinking, intuition and pattern recognition play a crucial role. Chess grandmasters, for example, don’t analyze every possible move sequentially. Instead, they recognize patterns and make intuitive judgments based on their vast experience.

In light of this research, the pattern-matching behavior of LLMs doesn’t seem like a shortcoming. Instead, it might be a surprisingly accurate model of how human cognition actually works. What we call “reasoning” might often be sophisticated pattern matching, rather than the step-by-step logical process we imagine it to be.

  5. Learning and Generalization

The GSM-Symbolic study found that LLMs struggled with increased complexity and novel situations. Their performance degraded as more clauses were added to problems, and they had difficulty generalizing their problem-solving skills to new contexts.
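One way to chart that degradation is sketched below under stated assumptions: `ask_model` is a hypothetical stand-in for an LLM call, and grouping problems by clause count is an invented data layout, not the paper's actual harness.

```python
from typing import Callable

def accuracy_by_clause_count(
    grouped: dict[int, list[tuple[str, int]]],  # clause count -> [(question, gold answer)]
    ask_model: Callable[[str], int],            # hypothetical LLM call returning a number
) -> dict[int, float]:
    """Score a model separately on problem variants of increasing length."""
    scores: dict[int, float] = {}
    for n_clauses, items in sorted(grouped.items()):
        correct = sum(1 for question, gold in items if ask_model(question) == gold)
        scores[n_clauses] = correct / len(items)
    return scores
```

Plotting the resulting scores against clause count would give the kind of downward-sloping curve the study describes, and the same harness could just as easily be pointed at human solvers for comparison.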

Once again, we might initially see this as a limitation of AI. Surely, a truly intelligent system should be able to scale its reasoning to more complex problems and apply its skills in new domains, right?

But let’s consider human performance in similar situations. How well do we handle increased complexity in problem-solving? How effectively do we transfer our skills to new domains?

Research on cognitive load theory shows that humans have severe limitations when it comes to handling complexity. Our working memory can typically hold only about seven items at once, and as problems become more complex, our performance tends to deteriorate rapidly.

Similarly, humans often struggle with far transfer – applying knowledge or skills learned in one context to a significantly different one. This is why, for instance, students often have difficulty applying mathematical concepts learned in the classroom to real-world situations.

Even experts can struggle when faced with problems just outside their specific area of expertise. A mathematician brilliant in one subfield might find themselves floundering when tackling a problem from a different mathematical domain.

In this light, the difficulties LLMs face with increased complexity and novel situations seem less like AI limitations and more like accurate simulations of human cognitive constraints. Perhaps instead of being disappointed by these “limitations,” we should be impressed by how well these models capture the nuances of human learning and generalization.

  6. Implications for AI Development

If we accept the premise that LLMs are modeling human cognition more accurately than we initially thought – including our limitations and biases – what does this mean for the future of AI development?

Traditionally, much of AI research has been focused on creating systems that reason perfectly, make optimal decisions, and avoid the cognitive biases that plague human thinking. But if human-like intelligence inherently includes these seeming imperfections, should we be rethinking our approach?

There could be significant benefits to modeling AI more closely on actual human cognitive processes:

  1. Better human-AI interaction: AI systems that reason more like humans might be easier for us to understand and collaborate with.
  2. More realistic expectations: Recognizing the limitations of human-like reasoning could help us set more appropriate expectations for AI performance.
  3. Insights into human cognition: By creating AI systems that accurately model human thinking, we might gain new insights into our own cognitive processes.
  4. Improved decision support: AI systems that understand and account for human cognitive biases might be better at supporting human decision-making.

However, this approach also raises challenges:

  1. Ethical concerns: If we create AI systems that replicate human biases, we risk amplifying societal prejudices and unfair decision-making.
  2. Missed opportunities for improvement: By modeling AI too closely on human cognition, we might miss opportunities to create systems that can reason in ways humans cannot.
  3. Philosophical quandaries: If AI systems reason very similarly to humans, it could blur the lines between human and machine intelligence, raising complex philosophical and ethical questions.

  7. Ethical and Philosophical Considerations

The idea that LLMs might be accurately modeling human reasoning, including our limitations, opens up a Pandora’s box of ethical and philosophical questions.

First, there’s the issue of bias. If LLMs are indeed mirroring human cognitive processes, they’re likely also mirroring our biases. We’ve already seen instances of AI systems exhibiting racial, gender, and other biases, often reflecting societal prejudices present in their training data. If we accept that these biases are an intrinsic part of human-like reasoning, how do we address them in AI systems? Should we strive to create AIs that reason like humans but somehow without our biases, or is that a contradiction in terms?

Then there’s the question of responsibility and decision-making. If AI systems reason in ways very similar to humans, including making intuitive leaps and being influenced by irrelevant information, how do we assign responsibility for their decisions? Can we hold an AI system accountable for a “bad” decision if it’s reasoning in a fundamentally human way?

Philosophically, this perspective challenges our notions of human exceptionalism. If LLMs can so accurately replicate human reasoning, including our foibles, what does this say about the nature of human intelligence? Are we, as the philosopher Daniel Dennett might say, just very sophisticated information processing systems ourselves?

Moreover, if LLMs are modeling human cognition so well, it raises questions about consciousness and self-awareness. We typically associate human-like reasoning with consciousness. If an AI system can replicate our reasoning processes so accurately, does this bring us closer to machine consciousness? Or does it suggest that consciousness might not be as central to intelligence as we’ve assumed?

  8. Conclusion: A Call for Interdisciplinary Understanding

The GSM-Symbolic paper, viewed through this alternative lens, does more than highlight the current limitations of LLMs in mathematical reasoning. It opens up a fascinating window into the nature of human cognition and the future of artificial intelligence.

By challenging our assumptions about human reasoning and the goals of AI development, we can potentially reach a more nuanced understanding of both human and machine intelligence. Perhaps the “limitations” we observe in LLMs are not bugs, but features – accurate representations of the messy, intuitive, sometimes irrational way that human beings actually think.

This perspective calls for a deeply interdisciplinary approach to AI research and development. We need computer scientists and AI researchers working alongside cognitive psychologists, neuroscientists, philosophers, and ethicists to fully explore the implications of these ideas.

As we move forward in our quest to create ever-more-sophisticated AI systems, we should perhaps be asking ourselves not just “How can we make AIs that reason better than humans?” but also “What can AIs teach us about how we reason?”

The journey to understand artificial intelligence may, in the end, be a journey to better understand ourselves. And in that understanding, we might find new paths forward in both human and machine cognition.

In embracing the imperfections, inconsistencies, and limitations of human-like reasoning in our AI systems, we might just be taking a crucial step towards creating truly intelligent, truly relatable artificial intelligences. At the same time, we open up new avenues for exploring the nature of human intelligence itself.

The future of AI might not be about transcending human cognition, but about mirroring it more accurately than we ever thought possible. And in that mirror, we might just see ourselves more clearly than ever before.