The “WOW” Factor: Human Cognition in the Age of Algorithmic Summarisation

Today, I want to briefly explore a quieter, more psychological dimension of the rise of AI in judicial systems: how the human mind responds to algorithms that can now summarise vast amounts of data faster, and in many cases more coherently, than we can.

I’d like to focus not just on how AI is changing judicial work, but on something more subtle: how it is changing the way we feel and think when we interact with algorithmic outputs. In particular, I want to talk about the “wow factor”, a psychological response we increasingly see when humans are presented with highly fluent, complex, or fast outputs generated by AI systems.

Framing the Issue

In judicial and arbitral contexts, AI tools are increasingly part of the toolkit used to summarise case law, extract patterns, and surface relevant precedents, more quickly and comprehensively than any human could. Judges and legal professionals are now being presented with beautifully compressed, coherent narratives drawn from terabytes of information. The problem is not whether these tools are accurate; it is that they are impressive. And being impressed affects cognition.

The “Wow Factor”: What Is It?

Psychologically, the “wow factor” refers to a cognitive-emotional bias that arises when we are confronted with outputs that exceed our own capacities, especially when they come from non-human agents such as AI.

There’s growing research showing that humans have a default tendency to attribute authority and even epistemic superiority to machines that present information with high fluency and speed — especially if that information appears comprehensive.

Logg, Minson, and Moore (2019) coined the term “algorithm appreciation” after showing that people are more likely to trust algorithmic forecasts than human ones, even when both are demonstrably flawed. The effect is stronger when the algorithm draws on data complexity or summarisation techniques that humans perceive as inaccessible.

Cognitive Science Behind the Reaction

Why does this happen?
A few mechanisms are at play:

1. Processing Fluency Bias

Research in cognitive psychology shows that the more fluently information is presented — in language, layout, or timing — the more likely we are to judge it as true. AI summaries are often more fluently structured than human ones, which increases their perceived validity.

2. Cognitive Offloading

Studies in neuroscience and behavioral psychology (e.g., Risen & Critcher, 2011) show that when we encounter a highly competent system, we tend to offload mental effort. Our critical engagement diminishes because we assume the machine has “done the thinking” for us.

3. Anthropomorphic Trust

AI that uses natural language — especially in tools like GPT or summarisation engines — triggers social cognition. The brain reacts as if engaging with an intelligent peer, not a tool. That activates areas of the brain tied to trust and empathy, further reducing skepticism.

Implications for Judicial Decision-Making

What does all this mean for judicial processes?

A. Epistemic Deference to Algorithms

Judges may begin to treat AI outputs as more than summaries — as inherently correct interpretations of complex cases. This is dangerous, especially when AI models are trained on biased data, or when summarisation hides nuance critical to legal fairness.

B. Erosion of Cognitive Confidence

Consistently being outperformed in data analysis and recall may lead judges to doubt their own judgment, especially under time pressure. This can result in what some psychologists call “decision paralysis” — the inability or unwillingness to override algorithmic outputs even when human intuition signals caution.

C. The Illusion of Comprehension

Fluent, compressed AI outputs may give legal professionals the false sense that they understand a case more deeply than they do. This is a well-documented bias: fluency enhances perceived comprehension, even when real understanding is shallow (Alter & Oppenheimer, 2009).

D. Accountability Without Insight

When human actors are ultimately responsible for decisions made using AI tools they cannot fully interrogate — due to the speed, opacity, or scope of the summarisation — they may experience ethical strain. There is a growing psychological burden in being held accountable for a decision shaped by something you can’t fully understand.

Toward Ethical Cognitive Design

If AI is to become a sustainable part of the legal decision-making process, we must acknowledge not just its technical power, but its psychological influence. The goal is not to eliminate AI, but to design systems that work with — rather than exploit — human cognition.

Some suggestions:

1. Design for Skepticism

Present AI outputs in a way that invites scrutiny, not blind acceptance. Highlight areas of uncertainty, data limitations, and what was left out.

2. Cognitive Literacy Training

Judges and legal staff should be trained not just in how AI tools work, but in how these tools affect their thinking, including known biases such as processing fluency and automation trust.

3. Transparent Summarisation Techniques

AI-generated summaries should show their work: what data were prioritised, what methods were used, and why certain exclusions were made. This enables contestability (a rough sketch of what such a record might look like follows this list).

4. Human-AI Collaboration Models

Rather than replacing human judgment, AI systems should be designed to augment reasoning, much like a junior clerk or researcher — always subordinate to critical reflection.
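
To make suggestions 1 and 3 concrete, here is a minimal sketch, in Python, of what a contestable summary record might look like. It assumes a hypothetical summarisation pipeline; every name in it (SummaryRecord, confidence, excluded_sources, render_for_review) is illustrative rather than the interface of any real tool. The point is simply that uncertainty, provenance, and omissions travel with the summary and are shown before the summary text itself.

    # A minimal illustrative sketch; all field names are assumptions, not a real tool's API.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SourcePassage:
        citation: str     # e.g. the case or document the summary drew on
        weight: float     # how heavily this source shaped the summary

    @dataclass
    class SummaryRecord:
        text: str                                                    # the generated summary itself
        confidence: float                                            # the system's own uncertainty estimate, 0.0 to 1.0
        sources: List[SourcePassage] = field(default_factory=list)   # what was prioritised
        excluded_sources: List[str] = field(default_factory=list)    # what was left out, open to challenge
        known_limitations: List[str] = field(default_factory=list)   # data gaps the reader should weigh

    def render_for_review(record: SummaryRecord) -> str:
        """Put uncertainty and omissions before the summary, inviting scrutiny rather than acceptance."""
        header = [
            f"CONFIDENCE: {record.confidence:.0%} (verify before relying on this summary)",
            f"SOURCES PRIORITISED: {len(record.sources)}",
            "SOURCES EXCLUDED: " + (", ".join(record.excluded_sources) or "none declared"),
            "KNOWN LIMITATIONS: " + ("; ".join(record.known_limitations) or "none declared"),
            "",
        ]
        return "\n".join(header) + record.text

A judge-facing interface built on something like this could refuse to display the summary text until the header has been shown, which is one way of designing for skepticism rather than against it.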
