
Why Humans Struggle with Decisions Made by AI

We Seek Accountability
When a human makes a bad call, we can challenge it, appeal it, or at least talk to someone. But when an AI system makes a mistake, like calling a ball out that was clearly in, there’s no face to confront. As seen at Wimbledon, when the automated system failed during a key point, players and fans were left in limbo. Who do you argue with when the voice saying “fault” is pre-recorded?
Emotion Matters More Than We Admit
Sport is emotional. So is life. When a decision affects us, such as a job rejection, a denied loan, or a bad pit call for an F1 driver, we want to believe someone felt the weight of that moment. AI doesn’t feel. It doesn’t understand pressure, context, or nuance. That disconnect creates a sense of alienation, even when the decision is technically correct.
We Don’t Trust What We Can’t Understand
AI systems are often black boxes. At Wimbledon, players like Emma Raducanu and Jack Draper voiced frustration not just at the calls, but at the opacity of the system. Why was that ball out? Why didn’t the system catch it? Without transparency, trust erodes. The data might prove the system is more accurate than humans, but trust is about more than facts.
We’re Wired for Human Connection
There’s a reason people miss line judges. Their presence adds drama, personality, and a sense of shared experience. When a machine replaces that with a flat, robotic “out,” it feels sterile. In high-stakes moments, we want to connect.
Errors Feel Colder When They’re Made by Machines
Ironically, we’re more forgiving of human error than machine error. A line judge blinking at the wrong moment is frustrating, but understandable. A machine making the same mistake feels like a betrayal of its promise. At Wimbledon, when the system was accidentally turned off mid-match, the fallout was intense. Not because humans never err, but because we expect machines not to.
AI may be efficient, but it’s not empathetic. And when decisions are made without a human touch, we feel the absence. With Agentic AI it’s easy to fall into the same trap. Agents, and the companies behind them, can hide behind a “computer says no” response. It might look good on a spreadsheet, but over the long term it will diminish the customer experience.