
Why We Judge Algorithmic Mistakes More Harshly Than Human Mistakes

DAVID GREENE, HOST:

This feels like a time - doesn't it? - when we might want to consider how much computers direct our lives. Computer programs, or algorithms, decide so much for us already. An algorithm is basically this set of instructions for a computer, kind of like the one in your thermostat. If the temperature falls below, say, 68, turn on the heat. And they can be more complex. If you make a wrong turn, your GPS redirects you. If you're sick, an algorithm might tell you whether to have surgery. Well, one very important decision remains, and that's whether to trust these algorithms. NPR's social science correspondent Shankar Vedantam spoke to my colleague Steve Inskeep.
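
To make the thermostat example concrete, here is a minimal sketch of that rule in Python. The 68-degree threshold comes from the example above; the function and variable names are illustrative, not taken from any real thermostat.

    # A thermostat runs a simple algorithm: fixed instructions
    # applied to an input, in this case the current temperature.
    SETPOINT_F = 68  # the threshold from the example above

    def thermostat_step(current_temp_f):
        """Decide what the heater should do for one temperature reading."""
        if current_temp_f < SETPOINT_F:
            return "heat on"
        return "heat off"

    print(thermostat_step(65))  # heat on
    print(thermostat_step(72))  # heat off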

STEVE INSKEEP, HOST:

Hi, Shankar.

SHANKAR VEDANTAM, BYLINE: Hi, Steve.

INSKEEP: This is a decision that makes a lot of people uncomfortable.

VEDANTAM: It does, Steve. And that's not just anecdotal. There's actually evidence for that. Let's say you have a human making a tough decision alongside an algorithm. Researchers recently asked whether we judge algorithmic mistakes more harshly than we judge human mistakes. Berkeley Dietvorst at the Wharton School at the University of Pennsylvania has just finished a study. He has volunteers judge mistakes made by humans and mistakes made by algorithms, and he finds something very interesting.

BERKELEY DIETVORST: People failed to use the algorithm after they'd seen the algorithm perform and make mistakes, even though they typically saw the algorithm outperform the human. In our studies, the algorithms outperform people by 25 to 90 percent.

INSKEEP: Wait a minute. We're talking about this computer program that might give you directions where to drive the car or even whether to cut with the scalpel here. And the algorithm statistically does a better job. But we believe in them less?

VEDANTAM: That's exactly right. And there are lots of ways to try and understand this, Steve. We have fears about the machines - you know, from "Frankenstein" to movies like "The Matrix" - machines we imagine can become animated and conscious and take over our world. And we're uncomfortable about that. But it's also more than that. Dietvorst thinks that our bias in favor of humans - he calls it algorithm aversion - stems from the fact that volunteers believe that humans can learn from their mistakes and correct them. So we feel that algorithms are stuck in their ways. The ironic thing is that we fail to see how doing the same thing the same way can also be a virtue.

DIETVORST: Algorithms are consistent. If you give algorithms the same information to make a forecast, they'll produce the same forecast every single time, where humans are not reliable. Humans will make different forecasts given the same information.
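
Dietvorst's consistency point can be shown in a few lines of Python: a deterministic model returns the same forecast for the same inputs every time, while a simulated human forecaster varies from judgment to judgment. The weights and the noise level here are invented for illustration, not drawn from the study.

    import random

    def model_forecast(features):
        # Deterministic rule: same information in, same forecast out.
        weights = [0.5, 0.3, 0.2]  # illustrative weights, not from the study
        return sum(w * x for w, x in zip(weights, features))

    def human_forecast(features, rng):
        # The same rule, plus the run-to-run variability people show.
        return model_forecast(features) + rng.gauss(0, 5)

    features = [80, 90, 70]
    rng = random.Random(0)
    print([model_forecast(features) for _ in range(3)])                 # three identical forecasts
    print([round(human_forecast(features, rng), 1) for _ in range(3)])  # three different ones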

INSKEEP: So the very thing that we fear about computers is actually, you're saying, the thing that makes them good?

VEDANTAM: That's right, Steve. And increasingly, computers are going to run more and more of our lives. So imagine a day when it's not just the GPS in your car, but the car itself is a self-driving car. The first time one of these cars causes a major crash, we're going to say, what genius thought that we could trust an algorithm to drive a car? The problem is we're making a false comparison between a world where machines make mistakes and a world where mistakes never happen. The real comparison ought to be between how often self-driving cars make mistakes and how often human drivers make mistakes.
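
The comparison Vedantam describes is a question of which base rates you line up. With crash rates that are invented purely to show the arithmetic:

    # Hypothetical crash rates per million miles - invented numbers,
    # used only to show which comparison is the meaningful one.
    human_crashes_per_million_miles = 4.0
    self_driving_crashes_per_million_miles = 1.5

    # The false comparison: the algorithm versus a world with no mistakes.
    print(self_driving_crashes_per_million_miles > 0.0)  # True, and beside the point

    # The real comparison: the algorithm versus human drivers.
    print(self_driving_crashes_per_million_miles
          < human_crashes_per_million_miles)  # True under these assumed rates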

INSKEEP: I wonder if there's another factor that comes into play here because you said one other reason that we trust humans is we presume that humans can learn. Aren't we entering a world in which the algorithms themselves will be learning more and more?

VEDANTAM: In fact, I think we've already entered that world, Steve. Algorithms not only learn, but they learn very well. So one of the things that computers are very good at doing is quickly learning how much weight to attach to the different components of a decision. So if you have students applying to law school, the computer might be able to say, here's how much attention you pay to their grades in college, to their LSAT scores, to their recommendation letters. Now, of course, many of us are comfortable with algorithms making decisions for other people. We just don't like it when algorithms make decisions about us.
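
One standard way a program learns "how much attention" to pay to each component of a decision is to fit weights against past outcomes, for example with least squares. The applicant numbers below are invented purely to show the mechanics; they are not from Dietvorst's research or any real admissions system.

    import numpy as np

    # Hypothetical past applicants: [college GPA, LSAT (scaled 0-1), letter score]
    X = np.array([
        [3.9, 0.95, 4.0],
        [3.2, 0.60, 3.0],
        [3.7, 0.85, 2.5],
        [2.9, 0.55, 4.5],
    ])
    # Invented measure of how well each applicant later did.
    y = np.array([0.90, 0.40, 0.70, 0.35])

    # Fit one weight per component; the weights are the model's answer to
    # "how much attention should grades, test scores, and letters each get?"
    weights, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(dict(zip(["gpa", "lsat", "letters"], weights.round(3))))

    # Score a new applicant with the learned weights.
    new_applicant = np.array([3.5, 0.75, 3.5])
    print(round(float(new_applicant @ weights), 3))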

INSKEEP: (Laughter) Don't make a decision about my law school application.

VEDANTAM: Precisely. Because we think we're unique and special, how can it possibly be that an algorithm can judge us?

INSKEEP: Shankar, thanks very much.

VEDANTAM: Thank you, Steve.

INSKEEP: That's NPR's Shankar Vedantam. You can follow him on Twitter @hiddenbrain, follow this program @MorningEdition and @NPRInskeep. Transcript provided by NPR, Copyright NPR.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.

Shankar Vedantam is the host and creator of Hidden Brain. The Hidden Brain podcast receives more than three million downloads per week. The Hidden Brain radio show is distributed by NPR and featured on nearly 400 public radio stations around the United States.