
Microsoft President Brad Smith Discusses The Ethics Of Artificial Intelligence

AUDIE CORNISH, HOST:

Just because we can use it, should we? That's the question more and more people are asking about face recognition technology, software that's already in our phones and our social media feeds and many security systems. San Francisco leaders have voted to ban the police from using it, and even some in the tech industry say there should be limits.

BRAD SMITH: It's the kind of technology that can do a lot of good for a lot of people, but it can be misused. It can be abused. It can be used in ways that lead to discrimination and bias. It can really make possible the kind of mass surveillance that in the past has always just been the realm of science fiction.

CORNISH: That's Brad Smith. He's president of Microsoft. The company makes facial recognition software and other tools that raise some ethical dilemmas, and he's our guest on this week's All Tech Considered.

(SOUNDBITE OF ULRICH SCHNAUSS' "NOTHING HAPPENS IN JUNE")

CORNISH: Brad Smith told me Microsoft has turned down deals to sell its facial recognition software to buyers it didn't trust to use it properly.

SMITH: What a facial recognition service enables somebody to do is look at your image on a camera and run it against a database and identify who you are. One deal we turned down was with a law enforcement division in California. They wanted to put this in all their police cars so that if you were pulled over for any purpose, they would look for a match. And if you matched a suspect's face, you would be taken downtown for further questioning.

Our concern is that the technology, regardless of where you buy it from, is just not ready for that. It will lead to bias. It will lead to false identifications. People will be put in the back of police cars when they've done nothing wrong. It seemed to us to be a step too far.

CORNISH: Now, on the other hand, Microsoft has provided the technology to a prison. Microsoft researchers have worked with a Chinese military-run university on AI research that some people fear could be used against China's minorities. So to you, where is the line when it comes to helping governments gain and use these kinds of tools?

SMITH: We need to look constantly at the technology that's at issue and how it's going to be used. We were comfortable providing facial recognition within a prison because the sample size of people involved is actually relatively small. We could be confident that people would be identified correctly. And there was a societally beneficial goal, namely to actually help keep prisoners safe by knowing who was where and at what time.

More broadly, we've supported basic research, advances in the fundamental frontiers of knowledge. You know, we're not working, for example, with authorities in China to deploy facial recognition services for surveillance. But when you get to the foundation for all artificial intelligence, advances in machine learning and the like, we believe that's where there are real benefits in enabling people to work on advancing scientific knowledge more generally.

CORNISH: Now, something that bothered some Microsoft employees earlier this year enough to write an open letter about it was Microsoft's contract with the U.S. Army. And you're making augmented reality headsets for soldiers. These employees said, look; we didn't sign up to develop weapons. You still have that contract. How did you square that? I mean, did the resistance by employees change your thinking or affect your thinking?

SMITH: First, we believe we have a responsibility - even a patriotic responsibility - to provide our technology to the people who defend our country and keep us safe. So we said we'll provide artificial intelligence and our other technology products to the U.S. military. Second, we recognize that certain employees may not want to work on these projects. And we've said that we'll work to enable people to work on other projects. That's one of the benefits of being a large company. And so far we've been able to accommodate everybody's interests and respect their views.

But third, we've said we'll focus on the issues. We do want the U.S. military - every military and every country - to really address in a thoughtful way the new ethical and broader public policy issues that artificial intelligence is creating.

CORNISH: The motto for tech companies for so long was move fast and break things. Are you all hitting up against the reality of what that could mean?

SMITH: Well, I've never been a great fan myself of that motto. I think our motto needs to be, don't move faster than the speed of thought. Put guardrails in place, and think about society. There's still lots of room to move fast. We will need to move fast. But we actually need to be more thoughtful. And what we also need is governments.

CORNISH: But do you get pushback from that? I mean, that seems completely at odds not just with the motto but with the tech industry saying, look; we are first in the world in the U.S. because we are aggressive, because we push boundaries.

SMITH: There are lots of different voices in the tech sector. There are days when I get pushback. There are more days in recent months when I see heads nod. It doesn't mean that the whole industry has moved to a different place. But I do think that we've all learned that technology plays such a pervasive role in the world that it just can't afford to break everything around it. There's just too much that will end up broken.

CORNISH: You are, finally, calling for government regulation. What is the role for government in this situation? And I ask because having covered Congress, they don't always seem like they're all that caught up (laughter) on the technology based on some of the questioning. What would tougher rules or regulations look like?

SMITH: We live in a country where the government has learned, generally well, how to manage and regulate complicated products, whether it's an automobile or an airplane or a pharmaceutical product.

CORNISH: Although when you listen to industry, there's lots of complaints about regulation. It doesn't always sound like, you know, people feel that way.

SMITH: Yeah, I - and the complaints are often valid. And to some degree, I think that people who work in business complain about regulation the way college students complain about the food in the cafeteria. It just is part of everyday life. And you can understand it. But a world in which important products are subject to the rule of law and rules of public safety, in my view, is better than a world where there are no rules in place.

We shouldn't just complain about what regulation may bring, or ridicule people who may ask the wrong question or ask the right question the wrong way. I think we're all far too quick to criticize even our legislators if they don't ask a question precisely right. That's not going to get us where we need to go.

CORNISH: Brad Smith, Microsoft's president, thank you so much for speaking with ALL THINGS CONSIDERED.

SMITH: Thank you.

Transcript provided by NPR, Copyright NPR.