'Coded Bias' Film Explores How Artificial Intelligence Perpetuates Discrimination
Shalini Kantayya describes herself as a filmmaker who’s fascinated with disruptive technologies and the good or harm they create. In a data-driven and increasingly automated world, there’s a question of how to protect our civil liberties as artificial intelligence grows by the day.
MIT researcher Joy Buolamwini discovered that most facial recognition technology does not accurately recognize dark-skinned faces or women's faces. This led to an investigation of how technology we typically see as objective can actually encode racism and sexism.
Buolamwini and others working around the globe to change technology for the better are featured in Kantayya's documentary Coded Bias. The film looks at the bigger implications of unchecked artificial intelligence (AI) and big data, and how bias in algorithms impacts everyone.
“I think before making this film, everything I knew about AI, machine learning, algorithms was sort of from the imagination of Steven Spielberg. And I think in the making of this film, I began to understand the ways in which these [technologies are] increasingly becoming this gatekeeper of opportunity in our society and all the ways we are outsourcing our decision making to machines,” says Kantayya.
As college admissions boards and hiring committees increasingly rely on technology to screen through applications, Kantayya says trust is being placed in technology that has not been properly vetted for racial or gender biases.
“Algorithms have this kind of crude logic and they’re sort of sorting us into categories all the time, and some of those can be stereotypes quite frankly,” she says.
Kantayya says it’s important to remember that algorithms do not create objective truths and need to be questioned.
It’s not just algorithms that need to be examined, she says, but also the ways in which companies are acquiring personal data to feed algorithms.
“Not only are we not aware of all the ways that we are being invasively surveilled all the time, but we don’t have any rights around it,” she says.
Experiences like searching for new shoes on Google and then seeing shoe ads everywhere for the next few weeks are very common. And there aren’t always ways to avoid giving up that data.
“We are essentially living in a world like it was for the automobile without seatbelts or car seats or any kind of safety regulations in place,” Kantayya says.
Concerns have also been raised about the sale of facial recognition technology, especially to law enforcement. Opponents say that this technology could lead to increased racial profiling and further discrimination.
Kantayya says some major companies in the facial recognition market — Amazon, IBM and Microsoft — have addressed these concerns by announcing either that they will not sell their products to law enforcement or that they will pause such sales.
Kantayya says lawmakers have fallen behind in understanding these technological advances, and that passing meaningful legislation to protect people on the internet will require educating members of Congress.
“We have to educate our lawmakers about how these technologies work, and I think that will only come from we the people putting pressure on our policy makers to understand what’s happening,” she says.