© 2024 Milwaukee Public Media is a service of UW-Milwaukee's College of Letters & Science

Facebook, Twitter Issue Policy Changes To Manage Fake News And Hate Speech


With billions of people spending billions of hours on social media today, controlling fake news and hate speech continues to be a challenge for Facebook and Twitter. But this past week, both platforms announced new ways to prevent misinformation and violent rants from landing in people's news feeds.

Here to speak with us is Kerry Flynn, a reporter from Mashable covering the story. Kerry, welcome to the program.

KERRY FLYNN: Thanks for having me on.

SUAREZ: Well, this is hardly the first time Facebook and Twitter have tried to manage these challenges. What's different about these new updates?

FLYNN: So with Facebook, we're starting to see them crack down on fake news. They've noticed that what's spreading on their platform is misinformation. With Twitter, for a long time, they have let anyone join the network. That means the people on there could be violent people - people who actually want to incite violence and spread their message out there. And that doesn't make other people feel welcome. And therefore, maybe it drives those people away from the platform.

SUAREZ: Give us an idea what it will look like. Let's say you go to your Facebook feed - what would a fake news story that comes from a shop in Russia or from a boiler room somewhere in the Balkans look like? Will it look the same? Will it have some sort of warning?

FLYNN: One way that Facebook tried to address this - and this was only about a year ago - was to add what they called, quote, unquote, "disputed" flags. But what Facebook admitted this week is that while they tried that for a year, these red flag icons weren't actually doing the job.

One important note - they gave four reasons why they decided to get rid of that program. And one of them is that red can actually reinforce a message - as in, I'm reading something, and I'll remember it more 'cause there is a red label next to it. That's clearly not the effect they would want when someone is reading something that's fake news.

And instead, what they're pushing is something called related articles. So maybe when you see what has become a fake news story, below it there will be anywhere from two to three other Facebook posts about that same topic. So in the end, hopefully you come away understanding what the right narrative of that story is.

SUAREZ: Now you noted at the very beginning that Twitter and Facebook are taking different approaches. What is Twitter trying to do in the area of hate speech?

FLYNN: Yeah, Twitter is trying to deal with that, too, but they don't do it as much. And really, their stance is that if there's a fake news story out there, maybe more people will retweet it and say this is wrong. And they're hoping that more people see that.

What they are really trying to do is make sure that those people - whether they're saying something is right or wrong - are not having people who promote and incite violence, whether they do that on platform or off platform.

SUAREZ: So Kerry, in the final analysis, aren't the two platforms taking two very different approaches? While Facebook is adding more context to get any individual consumer to read more, think harder, take more onboard, Twitter's just taking content out. It's censoring it and saying - sorry, can't say that on our place.

FLYNN: It's an important point for sure, yes. Facebook isn't necessarily taking down particular users or particular pieces of content, no. Like, if there is a fake news story, it can still be shared. But with Twitter, if you do not abide by their (ph) standards either on or off the platform, you can be out. And for a lot of people, what's really scary about Twitter right now is that they don't really know whether they're out, because these processes are slowly rolling out. And even Twitter said perhaps they'll make mistakes.

SUAREZ: Kerry Flynn has been covering efforts to police content on social media platforms for Mashable, where she's a reporter.

Thanks a lot for talking to us.

FLYNN: Thanks for having me on. Transcript provided by NPR, Copyright NPR.