Wednesday, October 7, 2020

Who at Facebook Is Deleting Donald Trump's Posts?

Tuesday morning, Facebook deleted a post by Donald Trump that shared disinformation about the flu and the coronavirus. By the time the post was deleted, hours after it had initially been made, it had been shared thousands of times and seen by tens of thousands or hundreds of thousands of people. We have no idea who at Facebook made the decision to delete the post, how the company became aware of it in the first place, how it decided to delete it, or what its protocols are for moderating the posts of the most powerful man in the world and the single most important source of coronavirus misinformation.

Facebook declined to answer a series of specific questions from Motherboard about how it polices Donald Trump’s content, and why it took so long to delete this post. In an email, Facebook spokesperson Andrea Vallone said that the company has “teams around the globe that deal with content questions, so coverage across all time zones, and we have our Elections Operations Center in the US up and running. The president is treated the same as other politicians.”

This answer, that Trump is treated like other politicians, doesn’t fit with everything we know about how Facebook’s content moderation team works, and it contradicts reporting from The Washington Post, which claimed Facebook executives pleaded with the White House to delete a Trump post about sending the military to the George Floyd protests in Minnesota. Facebook CEO Mark Zuckerberg has said there is no "deal" with Trump about his posts, but there doesn't have to be. A platform like Facebook cannot and does not approach moderating the president of the United States the same way it approaches moderating every other politician.

Trump’s posts should not be treated the same as you’d treat a small-town mayor's or even a congressperson’s. Trump’s history of sowing disinformation, his status as the source of many dangerous conspiracy theories, and the fact that he is the sitting president of the United States necessarily demand a different approach to content moderation than you’d take with ordinary users or lower-level politicians.

It is not believable that a low-level Facebook content moderator is making the decision to unilaterally delete the posts of the president of the United States. A Facebook executive, or a team of executives, is likely making these decisions on a case-by-case basis, but Facebook will not say who is making them or by what process they are being made.

When I visited Facebook’s headquarters in 2018 to meet with its content moderation policy team, the thing that was stressed more than any other was that it is very difficult to delete all of the bad content on Facebook. There are simply too many users making too many posts to catch everything in real time, and some things slip through. What Facebook aims to do, it says, is limit harm and limit the spread of dangerous content.

This “scale” argument goes out the window when you are talking about the leader of the free world: If it is truly Facebook’s intention to limit harm and the spread of disinformation, it would have a team of people specifically and proactively dedicated to reading Trump’s posts as they are published and deciding whether or not they abide by Facebook’s rules. In this case, there are not billions of people making billions of posts; there is one person making a few posts. When it takes Facebook hours to delete one of Trump’s posts, it’s not because people at Facebook aren’t aware of the posts. It’s because Facebook is deliberating about what to do, waiting for someone to make a call, or, as with the George Floyd post, pleading with someone at the White House to take it down or edit it.

There is a long history of social media employees abusing their access to celebrities’ accounts and causing havoc. And there’s also a long history of Facebook posts by high-profile people or pages being mistakenly deleted by low-level employees when they didn’t actually break the rules. A Facebook moderator mistakenly deleted a “Chick-Fil-A Appreciation Day” page in 2012; Ted Cruz is still talking about this as an example of Facebook’s apparent anti-conservative bias, and the incident was cited as such in a House GOP report published Tuesday. In 2017, a Twitter employee "inadvertently" deactivated Trump's Twitter account for 11 minutes. Mistakenly deleting a Trump post would cause havoc, or would at the very least be a giant PR nightmare for Facebook.

Facebook executives have told me that both Facebook CEO Mark Zuckerberg and COO Sheryl Sandberg have made specific, high-stakes content moderation calls in the past. We also know that Facebook content moderators “escalate” tough calls to higher-level employees, and that difficult calls can sometimes be made by a team called “Risk and Response,” as well as by the policy team (which makes the rules) and by Facebook PR.

While Facebook has said over and over that it cares about disinformation, that it cares about election integrity, and that it has an "Elections Operations Center" doing a host of things to apparently protect America, the company steadfastly refuses to explain how it is preventing the single biggest spreader of disinformation from spreading that disinformation. It is also failing to stop him in a timely fashion, before harm is done. This abdication of responsibility and lack of transparency is unacceptable, and it continues because Facebook has yet to face meaningful consequences for its inability to moderate its platform. It is a failure of policy, of practice, and of communication.



