The Impossibility Of Content Moderation Extends To The People Tasked With Doing Content Moderation
For years now, we've been writing about the general impossibility of moderating content at scale on the internet. And, yet, lots of people keep demanding that various internet platforms "do more." Often those demands to do more come from politicians and regulators who are threatening much stricter regulations or even fines if companies fail to wave a magic wand and make "bad" content disappear. The big companies have felt compelled to staff up to show the world that they're taking this issue seriously. It's not difficult to find the headlines: Facebook pledges to double its 10,000-person safety and security staff and Google to hire thousands of moderators after outcry over YouTube abuse videos.
Most of the demands for more content moderation come from people who claim to be well-meaning, hoping to protect innocent viewers (often "think of the children!") from awful, awful content. But, of course, it also means making these thousands of employees continuously look at highly questionable, offensive, horrific or incomprehensible content for hours on end. Over the last few years, there's been quite reasonable and growing concern about the lives of all of those content moderators. Last fall, I briefly mentioned a wonderful documentary, called The Cleaners, which focuses on a group of Facebook's contract content moderators working out of the Philippines. The film is quite powerful in showing not just how impossible a job content moderation can be, but the human impact on the individuals who do it.
Of course, there have been lots of other people raising this issue in the past as well, including articles in Inc. and Wired and Gizmodo among other places. And these are not new issues. Those last two articles are from 2014. Academics have been exploring this issue as well, led by Professor Sarah Roberts at UCLA (who even posted a piece on this issue here at Techdirt). Last year, there was another paper at Harvard by Andrew Arsht and Daniel Etcovitch on the Human Cost of Online Content Moderation. In short, none of this is a new issue.
That said, it's still somewhat shocking to read through a big report by Casey Newton at The Verge about the "secret lives" of Facebook content moderators. Some of the stories are pretty upsetting.
The moderators told me it’s a place where the conspiracy videos and memes that they see each day gradually lead them to embrace fringe views. One auditor walks the floor promoting the idea that the Earth is flat. A former employee told me he has begun to question certain aspects of the Holocaust. Another former employee, who told me he has mapped every escape route out of his house and sleeps with a gun at his side, said: “I no longer believe 9/11 was a terrorist attack.”
There may be some reasonable questions about what kind of training is being done here -- and about hiring practices that may end up putting people susceptible to the internet's garbage into a job that requires reviewing it all day. But, still...
Part of the problem is that too many people are looking to the big internet companies -- mainly Google and Facebook -- to solve all the world's ills. There are a lot of crazy people out there who believe a lot of crazy things. Facebook and YouTube and a few other sites often just reflect humanity back at us. And humanity is often not pretty. But we should be a bit concerned when we're asking Facebook and Google to magically solve the problems of humanity that have plagued humans through eternity... and to do so just by hiring tens of thousands of low-wage workers to click through all the awful stuff.
And, of course, the very same day that Casey's article came out, Bloomberg reported that Facebook's growing roster of thousands of moderators is increasingly upset about working conditions, and Facebook's own employees are getting annoyed about it as well -- noting that for all of the company's claims about how "important" this is, it's weird that they're outsourcing content moderation to third parties... and then treating those workers poorly:
The company’s decision to outsource these operations has been a persistent concern for some full-time employees. After a group of content reviewers working at an Accenture facility in Austin, Texas complained in February about not being allowed to leave the building for breaks or answer personal phone calls at work, a wave of criticism broke out on internal messaging boards. “Why do we contract out work that’s obviously vital to the health of this company and the products we build,” wrote one Facebook employee.
Of course, it's not clear that hiring the content moderators directly would solve very much at all. As stated at the very top of this article: there is no easy solution to this, and every solution you think up has negative consequences. On that front, I recommend reading Matt Haughey's take on this, aptly titled Content moderation has no easy answers. And that's coming from someone who ran a very successful online community (MetaFilter) for years (for a related discussion, you can listen to the podcast I did last year with Josh Millard, who took over MetaFilter from Haughey a few years ago):
People often say to me that Twitter or Facebook should be more like MetaFilter, but there’s no way the numbers work out. We had 6 people combing through hundreds of reported postings each day. On a scale many orders of magnitude larger, you can’t employ enough moderators to make sure everything gets a check. You can work off just reported stuff and that cuts down your workload, but it’s still a deluge when you’re talking about millions of things per day. How many moderators could even work at Google? Ten thousand? A hundred thousand? A million?
YouTube itself presents a special problem with no easy solution. Every minute of every day, hundreds of hours of video are uploaded to the service. That’s physically impossible for humans to watch it even if you had thousands of content mods working for YT full time around the world.
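To make Haughey's point concrete, here's a rough back-of-envelope sketch of the math. The specific numbers below are illustrative assumptions, not figures from his post or from YouTube -- it assumes an upload rate of 400 hours of video per minute (in the "hundreds of hours" range he describes) and a generous review speed of one hour of video per hour of work:

```python
# Back-of-envelope sketch: how many full-time reviewers would it take just to
# *watch* everything uploaded to a YouTube-scale service once?
# All figures are illustrative assumptions, not official numbers.

UPLOAD_HOURS_PER_MINUTE = 400   # assumed upload rate ("hundreds of hours per minute")
MINUTES_PER_DAY = 24 * 60
SHIFT_HOURS = 8                 # one reviewer's daily shift
REVIEW_SPEED = 1.0              # hours of video reviewed per hour worked (optimistic)

video_hours_per_day = UPLOAD_HOURS_PER_MINUTE * MINUTES_PER_DAY
reviewers_needed = video_hours_per_day / (SHIFT_HOURS * REVIEW_SPEED)

print(f"Video uploaded per day: {video_hours_per_day:,.0f} hours")
print(f"Reviewers needed for a single viewing: {reviewers_needed:,.0f}")
# With these assumptions: 576,000 hours of video per day, and roughly
# 72,000 reviewers working every single day -- before accounting for
# weekends, breaks, appeals, second opinions, or anything other than video.
```

Even under those charitable assumptions, the headcount lands in the tens of thousands for video alone, which is why "just hire more moderators" never actually closes the gap.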
Content moderation for smaller communities can work. At scale, however, it presents an impossible problem, and that's part of the reason why it's so frustrating to watch so many people -- especially politicians -- demanding that companies "do something" without recognizing that anything they do isn't going to work very well and is going to create other serious problems. Of course, it seems unlikely that they'll realize that, and instead will somehow insist that the problems of content moderation can also be blamed on the companies.
Again, as I've said elsewhere this week: until we recognize that these sites are reflecting back humanity, we're going to keep pushing bad solutions. But tech companies can't magically snap their fingers and make humanity fix itself. And demanding that they do so just shoves the problem down into some pretty dark places.