What's the problem!
Image Credit: Adweek
All content platforms and social media companies must keep the content flowing because that is the business model: content captures attention, drives viewership, and generates data (user statistics). Content is the starting point and the end point of a consumer's journey on social media. A video, an informational post, a tweet, a blog post, a picture, a public service advisory: these are all types of content. The platforms then sell that attention (read: viewership), enriched by that data (read: customized ads). But how do you deal with the objectionable, disgusting, pornographic, illegal, or otherwise verboten content uploaded alongside legitimate content?
How do Facebook and other tech and social media companies ensure the integrity of content on their networks? And how do these companies work to curb misinformation on their platforms about the coronavirus pandemic, the 2020 elections, or any other global or regional event? We have seen state-sponsored and non-state actors with nefarious intent take advantage of lax content-posting norms.
Dangerous fake news has spread on platforms like Facebook in Myanmar, where the Rohingya ethnic minority is persecuted. The United Nations has squarely blamed social media for its role in spreading that persecution, and this is not the only example of its kind.
Misinformation campaigns (aka "fake news") on Facebook have interfered with democratic elections around the world. After a man used Facebook to live-stream his attack on two New Zealand mosques in March 2019, the video quickly spread. YouTube moderators fought hard to take it down as newer versions kept popping up, seemingly beating the controls YouTube has in place to immediately flag already-removed material. The uploaders slipped past through a loophole: exact re-uploads of the video are banned automatically, but videos that contain only clips of the original footage must be sent to human moderators for review, which delays removal. And that loophole exists for a purely legitimate reason: to ensure that news segments using a portion of the footage aren't removed in the process.
In 2017, a Facebook live stream showed the fatal shooting of a 74-year-old retiree in Cleveland; another showed a man in Thailand murdering his own child. Both videos remained online for hours and racked up hundreds of thousands of views.
In a December 2017 report, ProPublica took a revealing look at content moderation. ProPublica gathered from its readers 900 examples in which they believed Facebook's content moderation had been applied incorrectly. ProPublica then selected 49 of those posts and asked Facebook to explain. Rather shockingly, yet unsurprisingly, Facebook admitted its moderators had erred in 22 of the 49 posts. Just imagine: 22 out of 49 is roughly 45 percent, meaning nearly half of the posts in the sample had moderation applied incorrectly. No amount of explaining can explain that away.
Social media is good, right!
Image Credit: Internet
Facebook serves as a platform for its billions of regular users to post, view, and offer feedback on the content hosted on its servers. But when that content is more "terrorist propaganda" than "brunch photo," or more "porn" than "essential context" for an image, the company has struggled to settle on the right approach to removing it in time. The traditional method of company moderators reviewing user-reported infractions is too time-consuming, while the AI-powered algorithms are too imprecise.
With the COVID-19 risk, content moderators were sent home, and with proper technology, connectivity, and safety requirements unmet, Facebook's automated systems took full control. That was an unmitigated disaster, leading to widespread blocking or deletion of posts mentioning the coronavirus from reputable sources such as The Independent and the Dallas Morning News, not to mention millions of individual Facebook users. Those automated systems still have problems.
Content from legitimate sources, verified fact-checked sources, and sources with a history of posting appropriate and trustworthy content is suddenly being targeted. While some posts have always been tagged erroneously, there has been an order-of-magnitude increase in such instances in the post-COVID world. Clearly, the strategy of letting AI- and ML-based programs call the shots hasn't worked.
"Facebook is blocking COVID-19 posts from fact-based sources," a Facebook source says. On March 17th, 2020, according to a Yahoo News article, Facebook suffered a massive bug in its News Feed spam filter, causing URLs to legitimate websites, including Medium, Buzzfeed, and USA Today, to be blocked from being shared as posts or comments. The bug blocked shares of some but not all coronavirus-related content, while some unrelated links were allowed through and others were not. Facebook has been trying to fight back against misinformation related to the outbreak, but may have gotten overzealous or hit a technical error.
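To see why an automated filter misfires like this, consider a deliberately simplified sketch. Facebook's actual spam-filter code is not public; the keyword list and logic below are invented purely for illustration, assuming nothing more than a crude keyword heuristic.

```python
# Hypothetical sketch only: not Facebook's real filter.
# A naive keyword heuristic treats reputable coverage and spam identically.

SUSPICIOUS_KEYWORDS = {"coronavirus", "covid", "miracle cure"}

def should_block(source_url: str, post_text: str) -> bool:
    """Block any post whose text contains a flagged keyword.

    The source URL is never consulted, so a USA Today article and a
    scam link are judged the same way: a classic false positive."""
    text = post_text.lower()
    return any(keyword in text for keyword in SUSPICIOUS_KEYWORDS)

# A legitimate news share trips the filter exactly as spam would:
print(should_block("https://www.usatoday.com",
                   "Coronavirus update: what health officials recommend"))  # True
print(should_block("https://spam.example",
                   "Miracle cure for coronavirus, click now!"))             # True
```

Real systems are far more sophisticated, but the failure mode is the same: when the signal is the text of the post rather than the credibility of the source, legitimate outlets get swept up along with the spam.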
According to a recently released report from NYU Stern, Facebook's content moderators review posts, pictures, and videos that have been flagged by AI or reported by users about 3 million times a day. That is 3 million pieces of content flagged for review out of billions of posts. And since CEO Mark Zuckerberg admitted in a white paper that moderators "make the wrong call in more than one out of every 10 cases," that works out to more than 300,000 mistakes a day.
So, is it all an experiment gone wrong? Did the novel coronavirus catch the social media content moderation framework at the worst time?
Image Credit: Webhelp
The one thing we know for sure is that you can't control the beast that is social media. Generally, the response by firms to incidents and critiques of their platforms is "We're going to put more computational power on it" or "We're going to put more human eyeballs on it." And that is generally fine, for it attempts to resolve the problem, or at least is seen as an attempt to resolve the problem, with or without adequate results. The focus is not on results but on being seen to be doing something.
Facebook uses more than 70 external partners and fact-checking firms. According to Facebook, it has over 30,000 people working on safety and security; about half of them are content reviewers working out of 20 offices around the world. Facebook employs almost all of these 15,000 content moderators indirectly, mostly as outsourced workers. By comparison, YouTube today employs an estimated 10,000 to 12,000 people to patrol all of YouTube's and Google's content, and Twitter employs close to 2,000 people in its content review team.
Generally speaking, content management, or content review, falls into two main buckets. The first is content moderation, where content moderators, mostly contractors working on behalf of, say, Facebook or Twitter, check content for violations such as nudity, sexual content, racism, hate speech, acts of violence or incitement to violence, breaches of laws and community standards, child pornography, and the like. Moderators are responsible for reviewing flagged content and removing it in accordance with the policies of the social media platform. The second bucket is third-party fact-checking, where Facebook engages more than 70 outside organizations, primarily news outlets, along with prominent individuals, to judge a particular piece of content as true or false. Based on the result, any one of several actions can be taken: the content is left up or demoted, labels are added, or additional constraints are placed on the poster, including monetary impacts, or, in extreme cases, all of the foregoing.
According to Facebook's own content management and community standards pages, its efforts to moderate and regulate content have three stages. First is policy development. The content policy team at Facebook is responsible for developing its Community Standards; it has people in 11 offices around the world, including subject-matter experts on issues such as hate speech, child safety, and terrorism, many of whom worked on questions of expression and safety long before coming to Facebook.

Second is enforcement of those policies through the global content moderator workforce. Facebook uses a combination of artificial intelligence and reports from people to identify posts, pictures, or other content that likely violates its Community Standards; these reports are reviewed by its Community Operations team, which works 24/7 in over 40 languages. Facebook's fact-checking rules dictate that pages can have their reach and advertising limited on the platform if they repeatedly spread information deemed inaccurate by its fact-checking partners. The company operates on a "strike" basis: a page can post inaccurate information and receive a one-strike warning before the platform takes action, while two strikes in 90 days place an account into "repeat offender" status, which can lead to reduced distribution of the account's content and a temporary block on advertising on the platform.

And finally, Facebook launched a review process last year. A news organization or politician can appeal the decision to attach a label to one of its posts. Facebook employees who work with content partners then decide whether an appeal is a high-priority issue or a PR risk, in which case they log it in an internal task management system as a misinformation "escalation." Marking something as an escalation means senior leadership is notified so they can review the situation and quickly, often within 24 hours, decide how to proceed.
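The strike mechanics described above are simple enough to model. The sketch below is a hypothetical illustration, not Facebook's actual implementation (which is not public); the class and method names are invented, and the only rules encoded are the publicly stated ones: a one-strike warning, and two strikes within 90 days triggering repeat-offender status.

```python
# Hypothetical sketch of the publicly described strike policy.
# Not Facebook's real code; class and method names are invented.
from dataclasses import dataclass, field
from datetime import date, timedelta

REPEAT_OFFENDER_WINDOW = timedelta(days=90)

@dataclass
class Page:
    name: str
    strikes: list = field(default_factory=list)  # dates of misinformation strikes

    def add_strike(self, when: date) -> None:
        self.strikes.append(when)

    def is_repeat_offender(self, today: date) -> bool:
        # Two or more strikes within the trailing 90-day window.
        recent = [s for s in self.strikes if today - s <= REPEAT_OFFENDER_WINDOW]
        return len(recent) >= 2

page = Page("example-page")
page.add_strike(date(2020, 3, 1))   # first strike: warning only
page.add_strike(date(2020, 4, 15))  # second strike inside the 90-day window
print(page.is_repeat_offender(date(2020, 4, 16)))  # True: reduced distribution
                                                   # and a temporary ad block
```

In practice the consequences (reduced distribution, a temporary advertising block) would be applied by separate systems; the point is simply how little state the published policy actually requires, and therefore how consequential a quietly deleted strike can be.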
If Facebook's 15,000 content moderators have three million posts to review each day, that's 200 per person per day, or 25 every hour of an eight-hour shift. That leaves under 150 seconds to decide whether a post meets or violates community standards.
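For readers who want to check the arithmetic, here is the back-of-the-envelope calculation, assuming the 15,000 reviewers cited earlier and an eight-hour reviewing shift:

```python
# Back-of-the-envelope check of the workload figures quoted above.
flagged_per_day = 3_000_000   # posts flagged for review daily (NYU Stern report)
moderators = 15_000           # Facebook's outsourced content reviewers
shift_hours = 8               # assumed length of a reviewing shift

posts_per_moderator = flagged_per_day / moderators      # 200 posts per day
posts_per_hour = posts_per_moderator / shift_hours      # 25 posts per hour
seconds_per_post = 3600 / posts_per_hour                # 144 seconds per post

print(posts_per_moderator, posts_per_hour, seconds_per_post)  # 200.0 25.0 144.0
```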
Image Credit: TELUS International
According to the NYU Stern report, and to recent investigations by BuzzFeed and news articles by NBC, Forbes, and others, the problem with content review, whether it is content moderation by moderators or third-party fact-checking by independent news organizations and individuals, is more structural and institutional in nature. The novel coronavirus merely exposed one side of it and perhaps aggravated the outcomes.
According to an NBC News article based on leaked materials, Facebook has allowed conservative news outlets and personalities to repeatedly spread false information without facing any of the company's stated penalties. Internal discussions from the preceding six months show that Facebook relaxed its rules so that conservative pages, including those run by Breitbart, former Fox News personalities Diamond and Silk, the nonprofit media outlet PragerU, and the pundit Charlie Kirk, were not penalized for violations of the company's misinformation policies.
The list and descriptions of the escalations, leaked to NBC News, showed that Facebook employees in the misinformation escalations team, with direct oversight from company leadership, deleted strikes during the review process that were issued to some conservative partners for posting misinformation over the last six months. The discussions of the reviews showed that Facebook employees were worried that complaints about Facebook’s fact-checking could go public and fuel allegations that the social network was biased against conservatives.
"The supposed goal of this process is to prevent embarrassing false positives against respectable content partners, but the data shows that this is instead being used primarily to shield conservative fake news from the consequences," said one former employee.
In a recent case related to the appeals process, a Facebook employee filed a misinformation escalation for PragerU after a series of fact-checking labels were applied to PragerU posts. The employee escalated the issue because of "partner sensitivity" and noted that the repeat-offender status was "especially worrisome due to PragerU having 500 active ads on our platform," according to the discussion contained in the task management system and leaked to NBC News. After some back and forth between employees, the fact-check labels were left on the posts, but the strikes that could have jeopardized the advertising campaign were removed from PragerU's pages.
In another case, a senior engineer at one of the top social media giants collected internal evidence showing the company was giving preferential treatment to prominent conservative accounts to help them remove fact-checks from their content, according to BuzzFeed News. The company responded by removing his post and restricting internal access to the information he cited. A week later the engineer was fired, according to internal posts seen by BuzzFeed News.
Many employees at top social media companies like Facebook and Twitter have expressed deep anguish on internal company platforms, amid growing concerns about their employers' competence in handling misinformation and about the precautions being taken to ensure the platforms aren't used to disrupt or mislead ahead of the US presidential election.
Third-party fact-checking also suffers from debilitating factors that severely limit its reach. Scale is a problem: most of the organizations Facebook contracts for fact-checking typically allocate only a handful of people to the task. Coupled with an impossible volume of incoming fact-checking requests, that means those people are constantly backlogged.
According to Sarah Roberts, a pioneering scholar of content moderation and an information studies expert at UCLA, social media companies handle content moderation in a fashion that diminishes its importance and obscures how it actually works. The idea is simple: keep it obscure and muddy the waters to achieve plausible deniability, straight out of the playbook of top politicians and business executives. Content moderation is a mission-critical activity, yet most social media companies hand it to their most precarious workers, largely by outsourcing the entire function.
Just as companies save significant amounts of money by outsourcing transport logistics, janitorial, and food services, outsourcing content moderation saves these social media giants plenty. As noted earlier, Facebook, Twitter, and YouTube together rely on roughly 40,000 to 50,000 content moderators, and the number is only growing. Even at a conservative estimate of 40,000 people, that is outsourced work equivalent to the entire workforce of four mid-sized outsourcing service providers.
The lack of access, and the unwillingness of social media companies to allow any scrutiny of their moderation practices, has made content moderation a black-box operation that only a few people understand. This is by design: it is no accident that the top social media companies choose the convenience of plausible deniability and a wait-and-watch approach while incendiary content burns and sets fire to everything around it.
What about human content moderators?
Image Credit: tampabay.com
According to Guy Rosen, VP of Integrity at Facebook, content moderation is a genuinely arduous job, and numerous people have brought the issue to the fore. Watching countless hours of sadistic, violent, disturbing, and purely horrific content day in and day out takes its toll. How do you get those hours of images and thoughts out of your head when you head home? You cannot; those sights and sounds stay with you. As a content moderator, it's really hard to live a normal life after watching eight hours of non-stop disturbing content.
In the recent past, a former Facebook moderator sued, accusing the platform of causing psychological harm. Former Microsoft employees sued Microsoft for similar reasons after alleged trauma from reviewing child pornography. More recently, The Verge published a scathing review of the job conditions for content moderators at Facebook and the harrowing conditions surrounding the job in general. As one employee interviewed in the report put it: "We were doing something that was darkening our soul — or whatever you call it," he says. "What else do you do at that point? The one thing that makes us laugh is actually damaging us. I had to watch myself when I was joking around in public. I would accidentally say [offensive] things all the time — and then be like, Oh shit, I'm at the grocery store. I cannot be talking like this."
Accenture, which performs content moderation for social media companies, has its employees sign a form that directly acknowledges that reviewing such content may be harmful to their mental health and could even lead to PTSD.
So, is it purely evil design at play at the social media giants when it comes to moderation?
Image Credit: VICE
To be fair, Facebook and the other social media giants do face something of an uphill battle in their efforts to moderate content. The moment any post, video, or other content gets tagged as needing fact-checking, misleading, or inappropriate, the authors or posters are quick to raise hell about dictatorship, suppression of free speech, and infringement of people's inalienable right to expression.
There is always a tension between free speech and freedom from cruelty and hatred, or between freedom of expression and the right to speak out against bullies. A recent attempt by Twitter to label certain tweets from the President caused a storm and a PR crisis. A similar attempt by Facebook drew the ire of conservatives and put certain ad revenue under threat.
Aside from morals and ethics, at the heart of the debate is a purely financial question: content attracts viewers, more viewers beget more content, and vice versa. Any attempt to reduce content, even borderline inappropriate content, reduces viewership and hence revenue. The business models chosen by Facebook, Twitter, and Google favor an unremitting, unrelenting drive to add more users and demonstrate growth to investors. More users and more content mean more content to moderate and more nuance, but all of that is secondary, a kind of afterthought.
The debate over internet use and the governance of uploaded content is not new. It has been going on for some time and is just about reaching peak interest around the world, with some governments, such as the EU and the UK, promising action; a few, like China, actually taking strong action; and others simply watching how the debate pans out and what changes, if any, come out of it.
The big tech players around the world have realized one thing: controlling or governing the internet is a tough tightrope walk. If a platform imposes controls that are too strict, through user-reporting mechanisms, AI-backed algorithms, and human monitors flagging and removing content, it will be labeled a dictatorship and an enemy of free speech. If it imposes too few controls, hosting content freely with little censorship, it is going to get run over by activists from every end of the spectrum, left to right. It's rather like an overflowing pot left simmering too long, except that no one can lift the pot, and no matter which way it is tilted, the boiling contents are sure to leave scalding marks.
With that, let's take a look at the role of regulators in the effort to moderate content.
When Mark Zuckerberg wrote an op-ed in The Washington Post in March 2019 asking governments and regulators to step in more aggressively to police the internet, he may have articulated what many insiders feel about governing the internet, and specifically about what content should be available to view, download, and use. Yet not everything seems above board here: the challenges Zuckerberg cited so eloquently are the same challenges that have plagued the tech industry for years. What has changed recently that governments are now being called into action, when until now the industry has fought tooth and nail for freedom of expression and freedom of speech?
As the efforts to govern the internet continue, many of those fighting the battle daily are coming to realize how difficult the seemingly simple question of 'what content should be allowed' really is. Let's face it: the internet was never known for being deferential to people's preferences. The advocates of freedom of expression and free speech, often the big tech companies themselves, fought for as little government control as possible, decrying every move made by governments or regulators around the world.
Technology experts, including the big tech companies themselves, believe that for Zuckerberg and his peers, "regulation" isn't an uncouth word anymore. With the changing times, big tech is now embracing regulation, not out of any newfound respect for it but purely as a business measure. From the early days, when big tech painted all regulation as reprehensible and fought it tooth and nail, to today, when it welcomes regulation, the transformation could hardly be more dramatic.
Most of big tech today sees regulation as a set of common rules, enforced by governments and regulators, that will let them further cement their dominance of the internet. And if anything goes wrong, they always have the comfort of pointing the finger at the "regulator" big brother.
So what is the way forward to moderate content and maintain the integrity of platforms?
According to the NYU Stern report, the solution is straightforward and calls for increased investment, focus, and commitment through a multi-pronged approach. The first step is ending outsourcing: making all content moderators direct employees of Facebook, Twitter, or whichever social media company they serve, with adequate salaries. Significantly increasing the number of moderators is another, as is placing content moderation under the dedicated oversight of a senior executive.
Facebook, Twitter, and the other social media companies should also expand oversight in underserved countries, the report suggests. In addition, the health and well-being of content moderators should come first: the companies should sponsor research into the mental health impacts of moderating the world's content. And they should expand fact-checking to curb the spread of misinformation.