This article offers a critical view of how content available on the internet shapes public opinion and presents a unique challenge to governments around the world.
Introduction
Social media today has a very different impact potential than it did 10 years ago. Today it is capable of influencing country-wide elections, swaying public opinion, making or breaking brands and the people associated with them, and raising or flat-lining issues. The power wielded by content is truly mind-boggling when one considers that more than 56% of the world’s population is online in one way or another. With the growing penetration of smartphones and increasing bandwidth accompanied by falling data charges, content sharing is becoming ubiquitous and as easy as texting or talking.
Governments, regulators and policy experts around the world have expressed concern amid the growing need to monitor content that is largely distributed freely today. Who controls what can be seen publicly is a matter of much debate and far from settled; however, almost everyone involved favors some level of control over the viewership of media content posted online, or in other words, policing the internet. Most, though, see this policing not as a negative or an impediment, but as a virtue or a social responsibility. Given the power, both direct and indirect, that content platforms possess, online content has become the new hidden weapon on both sides of the debate.
The best way to prevent undesirable content from being seen is to never let it be uploaded in the first place. But if content publishing platforms like Facebook, YouTube and several hundred others out there were required to screen every single video or post uploaded to their platforms, they would simply lose millions of subscribers who want to upload their content immediately. If the platforms instead employ AI-powered algorithms to pull down videos, then they must expect errors, with content being pulled down or demoted needlessly.
Is social media uncontrollable?
YouTube works in 80 different languages, is free at the entry level, is available almost everywhere, is extremely popular in the 18–24 age group around the world, even in countries like P.R. China which have their own versions of YouTube, and is increasingly being used to ‘monetize’ content like never before. With such mass viewership, Cisco predicts that video will make up more than 80% of global internet traffic by 2022. YouTube, the world’s foremost content sharing site, has close to 2 billion logged-in users each month (one doesn’t need to be logged in just to view videos). Given its low entry barrier, almost everyone with an internet connection has used YouTube at some point.
Social media platforms, especially those where content is freely or nearly freely distributed, have had an unfettered run over the past decade, leading to their massive popularity today.
“We’re just a platform” is an extremely convenient way to avoid taking full responsibility for an increasingly serious set of problems. Yet this is exactly the approach that the majority of platforms have adopted when it comes to owning the responsibility of content moderation and review.
A few years back, Steve Stephens recorded himself murdering an innocent victim and then uploaded the footage to Facebook. The horrific act put Facebook under immense pressure to act. This incident is not an isolated one; there have been many documented cases of teens posting suicides on the internet.
Recently, the ghastly attack on two mosques in Christchurch, New Zealand, which killed more than 50 people, was streamed live on Facebook for the world to see. Millions of people watched in disbelief as the lone gunman went on a rampage gunning down unarmed civilians. The video was streamed for several minutes, and YouTube moderators then fought hard to take it down as newer versions kept popping up, seemingly beating the controls YouTube has in place to immediately flag already-removed material. The uploaders were able to sneak past by exploiting a loophole: exact re-uploads of a banned video are blocked automatically, but videos that contain clips of the original footage must be sent to human moderators for review, thereby delaying the process. And this loophole exists for a purely legitimate reason: to ensure that news videos that use a portion of the footage in their segments aren’t removed in the process.
For live streams and news events, especially ‘breaking news’, YouTube’s safety team prefers not to use the system that immediately removes child pornography and terrorism-related content by fingerprinting the footage with a hash, and instead depends on a system similar to, but not exactly the same as, its copyright tool, Content ID. This system takes a much longer route: it first searches re-uploaded versions of the original video for similar metadata and imagery. If a video is an unedited re-upload, it’s removed. If it’s edited, the tool flags it to a team of human moderators, both full-time YouTube employees and contractors, who determine whether the video violates the company’s policies. YouTube considers the removal of newsworthy videos to be just as harmful. It prohibits footage that’s meant to “shock or disgust viewers,” which can include the aftermath of an attack; if the footage is used for news purposes, however, YouTube says it is allowed but may be age-restricted to protect younger viewers, reports The Verge.
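To make that triage flow concrete, here is a minimal sketch in Python. This is not YouTube’s actual code: the exact-hash check, the `similarity` helper and the 0.8 threshold are hypothetical placeholders standing in for the fingerprinting and metadata/imagery matching described above.

```python
import hashlib
from dataclasses import dataclass

# Fingerprints of videos that have already been removed (assumed pre-populated).
BLOCKED_FINGERPRINTS: set[str] = set()

@dataclass
class Upload:
    video_id: str
    payload: bytes      # raw video bytes (simplified)
    metadata: dict

def fingerprint(payload: bytes) -> str:
    """Exact fingerprint; real systems use perceptual, scene-level hashes."""
    return hashlib.sha256(payload).hexdigest()

def similarity(upload: Upload, blocked_fp: str) -> float:
    """Placeholder for metadata/imagery similarity scoring (hypothetical)."""
    return 0.0

def triage(upload: Upload, review_queue: list) -> str:
    fp = fingerprint(upload.payload)
    if fp in BLOCKED_FINGERPRINTS:
        return "removed"                          # unedited re-upload: taken down automatically
    if any(similarity(upload, b) > 0.8 for b in BLOCKED_FINGERPRINTS):
        review_queue.append(upload)               # edited clip (e.g. inside a news segment): a human decides
        return "held_for_review"
    return "published"
```

The key design point the article describes is visible in the sketch: only exact matches are removed without a human in the loop, which is precisely the gap that edited re-uploads slipped through.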
Rasty Turek, CEO of Pex, a video analytics platform that is also working on a tool to identify re-uploaded or stolen content, told The Verge that the issue is how the product is implemented. Turek, who closely studies YouTube’s Content ID software, points out that it takes at least 30 seconds for the software to even register whether something is a re-upload before it is handed off for manual review. YouTube’s Content ID tool takes “a couple of minutes, or sometimes even hours, to register the content,” Turek said. That’s normally not a problem for copyright issues, but it poses real problems in urgent situations. A YouTube spokesperson could not tell The Verge whether that number was accurate.
Further, Turek contends, there is no harm to society when copyrighted material or leaks aren’t taken down immediately; there is harm to society, though, when live-streaming and breaking news put harmful content out there. This is precisely why live-streaming is considered a high-risk area by Facebook, YouTube and nearly every other platform that offers it. Users need separate credentials to live-stream, and those who violate live-streaming rules, sometimes caught by Content ID only after the stream is over, lose their streaming privileges because it’s an area that YouTube can’t police as thoroughly. The teams at YouTube are working on it, according to the company, but it’s a problem the safety team acknowledges is very difficult. Turek agrees.
In another incident, Facebook admitted last November that it did not do enough to prevent its platform from being used to “foment division and incite offline violence” in Myanmar against the Muslim Rohingya minority. That came after the United Nations described the events surrounding the mass exodus of more than 700,000 Rohingya people from Myanmar as a “textbook example of ethnic cleansing.”
These incidents are not limited to YouTube and Facebook. In China, Bytedance chief executive Zhang Yiming had to issue a public apology in April 2018 after the company was ordered by the central government to close its popular Neihan Duanzi app for “vulgar content”. The company’s Jinri Toutiao news aggregation app was also ordered to be taken down from various app stores for three weeks. Bytedance pledged to expand its content vetting team from 6,000 to 10,000 staff, and permanently ban creators whose content was “against community values”.
Addressing the elephant in the room!
YouTube reportedly employs more than 10,000 people on its content moderation and rule enforcement team, a 25 percent increase from just about a year ago. YouTube took this decision after a number of controversies surfaced over the recent past concerning children’s safety on the platform, affecting both those who watch the content and those who appear in videos. As other platforms have also struggled with the delicate task of protecting viewers while ensuring free speech, the big question for YouTube is whether this will work.
Similarly, Facebook employs close to 15,000 people and Twitter close to 2,500 to moderate the content on their platforms.
Relying purely on humans to act as content moderators and reviewers is not a viable option, and the process of reviewing user-reported content is time-consuming in itself. Yet machine learning is not advanced enough at the moment to completely automate the process, though A.I. researchers at Facebook are working on software that they say will eventually enable computers to do most of the work. Current AI and ML systems are nowhere near mature enough to address the complexities of moderation: almost all such systems operate at the level of pattern recognition, where a small change in any one of hundreds of attributes can let harmful content slip through undetected, or flag benign content by mistake, throwing the system off track. Advanced use cases, such as efficiently distinguishing a jibe from something more serious, whether in video or written form, without unduly stifling free speech, are out of the question for many years to come.
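As a toy illustration of this brittleness (not any platform’s actual model), consider a scorer that flags content once a risk score crosses a threshold. All of the features, weights and the threshold below are invented; the point is only that removing one signal, say by cropping a weapon out of frame, pushes a genuinely harmful item just under the line.

```python
# Toy threshold-based moderation scorer; every number here is invented for illustration.
FLAG_THRESHOLD = 0.6

def risk_score(features: dict) -> float:
    """Hypothetical linear score over a handful of hand-picked signals."""
    weights = {"violence_terms": 0.4, "weapon_detected": 0.35, "graphic_frames": 0.25}
    return sum(weights[k] * features.get(k, 0.0) for k in weights)

original = {"violence_terms": 0.9, "weapon_detected": 0.8, "graphic_frames": 0.7}
evasive  = {"violence_terms": 0.9, "weapon_detected": 0.0, "graphic_frames": 0.7}  # weapon cropped out

print(risk_score(original) >= FLAG_THRESHOLD)  # True  -> flagged
print(risk_score(evasive)  >= FLAG_THRESHOLD)  # False -> slips through undetected
```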
The more traditional method of reviewers depending on user reports and user feedback to promote or demote content is too slow and is often regarded as “too little, too late”. It can take crucial hours for user-reported infractions to filter through and reach the reviewers, by which time the damage is already done.
To overcome these challenges, most platforms are now deploying a combination of AI and human moderators to come close to a viable system. The pipeline begins with an AI-assisted, algorithm-based program that does an initial pass and flags anything in which it finds even barely perceptible objectionable content. Once content is flagged, it is taken down immediately (or, if not yet online, stopped from proceeding further) and queued up for in-depth review by human operators. The content is then either restored or taken down permanently based on the outcome of the human moderation process.
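A minimal sketch of such a two-stage pipeline might look like the following. The scoring function, the 0.3 flag threshold and the in-memory queue are assumptions for illustration, not any platform’s implementation.

```python
# AI-plus-human moderation pipeline (sketch): AI flags, humans make the final call.
from collections import deque
from typing import Callable

review_queue: deque = deque()

def moderate(item: dict, ai_score: Callable[[dict], float],
             flag_threshold: float = 0.3) -> str:
    """First pass: the AI flags anything even mildly suspicious."""
    if ai_score(item) < flag_threshold:
        return "published"            # nothing detected, content goes (or stays) live
    item["status"] = "held"           # pulled down, or never goes live in the first place
    review_queue.append(item)         # awaits in-depth human review
    return "held_for_review"

def human_decision(item: dict, violates_policy: bool) -> str:
    """Second pass: a human moderator either restores the content or removes it for good."""
    return "removed_permanently" if violates_policy else "restored"
```

Note that the AI stage deliberately errs on the side of flagging; the cost of that choice is exactly the needless takedowns and demotions discussed earlier.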
To illustrate this use of AI, let’s look at Inke, which is one of China’s largest live-streaming companies with 25 million users. As of the end of last year, almost 400 million people in China had done the equivalent of a Facebook Live and live-streamed their activities on the internet. Most of it is innocuous: showing relatives and friends back home the sights of Paris or showing nobody in particular what they are having for lunch or dinner (Source: South China Morning Post). Inke employs 1,200 mostly fresh-faced college graduates who have seconds to decide whether the two-piece swimwear on their screens breaches rules governing use of the platform. The team is the biggest in Inke, accounting for about 60 per cent of its workforce. The content moderators work to detailed regulations on what is allowed and what has to be removed. AI is employed to handle the grunt work of labelling, rating and sorting content into different risk categories. This classification system then allows the company to devote resources in ascending order of risk. A single reviewer can monitor more low-risk content at one time, say cooking shows, while high-risk content is flagged for closer scrutiny.
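The risk-tiering described above might be wired up roughly as follows. The tier names, score ceilings and streams-per-reviewer ratios are assumptions for illustration, not Inke’s actual configuration.

```python
# Hypothetical risk-tier routing: AI labels each stream, and the tier
# controls how many streams a single human reviewer watches at once.
TIERS = [
    # (name, max_ai_risk_score, streams_per_reviewer)
    ("low",    0.2, 40),   # e.g. cooking shows: one reviewer, many streams
    ("medium", 0.6, 10),
    ("high",   1.0, 1),    # flagged for one-on-one scrutiny
]

def route(stream_id: str, ai_risk_score: float) -> dict:
    for name, ceiling, per_reviewer in TIERS:
        if ai_risk_score <= ceiling:
            return {"stream": stream_id, "tier": name,
                    "streams_per_reviewer": per_reviewer}
    raise ValueError("risk score must be between 0 and 1")

print(route("cooking-101", 0.05))    # -> low tier, shared reviewer
print(route("late-night-77", 0.85))  # -> high tier, dedicated reviewer
```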
Yet even that, AI-assisted algorithmic logic coupled with human moderators, may not be enough, as Mark Zuckerberg, Facebook’s CEO and controlling shareholder, acknowledged in a memo on censorship published in December 2018. “What should be the limits to what people can express?” he asked. “What content should be distributed and what should be blocked? Who should decide these policies and make enforcement decisions?” One idea he aired might be thought of as a Supreme Court of Facebook. “I’ve increasingly come to believe that Facebook should not make so many important decisions about free expression and safety on our own,” Zuckerberg wrote. “In the next year, we’re planning to create a new way for people to appeal content decisions to an independent body, whose decisions would be transparent and binding.”
Bigger elephant in the room
There are two major challenges to building and deploying any content monitoring system. The first challenge is simply one of scale. With more than 300 hours of video uploaded to YouTube alone every single minute, the humans monitoring it must be capable of correctly identifying inappropriate content very, very quickly, virtually in seconds. Those removal decisions are not simple, yet they must be made in seconds, and personal biases and tolerances need to be recalibrated in favor of policies and law. In addition, the biggest challenge humans face in monitoring content is maintaining their objectivity while watching hours upon hours of video. Many content reviewers have reported increased stress levels, and allegations of mental illness are not uncommon. A group of moderators sued Microsoft in January alleging that they were suffering from PTSD as a result of watching child abuse and other sadistic acts.
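To get a feel for the scale problem, here is a rough back-of-the-envelope calculation. Only the 300 hours-per-minute upload rate comes from the text; the review speed and shift length are assumptions invented purely for illustration.

```python
# Back-of-the-envelope: how many reviewers would fully manual review need?
UPLOAD_HOURS_PER_MINUTE = 300                              # from the text
upload_hours_per_day = UPLOAD_HOURS_PER_MINUTE * 60 * 24   # 432,000 hours of video per day

REVIEW_SPEEDUP = 3    # assume a reviewer skims at 3x playback speed
SHIFT_HOURS = 8       # assume an 8-hour shift per reviewer per day

review_hours_needed = upload_hours_per_day / REVIEW_SPEEDUP  # 144,000 reviewer-hours per day
reviewers_needed = review_hours_needed / SHIFT_HOURS         # 18,000 reviewers

print(f"{upload_hours_per_day:,.0f} hours uploaded per day")
print(f"~{reviewers_needed:,.0f} full-time reviewers just to watch it all once")
```

Even under these generous assumptions, the answer lands in the tens of thousands of people just to watch uploads once, before counting appeals, re-reviews or live streams, which is why purely manual review does not scale.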
The second challenge is bigger and is not going anywhere anytime soon. As Mark Zuckerberg noted in December 2018, it is all about who gets to regulate, to what extent, and, more importantly, who decides the rules of the game. Is it solely up to each content publisher or platform to define its own rules on what content gets flagged as inappropriate and taken down or demoted? Or is it up to a government or regulatory body to decide? Zuckerberg clearly feels it is the latter.
Experts believe that, for now at least, human curators must continue to work alongside AI. This of course means that both human moderators and AI-backed algorithms have immense ground to cover before a reasonable degree of accuracy can be achieved. It will take time for a platform, say Facebook, to train its neural networks to streamline the process, while human moderators continue to face the daunting task of finding the proverbial needle in the haystack.
Conclusion
Let’s remember the main problem once again. Social media platforms are responsible for the content hosted on their platforms, and this responsibility cannot and must not be avoided in the name of freedom of expression, viewers’ choice, viewer-promoted content, or whatever fancy term one may come up with.
When anyone speaks about identifying harmful content and the need to censor it, it feels like work that should be done by robot moderators; but in reality, AI doesn’t yet grasp context and gray areas. The struggle between free speech and censorship keeps humans at the center of this undesirable yet undeniable role.
For content publishers, aggregators and reviewers, the stakes have never been higher: content that is legitimate and in the public interest must be allowed to run freely, while content that doesn’t meet these two conditions must be taken down as soon as it is put up, before it has the chance to reach any audience.