
Why Machine Learning (AI) Will Never Replace Human Content Moderation

The Internet has democratized many facets of everyday life, allowing everybody, from regular folk and progressive thinkers to ideological extremists and harmful predators (and everyone in between), to share their views. The resulting proliferation of harmful online content has made regulation inevitable. So what is the future of online content moderation?

As regulatory pressure from policymakers increases, online platforms are increasingly using automated procedures to take action against inappropriate (or illegal) material on their systems, such as hate speech, pornography, or violence. But are these algorithms really up to the task? Automated systems can spot the most obvious offenders, which is undoubtedly useful, but does AI lack the ability to understand cultural context and nuance? Can a single tool or approach effectively regulate the internet while maintaining its benefit to society, or is a more holistic approach required? Here we explore some recent events and stories, in particular the role the COVID-19 pandemic has played in forcing many tech giants to rely on AI moderation perhaps before it was ready. We also consider the roles both humans and AI can best play in the future of online content moderation and ask: will an algorithm ever truly be able to replace a human moderator?

Social media content moderation: AI can't replace human moderators

1. YouTube: AI moderation proved overzealous during the pandemic

The spread of the coronavirus pandemic around the world in 2020 was unprecedented and rapid. In response, tech companies have had to contend with the dual aim of keeping their services available to users while reducing the need for people to come into the office. As a result, many social media companies have become more reliant on AI to make decisions on content that violates their policies on things like hate speech and misinformation. YouTube announced these changes back in March 2020.

In the same blog post, YouTube warned that automated systems will start removing some content without human review, and due to these new measures “users and creators may see increased video removals, including some videos that may not violate policies”.

Nevertheless, YouTube was surprised at just how active the AI moderation turned out to be in its attempts to spot harmful content. YouTube told the Financial Times in September 2020 that the greater use of AI moderation had led to a significant increase in video removals and incorrect takedowns.

All in all, approximately 11 million YouTube videos were removed between April and June 2020. Some 320,000 of these takedowns were appealed, and half of the appealed videos were reinstated. These figures were roughly twice the usual rate, an indication that the AI system was somewhat overzealous in its attempts to spot inappropriate or illegal content.

Since then, YouTube has brought back more human moderators to ensure more accuracy with its takedowns. While one could consider this experiment a failure, YouTube’s chief product officer Neal Mohan suggests machine learning systems definitely have their place. “Over 50 percent of those 11 million videos were removed without a single view by an actual YouTube user and over 80 percent were removed with less than 10 views,” he said. “And so that’s the power of machines.” Machines clearly still have a lot to learn, however.

2. Facebook: online content moderation can't be solved with artificial intelligence alone

Over the last few years, Facebook has invested massively in contracting content moderators around the world. So the decision to send all of its contract workers home as the coronavirus outbreak swept the planet was not one the company made lightly, particularly as content moderation is not work you can exactly bring home with you. The disturbing nature of the job is damaging enough to a moderator’s mental health in a professional environment; it would be considerably more worrisome if done at home, surrounded by the moderator’s family. “Working from home on those types of things, that will be very challenging to enforce that people are getting the mental health support that they (need),” said Mark Zuckerberg.

That left the task of identifying and removing offensive content from Facebook largely to the algorithms. The results have been less than stellar. Just one day after Facebook announced its plans to rely more heavily on AI, users began complaining that the platform was making mistakes. Facebook’s machine-learning moderation systems started blocking a whole host of legitimate posts and links, including news articles related to the coronavirus pandemic, and flagging them as spam. Despite Facebook’s vice president of integrity, Guy Rosen, declaring “this is a bug in an anti-spam system, unrelated to any changes in our content moderator workforce”, many industry specialists and pundits suggested the real cause was Facebook’s decision to send its contracted content moderators home.

A former Facebook security executive, Alex Stamos, went a little further in his speculation.

“It looks like an anti-spam rule at FB is going haywire,” he wrote on Twitter. “Facebook sent home content moderators yesterday, who generally can’t (work from home) due to privacy commitments the company has made. We might be seeing the start of the (machine learning) going nuts with less human oversight.”

There were other issues. Social media platforms such as Facebook play an important role in Syria, where campaigners and journalists rely on social media to document potential war crimes. But because AI struggles to understand context and intention, scores of activists’ accounts were closed down overnight, often with no right to appeal, due to the graphic content of their posts.

And yet, a lot of questionable posts remained untouched. According to Facebook’s own transparency report, the number of takedowns in high-profile areas like child exploitation and self-harm fell by at least 40 percent in the second quarter of 2020 because of a lack of humans to make the tough calls about what broke the platform’s rules.

3. Twitter: AI-driven content moderation often fails to understand context

Twitter took a similar tack. In early 2020, the company informed users that it would increasingly rely on machine learning to remove “abusive and manipulated content,” while at least acknowledging that artificial intelligence would be no replacement for human moderators.
In the same blog post, the company said: “We want to be clear: while we work to ensure our systems are consistent, they can sometimes lack the context that our teams bring, and this may result in us making mistakes.”

To compensate for the anticipated errors, Twitter said it wouldn’t permanently uphold suspensions “based solely on our automated enforcement systems.”
As perhaps expected, and as with YouTube and Facebook, Twitter’s shift toward greater reliance on automation produced less than consistent results. In a recent letter to shareholders, Twitter reported that half of all tweets deemed abusive or in violation of policy are being removed by its automated moderation tools before users have a chance to report them.
In France, however, campaigners fighting against racism and anti-Semitism noticed a more than 40 percent increase in hate speech on Twitter. Less than 12 percent of those posts were removed, the groups said. Clearly, the AI still has some blind spots.

4. AI cannot moderate content alone

AI and human moderation by Business Process Outsourcing (BPO)

The move toward more AI shouldn’t be a surprise. For years, tech companies have been pushing automated tools as a way to supplement their efforts to fight the offensive and dangerous content that can inhabit online platforms. The coronavirus pandemic has been an opportunity to see just how far the big companies’ machine-learning algorithms have really come. So far, it has not been a resounding success.

To be sure, AI can help content moderation move faster, and automated systems are already doing quite a bit to help. They act as ‘first responders’, dealing with the obvious problems that appear on the surface while pushing more subtle, suspect content toward human moderators.

But the way they do so is relatively simple. Many use visual recognition to identify broad categories of content, such as “human nudity” or “guns”. This approach is prone to misreading context; categorizing images of breastfeeding mothers alongside pornographic content, for example, has ruffled feathers in the past.
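
To make the ‘first responder’ idea concrete, here is a minimal sketch of confidence-based triage, assuming a classifier that returns per-category scores. The category names, thresholds, and routing labels are illustrative assumptions, not any platform’s actual moderation pipeline.

```python
# A minimal sketch of the "first responder" triage described above.
# The categories, thresholds, and routing labels are illustrative
# assumptions, not any platform's real moderation system.

from dataclasses import dataclass

@dataclass
class ModerationScores:
    """Confidence scores (0.0-1.0) from a hypothetical content classifier."""
    nudity: float
    violence: float
    hate_speech: float

AUTO_REMOVE_THRESHOLD = 0.95   # near-certain violations are removed automatically
HUMAN_REVIEW_THRESHOLD = 0.60  # ambiguous cases go to a human moderator queue

def triage(scores: ModerationScores) -> str:
    """Route a piece of content based on its highest classifier score."""
    top_score = max(scores.nudity, scores.violence, scores.hate_speech)
    if top_score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"    # obvious violation: the machine acts alone
    if top_score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"   # subtle or context-dependent: escalate to a person
    return "allow"              # low risk: publish without review

# Example: a breastfeeding photo scored as borderline "nudity" is escalated
# to a human reviewer rather than removed outright.
print(triage(ModerationScores(nudity=0.72, violence=0.01, hate_speech=0.0)))
# -> "human_review"
```

The point of a sketch like this is that the hard judgment calls, the scores sitting between the two thresholds, are exactly where human reviewers are still needed.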

Technology, simply, struggles to understand the social context for posts or videos and, as a result, can make inaccurate judgments about their meaning.

Things become much trickier when the content itself can’t be easily classified even by humans. Context-dependent content, such as fake news, misinformation, and satire, does not have a simple definition, and for each of these there are grey areas. Someone’s background, personal ethos, or mood might make the difference between one definition and another.

The problem with trying to get machines to understand this sort of content is that it is essentially asking them to understand human culture, which is a phenomenon too fluid and subtle to be described in simple, machine-readable rules.

By pairing the efficiency of AI with the context-understanding empathy and situational thinking of humans, the two become an ideal partnership for moderation. Together, they can safely, accurately, and effectively vet high volumes of multimedia content.

5. Leap Steam provides content moderation services across all platforms

This balance is the backbone of Leap Steam’s content moderation service, which combines the strengths of both humans and AI to moderate content at scale, creating safe and trustworthy online environments for organizations and their communities.

By working with Leap Steam, you can be assured that your users, your brand, and your company’s legal liabilities are protected. With just the right blend of human and AI moderation, Leap Steam can deploy experienced in-house teams, using the latest in innovative moderation tools, to oversee live video streams, image moderation, text moderation, sentiment analysis, and social listening. You can integrate with our moderation tool via an API and we’ll take care of the rest. If you have your own system that you would like us to use, we can adapt to your needs to ensure smooth collaboration.
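
As a purely illustrative sketch of what plugging into a moderation API could look like (the endpoint URL, authentication header, and field names below are hypothetical placeholders, not Leap Steam’s documented API), an integration might be as simple as:

```python
# Hypothetical example of submitting content to a moderation API.
# The URL, header, and JSON fields are placeholders for illustration
# only; consult the actual API documentation for real integrations.

import requests

API_URL = "https://api.example.com/v1/moderate"  # placeholder endpoint
API_KEY = "your-api-key"                         # placeholder credential

def moderate_text(text: str) -> dict:
    """Send a piece of user-generated text for moderation and return the verdict."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"type": "text", "content": text},
        timeout=10,
    )
    response.raise_for_status()
    # A response might look like {"decision": "human_review", "labels": ["hate_speech"]}
    return response.json()

if __name__ == "__main__":
    print(moderate_text("Example user comment to check."))
```

In practice, the returned decision could feed straight into your own publishing workflow, with anything flagged for review routed to a human moderation queue.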

In the digital age, the only certainty is that innovation will continue to create challenges and opportunities alike. And it will take innovative thinking to ensure those challenges are kept in check and those opportunities are to the fore.

Feel free to learn more about our Content Moderation Pricing here.

Leap Steam
