
Content Moderation and Privacy Reform

Today, we can send data from one side of the globe to the other in a fraction of a second, using just our smartphones. The ability to record, upload, and share content can advance the causes of many global issues, such as human rights, the fight against political corruption, and ensuring adequate disaster relief reaches those who need it most.

There is, however, a dark side to this technology. Indeed, some corners of the Internet have become a breeding ground for terrorists and extremists, a marketplace for human trafficking and sexual exploitation, and a stage for hate speech and violence.

Leading the fight against such abhorrent material are content moderators.

1. Content Moderators: The Guardians of Social Media

A content moderator’s job ranges from keeping platforms free of spam, making sure content is placed in the right category, and protecting users from scammers, to reviewing and analyzing reports of abusive or illegal behavior and content. They decide, based on a predetermined set of rules and guidelines, as well as the law, whether the content should stay up or come down.

Content moderation became a major political talking point in 2020, when Twitter and Facebook began flagging then-President Trump’s tweets before and after the U.S. election last November. It did so again early in 2021, when Twitter, YouTube, Snapchat, and Facebook outright banned Trump from their platforms in light of the January 6th attack on the U.S. Capitol, citing the risk that he would use his social media platforms to incite more violence. Scrutiny of how social media sites moderate posts on their platforms has intensified since then, as lawmakers, tech companies, and the new administration in the White House seek ways to curb violent messages and conspiracy theories.

What changes will 2021 bring to the world of content moderation? We take a look at just some of the potential big stories of the coming months.

2. Content Moderation Under the Biden Administration

Social media platforms are under fire, from both sides of the aisle, for how they police content (or how they don’t). Questions are being asked about what role these tech companies should have when it comes to hosting information on topics ranging from the COVID-19 pandemic to election-related misinformation and disinformation.

Congress has called on the CEOs of Facebook (Mark Zuckerberg), Google (Sundar Pichai), and Twitter (Jack Dorsey) to testify on March 25th, in front of a House Committee, to discuss the proliferation of disinformation on their platforms. Some believe the solution may lie in a review and update of a 25-year-old law, Section 230 of the Communications Decency Act, and, as unlikely as it seems in these ultra-partisan times, there is agreement from both parties.

Democrats, including the newly-elected 46th President of the United States, Joe Biden, say that social media platforms aren’t doing enough to restrict or remove harmful content, particularly when it comes to extremism, hate speech, blatant falsehoods, and unhinged conspiracy theories (such as QAnon). They want to amend Section 230 to provide more oversight.

At the same time, however, those on the right suggest many social media platforms have too much power to censor and remove content. They feel that conservative viewpoints are being treated harshly and point to the recent banning of many social media accounts belonging to conservatives. These include former President Trump, the account of his presidential campaign, his former chief strategist Steve Bannon, and MyPillow CEO Mike Lindell. Many Republicans and conservatives feel Section 230 gives these companies too much final say over what is hosted on their platforms, a potential First Amendment issue.


How can content moderation shape US politics? Should Twitter, YouTube, Snapchat, and Facebook have banned Trump outright?

German Chancellor Angela Merkel, an unlikely Trump ally at the best of times, also agrees that it is “problematic” when the CEO of a privately-owned company has the power to silence an elected leader of a democratic state.

Jeff Kosseff is a cybersecurity law professor at the U.S. Naval Academy and the author of ‘The Twenty-Six Words That Created the Internet’, a book about Section 230. He says, “You have half of D.C. that thinks there should be much less moderation and the other half thinking there should be more moderation. It’s hard to find the solution when you don’t have people agreeing on the problem.”

Repealing the law entirely is an extreme, and somewhat unlikely, option. However, we are likely to see an increased demand for change to the legislation this year. Back in October 2020, during a previous Congressional hearing, Mark Zuckerberg expressed a preference for updating the law. “The debate about Section 230 shows that people of all political persuasions are unhappy with the status quo,” Zuckerberg said. “People want to know that companies are taking responsibility for combating harmful content, especially illegal activity, on their platforms.”

3. Facebook vs. Apple: Two Giants Go Head-to-Head Over Privacy Concerns

Fearsome behemoths face off in an epic battle for the ages, while humanity watches on helplessly, hoping desperately to avoid becoming collateral damage in their fight for global supremacy.

So goes the plot of “Godzilla vs. Kong”, a big-budget action movie due to be released later this year. But while audiences are being entertained by cutting-edge special effects, there is another struggle between two titans looming on the horizon, as epic as anything Hollywood can imagine.

At the core of the developing hostility is Apple’s new iOS update, which requires developers to ask for permission before they can track what users do across apps. While these changes should go some way toward addressing privacy concerns, they will almost certainly harm Facebook’s bottom line. Bank of America estimates Facebook could see as much as a 3% drop in revenue as a result of the update.
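
For developers, the change surfaces through Apple’s App Tracking Transparency framework: before an app can read the device’s advertising identifier (IDFA), it must show the system consent prompt. The snippet below is a minimal sketch of that request flow, assuming iOS 14.5 or later and an NSUserTrackingUsageDescription entry in the app’s Info.plist; it is an illustration of the mechanism, not code from either company.

    import AppTrackingTransparency
    import AdSupport

    // Ask the user for permission to track them across apps and websites.
    // Requires the NSUserTrackingUsageDescription key in Info.plist,
    // which supplies the explanation shown in the system prompt.
    func requestTrackingPermission() {
        if #available(iOS 14, *) {
            ATTrackingManager.requestTrackingAuthorization { status in
                switch status {
                case .authorized:
                    // The user consented: the advertising identifier is available.
                    let idfa = ASIdentifierManager.shared().advertisingIdentifier
                    print("Tracking authorized, IDFA: \(idfa)")
                case .denied, .restricted, .notDetermined:
                    // No consent: the IDFA is returned as all zeros and
                    // cross-app tracking is not permitted.
                    print("Tracking not authorized")
                @unknown default:
                    break
                }
            }
        }
    }

If the user declines, or on earlier iOS versions, apps simply cannot tie activity to the IDFA, which is the mechanism targeted advertisers have relied on.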

But Facebook isn’t trying to win people over by complaining about potential lost earnings; rather, its campaign is focused on convincing the world that the social media platform is the champion of ‘the little man’. Last December, the company ran a series of full-page newspaper ads decrying Apple and stating how important targeted advertising (through app-tracking) is for small businesses to find customers.

During a conference in Brussels, Apple CEO Tim Cook hit back at those who feel increased data privacy for consumers is a bad thing. While not mentioning Facebook directly, Cook said, “If a business is built on misleading users, on data exploitation, on choices that are no choices at all, it does not deserve our praise. It deserves reform.”

Whatever happens as a result of this spat, expect this rivalry to heat up, as Zuckerberg has previously warned of a “very significant competitive overlap” between the two giants in the future. Apple and Facebook already compete over messaging apps and are set to lock horns over hardware as Apple develops its own VR headset to go up against Facebook’s Oculus Quest device. In the meantime, Facebook now has an array of smart home devices that could potentially compete with Apple’s own TV set-top box, HomePod speaker, and iPad.


Apple and Facebook’s Privacy War

4. AI Errors in 2020 Mean Big Tech Will Rely More on Human Content Moderators – Working Conditions Will Continue to Improve

It has become abundantly clear that AI is simply not up to the task of content moderation on a large scale. Many doubt it ever will be.

In yet another U.S. Senate testimony, this one back in 2018, Facebook chief Mark Zuckerberg said he was optimistic that in five to ten years, artificial intelligence would play a leading role in the automatic detection and moderation of illegal or harmful content.

We got to see first-hand what AI-led content moderation can do when COVID-19 forced Facebook, Twitter, and YouTube to send home their human moderators early last year. The results were not encouraging. The platforms saw greater quantities of content relating to self-harm, child nudity, and sexual exploitation; overzealous censorship of news articles; otherwise perfectly acceptable content marked as spam; and businesses whose ads were erroneously taken down and who were temporarily prevented from appealing the decision. AI scores very poorly in detecting nuance, satire, and sarcasm. AI also struggles to understand context and intention. Some accounts belonging to activists in Syria were closed down due to the graphic content of their posts. Human involvement is clearly needed in the decision as to what content should ultimately be removed.

So it will be people who make the distinction between legitimate and harmful content. That means we must ask people to sift through the very worst images, videos, and hate-filled rhetoric that the dregs of humanity upload to the Internet. It is a job that pays, on average, $15 an hour to spend six or more hours each day viewing upwards of 500 reported posts of graphic imagery and hate speech. Many companies offer little to no mental screening before the job, and little in the way of emotional support during, or after, an employee’s time with them. In fact, many content moderators experience what can only be described as trauma as part of their daily working life.

The good news is that, finally, people in power are beginning to take note. In January, content moderators working for one of Facebook’s outsourcing partners in Ireland met with the current Tánaiste (Deputy Prime Minister of Ireland), Leo Varadkar. Employees raised a range of concerns about their working conditions, including poor pay, inadequate mental health support, and not being allowed to work from home despite Ireland’s COVID-19 lockdown.

The meeting came after more than 200 Irish content moderators wrote and signed a letter to Facebook’s Mark Zuckerberg and Sheryl Sandberg, raising a list of concerns. The letter reads: “Facebook’s algorithms are years away from achieving the necessary level of sophistication to moderate content automatically. They may never get there.” It goes on to say: “Without our work, Facebook is unusable. Its empire collapses. Your algorithms cannot spot satire. They cannot sift journalism from misinformation. They cannot respond quickly enough to self-harm or child abuse. We can. Facebook needs us. It is time that you acknowledged this and valued our work.” After the meeting, Varadkar said the content moderators perform “really important work to protect us all” and that he will contact the social media giant to raise concerns about their working conditions.

5. Content Moderation at Leap Steam

Life at Leap Steam: we appreciate the valuable work of our content moderators, so the health of our employees is paramount, and that includes mental health. We consistently encourage an atmosphere of harmony and open communication. We provide our content moderators with full support, which includes health insurance, regular breaks, counseling, and the opportunity to work from home should it be needed.

The company follows the most up-to-date rules and local regulations concerning health & safety, vacation time, and COVID-19 guidelines. Leap Steam believes the well-being and development of our employees to be a vital component of our success. To that end, we also offer our staff English language training, yoga & meditation classes, free healthy and nutritious meals, and have a 24-hour gym located on-premises.

Leap Steam’s content moderators carry out extremely important work, keeping online spaces safe from offensive, harmful, and illegal content. They make positive contributions to society, so we do our best each day to ensure they operate in a working culture of trust and confidence.

