YouTube turns to human moderators again

Remigio Civitarese
September 24, 2020

The platform had shifted to AI moderation after lockdowns caused by the coronavirus pandemic. YouTube often reverses such removal decisions, but creators have pointed out in the past that by the time a video is monetizable again, it is no longer being watched by a large audience.

"That's where our trained human reviewers come in," says YouTube Chief Product Officer Neal Mohan, adding that they took the videos flagged by AI and then "made decisions that tend to be more nuanced, especially on topics like hate speech or medical misinformation or harassment".

The former moderator, who remains anonymous, is now seeking compensation for the trauma she has suffered, as well as the creation of a YouTube-funded medical monitoring program that would screen, diagnose, and treat content moderators.

Human moderators will once again oversee YouTube content, the Google-owned video streaming platform has announced. They will also have the authority to take down videos.

The lawsuit against YouTube alleges that the company failed to adequately inform prospective content moderators about what the job involved and the negative impact it could have on their mental health.

The AI system was programmed to err on the side of caution, which led to content that merely came close to breaking YouTube's rules being removed from the site.

The firm behind the lawsuit, the Joseph Saveri Law Firm, sued Facebook in 2018 for similarly failing to protect the mental health of its content moderators, a case that resulted in a $52 million settlement, The Verge reported.

This cautious approach resulted in around 11 million videos being taken down between April and June.

Up until now, the company has asked creators to put age restrictions on their videos, with some content being flagged by the algorithm only if it was found to be extreme in nature. Roughly twice as many appeals were submitted against these removals during the same period. Workers are required to review about 100 to 300 pieces of content every day, with a permitted error rate of only two to five percent. Mohan acknowledged that AI moderators cannot be as precise and accurate as human moderators. That job is now done by humans on the company's Trust and Safety team; they will likely move over to reviewing the videos that the AI flags at high speed.

At the same time, tech companies are under more pressure to combat hate speech and misinformation ahead of the United States presidential election in November.
