
AI Moderators Are TikTok’s Next Step

Category: AI

TikTok, the popular video-sharing platform owned by ByteDance, is making a drastic change to its content management strategy: it is laying off hundreds of human moderators and relying more heavily on artificial intelligence (AI) for content review. The move, which primarily affects staff in Malaysia and the UK, is part of the company’s effort to improve the efficiency and scalability of its handling of an ever-growing volume of increasingly complex user-generated content.

The AI Moderation Shift On TikTok

As part of its transition to AI-driven content moderation, TikTok is laying off about 500 workers, mostly in Malaysia, with further layoffs reportedly occurring in the UK. The change is part of a broader plan to streamline how the platform reviews content worldwide. TikTok currently uses a hybrid approach in which automated systems handle roughly 80% of content that violates its guidelines, while human moderators deal with the remaining 20%. In 2024, ByteDance plans to invest $2 billion globally in trust and safety initiatives, with a particular emphasis on making its moderation systems more effective through cutting-edge AI.
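To make the hybrid model concrete, here is a minimal sketch in Python of how such a pipeline is often structured: an automated classifier acts on clear-cut cases and routes ambiguous ones to a human review queue. The thresholds, names, and routing logic here are entirely hypothetical illustrations; TikTok has not published the details of its actual system.

```python
from dataclasses import dataclass

# Hypothetical confidence thresholds; a real system would tune these per policy area.
REMOVE_THRESHOLD = 0.95  # confident violation -> automatic removal
ALLOW_THRESHOLD = 0.05   # confident non-violation -> leave the content up

@dataclass
class ModerationResult:
    action: str            # "remove", "allow", or "human_review"
    violation_score: float # classifier's estimated probability of a violation

def route_content(violation_score: float) -> ModerationResult:
    """Route a piece of content based on a classifier's violation score (0.0-1.0)."""
    if violation_score >= REMOVE_THRESHOLD:
        return ModerationResult("remove", violation_score)
    if violation_score <= ALLOW_THRESHOLD:
        return ModerationResult("allow", violation_score)
    # Ambiguous middle band: escalate to the human moderation queue.
    return ModerationResult("human_review", violation_score)

# Example: confident scores are handled automatically, borderline ones go to humans.
print(route_content(0.99).action)  # remove
print(route_content(0.40).action)  # human_review
```

In a setup like this, the share of work falling to human reviewers depends directly on how wide the ambiguous band is, which is one way a platform could arrive at an 80/20 split between automated and human handling.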

Causes Of The AI Transition

Several factors are driving the shift to AI-driven moderation. Efficiency and cost-effectiveness are the main motivators: AI systems are expected to complete content review tasks faster and more cheaply than human moderators. Scalability is another: AI is seen as a more flexible way to handle fluctuating workloads and the growing complexity of user-generated content. Finally, TikTok wants to use advanced technology to improve the accuracy and consistency of content moderation across its large, global user base.

AI Moderation Concerns

Labor rights advocates and industry experts have raised serious concerns about the shift toward AI-driven content moderation on TikTok and other platforms. These concerns center on the effectiveness, accuracy, and potential biases of AI systems when handling complex moderation tasks. The main issues are:

Accuracy and context: Experts doubt AI’s ability to accurately interpret complex cultural contexts and subtle content violations, which human moderators are trained to spot.

Job displacement: The layoffs of hundreds of human moderators have raised concerns about job security in the content moderation industry.

Exploitation in the Global South: Critics worry that the adoption of AI could lead to greater exploitation of content moderators in developing countries.

Lack of transparency: The absence of independent evaluations of AI moderation systems raises questions about their reliability and accountability.

Human oversight: Critics argue that human moderators remain essential, particularly in multilingual, culturally diverse regions such as Malaysia.

Employee unionization: In response to the layoffs, hundreds of TikTok workers in London have organized a union to protect their jobs and improve working conditions.

Moderation Trends In The Industry

TikTok is not the only social media platform adopting AI-driven content moderation; the trend is widespread across the industry. Meta-owned Instagram and Threads have likewise struggled with moderation, most notably with account locks and content down-ranking. Adam Mosseri, the head of Instagram, initially blamed these issues on human moderator error, but the company later acknowledged that technical problems with its moderation systems were also a factor.

This industry-wide shift to AI moderation is driven by the need to manage enormous volumes of user-generated content efficiently. But it raises the question of how to balance automation with human oversight. As platforms invest heavily in AI for content review, they must address the technology’s potential limitations in understanding complex cultural contexts and nuanced content. The trend underscores how difficult it remains for social media companies to keep their platforms safe while controlling costs and meeting regulatory requirements.