Tech News

Meta will move away from human content moderators in favor of more AI

Why This Matters

Meta's shift towards AI-driven content moderation signifies a major transformation in how social media platforms manage online content, aiming for faster and more comprehensive moderation across multiple languages. This move could impact user trust, moderation accuracy, and operational costs, influencing industry standards for content management. Consumers may experience changes in how their reports and appeals are handled, raising questions about transparency and fairness in moderation practices.

Key Takeaways

A little more than a year after ditching third-party fact checkers and rolling back much of its proactive content moderation, Meta says it will further "transform" its approach by drastically reducing the number of human moderators in favor of AI-based systems. The change will happen "over the next few years," the company says, and will allow it to catch more issues faster than its current approach does.

Meta didn't say how much of its contract workforce might be cut as it makes this transition. The company employs thousands of contractors around the world to review content flagged by its AI systems and user reports, among other tasks. Meta said that as it shifts its approach, humans will "play a key role" in "critical decisions" and aid in training and other tasks.

"Experts will design, train, oversee, and evaluate our AI systems, measuring performance and making the most complex, high‑impact decisions," Meta said in an update. "For example, people will continue to play a key role in how we make the highest risk and most critical decisions, such as appeals of account disablement or reports to law enforcement."


The company has been testing LLM-based systems for content moderation for a while and says early tests have shown "promising" results. Another advantage, Meta says, is that its AI can handle the languages used by "98% of people online," compared with the 80 languages its moderation capabilities currently support.

While Meta says its underlying rules aren't changing, the new approach could dramatically change users' perception of how the company enforces its policies. Meta already relies heavily on AI for certain rules, and many users believe these systems make too many mistakes and that appeals rarely reach a set of human eyes. On the other hand, Meta, which stands to save a lot of money if it significantly downsizes its contract workforce, says its new systems make "fewer over-enforcement mistakes" and catch more of the most "severe" violations.

In the nearer term, Meta is introducing an AI-powered "support assistant" to help users with certain types of account issues. The chatbot, which is rolling out now in the Facebook and Instagram apps, will be able to help users report content, manage appeals, reset passwords, and manage other account settings. It will also be able to help people who get locked out of their accounts, "starting with select cases in the US and Canada."