Tech News

Meta to cut back on third-party vendors in favor of AI for content enforcement

Why This Matters

Meta's shift toward advanced AI systems for content enforcement marks a significant move in the tech industry, aiming to improve the efficiency and accuracy of content moderation while reducing reliance on third-party vendors. The transition reflects a broader trend of AI integration in platform management, with implications for faster response times and enhanced user safety. For consumers, this could mean more consistent enforcement of community standards and quicker resolution of account issues, though it also raises ongoing questions about AI transparency and oversight.

Key Takeaways

Meta is beginning a yearslong rollout of more advanced artificial intelligence systems that will handle content enforcement-related tasks like catching scams and removing illegal media, as the company reduces its use of third-party vendors and contractors in favor of AI.

In a blog post Thursday, Meta said that the process could take a few years, and that the company won't completely rely on AI for monitoring content.

"While we'll still have people who review content, these systems will be able to take on work that's better-suited to technology, like repetitive reviews of graphic content or areas where adversarial actors are constantly changing their tactics, such as with illicit drugs sales or scams," Meta said in the post.

Meta didn't name any of its current vendors, but the company has previously relied on contractors from firms like Accenture, Concentrix and Teleperformance.

The announcement represents Meta's latest effort to use its hefty investments in AI to streamline its business and operations while it struggles to find revenue-generating applications that compete with offerings from OpenAI, Anthropic and Google. Meta said AI will help more accurately flag violations "while also stopping more scams and responding faster to real-world events with fewer overenforcement mistakes."

Meanwhile, Meta is defending itself in several high-profile trials involving the safety of children on its platforms, an issue directly tied to its existing challenges with content moderation.

The company said it will still rely on experts to design, train and oversee its AI content enforcement systems, and humans will remain involved with the "most complex, high‑impact decisions" that involve law enforcement and appeals related to account disablement.

The company also said Thursday that it has debuted a new Meta AI digital support assistant that people on Facebook and Instagram can use to address various account-related issues.

According to a Reuters report last week, Meta has been considering whether to lay off over 20% of its workforce to help balance its big AI spending. Meta responded that it was "a speculative report about theoretical approaches."
