The volume of cases being assessed was too great to keep on top of, making it difficult to keep users safe and leaving teenagers and children especially at risk, he added. Cuts and the reorganisation of some moderation teams, in which some roles are being replaced by AI technology, have, in his view, limited the company's ability to deal effectively with this kind of content.
Meta, TikTok let harmful content rise after evidence outrage drove engagement
Why This Matters
This article highlights how Meta's and TikTok's reliance on AI moderation, combined with organisational changes, has compromised their ability to manage harmful content, putting vulnerable users such as teenagers and children at greater risk. It underscores the ongoing challenge for social media platforms of balancing user engagement with safety. For consumers and the industry, it points to the need for more robust and transparent content moderation strategies to better protect users.
Key Takeaways
- AI moderation has limited effectiveness in managing harmful content.
- Organisational restructuring has weakened content safety efforts.
- Vulnerable users are at increased risk due to moderation challenges.