Tech News

Meta and TikTok let harmful content rise after evidence that outrage drove engagement

Why This Matters

This article highlights how Meta and TikTok's growing reliance on AI moderation, along with organisational changes, has compromised their ability to manage harmful content, putting vulnerable users such as teenagers and children at greater risk. It underscores the ongoing challenge social media platforms face in balancing user engagement with safety, and points to the need for more robust and transparent content moderation strategies to better protect users.

Key Takeaways

The volume of cases moderators were assessing was too difficult to keep on top of, leaving teenagers and children especially at risk, he added. In his view, cuts and the reorganisation of some moderation teams - with some roles being replaced by AI technology - have limited the company's ability to deal effectively with this kind of content.