
Meta and TikTok let harmful content rise to drive engagement, say whistleblowers

Why This Matters

This article highlights the challenges Meta and TikTok face in moderating harmful content, especially as organisational changes and AI integration hinder their ability to protect vulnerable users. It underscores the tech industry's ongoing struggle to balance user engagement with safety. For consumers, it raises concerns about the effectiveness of content moderation on popular platforms and the potential risks to children and teenagers.

Key Takeaways

According to one whistleblower, the volume of cases moderators were assessing was too difficult to keep on top of, which left teenagers and children especially at risk. Cuts and the reorganisation of some moderation teams, where some roles are being replaced by AI technology, have, in his view, limited the company's ability to deal effectively with this kind of content.