Meta and TikTok let harmful content rise to drive engagement, say whistleblowers

The volume of cases moderators were assessing was too great to keep on top of, leaving users, and teenagers and children especially, at risk, he added. Cuts and the reorganisation of some moderation teams - where some roles are being replaced by AI technology - have, in his view, limited the company's ability to deal effectively with this kind of content.
Why This Matters
This article highlights the challenges Meta and TikTok face in moderating harmful content, especially as organisational changes and AI integration hinder their ability to protect vulnerable users. It underscores the tech industry's ongoing struggle to balance user engagement with safety. For consumers, it raises concerns about the effectiveness of content moderation on popular platforms and the potential risks to children and teenagers.
Key Takeaways
- Organisational changes and the replacement of some moderation roles with AI have reduced moderation effectiveness.
- Harmful content may be rising due to limited oversight.
- The safety of young users is at increased risk on these platforms.