Tech News

X, a bastion for hate, claims it will reduce hate content in the UK

Why This Matters

X's pledge to reduce hate and terror content in the UK marks a significant effort to combat online hate speech and illegal content, especially amid rising hate speech incidents following Elon Musk's acquisition. This move highlights the ongoing pressure on social media platforms to enhance content moderation and accountability, impacting both industry standards and user safety. However, skepticism remains about the platform's ability to follow through given past challenges and Musk's own posting habits.

Key Takeaways

X has committed to reducing "hate and terror content" in the UK, according to the regulator Ofcom, by speeding up its review process for offending content and by agreeing to "withhold access in the UK" to accounts that post "illegal terrorist content" and are determined to be "operated by or on behalf of a terrorist organisation." This is despite X's visible increase in hate content after Elon Musk purchased Twitter (eventually renamed X): a UC Berkeley study found that the weekly rate of hate speech increased by 50 percent, buoyed by an increase in bots.

"We have evidence that terrorist content and illegal hate speech is persisting on some of the largest social media sites," Oliver Griffiths, Ofcom's Online Safety Group Director, said in a statement. "We are challenging them to tackle the problem and expect them to take firm action. This is of particular importance in the UK following a number of recent hate motivated crimes suffered by the country's Jewish community."

Specifically, X said it will "review and assess" terrorist and hate content in the UK "on average within 24 hours of it being reported," or at the very least, it will do so for 85 percent of hate content "within a maximum of 48 hours." X also plans to work with experts on UK hate and terror content, in addition to banning offending accounts.

Ofcom says it will review X's performance data quarterly over the next year. The regulator is also continuing its investigation into Elon Musk's Grok AI for generating CSAM and non-consensual intimate images. As of March, Ofcom had also fined the notorious image board 4chan nearly $700,000 for violations of the country's Online Safety Act. 4chan's lawyer replied with an AI picture of a hamster.

Commitments to regulators are one thing, but they don't mean much if X doesn't actually follow through. And given Elon Musk's daily posting (and re-posting) of racist content, it's hard to believe that reducing hateful posts is actually a priority, even if the effort is localized to the UK.