Tech News

Europe's New War on Privacy


In theory, Chat Control should have been buried last month. The EU’s ominous plan to mass-scan citizens’ private messages was met with overwhelming public resistance in Germany, with the country’s government refusing to approve it. But Brussels rarely retreats merely because the public demands it. And so, true to form, a reworked version of the text is already being pushed forward — this time out of sight, behind closed doors.

Chat Control, formally known as the Child Sexual Abuse Regulation, was first proposed by the European Commission in 2022. The original plan would have made it mandatory for email and messenger providers to scan private, even encrypted, communications — with the purported aim of detecting child sexual abuse material.

The tool was sold as a noble crusade against some of the world’s most horrific crimes. But critics argued that the tool risked becoming a blueprint for generalised surveillance, by essentially giving states and EU institutions the ability to scan every private message. Indeed, a public consultation preceding the proposal revealed that a majority of respondents opposed such obligations, with over 80% explicitly rejecting its application to end-to-end encrypted communications.

Yet despite repeated blockages, and widespread criticism for violating privacy and fundamental rights, the text was never abandoned. Instead, it was repackaged, and continually pushed forward from one Council presidency to the next. Each time democratic resistance stopped the original plan, it kept returning in new forms, under new labels, each time dressed up as a “necessary” and “urgent” tool to protect children online, yet always preserving its core logic: normalising government-mandated monitoring of private communications on an unprecedented scale.


In May, the European Commission once again presented its proposal. Yet several states objected — Germany, but also Poland, Austria and the Netherlands. As a result, Denmark, which currently holds the rotating presidency of the Council of the EU, immediately began drafting a new version, unveiled earlier this month and dubbed “Chat Control 2.0”. It removed the requirement for general monitoring of private chats; the searches would now remain formally voluntary for providers. All this happened under the auspices of Coreper, the Committee of Permanent Representatives — one of the most powerful, but least visible, institutions in the EU decision-making process. It is where most EU legislation is actually negotiated; if Coreper agrees on a legislative file, member states almost always rubber-stamp it.

The gamble worked. Yesterday, this revised version was quietly greenlit by Coreper, essentially paving the way for the text’s adoption by the Council, possibly as early as December. As digital rights campaigner and former MEP Patrick Breyer put it, this manoeuvre amounts to “a deceptive sleight of hand” aimed at bypassing meaningful democratic debate and oversight.

While the removal of mandatory on-device detection is an improvement on the first draft, the new text still contains two extremely problematic features. First, it encourages “voluntary” mass scanning by online platforms — a practice already allowed in “temporary” form, which would now become a lasting feature of EU law. Second, it effectively outlaws anonymous communication by introducing mandatory age-verification systems.

An open letter signed by 18 of Europe’s leading cybersecurity and privacy academics warned that the latest proposal poses “high risks to society without clear benefits for children”. The first of these risks, in their view, is the expansion of “voluntary” scanning, including automated text analysis using AI to identify ambiguous “grooming” behaviours. This approach, they argue, is deeply flawed: current AI systems are incapable of properly distinguishing between innocent conversation and abusive behaviour. As the experts explain, AI-driven grooming detection risks sweeping vast numbers of normal, private conversations into a dragnet, overwhelming investigators with false positives and exposing intimate communications to third parties.

Breyer further emphasised this danger by noting that no AI can reliably distinguish between innocent flirtation, humorous sarcasm — and criminal grooming. He warned that this amounts to a form of digital witch-hunt, whereby the mere appearance of words like “love” or “meet” in a conversation between family members, partners or friends could trigger intrusive scrutiny. This is not child protection, Breyer has argued, but mass suspicion directed at the entire population. Even under the existing voluntary regime, German federal police warn that roughly half of all reports received are criminally irrelevant, representing tens of thousands of leaked legal chats annually. According to the Swiss Federal Police, meanwhile, 80% of machine-reported content is not illegal. It might, for example, encompass harmless holiday photos showing nude children playing at a beach. The new text would expand these risks dramatically.
