"More Monitoring, But Not More Protection"

Max Planck Society

Carmela Troncoso explains the problems with voluntary monitoring and mandatory age verification in the European Council's proposed Chat Control legislation

A proposal by the Council of the European Union on chat monitoring, aimed at preventing the distribution of Child Sexual Abuse Material (CSAM), is now entering trilogue negotiations between the Council, the European Commission, and the European Parliament. The draft drops the earlier plan for mandatory surveillance. Instead, messaging services such as WhatsApp and Signal would be allowed to voluntarily install software for automated chat monitoring, and the scope of such monitoring could even be expanded. Mandatory measures may be reconsidered in the future. The proposal also seeks to make it easier for users to report chats suspected of involving CSAM and introduces mandatory age verification for users. Carmela Troncoso, Director at the Max Planck Institute for Security and Privacy, is among the authors of a commentary welcoming the removal of the mandatory chat monitoring requirement. However, the signatories warn that several elements of the current proposal do little to help combat the spread of CSAM and could have unwanted side effects. In this interview, Carmela Troncoso discusses how the current draft differs from earlier ones, the implications of voluntary monitoring, the risks involved, and possible alternative approaches.

Carmela Troncoso is director at the Max Planck Institute for Security and Privacy and is campaigning against the introduction of automatic chat controls.
© Max Planck Institute for Security and Privacy

Professor Troncoso, what are the most significant differences between the new Chat Control proposal and previous versions? Are there any clear improvements?

It's very encouraging that the new proposal no longer demands blanket chat monitoring. That's a major step forward in balancing the very necessary protections for children online with the security and privacy risks these measures pose to society as a whole.

However, several elements of the proposal continue to present serious risks without offering clear, demonstrable benefits for children. For example, the scope of what can be monitored has expanded considerably: whereas previously only links and images could be scanned, the new proposal extends this to text messages and videos. This will further infringe on our fundamental privacy rights and produce many more false reports, ultimately undermining the effectiveness of protective measures.

The proposal also introduces mandatory age verification in two cases: first, when users want to download certain apps - messaging services, games with integrated chats, and social media platforms like X, Bluesky, or Facebook - that are classified as high-risk for distributing Child Sexual Abuse Material (CSAM) or grooming; and second, before users can access those services or specific features within them.

How is CSAM detection implemented technically in chat services?

Chats via messaging services are monitored in a fundamentally different way from emails. Most email services today don't use end-to-end encryption (E2EE). They only encrypt the message during transit - from the sender to the provider's server, and from the server to the recipient. This means the service provider can scan email content for incriminating material like CSAM whilst it sits on their server, and many already do this.

In contrast, chats via messaging services such as WhatsApp or Signal are end-to-end encrypted. This means a message is encrypted on the sender's device and remains scrambled until it's decrypted only on the recipient's device. To allow authorities to search these messages for CSAM, the proposal relies on a controversial technology known as client-side scanning.
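To make the difference concrete, the sketch below shows end-to-end encryption in miniature, using the PyNaCl library purely for illustration - real messengers such as Signal use far more elaborate protocols with key exchange and forward secrecy - but the core point is the same: the provider only ever relays ciphertext it cannot read.

```python
# Minimal end-to-end encryption sketch using PyNaCl (illustrative only;
# real messengers use far more elaborate protocols than a single static key pair).
from nacl.public import PrivateKey, Box

# Each user generates a key pair on their own device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts on her device, using her private key and Bob's public key.
sender_box = Box(alice_key, bob_key.public_key)
ciphertext = sender_box.encrypt(b"Hello Bob")

# The provider's server only ever sees `ciphertext`; without Bob's private key
# it cannot recover the message content.

# Bob decrypts on his device with his private key and Alice's public key.
receiver_box = Box(bob_key, alice_key.public_key)
assert receiver_box.decrypt(ciphertext) == b"Hello Bob"
```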

Why does client-side scanning violate data protection?

With client-side scanning, end-to-end encryption remains in place, but it's fundamentally circumvented: to detect CSAM, content must be checked on the sender's device before encryption.

Detection software would be embedded in the messaging app or the operating system to scan chat content and automatically forward any material flagged as prohibited to law enforcement agencies. Once content is accessible to a party other than the sender or recipient, the protection provided by encryption disappears.
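The following sketch - with hypothetical function names, not any real messenger's code - illustrates why this breaks the guarantee: the scan runs on the plaintext before the encryption step, and anything flagged leaves the device in readable form.

```python
# Illustrative sketch of the client-side scanning flow described above.
# All names are hypothetical stand-ins; the point is only the ordering:
# content is inspected in plaintext on the device *before* it is encrypted.
from typing import Callable

def send_message(
    plaintext: bytes,
    looks_prohibited: Callable[[bytes], bool],  # hash check or detection model
    report: Callable[[bytes], None],            # forwards flagged material off-device
    encrypt: Callable[[bytes], bytes],          # the end-to-end encryption step
    transmit: Callable[[bytes], None],
) -> None:
    if looks_prohibited(plaintext):
        # A party other than sender and recipient now sees the content,
        # so the protection that encryption normally provides is gone.
        report(plaintext)
    transmit(encrypt(plaintext))  # only this step is covered by E2EE
```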

How is CSAM, such as images, identified using client-side scanning?

There's a distinction between searching for known and unknown harmful images. When it comes to known CSAM, actual, identifiable child pornography images obviously can't be uploaded to users' devices for comparison before encryption - that would put users in possession of depictions of abuse. Instead, what are known as cryptographic "hashes" are generated from known depictions of abuse and uploaded onto user devices. A hash is a string of characters that serves as a mathematical fingerprint. Software can also be trained using machine learning to detect unknown depictions of abuse.
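As a rough illustration of the fingerprint idea, the sketch below uses Python's standard hashlib; the placeholder bytes and hash list are purely hypothetical, and deployed systems typically use perceptual hashes designed to survive small image changes rather than plain cryptographic hashes, but the matching principle is the same.

```python
# Rough illustration of hash-based matching with Python's standard hashlib.
import hashlib

# The device stores only fingerprints of known material, never the images themselves.
known_hashes = {
    hashlib.sha256(b"<bytes of a known image>").hexdigest(),
}

def is_known(image_bytes: bytes) -> bool:
    return hashlib.sha256(image_bytes).hexdigest() in known_hashes

original = b"<bytes of a known image>"
modified = b"<bytes of a known image>!"  # a tiny change, e.g. a few altered pixels

print(is_known(original))   # True  - an exact copy matches the fingerprint
print(is_known(modified))   # False - the hash changes completely, so no match
```

With a plain cryptographic hash, any change to the file yields a completely different fingerprint; perceptual hashes tolerate some changes, and it is the limits of that kind of detection that the next answer addresses.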

Is client-side scanning effective in detecting CSAM?

The short answer is no. Research shows that using this type of software to compare chat content to known CSAM isn't effective. The detection mechanism can be easily bypassed by simply changing a few pixels in the photo.

When using AI to search for unknown depictions of abuse, there's a very high risk that perfectly harmless images will also be flagged as problematic. This could include photos of children on the beach or images of skin conditions sent by parents to a paediatrician. Current technology simply isn't capable of reliably identifying images of abuse. Therefore, the potential benefits of this legislation don't outweigh the risks to users' privacy.

Current technology simply isn't capable of reliably identifying images of abuse.

Does this also apply to text messages?

The new proposal brings back the possibility of using AI to analyse text messages for activities such as grooming - the process whereby a perpetrator builds an emotional connection with a child in order to exploit them. As with all other AI-based systems, detection is imperfect. Furthermore, grooming-related text communication can closely resemble other interactions that are perfectly acceptable in a friendly context - conversations with relatives or close friends, or chats between teenagers starting a relationship. This is bound to trigger a surge in false accusations, as many such situations will be misinterpreted as illegal. Extending chat surveillance to cover text messages and videos may even undermine child protection, as investigators will be inundated with false accusations, preventing them from focusing on genuine cases. In summary, widening the scope of scanning will only result in more monitoring, but not more protection against the spread of abusive images.

Are abuse images being circulated on a significant scale via encrypted messaging services at all?

There are reports indicating that messaging is one of the ways in which children encounter abuse, but it is hard to know the extent to which encrypted communications are used for these activities, because encryption itself means this cannot be measured.

Would minors be better protected if they had to verify their age to use the services?

Age verification could help prevent minors from accessing certain content or apps. But its effectiveness isn't guaranteed, and it introduces substantial privacy risks and the potential for discrimination. If verification requires scanning a full document, like an ID card, the security and privacy risks are obvious and disproportionate - it reveals far more personal information than just a person's age.

Privacy-protecting solutions, on the other hand, can create dependence on specific software or hardware, as the technology may only work on certain devices. Even if these technical issues were resolved, the introduction of age verification can still be discriminatory: only people who can provide proof of age from an authorised body would be able to use the system.

A significant share of the population - such as migrants or children from socially disadvantaged families - may not have easy access to such documents.

Furthermore, it's unclear how effective age verification actually is, as it can be easily circumvented. In the United Kingdom, for example, users increasingly switch to alternative providers or use VPNs to reach the services in question. If mandatory age verification drives children to shift their communication to alternative, insecure, or even unencrypted channels, their exposure to risk may actually increase, as these platforms could be operated by malicious actors seeking to exploit them.

Would age verification based on biometric, behavioural, and contextual information, such as browser history, solve the problems you've mentioned?

This kind of age verification is much more invasive than age verification using ID documents, as it creates an incentive for more extensive collection of personal data. It carries a disproportionately high risk of serious data protection violations, as it currently relies predominantly on AI-based methods, which are known to process the required data types - such as biometric data - with high error rates and show systematic biases against certain minority groups.

Is there a better alternative to increased internet control for ensuring child safety?

The current proposal relies primarily on technical solutions that aim to remove abusive content from the internet or place barriers in front of it - at the cost of communication security and privacy.

The eradication of CSAM depends on eradicating the abuse itself

Given the limitations of current technology, this approach is unlikely to make a substantial contribution to preventing child abuse. It is crucial to remember that CSAM exists because child sexual abuse exists. The eradication of CSAM, therefore, ultimately depends on eradicating the abuse itself, not only on preventing its digital distribution. Instead of continuing to rely on technologies with dubious effectiveness - such as chat monitoring and age verification measures, which significantly weaken security and privacy - we should focus on implementing the measures recommended by organisations such as the United Nations. These include education about consent, norms and values, digital literacy, and online safety, as well as comprehensive sexuality education and trauma-informed hotlines for reporting.

The interview was conducted by Peter Hergersberg.
