"Digital Omnibus" Is A Risk For Our Digital Rights

Gianclaudio Malgieri, Associate Professor of Law & Technology at eLaw, was interviewed by EUobserver and Al Jazeera English on the European Commission's newly unveiled "Digital Omnibus" package - a set of proposals that would amend the GDPR, the AI Act, cookie rules and parts of the EU's cybersecurity framework.

The EUobserver explainer, "The Digital Omnibus has arrived - and here's what it really changes", highlights Malgieri's concern that the reform redefines how "personal data" is understood by tying identifiability much more closely to what each controller claims it can do. As he notes in the piece, "the identifiability of personal data now depends on what a controller claims to know."

In Al Jazeera's coverage, "EU moves to ease AI, privacy rules amid pressure from Big Tech, Trump", Malgieri places the proposals in a broader transatlantic context. He argues that, taken together, the changes risk shifting the EU away from the strong, rights-based model that has distinguished it from the United States and towards a more permissive, industry-driven approach to AI and data use.

Concerns about the GDPR reforms

In his comments, Malgieri warns that the GDPR part of the Digital Omnibus could structurally weaken EU data protection in several ways.

First, the revised notion of "personal data" would allow the same dataset to be treated as personal data for one controller but as non-personal data for another that does not hold (or claims not to hold) the identifiers or means of re-identification. This controller-relative approach to identifiability risks hollowing out protections and undermining cybersecurity, particularly in an era of powerful re-identification techniques and post-quantum cryptography concerns.

Second, the proposal introduces a much broader space for fully automated decision-making in contractual contexts, easing the constraints that currently follow from Article 22 GDPR. Malgieri warns that this will encourage the spread of AI-driven, non-human decisions in core areas of daily life - such as banking, insurance, employment and education - with less meaningful human oversight and fewer opportunities for explanation or contestation.

Third, by centralising what counts as "high-risk" processing requiring a Data Protection Impact Assessment (DPIA) into EU-level lists and templates, the reform may enhance harmonisation but also risks turning DPIAs into tick-box exercises that quickly lag behind emerging practices.

Most worrying for Malgieri is the creation of an AI-specific "legitimate interest" route for the development and operation of AI systems, including general-purpose models. In his view, treating AI development as a presumptively legitimate interest short-circuits the case-by-case balancing test that lies at the heart of Article 6(1)(f) GDPR and opens the door to extensive data scraping and re-use of personal - and even sensitive - data for AI training, with clear risks for the principles of lawfulness and purpose limitation.

He also points to a loosening of purpose-limitation safeguards for research and statistics, warning that if compatibility checks are relaxed too far, "research" may become a broad legal gateway for repurposing data in ways data subjects never anticipated.

AI Act changes: delayed protections and intrusive "de-biasing"

On the AI Act, Malgieri's interviews emphasise that the Digital Omnibus would delay the effective date of key safeguards for high-risk AI systems, and "grandfather" many existing systems - especially in the public sector - for additional years. This "temporal weakening" means that people subject to high-risk AI in areas such as welfare, credit, education, or law enforcement may remain without the full Chapter III protections for a prolonged period.

He is also critical of the decision to downgrade AI literacy from a direct legal duty on providers and deployers to a matter of encouragement by the Commission and Member States. Without a strong obligation to ensure that those who design and operate AI systems understand their workings and limits, Malgieri fears that requirements on human oversight and meaningful explanation will remain largely aspirational.

At the same time, he notes that the Omnibus significantly expands the lawful use of special categories of data for "de-biasing". A new legal basis would allow providers and deployers of all AI systems and models - not just high-risk ones - to process sensitive data (such as health, ethnicity, sexual orientation or religion) to detect and correct bias, under the label of "substantial public interest". While this could bolster anti-discrimination efforts, Malgieri stresses that it also normalises highly intrusive processing, and that this new basis must therefore be strictly conditioned by necessity tests, robust technical and organisational safeguards, bans on onward transfers, rapid deletion and thorough documentation.

Malgieri further highlights that the Omnibus broadens real-world testing of high-risk systems outside regulatory sandboxes and reduces transparency for systems that providers argue should be "downgraded" from high-risk status: such systems would no longer need to be recorded in the EU high-risk database, making external scrutiny by journalists, civil society and affected communities considerably harder.

Although Malgieri welcomes some positive elements - such as stronger dialogue between fundamental-rights bodies and market-surveillance authorities - his overarching message in both interviews is clear: the Digital Omnibus risks moving the EU away from its distinctive, rights-based model of digital regulation at precisely the time when the rest of the world is looking to Europe for an alternative to more permissive, industry-driven approaches. eLaw will continue to monitor and analyse the negotiations on the Digital Omnibus as they move to the Council and the European Parliament.
