🕳️ The AI Act got "simplified." Your sensitive data just got a new legal door.
The real story is buried two paragraphs lower, and it touches your health, your race, your union membership, and your sex life.
Some people asked me to talk about the AI Act amendment.
Most coverage of the 7 May 2026 AI Act amendment reads the same way. “Compliance delayed.” “More time for business.” “Machinery exempted.” Patrick Upmann calls it an execution gap. The Council calls it simplification. BEUC calls it a rollback. Take your pick.
I want to talk about something else. Something almost no one is highlighting, even though it sits in plain sight inside the Council and Parliament press releases.
The deal expands the legal basis for processing your most sensitive personal data (health, biometrics, race, sexual orientation, union membership, political opinions) to “detect and correct biases” in AI systems. And I mean both high-risk and non-high-risk!
Read that again. Non-high-risk too.
Let me explain why this matters, and why I think it is the most consequential change in the Omnibus, even if no one is calling it that yet.
GDPR Article 9 treats special category data as a hard wall. Health data, biometric data, data revealing your race or beliefs, your sex life… you do not process this by default. To touch it, you need a narrow legal carve-out, and you need a strong one.
The Omnibus widens one of those carve-outs: it says companies can process this kind of data when “strictly necessary” to find and fix bias in AI systems… good, right?
Well, the Council made a point of putting “strict necessity” back in after the original draft was looser, which is good too. But strict necessity is a legal standard, not a technical one… and I am a technical person. A legal standard is interpreted by lawyers, defended in audits, and tested only when something goes wrong.
In practice, this is a license to ingest sensitive data into bias-correction pipelines across thousands of vendors and customers, including in non-high-risk contexts. The justification is fairness; the byproduct is a new category of data flow that did not exist at this scale before.
To detect bias, a system has to know who is who. Agree? To know if your model under-serves Black women over 50, the dataset has to know who is a Black woman over 50. To know if a hiring model is unfair to people with disabilities, the dataset has to know who has a disability. To know if a credit scoring model discriminates against migrants, somebody has to label migrants in the data.
I mean… am I the only one who finds that obvious?
You cannot find bias against a group you cannot see. That is the math.
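To see that math in code, here is a minimal sketch in Python (using pandas; the column names and numbers are hypothetical, not from any real system). Even the most basic disparate-impact check is just a group-by on the sensitive attribute.

```python
# A minimal sketch of why bias detection needs the protected attribute.
# Column names and values are hypothetical; any disparate-impact check
# has roughly this shape.
import pandas as pd

def selection_rate_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Positive-outcome rate per protected group: the most basic fairness metric."""
    return df.groupby(group_col)[outcome_col].mean()

applicants = pd.DataFrame({
    "ethnicity": ["a", "a", "b", "b", "b", "c"],  # <- the sensitive label the check depends on
    "approved":  [1,   0,   1,   1,   1,   0],
})

rates = selection_rate_by_group(applicants, "ethnicity", "approved")
disparate_impact = rates.min() / rates.max()  # four-fifths-rule style ratio

# Drop the "ethnicity" column and neither number above is computable.
print(rates)
print(disparate_impact)
```

That is all a bias audit is at its core: outcomes compared across exactly the categories Article 9 was built to wall off.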
So bias correction is not a privacy-neutral exercise. It is the opposite, because it requires building a more granular profile of the people in your system, not a less granular one. And that is the point I was working toward: in this specific procedure, privacy and fairness pull in different directions.
This is not a new debate inside the privacy community, I know. Researchers like Andrew Selbst and Solon Barocas have written about the tradeoff between fair AI and minimal data, but the EU just made the tradeoff a feature, not a bug, and almost nobody is saying it plainly.
The other big headline is that nudifier apps are now banned. Good. They should be… but notice what was banned and what was not.
The output is banned; the training pipeline that produces those models is not.
The compute is not! The scraped corpus of bodies that made the model possible in the first place is not. And the architecture that lets a 19-year-old fork an open-weights model on Hugging Face and fine-tune it on consensual content, only for someone else to weaponize it, is not.
A ban on nudifier apps without a ban on the upstream conditions is a ban on the storefront… the factory keeps running.
Don’t get me wrong: I am not arguing the ban is wrong. I am arguing it is a small move, and that presenting it as a strong child-protection measure, while postponing high-risk obligations by 16 months, is a marketing (and political) choice.
Yes, high-risk obligations are pushed to 2 December 2027 for stand-alone systems and 2 August 2028 for embedded ones.
Yes, this gives companies time to build governance.
But here is the part that gets less attention: the transparency rules under Article 50 still apply on 2 August 2026. So do the AI literacy obligations.
So does the prohibition on nudifier-style content… and so does the new bias-detection processing right.
In other words, the parts of the AI Act that constrain providers got delayed. The parts that expand what providers can legally do with your data did not.
If you are a company, this is a window. If you are a citizen, this is a transfer.
Speaking of citizens… what does this mean for ordinary people?
Your bank uses an AI scoring model from a vendor… the vendor wants to check the model for bias against protected groups. OK? To do that, it pulls or infers data about your race, your gender, your nationality, your disability status. Maybe it gets that from internal HR records, maybe it buys it from a data broker, maybe it infers it from your name, your address, your transaction history… I don’t know. The legal basis is bias correction. Fair?
The data is now in a new pipeline, and there is no clean “off” button for you, because the legal basis is not consent.
Let’s think about another example: your hospital uses a triage AI. To make sure the triage is fair across ethnic groups, the vendor needs ethnicity-tagged data. The hospital may not have it cleanly, so the vendor brings tools to infer it. Now your medical record sits in a pipeline that includes inferred sensitive attributes you never declared.
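What do those inference tools look like? Here is a deliberately tiny sketch, loosely modelled on BISG-style proxy inference (Bayesian Improved Surname Geocoding, the technique long used in fair-lending analysis to estimate ethnicity from surname and geography). Every name, table, and number below is a placeholder I made up for illustration; a real tool derives its priors from census statistics, but the shape of the pipeline is the same.

```python
# A toy sketch of BISG-style proxy inference: estimating a protected
# attribute from surname and postcode. All priors below are made-up
# placeholders; real tools derive them from census tables.

GROUPS = ["group_a", "group_b", "group_c"]

# P(group | surname): hypothetical priors
SURNAME_PRIOR = {
    "garcia": [0.70, 0.20, 0.10],
    "smith":  [0.10, 0.60, 0.30],
}

# P(group | postcode area): hypothetical priors
GEO_PRIOR = {
    "1050": [0.30, 0.50, 0.20],
    "2000": [0.60, 0.25, 0.15],
}

def infer_group_probabilities(surname: str, postcode: str) -> dict:
    """Naive Bayes-style combination of the two priors, normalised to sum to 1."""
    s = SURNAME_PRIOR.get(surname.lower(), [1 / 3] * 3)
    g = GEO_PRIOR.get(postcode, [1 / 3] * 3)
    joint = [si * gi for si, gi in zip(s, g)]
    total = sum(joint)
    return {grp: round(p / total, 3) for grp, p in zip(GROUPS, joint)}

# Two ordinary fields, collected for a completely different purpose,
# are enough to attach an inferred ethnicity distribution to a record.
print(infer_group_probabilities("Garcia", "2000"))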
This is the predictable consequence of putting “fairness” and “bias correction” into the same legal sentence as “process special category data.”
I am not saying bias correction is wrong. Again, don’t get me wrong… I am saying it is being legalized at scale without a real public conversation about what infrastructure it requires.
What companies should actually ask their vendors
If you are a DPO, a CISO, or a privacy lead, the question for your AI vendors is not “are you AI Act ready by 2027?” The question is what they are doing right now, in the bias-correction window that opens on 2 August 2026.
Ask: which sensitive attributes do you process for bias detection? Where does that data come from? Is any of it inferred? Who is the controller, who is the processor? How long is it stored? Is the bias-correction dataset segregated from the production model? What happens to it after the audit? Can you produce a record of strict necessity for each attribute?
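To make that last question concrete, here is a hypothetical sketch of what a record of strict necessity could look like, one per attribute. The structure and field names are my own invention, not anything prescribed by the Act or the GDPR, but a vendor who cannot fill in something equivalent has not done the work.

```python
# A hypothetical "record of strict necessity", one per sensitive attribute.
# The field names are my own sketch; no official format is prescribed,
# but these are the facts a defensible answer needs to contain.
from dataclasses import dataclass, field

@dataclass
class StrictNecessityRecord:
    attribute: str                    # e.g. "ethnicity"
    purpose: str                      # which bias check needs it, and why
    source: str                       # declared by the person, inferred, or bought
    inferred: bool                    # derived rather than collected?
    controller: str                   # who decides why and how it is processed
    processor: str                    # who actually touches the data
    retention_days: int               # hard deletion deadline after the audit
    segregated_from_training: bool    # kept out of the production model?
    alternatives_considered: list[str] = field(default_factory=list)

example = StrictNecessityRecord(
    attribute="ethnicity",
    purpose="disparate-impact check on credit scoring model v3",
    source="inferred from surname and postcode",
    inferred=True,
    controller="the bank (hypothetical)",
    processor="the scoring vendor (hypothetical)",
    retention_days=90,
    segregated_from_training=True,
    alternatives_considered=["aggregate-only audit", "synthetic cohort testing"],
)
```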
If the vendor cannot answer, you are not buying a fairer model. Simple as that.
I hope the message landed… fairness is a worthy goal! Child protection is a worthy goal! Industrial competitiveness is a worthy goal! But none of these goals is free, and each one carries a data cost that someone is paying.
This time, you are paying it.



