Fake beards, drugstore makeup, and the end of the illusion: why AI age verification was born broken
The Digital ECA takes effect on March 17, and Brazil still has no answer to a problem that no country in the world has solved.
In February 2026, security researchers investigated Persona, a biometric verification startup used by Discord in the UK, and found 2,456 public files on a US government server. The exposed code showed that a simple "age check" actually ran 269 different tests. It compared selfies against lists of high-profile political figures, created facial recognition risk scores, and kept IP addresses, device data, ID numbers, and selfie backgrounds for up to three years. You just wanted to prove you were 18 to use voice chat in a game, but you ended up with a permanent file tracking your potential involvement in wildlife trafficking.
Discord stopped using Persona. But it had already dropped its previous provider after a 2025 breach exposed 70,000 ID photos that were then used in a ransomware extortion attempt. It's the same story: the company swaps the bucket under a leaking roof when the problem is the roof itself. Every law requiring age verification creates a "honeypot", a centralized target filled with exactly the data hackers want most. And biometric facial data has no "password reset." Once it leaks, it's gone forever.
At the same time, a study by researchers from UC Berkeley, Duke, and Reality Inc. tested eight age-estimation models, including advanced AI like Gemini 3 Flash and GPT-5-Nano, against simple disguises: fake beards, gray hair, makeup, and fake wrinkles. All these items can be bought at a costume shop for less than R$ 50. The result: a fake beard alone tricks the AI 28% to 69% of the time, letting minors pass as adults. Combining all four tricks raises the predicted age by an average of 7.7 years. The weakest model (DEX) was fooled in 83% of cases. Even the strongest model (GPT-5-Nano) fails 59% of the time. And we are talking about cheap drugstore makeup, used without any technical skill.
The 15-to-17 age group is the most vulnerable because the AI already guesses they are close to 18. The irony is that the system fails exactly where it is needed most.
This is the situation as Brazil steps in. The "Digital ECA" law takes effect on March 17, 2026. It bans simply "declaring" your age and requires real verification for adult sites, gambling, social media, and games with loot boxes. A government report recognizes the tension: everyone agrees the solution should match the risk, but they disagree on who should provide it (the government or private companies) and how to classify those risks. The National Data Protection Authority (ANPD) is now the regulator in charge of the technical details.
This regulatory gap is both a risk and an opportunity. The final rules are still being written by five ministries, and there is no set technical standard yet. This leaves room for "privacy-preserving" ideas: anonymous tokens from trusted third parties, "zero-knowledge proofs" that prove you are old enough in milliseconds without revealing your birth date, or age signals verified directly on your phone so only a "Yes/No" answer is sent to the platform.
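The on-device idea can be sketched in a few lines. The sketch below is purely illustrative, not any protocol under discussion by regulators: it assumes the phone already holds a birth date verified by a trusted issuer, plus an issuer-provisioned key, and signs only a Yes/No claim with an HMAC. Every name here (`age_attestation`, `DEVICE_KEY`) is invented for the example.

```python
import hmac, hashlib, json
from datetime import date

# Illustration only: in a real system this key would live in the phone's
# secure hardware and be provisioned by a trusted issuer.
DEVICE_KEY = b"issuer-provisioned-secret"

def age_attestation(birth_date: date, min_age: int, today: date) -> dict:
    """Answer 'is the user at least min_age?' without revealing birth_date."""
    years = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    claim = {"over_min_age": years >= min_age, "min_age": min_age}
    payload = json.dumps(claim, sort_keys=True).encode()
    # The platform checks this MAC with the issuer; the date never leaves.
    claim["mac"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return claim

token = age_attestation(date(2000, 5, 1), min_age=18, today=date(2026, 3, 17))
print(token["over_min_age"])  # the platform learns only this one bit
```

The point of the design is what is absent: no selfie, no ID scan, no birth date, and nothing worth breaching on the platform's side.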
For those in security and privacy, the time to act is now. We must audit age verification providers as strictly as payment processors. We must demand privacy impact assessments before systems are launched, not after a data leak. Most importantly, we must document why facial age estimation alone isn't enough before it becomes the law. Recent heavy fines against Reddit and Imgur in the UK show that enforcement is coming.
AI facial age verification isn't a solution; it's "security theater" with real privacy risks. The challenge is to stop choosing between protecting children and protecting data, and instead build systems that do both without creating the next massive biometric data breach.



