🤖 AI: The Study That Reveals the 12 Biggest Privacy Risks
From deepfakes to mass surveillance: how artificial intelligence is reshaping (and complicating) digital privacy
Privacy has always been a concern in the digital age, but with artificial intelligence, the rules of the game are changing – and not for the better. A recent study analyzed 321 AI-related incidents and found that in 92.8% of cases, these technologies either created new risks or made existing ones worse. If privacy was already a challenge due to data leaks and constant monitoring, AI is making things even more complicated – from hyper-realistic deepfakes to facial recognition systems that can identify anyone in seconds.
Imagine never posting a single photo on social media, yet somehow, your face ends up in a database used to train AI models. That has already happened. Or worse: algorithms trying to predict your sexual orientation just by analyzing your appearance. Is that really innovation? I don’t think so.
The study shows that AI doesn’t just expand government and corporate surveillance—it also fuels practices like personality profiling based on photos. It’s a pseudoscience repackaged for the automation era. This kind of thing used to be limited to neuromarketing projects, but now, anyone can do it with a few tools.
And the consequences aren’t just theoretical. Police have already used facial recognition to identify protesters. Companies have trained AI models with sensitive medical data—without patient consent. Deepfake technology has been exploited to create explicit content without the victims’ knowledge. The promise of AI as a tool for social progress is clashing with a reality where data is collected without permission and used in ways people never imagined.
Let’s be honest, when we talk about data protection, this is exactly what we want to prevent, right?
And if you think AI advancements are improving privacy, the research says otherwise (I left the study link below). Privacy-enhancing methods like federated learning and differential privacy only address a small part of the risks. Have you ever tried using differential privacy in practice? Even with Google open-sourcing its differential privacy library, practical adoption is still very limited for most businesses.
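To make that concrete, here is a minimal sketch of the Laplace mechanism, the textbook building block behind most differential privacy tooling. The dataset, query, and epsilon values are made up for illustration; a real deployment would rely on a vetted library (Google's differential-privacy library, OpenDP, and similar) rather than hand-rolled noise.

```python
import numpy as np

def private_count(values, predicate, epsilon):
    """Differentially private count of records matching `predicate`.

    A counting query has sensitivity 1 (adding or removing one person
    changes the answer by at most 1), so Laplace noise with scale
    1/epsilon gives epsilon-differential privacy for this single query.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical survey data: respondents' ages.
ages = [23, 35, 41, 29, 52, 61, 19, 44]

# Smaller epsilon = more noise = stronger privacy, but a less useful answer.
print(private_count(ages, lambda a: a >= 40, epsilon=0.5))  # very noisy
print(private_count(ages, lambda a: a >= 40, epsilon=5.0))  # close to the true count of 4
```

Even this toy version hints at the real-world friction: every query spends privacy budget, the noise degrades accuracy, and none of it helps if the data was scraped without consent in the first place – which is exactly the kind of risk the study catalogs.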
In practice, AI needs so much data that its very existence depends on mass data collection, making issues like surveillance, exclusion, and data misuse even worse. The current solutions are just a band-aid on a wound that keeps getting bigger.
Some might argue that training AI on large datasets is no different from how search engines have been indexing data for over a decade. But let’s be real—there’s a big difference between targeted ads on Google and AI-driven decision-making that can lead to discrimination.
Given this reality, the question isn’t just how we can reduce risks, but whether we’re truly prepared to control them. Privacy is often treated as just another "technical detail" in AI development, but this study makes one thing clear: artificial intelligence isn’t just changing technology—it’s redefining what privacy even means in a world where our data is the fuel for the future.
I recommend reading it. I tried to give a brief summary and my own “reaction” to the study, but it’s well-written and worth checking out.
“Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks”
Authors: Hao-Ping Lee, Yu-Ju Yang, Thomas Serban von Davier, Jodi Forlizzi, Sauvik Das
https://arxiv.org/abs/2310.07879