🤖 AI: The Study That Reveals the 12 Biggest Privacy Risks
From deepfakes to mass surveillance: how artificial intelligence is reshaping (and complicating) digital privacy
Privacy has always been a concern in the digital age, but with artificial intelligence the rules of the game are changing, and not for the better. A recent study analyzed 321 AI-related incidents and found that in 92.8% of cases, these technologies either created new privacy risks or made existing ones worse. If privacy was already a challenge due to data leaks and constant monitoring, AI is making things even more complicated: from hyper-realistic deepfakes to facial recognition systems that can identify anyone in seconds.
Imagine never posting a single photo on social media, yet somehow your face ends up in a database used to train AI models. That has already happened. Or worse: algorithms trying to predict your sexual orientation just by analyzing your appearance. Is that really innovation? I don't think so.
The study shows that AI doesn't just expand government and corporate surveillance; it also fuels practices like personality profiling based on photos. It's a pseudoscience repackaged for the automation era. This kind of thing used to be limited to neuromarketing projects, but now anyone can do it with a few tools.
And the consequences aren't just theoretical. Police have already used facial recognition to identify protesters. Companies have trained AI models on sensitive medical data without patient consent. Deepfake technology has been exploited to create explicit content without the victims' knowledge. The promise of AI as a tool for social progress is clashing with a reality where data is collected without permission and used in ways people never imagined.
Let's be honest: when we talk about data protection, this is exactly what we want to prevent, right?
And if you think AI advancements are improving privacy, the research says otherwise (I left the study link below). Techniques like federated learning and differential privacy address only a small part of the risks. Have you ever tried using differential privacy in practice? Even with Google open-sourcing its differential privacy libraries, practical adoption is still very limited for most businesses.
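To make that limitation concrete, here is a minimal sketch of the Laplace mechanism, the textbook building block behind differential privacy. This is plain NumPy, not Google's open-source library, and the query, count, and epsilon values are made up for illustration:

```python
import numpy as np

def laplace_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Answer a counting query with epsilon-differential privacy.

    Adding or removing one person changes a count by at most
    `sensitivity` (here 1), so noise drawn from
    Laplace(0, sensitivity / epsilon) masks any individual's presence.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical query: how many users share some sensitive attribute?
true_count = 1_000
for eps in (0.1, 1.0, 10.0):
    print(f"epsilon={eps}: noisy count = {laplace_count(true_count, eps):.1f}")
```

Even this toy shows the tradeoff: a strict privacy budget (small epsilon) buries the answer in noise, and it only protects a single aggregate query, not the raw data an AI model actually consumes during training.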
In practice, AI needs so much data that its very existence depends on mass data collection, making issues like surveillance, exclusion, and data misuse even worse. The current solutions are just a band-aid on a wound that keeps getting bigger.
Some might argue that training AI on large datasets is no different from how search engines have been indexing data for over a decade. But let's be real: there's a big difference between targeted ads on Google and AI-driven decision-making that can lead to discrimination.
Given this reality, the question isn't just how we can reduce risks, but whether we're truly prepared to control them. Privacy is often treated as just another "technical detail" in AI development, but this study makes one thing clear: artificial intelligence isn't just changing technology; it's redefining what privacy even means in a world where our data is the fuel for the future.
I recommend reading it. I tried to give a brief summary and my own "reaction" to the study, but it's well-written and worth checking out.
"Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks"
Authors: Hao-Ping Lee, Yu-Ju Yang, Thomas Serban von Davier, Jodi Forlizzi, Sauvik Das
https://arxiv.org/abs/2310.07879