AI Governance and Privacy Risks
How do privacy professionals handle AI governance?
January 28th is recognized as Data Privacy Day. The overwhelming amount of content and activities surrounding this date made me feel a bit of content fatigue, so I decided not to post anything on that day.
I have OCD, and when too many people are talking, I struggle to speak. So, when the whole world is discussing a topic, I prefer to stay quiet. Just a personal thing.
I let the day pass to write about something I had been structuring: AI governance and privacy program management.
According to Gartner1, by 2027, 17% of cyberattacks or data breaches will involve Generative AI.
Additionally, Gartner predicts that global spending on information security will grow by 15% in 2025, reaching a total of $188 billion. This increase is driven by the growing concern over cybersecurity—obviously—especially in a landscape where companies face threats from all directions. The research indicates that organizations are prioritizing investments in security technologies such as artificial intelligence, governance, and automation to strengthen their defenses.
Gartner also highlighted that digital transformation and the adoption of new technologies like cloud computing and IoT are contributing to the rise in security investments—not that this is surprising.
As for Gartner’s prediction that 17% of cyberattacks will involve GenAI, I see this as a clear warning about the urgent need for strong AI and privacy governance. By integrating technologies like GenAI, organizations not only increase efficiency but also expose their operations to new risks. And risk management is a core element of privacy governance as well.
Companies need to adopt governance policies to ensure that AI is used ethically and securely.
Furthermore, privacy governance must be a priority, especially considering that GenAI can process vast amounts of personal data. Companies need to implement controls to protect this information.
I also believe that predictions like this—whether accurate or not—indicate that privacy and security can no longer be treated as separate domains within organizations. With the rapid adoption of these technologies, businesses must reassess their data protection practices and strengthen governance to ensure AI is used responsibly. This means not only implementing technical controls but also establishing ethical guidelines to prevent negative impacts on the use of personal and sensitive data.
Let’s say I were a hacker (some people say I am, but I disagree). I think exploiting vulnerabilities in Generative AI would start with identifying attack vectors such as prompt injection, data poisoning, and model extraction. One of my first steps would be to test the robustness of the implemented filters by manipulating prompts to bypass restrictions, forcing the model to generate unintended responses or expose data the company may have embedded in text vectors used in Retrieval-Augmented Generation (RAG).
Additionally, I would explore data poisoning techniques by injecting malicious data into the training set to influence model outputs and compromise reliability. Another approach would be model extraction (model inversion attacks and membership inference attacks), where, through targeted queries, I could infer critical information from the original training data or even reconstruct parts of the proprietary model.
And let’s not forget about biases! Exploring the ethical aspects of Generative AI would focus on manipulating the model’s built-in biases to test its neutrality on sensitive topics such as politics, religion, and ideology. One of the first tests would involve crafting progressively polarized prompts to uncover latent biases and see how the AI responds to controversial questions. Additionally, I would explore bias exploitation attacks, where, after identifying patterns favoring certain groups or ideologies, I could amplify these biases through controlled interactions—leveraging them to manipulate public perception by exposing the model’s responses in social media posts with screenshots and links, directly associating the AI's answers with the brand's positioning. Another approach would involve hallucination attacks, inducing the AI to generate misinformation about historical events or public figures, testing the extent to which the model verifies sources before producing content.
A key takeaway here is that AI governance cannot be reactive, adjusting only when problems arise—it must follow privacy by design and security by design principles. Just like privacy management, the approach should be proactive and continuous, evolving alongside new threats and challenges.
Zero-Day Bug in Generative AI?
The creation of internal committees, the involvement of privacy and security professionals from the very beginning of AI development, and transparency in data collection and usage are fundamental steps to prevent unnecessary risks.
After all, if technology evolves rapidly, governance needs to keep up—otherwise, well, you can imagine the consequences...
https://www.gartner.com/en/newsroom/press-releases/2024-08-28-gartner-forecasts-global-information-security-spending-to-grow-15-percent-in-2025