Vibe Coding: A Road to Data Breaches
When intuition outweighs engineering, security becomes a matter of luck. And in the digital world, luck is a terrible framework.
For those outside the tech world, "vibe coding" can be translated as "coding by intuition." It's the practice of developing software, often with the help of generative AI, in a fluid and fast-paced way, focusing on immediate results ("if it works, it's good enough") rather than following structured processes of engineering, review, and testing. It's the modern version of "move fast and break things," but with an added layer of abstraction where the "developer" (I mean, the creator) may not fully understand the code being implemented.
This approach, while appealing for its speed, is turning out to be a breeding ground for security and privacy vulnerabilities with some pretty unusual consequences.
I don't want to sound like the grumpy old guy who complains about change and progress. To be clear, I use AI myself for software development, but you have to know how to use it. Those building apps from scratch without even looking at the code are just helping create a new category of developers: the "vibe-firefighters," who'll be stuck putting out fires caused by AI-generated code.
And here's the catch: if there's AI to write code for you, there's also AI to find vulnerabilities in that same code.
The problem with vibe coding is that information security is not intuitive; it's deliberate and methodical. Trusting AI-generated code, without review, to handle data securely is a high-stakes gamble. And we already have clear examples of where that gamble fails. Looking into recent security incidents, we see a pattern: basic flaws slipping through because of a rushed development culture with no rigor.
My own coding style is closer to "go horse." I've always been like that: focus on getting it functional first, then improve, fix, polish, refine, and release. But that never meant not knowing the code. Quite the opposite: I've always had full understanding and complete control of it. Developers know what I mean: one misplaced semicolon on the backend can trigger a storm of bugs on the frontend.
A striking case was the data breach at the "Tea App," a social platform that exposed users' sensitive information. The cause? Broken access controls and open databases, elementary mistakes that code reviews and security tests would have caught easily. While we can't say with 100% certainty that "vibe coding" caused it, the outcome is a textbook example of its risks: prioritizing speed over basic security. Another case, documented by Databricks researchers, showed that AI-generated code for a simple game introduced a critical remote code execution (RCE) vulnerability through insecure object serialization, something any experienced developer would immediately recognize as a red flag.
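To make that second case concrete, here is a minimal sketch of the insecure-deserialization pattern, assuming Python and pickle as the serialization layer (the function and class names below are mine, not the code from the Databricks write-up). Loading untrusted bytes with pickle is effectively handing the sender remote code execution:

```python
import json
import os
import pickle

def load_game_state_unsafe(blob: bytes):
    # VULNERABLE: pickle rebuilds arbitrary objects, so untrusted input
    # can execute attacker-chosen code at load time.
    return pickle.loads(blob)

def load_game_state_safe(blob: bytes) -> dict:
    # SAFER: JSON can only produce plain data (dicts, lists, strings, numbers).
    return json.loads(blob)

class Exploit:
    # pickle uses __reduce__ as the recipe for rebuilding an object;
    # returning (os.system, (command,)) turns deserialization into RCE.
    def __reduce__(self):
        return (os.system, ("echo pwned",))

payload = pickle.dumps(Exploit())
# load_game_state_unsafe(payload)  # would run the shell command above
```

A reviewer who knows this pattern rejects it on sight; a vibe coder who never reads the generated code ships it.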
The solution isn't abandoning AI tools, but integrating them into a Secure Software Development Life Cycle (Secure SDLC). The right approach is to treat AI-generated code as if it came from a "junior" developer: a welcome boost, but in need of review, validation, and rigorous testing.
And forgive the bluntness, but I put "junior" in quotes because, from what I've seen so far, not even most juniors would produce the bizarre code AI sometimes spits out.
I personally like AI programming assistants for very specific tasks, like:
- Refactoring existing code for better performance or security, spotting flaws, or improving efficiency (see the sketch after this list).
- Frontend development: Lovable, for example, is outstanding for UI, but you need to manage maintenance carefully. Don't trust everything blindly. The bigger the project, the more it will "change things you didn't ask for." The more complex it gets, the more precise your prompt needs to be.
- Bulk actions like implementing localization across an entire project or other large-scale, repetitive but low-complexity tasks.
- Translating code from one language to another: here, Replit, Gemini, Claude, and OpenAI do great work.
- Documentation: I know there are AI tools just for this, but I haven't tried them. I used Gemini to generate detailed Javadocs and Swagger for APIs, and it worked perfectly. Goodbye to the painful job of documenting code. Even renaming cryptic variables like "abx" into something readable like "isUser" is a big win.
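On the refactoring point, a typical example of the kind of security fix worth delegating (and then reviewing) is replacing string-built SQL with parameterized queries. This is a generic sketch, not code from any project mentioned above; the function names and schema are hypothetical, and sqlite3 is used only for brevity:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str):
    # Flawed original: interpolating user input into SQL invites injection
    # (name = "x' OR '1'='1" would return every row).
    return conn.execute(
        f"SELECT id, name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str):
    # Refactored: a parameterized query lets the driver handle escaping.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()
```

An assistant handles this kind of mechanical rewrite well precisely because the pattern is so well known; the engineer's job is to confirm every query actually got converted.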
But some practices are non-negotiable: static application security testing (SAST), software composition analysis (SCA), and peer code reviews. AI-generated code must be understood, not just copy-pasted. At the end of the day, responsibility lies not with the algorithm, but with the engineer using it.
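For readers who haven't used these tools: at its core, SAST parses your code and flags known-dangerous patterns without ever running it. The toy illustration below uses Python's standard ast module; real tools like Bandit ship hundreds of far more precise rules, so treat this as a sketch of the idea, not a working scanner:

```python
import ast

# Names of call targets that are classic injection/deserialization sinks.
# Deliberately crude: a real rule would also check the receiving module.
DANGEROUS_CALLS = {"eval", "exec", "system", "loads"}

def flag_dangerous_calls(source: str) -> list[str]:
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            # Handles bare calls (eval(x)) and attribute calls (pickle.loads(x)).
            name = func.id if isinstance(func, ast.Name) else getattr(func, "attr", None)
            if name in DANGEROUS_CALLS:
                findings.append(f"line {node.lineno}: call to {name}()")
    return findings

print(flag_dangerous_calls("import pickle\nstate = pickle.loads(blob)"))
# -> ['line 2: call to loads()']
```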
Another positive example: I once had a CRUD and needed to build another very similar one in the same architecture. It was a straightforward case, no complex business rules or infrastructure quirks, just "sit down and code." I gave the AI a few examples, and it generated what I needed much faster, exactly within my architecture.
That kind of controlled use, aiming for productivity gains, I fully support. I think that's the future. But full AI-generated applications, pushed straight into production, might look cool but will bring serious problems down the road. Don't forget that after launch comes maintenance, scalability, migrations, integrations, database version upgrades, and hundreds of other challenges that, as real developers know, are all too common.
In conclusion, vibe coding sells the promise of frictionless productivity, but in practice, it delivers a shortcut to technical debt, and, more critically, to data exposure. For technology professionals: vibe coding may be fine for quick innovation, but when it comes to security and privacy, the only acceptable "vibe" is diligence. Speed may be a market requirement, but trust is the real currency, and it's built through engineering, not improvisation.
If you don't know what was done and can't explain how it was done, then it shouldn't have been done in the first place.