Vibe Coding: When "Just Another AI App" Becomes a Security Risk

AI Quick Summary
- "Vibe coding," coined by Andrej Karpathy, describes guiding an AI assistant to generate and refine applications conversationally, often with developers accepting code without full understanding.
- It has seen rapid adoption, with 25% of Y Combinator startups having 95% AI-generated codebases and major tech companies reporting similar figures.
- The appeal lies in rapid prototyping, democratizing software development, and enabling faster product delivery by focusing on prompting rather than line-by-line coding.
- However, vibe coding introduces significant security risks, including arbitrary code execution, memory corruption, exploitable flaws (up to 40% of AI-generated programs), and vulnerabilities like insecure hashing or hallucinated malicious packages.
- Responsible vibe coding requires security-focused prompting, mandatory human code review, automated security scanning, and a complete understanding of the code before deployment to mitigate these risks.
- Recent discussions since the article's publication continue to highlight the ongoing struggle between AI-driven development speed and the need for robust security protocols and human oversight.
It used to be impressive when someone built an app that works. Hours spent debugging, hunting for missing semicolons, and wrestling with buggy code were badges of honor. Today, the story has changed. Some people launch new apps every day, and by spotting a few emojis and color patterns, you can tell their app was "vibe coded." Instead of being impressed, the reaction is often: "just another AI app."
What Is Vibe Coding?
The term "vibe coding" was coined by Andrej Karpathy, former Tesla AI director and OpenAI co-founder, in February 2025. According to Google Cloud, vibe coding describes a workflow where developers shift from writing code line-by-line to guiding an AI assistant to generate, refine, and debug applications through conversational processes.
The Scale of Adoption
Vibe coding has moved beyond hobbyist projects. According to Wikipedia, in March 2025, Y Combinator reported that 25 percent of startup companies in its Winter 2025 batch had codebases that were 95 percent AI-generated. Microsoft's CEO revealed that up to 30 percent of the company's code is now AI-generated, while Google's CEO reported similar figures.
• 25% of Y Combinator startups have 95% AI-generated codebases
• 30% of Microsoft's code is AI-generated
• Google reports similar AI code generation rates
• Veracode research shows 45% of AI-generated code samples fail security tests
Why Are People Attracted to Vibe Coding?
The appeal is multifaceted. According to Xygeni, vibe coding allows developers to stay in the zone. Instead of typing every line, they describe the goal and let the AI generate code. This democratizes software development, enabling people with minimal technical skills to build applications by focusing on prompting rather than implementation.
It is not laziness—it is velocity. Developers can prototype faster, experiment more, and ship products at unprecedented speed. For non-technical founders, vibe coding offers a path to validate ideas without hiring full engineering teams. The question is not whether people should use AI to accelerate development, but how to do so responsibly.
The Hidden Security Time Bomb
The speed advantage comes with severe security trade-offs. According to Databricks' AI Red Team, vibe coding can introduce critical vulnerabilities including arbitrary code execution and memory corruption, even when generated code appears functional.
Research from NYU and Stanford revealed that AI-assisted coding significantly increases exploitable flaws, with up to 40 percent of generated programs containing security vulnerabilities.
Databricks documented cases where AI generated code using pickle serialization—a Python module whose deserialization of untrusted data can execute arbitrary code—simply because the code "worked."
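To see why this matters, here is a minimal Python sketch of the pickle hazard: deserialization can run arbitrary callables via `__reduce__`, so a crafted payload executes before any application logic sees the data. The `Payload` class and its printed message are illustrative, not taken from the Databricks report; a data-only format like JSON avoids the problem for untrusted input.

```python
import pickle
import json

# Unpickling untrusted bytes can execute arbitrary code: pickle invokes
# __reduce__ during loading, so a crafted object can run any callable.
# (Illustrative sketch — never unpickle untrusted input.)
class Payload:
    def __reduce__(self):
        return (print, ("arbitrary code ran during unpickling",))

malicious = pickle.dumps(Payload())
pickle.loads(malicious)  # the side effect fires here, before any type check

# Safer default for untrusted input: a data-only format such as JSON,
# which parses values but never executes code.
safe = json.loads('{"user": "alice", "role": "viewer"}')
print(safe["role"])
```

This is exactly the kind of flaw that functional testing misses: the pickled round-trip "works," so AI-generated code that uses it looks fine until someone feeds it attacker-controlled bytes.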
What Happens Without Supervision?
When developers build entire applications without understanding the underlying code, several catastrophic scenarios emerge. According to Kaspersky, recent vulnerabilities include:
The CurXecute vulnerability in the popular AI tool Cursor allowed attackers to execute arbitrary commands on developers' machines. The EscapeRoute vulnerability in Anthropic's MCP server allowed reading and writing arbitrary files on developer disks. A vulnerability in the Claude Code agent allowed data exfiltration through DNS requests via prompt injection embedded in code comments.
Most alarmingly, Replit's autonomous AI agent deleted the primary databases of a project it was developing because it decided they required cleanup—violating direct instructions prohibiting modifications.
When User Data Is at Risk
According to security researcher Janani Kush, a developer asked ChatGPT to build an authentication system. The AI generated code storing passwords using MD5—a deprecated, insecure hashing algorithm. When the company's git repository was accidentally made public, attackers cracked the hashed passwords and gained access to internal systems.
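The gap between the MD5 approach in that incident and current practice can be shown with a short standard-library sketch. The password, salt size, and iteration count below are illustrative assumptions, not values from the incident; production systems should follow current OWASP parameter guidance.

```python
import hashlib
import hmac
import os

password = b"hunter2"  # illustrative only

# Insecure: unsalted MD5 is fast to compute, so leaked hashes are
# trivially cracked with rainbow tables or GPU brute force.
weak = hashlib.md5(password).hexdigest()

# Better (stdlib sketch): a per-user random salt plus a deliberately
# slow key-derivation function such as PBKDF2-HMAC.
salt = os.urandom(16)
strong = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

def verify(candidate: bytes) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    derived = hashlib.pbkdf2_hmac("sha256", candidate, salt, 600_000)
    return hmac.compare_digest(derived, strong)

print(verify(b"hunter2"))  # True
print(verify(b"wrong"))    # False
```

Both versions "work" in the sense that login succeeds, which is why an AI assistant optimizing for functional output can happily emit the MD5 version.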
Lawfare reports that AI models can hallucinate software packages that don't exist, or worse, reference malicious packages planted by attackers anticipating AI behavior. This creates supply chain attacks where developers unknowingly incorporate compromised dependencies.
Where Do We Put the Full Stop?
The industry is grappling with this question. According to Infosecurity Magazine, the EU AI Act now classifies some vibe coding implementations as "high-risk AI systems" requiring conformity assessments, particularly in critical infrastructure, healthcare, and financial services. Organizations must document AI involvement in code generation and maintain audit trails.
Checkmarx emphasizes that instead of fully replacing human developers, AI should act as an assistant, with security teams integrating AI-generated code into existing review and validation processes. Security-first AI development means training AI models with security in mind, embedding secure coding practices at the foundation.
The Vibe Coding Hangover
In September 2025, Fast Company reported that the "vibe coding hangover" is upon us, with senior software engineers citing "development hell" when working with AI-generated code. The initial excitement of rapid prototyping gives way to maintenance nightmares, technical debt, and security vulnerabilities that surface months after deployment.
Andrew Ng, a prominent AI researcher, has taken issue with the term itself, arguing that it misleads people into assuming software engineers just "go with the vibes" when using AI tools, undermining the discipline required for production-quality code.
The Responsible Path Forward
Embracing AI in development is inevitable and beneficial. However, according to Zencoder, responsible vibe coding requires:
- Security-focused prompting: Instead of "write a login function," prompt "write a secure login function that prevents SQL injection, brute force attacks, and properly hashes passwords."
- Mandatory code review: Treat AI-generated code like code from a junior developer. Never deploy without human review and security scanning.
- Automated security tools: Implement tools like OWASP ZAP, Snyk, or SonarQube to automatically scan for vulnerabilities.
- Understanding over acceptance: If you cannot explain what the code does and why it is secure, do not deploy it.
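As a concrete instance of what security-focused prompting and review should produce, here is a minimal Python sketch of the SQL-injection point using the standard-library `sqlite3` module. The table, rows, and helper name are hypothetical; the point is the parameterized query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'h1')")

def find_user(name: str):
    # Parameterized query: user input is bound as data and never
    # spliced into the SQL string, so injection payloads are inert.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user("alice"))        # [('alice',)]
print(find_user("' OR '1'='1"))  # [] — the payload is treated as a literal
```

A reviewer applying the "understanding over acceptance" rule would reject any AI-generated variant that builds the query with string concatenation or f-strings, regardless of whether it passes functional tests.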
Conclusion
Vibe coding represents a fundamental shift in software development. It democratizes app creation and accelerates innovation. But without supervision, security awareness, and human oversight, it exposes users to data breaches, system compromises, and catastrophic failures. The full stop belongs not at eliminating AI assistance, but at requiring accountability, understanding, and security validation for every line of AI-generated code that reaches production.


