Secure the Vibe
Let’s talk about a phenomenon that’s been creeping into codebases like weeds into my garden: vibe coding — a now-mainstream dev habit of writing code that feels intuitively correct, even when it skips over documentation, testing, or basic security checks. It’s the developer equivalent of jazz: all improvisation, no sheet music. Wikipedia defines it as writing code in a way that feels right, rather than strictly following specs, best practices or (let’s be honest) any actual documentation. There might be a vague understanding of the goal, but the execution? That’s all vibes, baby.
It’s become trendy. Somewhere along the way, dev culture started glamorizing the lone wolf hacker-genius who just “feels the code” — and now, in the age of autocomplete and generative AI, that intuition is often coming from a language model, not even the dev themself.
The result? MVPs built with half-understood frameworks, pasted-in code from decade-old forums, and function names that read like inside jokes. And hey — sometimes it actually works. But more often, it leaves behind a trail of bugs, breaches, and confused engineers wondering how their system got turned into Swiss cheese.
This new generation of vibe coding skips past all the boring stuff: planning, documenting, designing. But in 2025, it’s not just guts and caffeine driving this trend — it’s generative AI.
Developers aren’t making all these decisions themselves; they’re just riding the autocomplete wave. It’s like building a rocket out of duct tape because you asked your chatbot for blueprints and didn’t even bother double-checking its work. Sure, you might get off the ground — but will it land? Will it explode on reentry?
The core problem? No one stops to ask, “What happens if this fails?” or “How could someone abuse this?”
Security isn’t just missing from the checklist — there is no checklist. It’s all about speed. Vibe coders are shipping what the AI suggests with the confidence of someone who definitely did not write tests. I’ve seen production APIs with no rate limiting, no auth, and a data model that apparently communicates only in base64-encoded emoji. And when you ask why, the answer is something like, “It just felt cleaner this way.”
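And the frustrating part is that the missing basics are genuinely small. Rate limiting, for instance, is a handful of lines, not a framework migration. Here's a minimal token-bucket sketch in plain Python (the class and parameter names are illustrative, not from any particular codebase):

```python
import time


class TokenBucket:
    """Allow bursts up to `capacity` requests, refilled at `rate` per second."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False


# Example: allow bursts of 3, refilling at 5 requests per second.
limiter = TokenBucket(rate=5.0, capacity=3)
```

In production you'd reach for whatever your gateway or framework already provides, but the point stands: this is not exotic engineering, it's a default you opt out of by vibing.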
Now, before the security folks start pointing fingers, let’s take a quick walk down memory lane. Infosec’s been vibing for years.
Remember when “just say no” was considered a valid strategy? Instead of doing actual risk assessments or modeling threats, security pros (myself included) would just deny the request outright or force an app into a design that was either over-protective or made no damn sense.
And when the friction got too high? We weren’t enabling secure solutions — we were creating the exact conditions that led to workarounds, misconfigurations, and a rise in shadow IT.
That’s the thing about security vibing: it skews hard toward overcorrection. We say no ’cause it’s easier than actually understanding a complex system. We throw in blanket restrictions because nuance takes time. We confuse rigidity with resilience.
This approach doesn’t prevent breaches. It delays delivery, alienates devs, and eventually backfires. When teams feel like they can’t count on security for help, they stop asking. They build without us. They operate in the dark. And ironically — we end up less secure.
We vibed our way through security decisions with the same gut-feel gusto we now criticize in developers.
“I don’t think that app should talk to the internet.”
“Just deny all and allow the exceptions.”
Entire security architectures have been built on not much more than vibes and the hope that the business doesn’t ask the hard questions.
Both dev and security cultures suffer from the same over-romanticization of cleverness. There’s this weird prestige in being the person who can do it all from memory — who doesn’t document anything because “they get it.”
But cleverness without discipline is like a ship with no rudder.
In dev, that means fragile codebases only the original author understands — and they just left for a startup. In security, it means inconsistent controls that look impressive but don’t actually work.
Worse — it means defenses that are brittle. Policies that get in the way instead of enabling safe behavior. Devs start routing around them. Shadow IT starts popping up. Because the official path is too slow, too strict, or too confusing.
Cleverness, unchecked by empathy and collaboration, builds walls — not bridges.
The fallout? Security becomes the department of “no.” Developers stop asking. The business tunes us out because we’ve cried wolf too many times without providing clear value. So while we roll our eyes at unreviewed code in prod, we also need to admit when our own clever shortcuts contributed to the chaos. We’ve all vibed before.
It’s time to do better.
Insecure code. Breaches. Delays. Compliance headaches. That’s the invoice vibe coding sends when it finally catches up with you.
It’s easy to miss the cost because the speed feels good. You’re hitting milestones, shipping features, getting high-fives. But that speed is usually hiding a mountain of tech debt — and worse, a pile of unaddressed security risk.
The truth is: good code should feel boring. Secure design should feel slow.
Documentation, testing, threat modeling, code reviews — these are not glamorous, but they’re what let you sleep at night knowing your containers won’t be out there on the public internet hemorrhaging data to anyone who knocks.
And just when the vibes couldn’t get more chaotic, here comes generative AI.
Generative AI tools like GitHub Copilot and ChatGPT have poured jet fuel on the vibe coding trend.
Now devs aren’t just skipping the fundamentals — they’re shipping code they don’t fully understand. Ask any developer who auto-completed their way through an entire function if they can explain it line by line. You’ll get a shrug and a nervous laugh.
That’s the danger. The AI doesn’t know your threat model. It doesn’t know your infra. It doesn’t know your customer data shouldn’t be logged in plaintext or that the code it just spit out skips verifying JWT signatures.
It hands you something that resembles code, delivered with the misplaced swagger of an intern who watched one YouTube video and now thinks they’re Linus Torvalds. (Bless ’em, they really do try.)
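And that JWT example isn't hand-waving: checking an HS256 signature is a few standard-library calls, which is exactly why skipping it is inexcusable. Here's a hedged sketch of just the signature step — in real code you'd use a maintained library like PyJWT and also validate `exp`, `alg`, and audience claims:

```python
import base64
import hashlib
import hmac
import json


def b64url_decode(s: str) -> bytes:
    # JWT segments are base64url without padding; restore it before decoding.
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))


def verify_hs256(token: str, secret: bytes) -> dict:
    """Verify an HS256 JWT's signature before trusting any of its claims.

    Raises ValueError if the signature doesn't match the signing input.
    """
    header_b64, payload_b64, sig_b64 = token.split(".")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    # Constant-time comparison, so the check doesn't leak timing info.
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("invalid JWT signature")
    return json.loads(b64url_decode(payload_b64))
```

If the generated code in your diff decodes the payload without a step like this, that's your cue to stop and review.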
A 2022 study by Stanford and NYU showed GitHub Copilot suggested code with security vulns in 39.33% of test scenarios. Input sanitization issues, broken crypto, missing auth checks — the hits keep coming. And devs just shipped it. Because it looked right.
These aren’t edge cases — these are common patterns.
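The sanitization failures, at least, have a well-worn fix: parameterized queries. A minimal illustration using Python's built-in `sqlite3` — the table and function names are made up for the demo, but the two shapes of query are exactly what shows up in review:

```python
import sqlite3


def find_user_vibes(conn: sqlite3.Connection, name: str) -> list:
    # What autocomplete often hands you: string-built SQL.
    # A name like "' OR '1'='1" dumps the entire table.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'"
    ).fetchall()


def find_user_reviewed(conn: sqlite3.Connection, name: str) -> list:
    # Parameterized query: the driver treats `name` as data, never as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()
```

Same feature, same line count. One of them is an incident report waiting for a timestamp.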
Security teams need to be watching this closely. Code generation is not a shortcut if it creates silent vulnerabilities. If anything, it demands more scrutiny. The code might be from an AI, but that doesn’t mean it earned your trust.
Review it. Test it. Validate it. Assume nothing.
We’re not doomed to vibe forever. (Though some of y’all are really testing that theory.)
Some teams have figured this out. In a past life (2015–2018-ish), my team was building an AppSec program with lightweight threat modeling, pre-merge checks, and security champions embedded with devs — all designed to bake in security early, not bolt it on after the fact.
At OWASP Global AppSec 2023, Twilio shared a similar story: embedding security directly into CI/CD pipelines without killing dev velocity. The result? Fewer bugs, faster shipping, happier teams.
We didn’t kill the vibe. We channeled it.
Vibe coding might get you to MVP. But it won’t save you from the breach report that starts with:
“The exposed S3 bucket was traced back to an undocumented service running in production.”
The truth is — it’s not always the zero-days getting us. It’s the old stuff. The gut calls. The undocumented configs. The AI-suggested snippets we didn’t look too closely at.
But we’re not powerless. We know how to do this right.
Threat modeling ain’t rocket science. Reviewing AI code is not optional. Shipping fast is fine — as long as you’re not shipping security incidents to your customers.
So the next time the code feels done but looks like improv jazz — don’t ship the vibes.
Interrogate the decisions. Validate the assumptions.
Make sure your gut is not dragging you into prod without a seatbelt.
Because vibes aren’t security.
And clever isn’t coverage.
