May 2025
On May 1st, 2025, I set myself a rule: no new vibe coding projects for the entire month. Five months in, I had to face the truth — I had a problem. Not a substance problem, but an ideas problem. I was addicted to starting things.
I lasted six hours.
The Addiction Nobody Warns You About
Here's what nobody tells you about AI-assisted coding: the barrier to starting something new drops to nearly zero, but the barrier to finishing something stays exactly the same.
Before AI, starting a new project required setup, boilerplate, configuration, the boring grind of getting a skeleton running. That friction was actually useful — it filtered out the ideas that weren't worth pursuing. By the time you had a working scaffold, you'd already committed enough effort that you were likely to follow through.
With vibe coding? An idea could go from thought to working prototype in 15 minutes. Which sounds incredible — and it is — until you realize you've now got 300+ chat sessions scattered across Bolt, Lovable, v0, Cline, and the Anthropic Console, each representing a partially realized idea that you'll "get back to."
I called it what it was: a problem. The hit of dopamine from seeing an AI bring your idea to life is real. The satisfaction of shipping something complete is a different, harder thing to earn.
The Rules
The May Challenge was simple:
- No new vibe coding projects for the entire month of May.
- Instead, focus on shipping at least 2 of the 300+ existing projects.
- Pick the ones closest to done and push them over the line.
I announced it on LinkedIn because public accountability is the only accountability that works for me.
Six Hours
I'd like to tell you I made it a week. I didn't. By dinnertime on May 1st, I'd already started something new. The excuse was good — it always is — something related to an existing project, which technically wasn't a new project, except it was.
But failing the letter of the challenge taught me something about its spirit. The problem wasn't that I was starting new things. The problem was that I was using "starting" as a substitute for the harder work of finishing. AI made the first 80% easy and the last 20% was still... the last 20%.
What Shipping Actually Requires
Here's what I learned that month about the gap between prototype and product:
Error handling. AI-generated prototypes handle the happy path beautifully. The sad paths — network failures, invalid input, edge cases, race conditions — those still needed a developer's brain. Not because AI couldn't handle them, but because you had to think of them first, and thinking of what can go wrong requires experience that no prompt can replace.
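To make the gap concrete, here's a tiny sketch in Python. The `parse_quantity` function and its limits are invented for illustration; the point is that the happy path is one line, and the sad paths are everything else:

```python
def parse_quantity(raw) -> int:
    """Parse a user-supplied quantity.

    A prototype would just call int(raw). Shipping means deciding
    what happens for non-numeric input, negatives, and absurd values.
    """
    try:
        value = int(str(raw).strip())
    except ValueError:
        raise ValueError(f"not a number: {raw!r}")
    if value <= 0:
        raise ValueError(f"quantity must be positive, got {value}")
    if value > 10_000:
        # Arbitrary ceiling for this sketch; a real limit comes from
        # business rules nobody wrote down in the prompt.
        raise ValueError(f"quantity too large: {value}")
    return value
```

None of this is hard to write, with or without AI. What's hard is remembering that `raw` can be `"abc"`, `"-3"`, or an empty string in the first place.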
Authentication and security. Every prototype lives in a world where everyone is trustworthy. Shipping means adding login flows, API keys, rate limiting, data validation. These aren't glamorous, but they're table stakes.
The boring middle. Between "working prototype" and "shipped product" lies a valley of boring decisions. Which hosting? Which domain? How do you handle backups? What happens when someone finds a bug at 2 AM? AI doesn't help much with this stuff because it's not coding — it's operations.
User feedback. The moment you show a prototype to someone else, they see all the things you've been unconsciously ignoring. "What happens if I click this twice?" "Why is this page slow?" "I expected this button to do something different." Each piece of feedback is a new coding task, and the cycle restarts.
Architecture Decision Records
One genuine breakthrough that month was discovering how powerful Architecture Decision Records (ADRs) are in the context of vibe coding.
An ADR is a simple document that records a technical decision: what the decision was, why it was made, what alternatives were considered. In traditional development, they're useful documentation. In AI-assisted development, they're transformative.
Here's why: when you start a new project with AI, you're constantly making decisions. Which framework? Which data structure? How should authentication work? If you record these decisions in ADR format — with enough context and examples — you can feed them to the AI on your next project. The AI doesn't just follow the decision; it understands the reasoning and applies it consistently.
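For reference, an ADR is usually just a short markdown file following the widely used Nygard template: title, status, context, decision, consequences. The project details below are invented for illustration:

```markdown
# ADR 007: Use structured JSON logging

## Status
Accepted

## Context
Prototypes used ad-hoc print statements. Once projects ship, logs need
to be searchable and machine-parseable across services.

## Decision
All projects log via the standard library logger with a JSON formatter.
No print statements outside of CLI entry points.

## Alternatives considered
- Plain-text logging: human-friendly, but painful to query at scale.
- Third-party logging frameworks: more features than these projects need.

## Consequences
Every new project gets the same logging setup for free, and the AI can
be told "follow ADR 007" instead of re-deriving the decision each time.
```

The "context" and "alternatives" sections are what make these records useful to an AI: they carry the reasoning, not just the verdict.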
I was refining how to work with less code-level involvement and more decision-level guidance. The goal was getting to a place where I could tell the AI "build this feature" and it would commit its own changes when they passed tests, while I caught issues in the PR review process. Less micromanagement, more strategy.
As I wrote at the time: "I'm starting to believe I'll be coding from the beach with my piña colada in a few years."
The Shift Begins
May was also the month I started applying vibe coding to real work at Delegate. Not just hobby projects and prototypes, but actual client-facing projects. The dynamic was different — more constraints, more stakeholders, more consequences for getting things wrong.
But the core skill transferred. Understanding context windows, writing clear prompts, iterating quickly, managing AI-generated code quality — all of this applied directly. The difference was that in a professional context, the "last 20%" wasn't optional.
What Stuck
By the end of May, I hadn't shipped the 2 projects I'd targeted. But I'd learned something more valuable: the distinction between exploring and executing. Both are valid. Both use AI. But they require different mindsets, different tooling, and different measures of success.
I'd also started building a personal library of ADRs — reusable decision records that let me bootstrap new projects with consistency. When I started a new project (which I inevitably did), the AI already knew how I liked to structure things, which packages I preferred, how I handled error logging. It was like having an AI that knew my coding style.
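The bootstrap step itself can be as simple as concatenating the library into the first prompt of a new project. A minimal sketch, assuming one markdown file per decision record in a hypothetical adr-library/ directory:

```python
from pathlib import Path


def build_system_prompt(adr_dir: Path) -> str:
    """Concatenate an ADR library into a preamble for a new project's first prompt."""
    records = sorted(adr_dir.glob("*.md"))  # sorted so ADR numbering is preserved
    sections = [p.read_text(encoding="utf-8").strip() for p in records]
    header = "Follow these architecture decisions in all generated code:\n\n"
    return header + "\n\n---\n\n".join(sections)
```

Nothing sophisticated: the leverage is in the records themselves, not the plumbing that delivers them.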
That library would become one of the most powerful tools in my workflow over the next year.