
The Hackathon Era

Hackathons, landing pages in 10 minutes, and the realization that you can now develop features 4 times and pick the best.

July – August 2025

"Vibe coder nightmare — the internet is down."

That was the post. One line that captured both the absurdity and the dependency of what we'd become. Without a connection to the AI, the vibe coder is just... a person staring at a screen.

Building at Speed

By summer 2025, the workflow had crystallized into something I started calling "Vibecoding 101": multitask. While the AI is working on your current prompt, plan your next one. Don't watch it type. Don't sit idle. Think ahead. The AI is fast, but your thinking is the bottleneck.

I was generating landing pages and deploying them in under 10 minutes. The pipeline: prompt → code → push to GitHub → deploy on Coolify. End to end. From idea to live URL in the time it takes to make a cup of coffee.
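That pipeline can be sketched as a few commands. This is an illustrative reconstruction, not the exact setup: the repo names are made up, and a local bare repository stands in for GitHub so the script runs anywhere. In the real flow, Coolify watches the GitHub repo and redeploys on every push.

```shell
#!/bin/sh
# Sketch of the 10-minute pipeline: prompt -> code -> push -> deploy.
# A local bare repo plays the role of GitHub here.
set -e
workdir=$(mktemp -d)
cd "$workdir"

git init -q --bare remote.git          # stand-in for the GitHub repo
git init -q site && cd site

# 1. prompt: ask the AI for a landing page; paste the result here
cat > index.html <<'HTML'
<!doctype html><title>Landing</title><h1>Hello</h1>
HTML

# 2. code: review, commit, push
git add index.html
git -c user.email=a@b -c user.name=me commit -qm "landing page"
git remote add origin "$workdir/remote.git"
git push -q origin HEAD:main

# 3. deploy: in the real pipeline, Coolify redeploys on push;
#    here we just confirm the push landed on the "remote"
git -C "$workdir/remote.git" log --oneline main
```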

One of my LinkedIn posts from this period was typically blunt: "Do you also develop all your features 4 times and pick the best? No, probably not — because before AI it was too expensive." That was the real unlock. When building is cheap, you can afford to explore multiple approaches and pick the winner. Traditional development forced you to bet on one approach and commit. AI development let you play the field.

The Bolt Hackathon

The hackathon was a landmark moment. Not because we built something revolutionary, but because we shipped something real in a day. A working product, deployed and demo-ready.

The energy at a hackathon changes when AI is part of the toolset. The traditional hackathon dynamic — frantic all-night coding sessions, heroic debugging sprints, exhausted demos in the morning — gives way to something different. You spend more time on what to build and how to present it, because the building itself is no longer the hard constraint.

We shipped. It worked. And the experience confirmed something I'd been suspecting: the competitive advantage in software was shifting from "who can code fastest" to "who can think clearest about what to build."

WebContainers and VS Code in the Browser

One discovery from this period deserves its own mention: WebContainers. The ability to run a full development environment in a browser — VS Code, terminal, everything — was an underrated breakthrough.

I recorded a 3-minute video showing how I could spin up a full development environment, connect it to AI tooling, and start building without installing anything on my local machine. The implications were significant: development was becoming portable, ephemeral, and accessible from anywhere.

Combined with AI coding, this meant: open a browser, describe what you want, get working software, deploy it. No local setup. No dependency management. No "works on my machine." Just ideas turning into reality.

The MCP Moment

MCP — Model Context Protocol — was gaining traction, and I was paying close attention. The idea was simple but powerful: a standardized way for AI models to interact with external tools, services, and data sources.

For a developer who'd been building integrations for two decades, this was immediately interesting. MCP meant that AI coding tools could interact with your databases, your APIs, your deployment pipelines — not as copy-paste hacks, but as proper, authenticated, structured integrations.
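To make the "structured integrations" point concrete: MCP messages are JSON-RPC 2.0, and a tool invocation is just a `tools/call` request routed to a registered function. The sketch below is illustrative and uses no SDK — the `row_count` tool and its toy data are invented for the example, and a real server would use the official MCP SDK and proper transport and auth.

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 request asking an MCP-style server to run a tool."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

def handle_request(raw, tools):
    """Dispatch a tools/call request to a registered Python function."""
    req = json.loads(raw)
    params = req["params"]
    result = tools[params["name"]](**params["arguments"])
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req["id"],
        "result": {"content": [{"type": "text", "text": str(result)}]},
    })

# A toy "database" tool the model could call through the protocol
tools = {"row_count": lambda table: {"users": 1042}.get(table, 0)}

request = make_tool_call(1, "row_count", {"table": "users"})
response = json.loads(handle_request(request, tools))
print(response["result"]["content"][0]["text"])  # → 1042
```

The point of the standard is that the AI side never sees the lambda — it sees a declared tool name and schema, and every call travels as a structured, auditable message rather than a copy-paste hack.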

I started thinking about which MCP servers to build. The community was buzzing with possibilities. It felt like the early days of any API ecosystem — everyone trying to figure out where the value was.

Anthropic's Pricing Puzzle

A smaller but memorable moment: Anthropic announced "unlimited" usage plans, and the internet immediately started arguing about what "unlimited" means. My take was characteristically direct: "I love that you have to explain what unlimited means — maybe don't use that word then :)"

But underneath the snark was a real business model question. The middle-layer tools — Cursor, Cline, Copilot — were caught between the model providers (Anthropic, OpenAI) who controlled the AI and the developers who just wanted things to work. NVIDIA was making money selling picks and shovels. The model providers were making money on API calls. The IDE tools were... somewhere in the middle, trying to find a margin.

I wrote: "It'll be super fun to see how middle-layer apps like Cursor survive between Anthropic and OpenAI." That prediction would age well.

The Conference

I attended an AI agent conference around this time. My takeaway post was dry: "Excited to see if I've understood what an agent is :)"

I had. And I hadn't. The conference talked about agents as autonomous entities — software that could take actions, make decisions, and work toward goals without constant human supervision. In theory, I understood. In practice, I was still doing prompt → response → prompt → response. The agent stuff sounded like a different paradigm entirely.

It was. And it was coming next.

Summer Reflections

As summer set in, I noticed something in my GitHub commit graph: a visible spike in activity, followed by the telltale quiet of vacation. I posted about it with a joke: "Guess which day I got summer vacation :)"

But before that vacation, I'd found something that would change the next phase of my journey. Claude Code. And its wilder cousin: Claude-Flow.

The story of that discovery belongs in the next chapter.