Vibe Coded

Part 6

Claude Code Changed Everything

The week I discovered Claude Code and Claude-Flow. Multi-agent coding workflows opened up a frontier I didn't know existed.

10 posts

August 2025

A week ago, I started investigating what claude-flow was. First, I was blown away.

That's how my post from August 2025 began. No preamble, no buildup. Just the raw surprise of discovering something that felt fundamentally different from everything I'd used before.

The Discovery

Claude Code was Anthropic's command-line coding tool — an AI that lived in your terminal rather than your IDE. Where Cline and Copilot were like smart assistants sitting beside you in VS Code, Claude Code was more like a colleague who took the keyboard and said "let me handle this."

But it was Claude-Flow that really hooked me. The concept: multiple Claude Code instances running in parallel, each working on a different part of your project, coordinated by an orchestrating agent. Not prompt-response-prompt-response. But prompt... and then go make coffee while three AI agents refactor your codebase simultaneously.

I admitted in my post that I'd made a methodological mistake: I'd started two experiments at the same time — Claude-Flow and Claude Code — and couldn't tell which was producing the magic. Was it the multi-agent orchestration? Or was Claude Code itself just that much better as a coding tool?

After a week of deeper investigation, my answer was: both. But in different ways.

What Made Claude Code Different

The tools I'd been using — Cline, Copilot, Bolt.new — operated on a request-response model. You asked for something, the AI produced it, you evaluated, repeat. Good, but fundamentally reactive.

Claude Code operated differently. It could browse your project structure, read files, understand relationships between components, make edits across multiple files, run tests, check the results, and iterate — all from a single prompt. It was agentic. It had goals, not just instructions.

The difference felt like the gap between texting someone directions versus giving them the address and letting them navigate. Both get you there. But one of them doesn't require you to micromanage every turn.
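My mental model of that loop, reduced to a toy sketch. The planner here is a hard-coded script standing in for the model, and the "project" is one buggy file; none of this is Anthropic's actual code. The point is the shape: plan, act, observe, check the goal, repeat, all from a single starting prompt.

```python
# Toy agentic loop: plan -> act -> observe -> check goal -> repeat.
# Everything below is illustrative, not Claude Code's real internals.

def run_agent(goal_check, plan, tools, max_steps=10):
    """Drive tool calls until goal_check passes or the step budget runs out."""
    history = []
    for _ in range(max_steps):
        name, args = plan(history)           # decide the next tool call
        result = tools[name](*args)          # execute it against the "project"
        history.append((name, args, result))
        if goal_check(history):              # e.g. "the tests pass"
            return history, True
    return history, False                    # out of budget: hand back to the human

# A trivial "project": one file with a bug the agent must find and fix.
project = {"add.py": "def add(a, b):\n    return a - b\n"}  # bug: minus

def read_file(path):
    return project[path]

def write_file(path, text):
    project[path] = text
    return "ok"

def run_tests():
    ns = {}
    exec(project["add.py"], ns)
    return "PASS" if ns["add"](2, 3) == 5 else "FAIL"

def planner(history):
    # Scripted plan standing in for the model: read, fix, then test.
    step = len(history)
    if step == 0:
        return "read", ("add.py",)
    if step == 1:
        return "write", ("add.py", "def add(a, b):\n    return a + b\n")
    return "test", ()

tools = {"read": read_file, "write": write_file, "test": run_tests}
history, done = run_agent(
    lambda h: bool(h) and h[-1][0] == "test" and h[-1][2] == "PASS",
    planner, tools)
print(done)  # True once the fixed code passes the test
```

The request-response tools stop after one turn of this loop; the agentic ones keep going until the goal check passes.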

The $100 Decision

I upgraded to the $100/month Claude Code plan almost immediately.

This wasn't trivial. I'd been tracking costs obsessively since December — $2 for 10 websites, a few dollars a day on API calls. But Claude Code at $100/month was a different proposition. It was a commitment. A statement that this tool was worth a meaningful monthly investment.

The calculus was simple: if Claude Code saved me even a few hours a week on side projects, it paid for itself. If it fundamentally changed how I approached development, it was the bargain of a lifetime. Within a week, I was sure it was the latter.

Multi-Agent Dreams

Claude-Flow pushed the boundaries further. The idea of multiple AI agents working in parallel — what people were starting to call "swarms" — was intoxicating for someone who'd spent 20+ years dealing with the inherent seriality of solo development.

I posted about GitHub Copilot's team features, wondering if they were attempting something similar: "See how Claude Code uses agents/tasks for parallel tasks (Swarms?)."

The parenthetical question mark captured my uncertainty. Was this actually better? Or was it just more AI doing more things, with the coordination overhead negating the parallel benefits?

Through August, I experimented. The answer: it was genuinely better for certain classes of problems. When you had multiple independent tasks — update these 5 endpoints, add tests for these 3 modules, refactor these 2 utilities — parallel agents crushed it. When the tasks were deeply interdependent, the coordination cost was real.
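The pattern that made independent tasks work is plain fan-out/fan-in. Here's a minimal sketch with Python threads standing in for agent instances; the task names are made up, and a real swarm adds the coordination layer this deliberately omits.

```python
# Fan-out/fan-in sketch of a "swarm": independent tasks run in parallel
# workers, then the results merge. Workers here are stubs, not real agents.
from concurrent.futures import ThreadPoolExecutor

def agent_worker(task):
    # Stand-in for one Claude Code instance handling one independent task.
    return f"done: {task}"

tasks = ["update endpoint /users", "add tests for auth", "refactor utils/date"]
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(agent_worker, tasks))  # order matches the input
print(results)
```

When the tasks share state, that clean `map` disappears and the agents have to talk to each other, which is exactly where the coordination cost showed up.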

I started to get a feel for when to go solo and when to go swarm.

Live Streaming the Journey

Something changed in how I shared my work during this period: I started live streaming. From Teams sessions, from my terminal, showing exactly what the AI was doing in real time.

The response was interesting. People weren't just curious about the output — they wanted to see the process. How did I phrase prompts? What happened when the AI went wrong? How did I recover? The messy, iterative, sometimes frustrating reality of AI-assisted development was more valuable to people than polished demos.

I realized I'd been accumulating a kind of tacit knowledge — the "feel" of working with AI — that couldn't be transmitted through posts or articles. It had to be shown.

Anthropic Wins

By the end of August, I wrote something definitive: "Anthropic is the big winner for me, and I'm seriously considering putting all my money in that direction because I can get everything on one platform."

Code (Claude Code). Chat (Claude). Research (the Console). MCP integration. It was all converging. Where I'd previously been spread across Bolt, Lovable, Cline, Copilot, and a half-dozen other tools, Anthropic was offering a unified ecosystem.

The middle-layer tools I'd wondered about earlier — how would they survive between Anthropic and OpenAI? — were getting their answer. The model providers were building the tools themselves.

The Shift in Vocabulary

I noticed something in my own language changing. In December 2024, I'd said "AI coding." In February, I adopted Karpathy's term: "vibe coding." By August, I was saying "agentic development."

The evolution wasn't just branding. Each term reflected a real shift in how the technology worked:

  • AI coding: AI generates code on request. Human assembles.
  • Vibe coding: Human describes vibes, AI implements. Fast feedback loop.
  • Agentic development: AI takes goals, plans approach, executes, tests, iterates. Human supervises.

I was at the beginning of that third phase. And everything was about to go quiet while I absorbed what it meant.


← Part 5: The Hackathon Era | Part 7: The Plateau →