December 2025 – January 2026
The year turned, and the vocabulary changed with it. I wasn't vibe coding anymore — I was doing agentic development. The distinction mattered more than it might sound.
The New Paradigm
Vibe coding was interactive. You described, the AI built, you adjusted. Like pair programming where one partner is impossibly fast and never gets tired. Fun, productive, but fundamentally limited by one constraint: you had to be in the loop for every decision.
Agentic development removed that constraint. The AI didn't just execute tasks — it planned them, tested them, evaluated the results, and iterated. Your role shifted from "co-driver" to "architect." You set the destination and the guardrails. The AI drove.
I marked this transition with a simple LinkedIn post: "Agentic Development in 2026." No explanation needed. If you'd been following along, you understood. If you hadn't, the post probably made no sense. That was fine.
.NET Aspire: The Agent-Ready Framework
One of the most impactful technical discoveries of this period was .NET Aspire. Not because it was new — Microsoft had been building it for a while — but because it turned out to be accidentally perfect for agentic development.
Aspire's killer feature for agents: it could self-diagnose. When something failed, the framework could tell the agent what went wrong, suggest fixes, restart services, and verify the repair. I posted enthusiastically about it: "If you haven't caught why Aspire is epic — the agent can find errors and fix them itself, restart services, all custom commands available through apphost.cs setup. Mind-blowing."
My advice was pointed: "Bet on frameworks from people who use agentic development — they're 10 steps ahead of everything else." David Fowler, one of Aspire's architects, was clearly building for a future where AI agents were first-class consumers of his framework. Following his work on Twitter became a daily habit.
AI-Powered Failure Analysis
One concrete implementation from this period that I was genuinely proud of: I added an AI-powered failure analysis step to my CI/CD pipelines.
The concept was simple. When a deployment pipeline failed, instead of just logging an error, the system would pass the failure context to an AI that would analyze the root cause, suggest fixes, and in some cases, apply them automatically and re-run the pipeline.
It sounds like a small thing. It wasn't. Traditional CI/CD failures require a human to read logs, understand the context, identify the issue, implement a fix, and trigger a re-run. This could take minutes or hours depending on complexity. The AI-powered step collapsed most of that to seconds.
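The shape of that if-failure step can be sketched in a few lines. This is an illustrative sketch only, not my actual pipeline code: `build_failure_prompt` and `analyze_failure` are hypothetical names, and the model call is injected as a plain callable so the real version could plug in the Anthropic API (or anything else) behind it.

```python
# Hypothetical sketch of an "if-failure" pipeline step.
# Function names and prompt wording are illustrative, not from
# any real CI/CD product or the author's actual pipeline.

def build_failure_prompt(job_name, exit_code, log_text, max_log_lines=50):
    """Keep only the tail of the log (where the error usually is)
    and wrap it in a root-cause analysis request."""
    tail = "\n".join(log_text.splitlines()[-max_log_lines:])
    return (
        f"The CI job '{job_name}' failed with exit code {exit_code}.\n"
        f"Log tail:\n{tail}\n\n"
        "Identify the root cause and suggest a concrete fix. "
        "If the fix is safe to apply automatically, say so."
    )

def analyze_failure(job_name, exit_code, log_text, ask_model):
    """ask_model is any callable that sends a prompt to an LLM and
    returns its text reply; stubbed here so the sketch is runnable."""
    prompt = build_failure_prompt(job_name, exit_code, log_text)
    return ask_model(prompt)

# Usage with a stubbed model call in place of a real API request:
fake_model = lambda prompt: "Root cause: missing env var DATABASE_URL."
report = analyze_failure(
    "deploy", 1,
    "step 1 ok\nstep 2 ok\nKeyError: 'DATABASE_URL'",
    fake_model,
)
```

The design choice that matters is trimming the log to its tail before sending it: failure context is long, model context is finite, and the error is almost always in the last few dozen lines.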
I announced it matter-of-factly: "🤖 AI-Powered Failure Analysis — I've added an if-failure step in my pipeline that sends failure data to Claude for analysis." The robot emoji was a rare concession to excitement.
End-to-End Tests: A New Love
Something I didn't expect: I became genuinely passionate about end-to-end testing.
In traditional development, e2e tests are the vegetables of software engineering. Everyone knows they're important. Nobody enjoys writing them. They're brittle, slow, and a pain to maintain.
With AI, writing e2e tests became almost fun. I could describe the user flow in natural language, let the AI generate the test code, run it, see what failed, iterate. The cycle that used to take hours took minutes. And because the AI could generate variations, I could cover more scenarios than I'd ever bother to write manually.
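To make that workflow concrete, here is a toy sketch of the shape an AI-generated flow test can take: the user flow as plain data (close to how you'd describe it in natural language), plus a tiny runner. Everything here is hypothetical; a real setup would drive a browser against the live app rather than this in-memory stand-in.

```python
# Illustrative only: a signup flow written as data, executed by a
# generic runner. FakeApp is a stand-in for the system under test;
# a real e2e test would target the actual app through a browser driver.

signup_flow = [
    ("fill", "email", "test@example.com"),
    ("fill", "password", "hunter2!"),
    ("click", "submit", None),
    ("expect_text", "banner", "Welcome"),
]

class FakeApp:
    """Minimal stand-in for the application under test."""
    def __init__(self):
        self.fields = {}
        self.banner = ""
    def fill(self, target, value):
        self.fields[target] = value
    def click(self, target):
        # Pretend signup succeeds when the email looks valid.
        if target == "submit" and "@" in self.fields.get("email", ""):
            self.banner = "Welcome"
    def text_of(self, target):
        return self.banner if target == "banner" else self.fields.get(target, "")

def run_flow(app, flow):
    """Execute each step; raise AssertionError on a failed expectation."""
    for action, target, value in flow:
        if action == "fill":
            app.fill(target, value)
        elif action == "click":
            app.click(target)
        elif action == "expect_text":
            assert value in app.text_of(target), f"{target!r} missing {value!r}"
    return True
```

Because the flow is data, generating variations is cheap: the AI (or a loop) can swap in malformed emails, empty passwords, or reordered steps and reuse the same runner, which is exactly why covering more scenarios stops feeling like a chore.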
I posted: "End2End tests — I've become insanely happy with how I can write e2e tests and run them with Aspire." Coming from a developer who'd traditionally viewed e2e tests as a necessary evil, this was a genuine mindset shift.
The Agentic Live Sessions
In December 2025, I started something new: biweekly "Agentic Live" sessions. Every other Friday, 8:15 to 9 AM, I'd open a video call and work on a project live, with anyone who wanted to watch and participate.
The format was simple and unpolished. No slides. No prepared demo. Just me, Claude Code, and whatever project I was working on that week. Sometimes it was a mobile app. Sometimes it was infrastructure. Sometimes things broke spectacularly live on camera.
The sessions served multiple purposes. They forced me to articulate my thinking in real time. They created accountability for making progress on actual projects. And they built a small community of people on the same journey — developers figuring out how to work with AI agents, sharing discoveries, and learning from each other's mistakes.
Building Tools for Building Tools
A meta-shift happened during this period: I started building tools for AI-assisted development, rather than just using them. A Claude Code plugin marketplace concept. Custom skills and commands. Frameworks for how agents should interact with codebases.
This was the moment the hobby became a vocation. I wasn't just a consumer of AI development tools anymore. I was thinking about how these tools should work, what was missing, what could be better. The perspective of a professional developer with 20+ years of experience, combined with 12 months of intensive AI coding, was producing ideas that felt genuinely new.
One idea I explored: what happens when skills and commands merge in Claude Code? Anthropic made this change, and I had opinions: "Personally, I think they're two different things and should remain as agentic primitives — but they must know something I don't. I wonder if it's about context management."
The willingness to publicly disagree with platform decisions, while remaining curious about the reasoning, was becoming part of my voice.
Gemini 3 and the Multi-Model World
When Google released Gemini 3, I experimented immediately — not with the flagship model, but with the cheaper variants. My thesis from the summer still held: rules and context mattered more than raw model intelligence. Gemini 3 produced interesting results with my existing prompt frameworks.
But I also noticed something new: the models were developing personality. Running the same prompts through different models produced subtly different outputs — not just in quality, but in style. Claude was thorough and careful. Gemini was creative and sometimes surprising. GPT was confident and sometimes wrong.
Learning to match models to tasks was becoming its own skill. Like choosing the right tool from a toolbox, except the tools kept getting upgraded every few weeks.
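In practice that matching can be as mundane as a lookup table. A minimal sketch, assuming hypothetical model ids and task labels (none of these names come from any real provider): the point is that the mapping lives in one place, so when the tools get upgraded every few weeks, only one dictionary changes.

```python
# Hedged sketch of model-to-task routing. Model ids and task labels
# are placeholders, not real provider names.

ROUTES = {
    "refactor":    "claude-thorough",  # careful, methodical edits
    "brainstorm":  "gemini-creative",  # divergent, sometimes surprising
    "boilerplate": "gpt-fast",         # confident and cheap; verify output
}

def pick_model(task, default="claude-thorough"):
    """Return the model id for a task type, falling back to a safe default."""
    return ROUTES.get(task, default)
```

Usage is one line at each call site (`pick_model("brainstorm")`), which keeps the routing decision out of the prompts themselves.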
The Wave Is Coming
I ended this phase with a post that I think captured the moment: "The wave is coming — are you ready? :)"
The smiley took the edge off, but the message was serious. Agentic development wasn't a curiosity or an experiment anymore. It was a working methodology that produced real results. And it was about to hit the mainstream.
