September – November 2025
"There's been a quiet period. Let me tell you why: I don't think anything new is happening."
That's how I broke the silence after nearly three months of reduced posting. Not because I'd stopped building. I was building more than ever. But the discovery phase was over. What replaced it was something less glamorous and more productive: applied routine.
When Revolution Becomes Routine
The transition from "this is amazing!" to "this is just how I work now" happened so gradually I almost missed it. One day I was excitedly posting about every new trick I'd discovered. The next, I was just... using the tools. The way you use a hammer. You don't post about every nail.
The posting gap from September to November wasn't writer's block. It was contentment. The tools worked. My workflow was established. The ADRs were accumulating. The context management was second nature. I was producing more software than ever before, with less friction and fewer surprises.
And I was bored of talking about it.
What Changed Under the Surface
While I was quiet on LinkedIn, several things were shifting:
AI coding went professional. I was no longer just using AI on hobby projects and prototypes. At Delegate (now Context&), I was actively incorporating these tools into client work. The dynamics were different — more constraints, more quality requirements, more stakeholders — but the fundamental approach was the same. Good prompts, good context, good iteration.
The team started noticing. Colleagues who'd been skeptical or curious were starting to have their own "magic moments" — that first time the AI produces something that makes you sit back and think "wait, it can do that?" I'd had mine in December. They were having theirs nine months later. The adoption curve was real.
Rules beat intelligence. My experiment with Gemini Flash — the cheap model that performed comparably to expensive ones when given enough rules — kept proving itself. Project after project, I found that investing in clear rules, thorough ADRs, and well-structured context mattered more than which model I was running. This was quietly revolutionary but not very postable.
The Cost of the Plateau
Plateaus feel like failure if you're addicted to progress. After months of constant discovery — new tools, new techniques, new possibilities — the plateau felt flat. Where was the next breakthrough? Where was the next dopamine hit?
I had to recalibrate what "progress" meant. Progress wasn't learning a new tool every week. Progress was shipping actual software. Progress was my colleagues starting to use AI effectively. Progress was the work getting done faster without the drama.
The plateau wasn't a stop. It was a launch pad.
What I Built (Quietly)
During these months, without posting about it:
- Deepened the ADR library across multiple projects
- Refined a personal set of rules for AI coding that worked across models
- Started using AI in actual Delegate consulting engagements
- Built internal tools that nobody saw but everyone used
- Figured out the line between "let AI handle it" and "this needs a human"
That last point deserves emphasis. After 10 months of pushing AI to do more and more, I'd developed a sense for its limits. Not the theoretical limits debated on Twitter, but the practical limits you only discover by pushing. The AI was excellent at translation (turning clear requirements into code), good at variation (generating alternatives), and poor at judgment (deciding what should be built).
The judgment part was — and still is — firmly human territory.
The Industry Catches Up
While I was heads-down building, the industry was catching up. AI coding went from "interesting experiment" to "strategic imperative" for many organizations. Enterprise adoption started in earnest. The conversations shifted from "should we use AI for coding?" to "how do we use AI for coding at scale?"
Articles and studies kept appearing, trying to quantify the productivity gain. Some said 10x. Some said barely any. I could see truth in both, and I wrote about it later: the gain depended entirely on what you measured. Lines of code? Huge gain. Shipped features? Moderate gain. Business outcomes? Still unclear.
The frontrunners — people like me who'd jumped in early — were operating in a different reality from the mainstream. We'd already absorbed the learning curve. We'd already found our workflows. We were productive. But our organizations hadn't yet figured out how to scale what we'd learned.
That would be 2026's challenge.
Breaking the Silence
When I finally posted again, it was about the next phase: agentic development. The plateau hadn't been a stop — it had been a transition. From "person using AI tools" to "person building AI-integrated systems."
The vocabulary shift from "vibe coding" to "agentic development" was more than semantic. Vibe coding was about me and the AI, working together interactively. Agentic development was about building systems where the AI could work independently, within guardrails I'd set up, on tasks I'd defined.
The plateau was where I learned to trust the AI enough to let go of the keyboard.