
I Built a Tool That Won't Last, and That's Amazing

[Illustration: fractured glass bottleneck with code-like strips flowing outward, symbolizing transient data.]

TL;DR: I built git-watchtower to monitor branches while running multiple Claude Code sessions. A week later, OpenAI shipped Codex with similar features built-in. This isn't a failure—it's evidence that the entire dev workflow is being reinvented in real-time. The bottlenecks have completely shifted. Build tools for today's problems, expect them to be obsolete in months, and embrace how fast this is moving.


I've been using Claude Code on the web to run multiple coding tasks in parallel. It's transformative—fire off five tasks, let them run, come back to completed work. But I kept losing time to a mundane problem: pulling updates from git and switching branches to check on what the AI had done.

So I had Claude build me a tool. Git-watchtower is a lightweight terminal UI that monitors git remotes, shows me when branches update, and lets me quickly switch to review changes. It took an afternoon to build. It solved a real problem.
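The core of a branch-watcher like this is simple: poll the remote, diff the branch tips against the last snapshot, and report what moved. Here's a minimal sketch of that loop in Python. This is illustrative, not git-watchtower's actual code; the function names are mine, and it shells out only to the standard `git ls-remote` command.

```python
import subprocess
import time

def parse_heads(ls_remote_output):
    """Parse `git ls-remote --heads` output ("<sha>\t<ref>" lines) into {branch: sha}."""
    heads = {}
    for line in ls_remote_output.strip().splitlines():
        sha, ref = line.split("\t")
        heads[ref.removeprefix("refs/heads/")] = sha
    return heads

def snapshot(remote="origin"):
    """Return {branch: sha} for every branch head on the remote right now."""
    out = subprocess.run(
        ["git", "ls-remote", "--heads", remote],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_heads(out)

def changed(old, new):
    """Branches that are new or whose tip commit moved since the last poll."""
    return sorted(b for b, sha in new.items() if old.get(b) != sha)

def watch(remote="origin", interval=30):
    """Poll the remote and print any branch whose tip changed."""
    prev = snapshot(remote)
    while True:
        time.sleep(interval)
        cur = snapshot(remote)
        for branch in changed(prev, cur):
            print(f"updated: {branch} -> {cur[branch][:8]}")
        prev = cur
```

A real tool adds a TUI and one-keystroke checkout on top, but the diff-the-snapshots loop is the whole trick.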

A week later, OpenAI released their Codex app—a "command center for agents" with built-in worktrees, parallel agent management, and in-app diff review. Everything git-watchtower does, plus more, integrated into the agent workflow itself.

My tool will probably be useful for a few more months. Then it won't be. And that's amazing.

The Old Bottleneck Is Gone

For decades, the constraint in software development was getting code written. We optimized relentlessly for it: faster typing, better IDE shortcuts, code snippets, generators, frameworks that reduced boilerplate. The entire developer tooling industry was built around the assumption that humans were the execution layer and execution was slow.

That bottleneck evaporated. AI agents can now produce code faster than we can review it. The question isn't "how do we write code faster?" anymore. It's "what do we do with infinite code production capacity?"

The answer is: we're figuring it out in real-time. And the tools are being reinvented just as fast.

The New Bottlenecks (For Now)

When I started running multiple Claude Code sessions, new constraints emerged immediately:

Review bandwidth. Five agents producing code means five branches to review. AI generates faster than I can verify. The bottleneck moved from production to verification.

Context fragmentation. Which branch was fixing the auth bug? What approach did the agent take on the refactor? Tracking parallel workstreams creates cognitive load that didn't exist when I was the one writing the code.

Specification quality. Garbage prompts produce garbage code. When execution is free, the leverage point moves upstream. How clearly can you articulate what you want?

Integration complexity. Parallel work creates merge conflicts, architectural drift, and inconsistent patterns. Coordination costs that were manageable with human-speed development become significant at agent-speed.

Architectural coherence. AI writes code that works locally but doesn't fit the system. Each agent optimizes for its task without seeing the whole picture. Maintaining architectural vision across many parallel changes is genuinely hard.

Here's the thing: these bottlenecks won't last either.

The Bottlenecks Are Temporary Too

Review bandwidth? AI-assisted code review is already emerging. LLMs can do first-pass review, flag issues, and surface what needs human attention. We're not fully there yet—current models are maybe 70% accurate at classifying code correctness—but the trajectory is clear. Review will be augmented, then largely automated.

Context fragmentation? That's exactly what OpenAI's Codex app addresses. Agents organized by project, worktrees so they don't conflict, diffs reviewable in-thread. The infrastructure for managing parallel agent work is being built right now.
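Worktrees are the key mechanism here: each agent gets its own checkout directory on its own branch, so five agents can edit files simultaneously without clobbering each other. A sketch of how you might wire that up yourself, assuming nothing about Codex's internals; `spawn_agent_checkout` and the `agent/` branch-naming convention are my own illustrative choices, built on the standard `git worktree add` command:

```python
import subprocess

def worktree_add_cmd(branch, path):
    """Build the `git worktree add` invocation that creates `branch` checked out at `path`."""
    return ["git", "worktree", "add", "-b", branch, path]

def spawn_agent_checkout(task, base_dir=".."):
    """Give one agent task an isolated checkout: branch agent/<task> at <base_dir>/wt-<task>."""
    branch = f"agent/{task}"
    path = f"{base_dir}/wt-{task}"
    subprocess.run(worktree_add_cmd(branch, path), check=True)
    return path

# Usage: spawn_agent_checkout("auth-fix") creates ../wt-auth-fix on branch agent/auth-fix,
# leaving your main working directory untouched while the agent works there.
```

Because every worktree shares one object store, this is cheap; cleanup is a `git worktree remove` per finished task.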

Specification quality? We'll build more curious agents—ones that ask clarifying questions when the spec is ambiguous, that probe for edge cases, that surface assumptions. The specification bottleneck will spawn agents designed to improve specifications.

Architectural coherence? Expect continuous architectural review agents. Background processes that scan for drift, flag inconsistencies, maintain patterns across parallel changes. If humans struggle to maintain architectural vision across many agents, we'll build agents to help.

The pattern is clear: each bottleneck spawns tools to address it. The tools emerge fast because building tools is now cheap. The tools become obsolete fast because the bottlenecks keep shifting.

Ephemeral Tooling as a Mindset

Git-watchtower solved a real problem for me. It will continue solving that problem for a few months. Then either the workflow will change enough that I don't need it, or better solutions will absorb its functionality, or the whole paradigm will shift to something I can't predict yet.

This is fine. Actually, it's better than fine.

Building the tool took an afternoon. The value it provides over those few months far exceeds the investment. The learning from building it—understanding the workflow gap, thinking through what matters—that persists even after the tool is obsolete.

We need a new mindset for developer tooling: ephemeral by design. Build tools to solve today's workflow problems. Expect them to become obsolete. Don't over-invest in polish or durability. The half-life of workflow tools is shrinking from years to months.

Stop building tools to last. Build tools to learn.

You're No Longer Coding. You're Orchestrating.

Something fundamental shifted. As one developer put it: "AI agents don't wait for instructions anymore. You're no longer coding. You're orchestrating."

The workflow used to be: think, write code, debug, ship. Now it's: specify, delegate, monitor, review, integrate. Every engineer became a manager of AI agents, whether they wanted to or not.

This changes what skills matter. The competition isn't "who can write code fastest" anymore. It's "who can coordinate multiple agents effectively." Who can write clear specifications. Who can review AI output quickly and accurately. Who can maintain architectural coherence across parallel workstreams.

The model isn't the differentiator—everyone has access to the same models. What matters is orchestration: how you combine models, tools, and workflows. How you manage the new bottlenecks while they're still bottlenecks, and how quickly you adapt when they shift.

What We're Still Figuring Out

Nearly two-thirds of organizations are experimenting with AI agents. Fewer than one in four have successfully scaled them to production. The gap between "this is cool" and "this is how we work" is 2026's central challenge.

We don't know yet what the durable version of this workflow looks like. The playbook isn't written. We're writing it as we go, building tools that won't last to solve problems that keep changing.

The Bottleneck Moved. It Will Move Again.

Git-watchtower exists because I hit a bottleneck in January 2026. OpenAI shipped a better solution in February. By March, the bottleneck might be somewhere else entirely.

This is the new normal. The tools we build today are experiments in workflow design. Some will be absorbed into platforms. Some will become obsolete when the workflow shifts. Some will teach us something useful even as they become irrelevant.

The engineers and companies who thrive won't be the ones with the best tools. They'll be the ones who redesign workflows rather than layer AI onto old ones. Who build fast, learn fast, and let go fast. Who treat every tool as temporary and every workflow as provisional.

I built a tool that won't last. I'll probably build another one next month. That's not a bug in how this era works. It's the feature.

Frequently Asked Questions

What are the new bottlenecks in AI-assisted development?
The bottleneck shifted from writing code to reviewing it, managing context across parallel workstreams, writing quality specifications, integrating parallel changes, and maintaining architectural coherence. These bottlenecks are themselves temporary—AI tools are already emerging to address each one.
How is the developer workflow changing with AI agents?
Development is shifting from single-threaded execution to multi-agent orchestration. You're no longer coding—you're supervising code production across multiple parallel workstreams. The workflow becomes: specify, delegate, monitor, review, integrate. Tools like OpenAI Codex and Claude Code on the web are building infrastructure for this new paradigm.
Should I build custom developer tools in the AI era?
Yes, but with a different mindset. Build tools to solve today's workflow problems, but expect them to become obsolete quickly. Building tools is now cheap enough that ephemeral tooling makes sense. The half-life of workflow tools is shrinking—stop building tools to last, build tools to learn.

Dan Rummel is the founder of Fibonacci Labs. He built git-watchtower in an afternoon, watched OpenAI ship something better a week later, and is genuinely excited about how fast this is all moving. The tool is open source if you want it while it lasts.

Want help rethinking your engineering workflows for the AI era?

Let's Talk →