Written by the Alpha Bits team
February 27, 2026 · ai-workflow-automation

What We Actually Learned Switching from Claude Code to Antigravity

We've changed our primary AI coding tool three times in 18 months. Each time, we thought we'd found "the one." Each time, something better appeared and we migrated again.

This isn't a review or a recommendation. It's a diary of what actually happened — the things that broke, the things that quietly improved, and the lessons we learned that will still apply when the next tool makes us switch again.

Our AI coding tool migration: Cursor → Claude Code → Antigravity, with Trae.ai running alongside for quick edits

Chapter 1: Cursor (October 2025 – December 2025)

Cursor was our first real AI-powered IDE. Before Cursor, we used GitHub Copilot's autocomplete inside VS Code — helpful, but limited to suggesting the next few lines.

Cursor felt different. You could highlight a block of code, describe what you wanted, and it would rewrite the block. You could ask it questions about your codebase. For someone who'd been managing teams of 20-25 engineers and was suddenly working solo, this was the first tool that made solo development feel genuinely viable.

What was good:

  • Inline code editing was fast and intuitive
  • The "chat with your codebase" feature saved hours of manual code reading
  • Tab completion felt eerily good — it predicted not just the next line, but the next intent

What broke:

  • Context window limits hit fast on large projects. Our ERP codebases would exceed the context, and Cursor would start hallucinating function signatures that didn't exist
  • The subscription kept changing. Pricing shifted, token limits were adjusted, features were gated
  • Multi-file refactors were risky. It could edit one file brilliantly but didn't understand the ripple effects across the project

The honest assessment: Cursor was a great introduction to what AI-assisted development could be. But it was a copilot, not a colleague. It helped you type faster, but it didn't help you think differently.

Chapter 2: Claude Code (December 2025 – February 2026)

Claude Code — specifically through a Z.AI subscription with the GLM 5 custom model — was a leap forward. Not incremental. A leap.

The difference wasn't the quality of individual suggestions. It was the depth of codebase understanding. Claude Code could hold entire architectures in context. You could ask it "refactor the authentication system to use JWT instead of sessions" and it would understand the implications across routes, middleware, database schemas, and frontend components — then make all the changes in one pass.
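To make the scope of that kind of refactor concrete, here is a minimal sketch of the stateless direction it moves in: claims signed into the token itself instead of looked up in a session store. This is an illustration, not our actual code — it hand-rolls HMAC signing with `node:crypto` purely to stay self-contained (a real system would use a vetted JWT library), and names like `signToken` and `verifyToken` are hypothetical.

```typescript
// Sketch: stateless JWT-style auth replacing server-side sessions.
// Hand-rolled HS256 via node:crypto for illustration only.
import { createHmac } from "node:crypto";

const SECRET = "demo-secret"; // would come from config in a real app

const b64url = (s: string) => Buffer.from(s).toString("base64url");

// Instead of storing a session record, encode the claims in the token itself.
function signToken(payload: { sub: string; exp: number }): string {
  const header = b64url(JSON.stringify({ alg: "HS256", typ: "JWT" }));
  const body = b64url(JSON.stringify(payload));
  const sig = createHmac("sha256", SECRET)
    .update(`${header}.${body}`)
    .digest("base64url");
  return `${header}.${body}.${sig}`;
}

// Middleware-style check: no session-store lookup, just signature + expiry.
function verifyToken(token: string): { sub: string } | null {
  const [header, body, sig] = token.split(".");
  const expected = createHmac("sha256", SECRET)
    .update(`${header}.${body}`)
    .digest("base64url");
  if (sig !== expected) return null;
  const payload = JSON.parse(Buffer.from(body, "base64url").toString());
  if (payload.exp < Date.now() / 1000) return null; // expired
  return { sub: payload.sub };
}

const token = signToken({ sub: "user-42", exp: Date.now() / 1000 + 3600 });
console.log(verifyToken(token)?.sub); // "user-42"
```

The point isn't the token logic itself — it's that the change touches every route and middleware that previously assumed a session store, which is exactly the ripple effect a tool needs whole-codebase context to handle.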

What was good:

  • Deep multi-file awareness. It genuinely understood how components connected
  • The GLM 5 model through Z.AI was exceptional at architectural reasoning — not just "write this function" but "here's how these three modules should interact and why"
  • Terminal integration meant you could go from question to implementation to deployment without switching tools
  • Cost: $30/month through Z.AI. A fraction of what Cursor cost for equivalent capability

What broke:

  • Speed. Claude Code was slow. Complex operations could take 30-60 seconds, and during that time you're just watching a spinner
  • Token consumption was unpredictable. Some days you'd burn through the allocation on a single complex refactor. Other days you'd barely touch it
  • The extension ecosystem was thin. Cursor had VS Code's entire extension library. Claude Code was more isolated
  • When it got something wrong at the architectural level, it got it confidently wrong. Debugging a bad Claude Code decision could take longer than doing the work manually

The real lesson: Claude Code taught us that the bottleneck in AI-assisted development isn't writing code — it's providing context. We started spending more time writing system prompts, maintaining documentation, and structuring our codebases for AI readability. This habit — what we now call context engineering — turned out to be more valuable than any tool.

We also learned to use multiple AI tools simultaneously. Claude Code for heavy architectural work. Trae.ai ($6/month) for quick edits. Warp.dev for terminal AI. The "one tool to rule them all" approach was replaced by a toolkit approach. Each tool had a role.

Chapter 3: Antigravity (February 2026 – Present)

Three weeks in. That's a long time in AI tooling years.

Antigravity is Google's agentic AI coding assistant, included with the Google AI Pro subscription ($20/month family plan). We were already paying for it — NotebookLM, AI Studio, and five family accounts — so Antigravity was effectively free on top of tools we were already using.

Before and after: the migration from Claude Code to Antigravity

What changed immediately:

  • Speed. Operations that took Claude Code 30-60 seconds complete in 5-15 seconds. This sounds minor. It's not. When you're iterating on a feature and making 20-30 AI interactions per hour, the cumulative time difference is enormous
  • Agentic behaviour. Antigravity doesn't just suggest code — it plans, implements, tests, and iterates. You describe a feature, and it creates an implementation plan, writes the code, runs the tests, and fixes the failures. The workflow changed from "write code with AI help" to "review and direct AI work"
  • Browser integration. Antigravity can take screenshots, interact with web pages, and visually verify that changes look right. We used to switch constantly between IDE and browser. Now the tool handles that loop
  • File management. It creates new files, manages imports, handles directory structures. Claude Code could do this, but Antigravity does it as part of a larger plan rather than individual file operations

What's different in practice:

  • Code reviews changed. We now spend more time reviewing AI-generated code than writing our own. This is a fundamental shift in the developer role — from writer to architect/reviewer
  • System prompts and project documentation became first-class development artefacts. The better our docs, the better Antigravity's output. Garbage in, garbage out — but the inverse is also true
  • Multi-step tasks that previously required back-and-forth ("okay now update the tests," "now update the types," "now fix the import") happen automatically in one pass
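As an illustration of what "first-class artefact" means in practice, a project-conventions doc that an agent reads before generating code might look something like this. The contents below are hypothetical, not our actual documentation:

```markdown
# Project conventions (read before generating code)

## Stack
- SvelteKit + TypeScript, Node 20

## Rules to follow
- All API handlers live under `src/routes/api/` and validate their input
- Never write raw SQL in route files; go through the query layer in `src/lib/db`
- Every new module gets a short README stating its single responsibility

## Testing
- Every bug fix ships with a regression test
```

The same file serves human onboarding and AI context, which is why maintaining it stopped feeling like overhead.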

What still breaks:

  • It's three weeks old for us. We haven't hit the edge cases yet. That's not confidence — that's inexperience with this specific tool
  • The Google ecosystem dependency is real. If Google AI Pro changes pricing or removes features, our primary coding tool changes with it
  • Context can get stale during long sessions. We've developed a habit of starting fresh conversations for new features rather than continuing long threads

What We Learned That Applies to Every Tool

Here's the non-obvious stuff. These lessons survived across all three migrations:

1. Your Documentation Is Your Real Tool

Every time we switched tools, our documentation transferred perfectly. System prompts, architecture docs, coding standards, project conventions — all of it worked immediately with the new tool.

The code we wrote with the previous tool also transferred perfectly. Because it was standard TypeScript, standard Svelte, standard Node.js.

The only thing that didn't transfer was muscle memory with the tool's UI. That took about a day to rebuild.

The lesson: Invest in documentation and standards, not tool mastery. The tool will change. The docs won't.

2. The "One Perfect Tool" Doesn't Exist

Our current stack uses four AI coding tools simultaneously:

| Tool | Role | Cost |
| --- | --- | --- |
| Antigravity | Primary IDE, agentic development, multi-file work | $20/mo (Google AI Pro) |
| Claude Code + Z.AI | Architectural reasoning, second opinion on complex decisions | $30/mo |
| Trae.ai | Quick edits, lightweight tasks | $6/mo |
| Warp.dev | Terminal AI, deployments, system operations | $18/mo |

This costs $74/month total. Each tool is better than the others at something specific. Using all four together produces better results than any single tool at any price point.

3. Speed Matters More Than Quality (Within a Threshold)

Controversial take: once AI output quality crosses a "good enough" threshold, speed becomes the dominant factor.

Claude Code produced slightly more elegant solutions than Antigravity on some tasks. But Antigravity's speed meant we could iterate 3-4x more often in the same time window. More iterations = more refinement = better final output.

Fast and good beats slow and great, because fast lets you keep improving.

4. The AI Changed How We Write Code, Permanently

Even if every AI tool disappeared tomorrow, our coding practices have permanently changed:

  • We write more documentation. Not because we became more disciplined — because we learned that documentation is input to AI tools. Better docs = better AI output. The incentive structure changed
  • We structure code for AI readability. Smaller modules. Clearer naming. More explicit type definitions. These changes also happen to make code better for human readers
  • We think in terms of orchestration, not implementation. The question shifted from "How do I implement this?" to "How do I describe what I want clearly enough that AI implements it correctly?"
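A hypothetical before/after shows the kind of restructuring we mean — the "after" version gives an AI (or a new hire) everything it needs from the file alone:

```typescript
// Before: implicit shapes a reader or an AI assistant has to guess at.
function calc(o: any): number {
  return o.items.reduce((s: number, i: any) => s + i.p * i.q, 0) * (1 - o.d);
}

// After: explicit types and names make intent recoverable without context.
interface LineItem {
  unitPrice: number;
  quantity: number;
}

interface Order {
  items: LineItem[];
  discountRate: number; // 0..1, applied to the whole order
}

function calculateOrderTotal(order: Order): number {
  const subtotal = order.items.reduce(
    (sum, item) => sum + item.unitPrice * item.quantity,
    0,
  );
  return subtotal * (1 - order.discountRate);
}

const order: Order = {
  items: [{ unitPrice: 10, quantity: 2 }, { unitPrice: 5, quantity: 1 }],
  discountRate: 0.1,
};
console.log(calculateOrderTotal(order)); // 22.5
```

Both versions compute the same thing; only the second one lets a tool safely extend or refactor it without asking what `p`, `q`, and `d` mean.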

5. Migration Is Cheap When Your Architecture Is Clean

Each migration took about a day. Not because the tools are similar (they're not), but because our codebase is modular and well-documented.

If your code is a monolith with tribal knowledge scattered across Slack messages, switching AI tools will be painful. If your code is well-structured with clear module boundaries and written documentation, the new tool picks it up immediately.

This is another argument for Data-First Principle Thinking — the investment in structured documentation pays dividends every time you upgrade your tools.

What's Next

We'll switch again. Probably within 6 months. Maybe within 3. The AI coding tool landscape is moving so fast that today's best tool is next quarter's baseline.

And that's fine. Each migration costs a day. The improvements we gain last months. The maths is overwhelmingly in favour of staying current.

The real question isn't "Which AI coding tool should I use?" It's "Is my codebase ready to benefit from any AI coding tool?" If your docs are strong, your architecture is modular, and your conventions are explicit — any tool in this category will make you dramatically faster.

If they're not, the most expensive subscription in the world won't help.


This post is part of our operator war stories series — honest accounts of the tools, migrations, and technical decisions behind Alpha Bits. Read more about our full AI coding stack or how we think about data.
