Written by Alpha Bits team
February 23, 2026

Alpha Bits Web Stack Update: AI Agents and the Road Ahead

Back in September 2025, we published Building Alpha Bits Website with AI-Powered Workflows, a post that walked through the stack powering this very website: Svelte 5, Directus, and Trae IDE with Model Context Protocol (MCP) integrations. The post struck a chord; readers kept asking for a follow-up. Here it is.

Alpha Bits website development in February 2026

Six months later the site still runs on the same open-source DNA, but everything around it has evolved. This update is written for both technical decision-makers who want the architecture details and non-technical leaders who want to understand what AI-assisted development actually looks like in production.

What Stayed the Same

The foundation we documented in September is still here:

  • Svelte 5 + SvelteKit for the frontend—compiled reactivity, server-side rendering, zero virtual-DOM overhead.
  • Open-source everything, visible at github.com/AlphaBitsCode/studio-os.

These choices have aged well. Our Lighthouse scores remain consistently high and the site deploys in under a minute.

What Changed

1. TailwindCSS v4 — Design System Upgrade

We migrated from hand-rolled CSS to TailwindCSS v4 with the @tailwindcss/typography and @tailwindcss/forms plugins. Combined with bits-ui (a Svelte-native component library built on Radix primitives), this gives us a design system that is both consistent and easy for AI agents to work with—utility classes are predictable tokens that language models handle exceptionally well.

2. Goodbye Directus, Hello Static Markdown + Turso

This is probably the most significant architectural decision we made: we dropped Directus entirely.

In September, Directus was our headless CMS—self-hosted, managing all content through its admin interface. It worked, but it was also another service to maintain, another attack surface to monitor, and another dependency that could go down.

We replaced it with two things:

For blog content: static .md files checked directly into Git. Every article you read on this site, including this one, is a markdown file living in our content/blog/ directory, version-controlled alongside our source code. SvelteKit reads these files at build time using remark for markdown processing. The attack surface for content is close to nil: there is no CMS admin panel to breach, no database to inject, no API endpoint to exploit. Tampering with our articles would require write access to our GitHub repository, which is protected by branch rules, code review, and two-factor authentication.
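The build-time loading pattern can be sketched as a small loader. This is a minimal illustration, not our actual pipeline: we use remark for the markdown itself, and the frontmatter format and file layout shown here are assumptions.

```typescript
// Minimal sketch: load blog posts from content/blog/ at build time.
// The real pipeline uses remark; this only illustrates the data flow.
// The "key: value" frontmatter format here is an illustrative assumption.
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

interface Post {
  slug: string;
  title: string;
  body: string; // raw markdown; remark would turn this into HTML
}

// Parse a "key: value" frontmatter block delimited by "---" lines.
function parsePost(slug: string, raw: string): Post {
  const match = raw.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  const meta: Record<string, string> = {};
  let body = raw;
  if (match) {
    for (const line of match[1].split("\n")) {
      const idx = line.indexOf(":");
      if (idx > 0) meta[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
    }
    body = match[2];
  }
  return { slug, title: meta.title ?? slug, body };
}

// Called from a +page.server.ts load() function at build time.
function loadPosts(dir = "content/blog"): Post[] {
  return readdirSync(dir)
    .filter((f) => f.endsWith(".md"))
    .map((f) => parsePost(f.replace(/\.md$/, ""), readFileSync(join(dir, f), "utf8")));
}
```

Because the files are read at build time, a content change is just a commit and a redeploy; there is no runtime fetch to fail.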

For transactional data: Turso, a distributed SQLite database at the edge, accessed through @libsql/client and managed with Drizzle ORM. This handles everything that actually needs a database:

  • AI chat session logs — every conversation with our AI Receptionist is stored and made queryable.
  • Newsletter subscriptions — collected via forms on the /news page and processed through our API.
  • Lead capture data — company name, industry, and contact details gathered during live AI conversations.

Turso gives us the relational guarantees of SQL with edge-level performance, zero cold starts, and a generous free tier. Drizzle provides type-safe queries that integrate directly with our SvelteKit server functions. The schema is version-controlled, and schema changes are applied with a single drizzle-kit push command.
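For readers unfamiliar with the Turso + Drizzle pairing, a schema-and-client setup looks roughly like the fragment below. The table and column names are our guesses for illustration, not the actual schema.

```typescript
// Sketch of a Drizzle schema and client for Turso (libSQL).
// Table and column names are illustrative, not our production schema.
import { createClient } from "@libsql/client";
import { drizzle } from "drizzle-orm/libsql";
import { sqliteTable, text, integer } from "drizzle-orm/sqlite-core";

export const chatSessions = sqliteTable("chat_sessions", {
  id: integer("id").primaryKey({ autoIncrement: true }),
  persona: text("persona").notNull(),       // e.g. "tim" | "clara" | "brian" | "kamala"
  transcript: text("transcript").notNull(), // JSON-encoded message log
  createdAt: integer("created_at", { mode: "timestamp" }).notNull(),
});

const client = createClient({
  url: process.env.TURSO_DATABASE_URL!,
  authToken: process.env.TURSO_AUTH_TOKEN,
});
export const db = drizzle(client);

// A type-safe insert from a SvelteKit server function would look like:
// await db.insert(chatSessions).values({
//   persona: "clara", transcript: "[]", createdAt: new Date(),
// });
```

Because the table definitions live in TypeScript, the same file drives both the migrations and the query types.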

The net result: we eliminated an entire self-hosted service from our infrastructure, reduced our attack surface, and actually improved the content authoring experience—our team now writes in markdown in their preferred editor, commits via Git, and the site updates on deploy.

3. Groq-Powered AI Chat — The Receptionist Service

The most user-facing change is our AI Receptionist service. Visitors can now engage in a real-time, 5-minute chat session with one of four distinct AI personas—Tim, Clara, Brian, or Kamala—each designed with a unique communication style and role.

Under the hood:

  • Groq SDK provides LLM inference with sub-second response times, fast enough that chat feels like talking to a real person.
  • Server-side API routes in SvelteKit (/api/chat) handle prompt assembly, context injection, and response streaming.
  • Persona engineering — each agent has a detailed system prompt covering personality, objectives, knowledge boundaries, and guardrails.
  • Covert lead capture — the AI is trained to naturally gather company name, industry, and contact information through conversation rather than forms.
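The prompt-assembly step above reduces to a small pure function. The persona fields and message shape here are assumptions for illustration; the actual system prompts and guardrails are considerably longer.

```typescript
// Sketch: assemble the message array sent to the LLM for one chat turn.
// Field names are illustrative; real personas carry much richer prompts.
interface Persona {
  name: string;
  systemPrompt: string; // personality, objectives, knowledge boundaries, guardrails
}
interface Turn {
  role: "user" | "assistant";
  content: string;
}

function assembleMessages(persona: Persona, history: Turn[], userInput: string) {
  return [
    { role: "system" as const, content: persona.systemPrompt },
    ...history,
    { role: "user" as const, content: userInput },
  ];
}

// A server route like /api/chat would pass this array to the Groq SDK
// and stream the completion back to the browser.
```

Keeping assembly pure makes it trivial to unit-test persona behavior without hitting the inference API.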

The result is a service we now offer to clients as a fully managed, white-label product backed by a CRM dashboard.

4. Resend — Transactional Email

We plugged in Resend for transactional email delivery. Newsletter confirmation emails, lead notification emails to our sales team, and system alerts all flow through Resend's API. Setting it up took about 15 minutes—it's one of those rare developer tools that just works.
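The integration amounts to building a payload and POSTing it to Resend's REST endpoint. The sender address and copy below are placeholders, not our real values; only the endpoint shape comes from Resend's public API.

```typescript
// Sketch: send a newsletter confirmation via Resend's REST API.
// Sender address and email copy are placeholders.
interface EmailPayload {
  from: string;
  to: string[];
  subject: string;
  html: string;
}

function confirmationEmail(to: string): EmailPayload {
  return {
    from: "Alpha Bits <hello@example.com>", // placeholder sender
    to: [to],
    subject: "Confirm your subscription",
    html: "<p>Click the link below to confirm your subscription.</p>",
  };
}

async function send(payload: EmailPayload, apiKey: string) {
  const res = await fetch("https://api.resend.com/emails", {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`Resend error: ${res.status}`);
  return res.json();
}
```

Separating payload construction from delivery keeps the email content unit-testable without network access.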

5. Playwright — Automated End-to-End Testing

Shipping AI features without automated tests is like deploying on a Friday afternoon. We added Playwright for browser-level end-to-end testing alongside Vitest for unit and component tests. Our smoke test suite runs before every deployment, and we've recently begun writing browser-level component tests with vitest-browser-svelte.
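A representative smoke test looks like the spec below. The route paths and selectors are illustrative guesses about our markup, not the actual suite; it runs under the Playwright test runner against a dev or preview server.

```typescript
// Representative Playwright smoke test; selectors and routes are illustrative.
import { test, expect } from "@playwright/test";

test("homepage renders and chat widget mounts", async ({ page }) => {
  await page.goto("/");
  await expect(page).toHaveTitle(/Alpha Bits/);
  // The chat launcher selector is an assumption about our markup.
  await expect(page.locator("[data-testid='chat-launcher']")).toBeVisible();
});

test("newsletter form accepts an email", async ({ page }) => {
  await page.goto("/news");
  await page.fill("input[type='email']", "smoke@example.com");
  await page.click("button[type='submit']");
  await expect(page.getByText(/check your inbox/i)).toBeVisible();
});
```

A handful of tests like these catch the embarrassing failures (blank page, broken form) before any human review starts.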

6. Netlify Adapter — Serverless Deployment

We swapped our deployment target to Netlify via @sveltejs/adapter-netlify. This gives us serverless rendering for dynamic routes (like the chat API), automatic CDN distribution for static assets, and a deployment pipeline triggered by a single git push. The annual hosting cost for this level of infrastructure? Effectively zero for our usage tier.
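The adapter swap is a one-file change in the SvelteKit config. This is the standard shape from the SvelteKit docs rather than our exact file:

```javascript
// svelte.config.js — Netlify adapter setup (standard shape from the SvelteKit docs).
import adapter from '@sveltejs/adapter-netlify';
import { vitePreprocess } from '@sveltejs/vite-plugin-svelte';

export default {
  preprocess: vitePreprocess(),
  kit: {
    // edge: true would deploy routes as Netlify Edge Functions;
    // standard serverless functions are the default.
    adapter: adapter({ edge: false }),
  },
};
```

Dynamic routes like /api/chat compile to serverless functions automatically; everything prerenderable ships to the CDN.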

The Real Game Changer: Agentic Development

In September we talked about MCP integrations as "game changers." They still are—but the game itself has changed.

From MCP to Antigravity: Our 3-Week Transformation

Our original workflow used Trae IDE with MCP servers for Directus (our CMS at the time) and Shadcn UI components. That was already a leap forward: the AI could query content and suggest components. But it was still prompt-response. We typed an instruction, the AI executed it, we reviewed.

For the better part of 2025, we ran Claude Code as our primary agentic coding tool, powered by GLM 4.7 and later GLM 5. We invested heavily in this setup—building a library of custom skills and pulling in community-maintained ones from skills.sh. Skills are reusable instruction sets that teach the AI agent how to perform specific tasks within your codebase: deployment procedures, testing workflows, component creation patterns, and more. Claude Code with a well-curated skill library was genuinely productive, and it's what we used to ship the majority of our HomeLab series and early feature work.

Then, three weeks ago in early February 2026, we switched to Antigravity, Google's agentic coding assistant, running on the Google Ultra plan with access to Claude Opus 4.6 and Gemini Pro 3.1. The improvement was immediate and substantial.

The combination of these two frontier models gives us complementary strengths: Gemini Pro 3.1 excels at rapid iteration, codebase-wide refactoring, and tasks where speed matters; Claude Opus 4.6 brings exceptional precision for nuanced architecture decisions, complex prompt engineering, and code that requires careful reasoning. We switch between them depending on the task at hand—often multiple times per day.

Here is what the agent actually does that we couldn't do before:

  • Multi-step execution — Antigravity doesn't just answer questions. It plans, researches the codebase, writes code across multiple files, runs tests, and iterates on failures—all in a single workflow.
  • Tool integration — It has native access to the terminal, file system, browser automation, and MCP servers (including Netlify for deployment and Stitch for UI prototyping).
  • Context retention — Knowledge items from past conversations carry forward. After three weeks of daily use, the agent now understands our architecture, coding patterns, design system, and even our preferred writing tone without re-explanation.
  • Browser verification — The agent launches a browser, navigates to our running dev server, and visually verifies that changes render correctly before reporting back.

In just three weeks, the list of features we shipped using this workflow is frankly longer than what we would typically accomplish in a quarter: the entire AI Receptionist service, the white-label CRM dashboard, newsletter subscription flows, blog navigation, email integrations, persona engineering for four AI agents, lead capture logic, Playwright test suites, and—yes—even this blog post.

What This Looks Like in Practice

Here's a real scenario from this week: we needed to add "Previous Article" and "Next Article" navigation buttons to our blog posts, with the next article title truncated and styled correctly. In the old workflow, a developer would manually edit the server loader, the Svelte component, and the CSS. With Antigravity, we described the requirement in natural language, and the agent:

  1. Read the existing +page.server.ts to understand the data model
  2. Modified the server loader to fetch adjacent posts
  3. Updated the Svelte template with navigation buttons
  4. Applied consistent styling using our existing TailwindCSS tokens
  5. Verified rendering in the browser

Total human effort: one prompt and a code review. Total clock time: under 10 minutes.
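The core of what the agent wrote in step 2 reduces to a small pure function over the date-sorted post list, plus the title truncation from the requirement. Field names here are assumptions for illustration.

```typescript
// Sketch: find previous/next articles for blog navigation.
// Assumes posts are sorted newest-first; field names are illustrative.
interface PostMeta {
  slug: string;
  title: string;
}

function adjacentPosts(posts: PostMeta[], currentSlug: string) {
  const i = posts.findIndex((p) => p.slug === currentSlug);
  if (i === -1) return { prev: null, next: null };
  return {
    prev: posts[i + 1] ?? null, // older article
    next: posts[i - 1] ?? null, // newer article
  };
}

// Truncate the next article's title for the button label.
function truncate(title: string, max = 40): string {
  return title.length <= max ? title : title.slice(0, max - 1).trimEnd() + "…";
}
```

The server loader calls adjacentPosts once per request; the component only ever sees the two (possibly null) neighbors.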

The Human Role Has Shifted

This is the key insight for decision-makers: AI agents don't replace developers—they change what developers do. Our engineers now spend the majority of their time on:

  • Architecture decisions — which database, which API pattern, which deployment strategy
  • Product direction — what features to build, how they should feel, what the business impact is
  • Code review — the AI writes the first draft, the human ensures it meets quality and security standards
  • Prompt engineering — crafting the system prompts and guardrails for our AI agents (both the Receptionist service and the development agents)

The low-value glue work—wiring up routes, writing boilerplate, debugging CSS inconsistencies—is increasingly handled by AI.

What's Coming Next

We're actively working on several additions for 2026:

  • Paraglide i18n — multi-language support is already in our package.json and will roll out for Vietnamese and English content soon.
  • Cal.com Integration — we've begun integrating @calcom/atoms for self-serve meeting booking directly on the site.
  • Expanded AI Team — additional AI personas tailored for specific industries (F&B, education, eCommerce) based on our consulting engagements.
  • Advanced Analytics — moving beyond basic Lighthouse scores to real user monitoring and conversion tracking.

For Decision-Makers: Why This Matters

If you are a CTO, VP of Engineering, or operations leader evaluating your own web stack, here are the takeaways:

  1. The cost of a modern, AI-augmented web presence is approaching zero. Our hosting, database, email, and deployment infrastructure runs on free or near-free tiers with open-source tooling.
  2. AI agents are production-ready for web development. This isn't a demo—it's how we actually build and ship code every day.
  3. The speed advantage is real and compounding. Features that took days now take hours. The more context the AI accumulates about your codebase, the faster subsequent work becomes.
  4. Human expertise is more important, not less. You still need engineers who understand architecture, security, and performance. The AI amplifies their output; it doesn't eliminate the need for it.

Our repository remains open at github.com/AlphaBitsCode/studio-os. Star it if you'd like to follow along as we continue building in the open.

Questions about our stack or the AI Receptionist service? Start a conversation with our AI team — they're available 24/7.
