Read the Vietnamese version here.
**What is it?** Konductor is an AI Orchestration Workflow: a Markdown-first memory and coordination system for AI coding agents that lives entirely within your local repository.

**Who is it for?** Developers, indie-hackers, and teams who use AI coding agents (like Gemini, Claude, Cursor, Trae) for real, multi-session development workflows.

**Why use it?** AI agents suffer from "context amnesia." Konductor provides them with a persistent, self-updating memory of your project's rules, architecture, and state, so you never have to re-onboard your AI from scratch every session.
We didn't set out to build a framework.
We set out to stop repeating ourselves. After managing teams of 20+ engineers and now running lean AI-powered development across a fleet of production systems, the single biggest drain on our productivity wasn't the code — it was rebuilding context every time we switched models, started a new session, or handed off to a different agent.
So we fixed it. The result is konductor-workflow — a Markdown-first workflow standard for AI coding agents that's now running quietly underneath more than a dozen of our live projects.
The Problem: AI Has No Memory
Every AI coding session starts the same way. You open a chat, the AI generates something reasonable, you correct it, it learns your patterns — and then the context fills up, the session ends, or you switch models. Next session, you start over.
Multiply that by a team, multiply it by a complex ERP, multiply it by weeks of iterating on a payment gateway integration, and the cost becomes real. We were spending more time re-orienting AI agents than actually shipping.
The standard advice — write better prompts, use longer system instructions — doesn't scale. It just shifts the problem.
What Konductor Actually Is
Konductor is not a tool. It's a set of durable Markdown files that live inside your codebase, structured so that any AI agent can orient itself in seconds.
The core files are:
- `KONDUCTOR.md` — a compact contract at the repo root. Repo role, stack, mission, rules. Always tagged on every prompt.
- `docs/CHECK_IN.md` — short-term working memory. Current task, WIP, what's next.
- `.konductor/memory/KONDUCTOR_MEMORY.md` — long-term constraints and anti-patterns the AI has learned.
- `.konductor/memory/KONDUCTOR_VISION_ROADMAP.md` — the why behind the project.
- `.konductor/memory/KONDUCTOR_ADR_HISTORY.md` — critical architectural decisions, logged as embedded ADRs.
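As a sketch of what the root contract can look like, here is a minimal `KONDUCTOR.md` for a hypothetical project (the project, stack, and rules below are invented for illustration, not taken from a real Konductor install):

```markdown
# KONDUCTOR — Project Contract

- Role: backend API for a multi-tenant order-management system
- Stack: Node.js + PostgreSQL
- Mission: reliable order processing; correctness over speed
- Rules:
  - Never edit applied DB migrations; always add a new one
  - All money values are integer cents, never floats
  - Check docs/CHECK_IN.md before starting any task
```

The point is density: a new agent can read this in one glance and know what it is working on and which constraints are non-negotiable.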
The workflow is a 9-step metacognitive loop; in short: load context → clarify and plan → check in before executing → execute → reflect → persist memory → loop. The AI commits what it learns back into these files so the next session starts smarter than the last.
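For example, a mid-task entry in `docs/CHECK_IN.md` might look like the following (the task and dates are invented for illustration):

```markdown
## Check-in — 2025-06-01

- Current task: add webhook retry logic to the payment service
- WIP: retry queue implemented; backoff tuning pending
- Next: integration test against the sandbox gateway
- Blockers: none
```

Because the agent reads this file at the start of the loop and rewrites it at the end, the next session (or the next model) picks up exactly where this one left off.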
No cloud. No subscriptions. No vendor lock-in. Plain Markdown inside your Git repository.
Token Usage & Context Optimization
By applying token-reduction techniques, we force the AI to write and read documentation with filler text stripped out. This greatly reduces prompt bloat and keeps the AI's attention entirely on the technical requirements.
Instead of a bloated traditional AI output, where the agent consumes ~45 tokens to explain three basic architectural facts, the Konductor pattern enforces a "caveman" communication style (e.g., `- UI: Next.js + React`, `- State: Zustand`). This consumes a fraction of the tokens, leaving plenty of room in the context window for actual code generation.
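An illustrative (not canonical) side-by-side of the same facts in both styles — the third fact, styling, is invented to round out the example:

```markdown
<!-- Verbose prose (~45 tokens) -->
The user interface is built with Next.js on top of React, global
application state is managed via the Zustand library, and styling
is handled with Tailwind CSS utility classes.

<!-- Konductor "caveman" style (~12 tokens) -->
- UI: Next.js + React
- State: Zustand
- Style: Tailwind CSS
```

Same information, a fraction of the tokens, and the bullet form is also easier for the next agent to parse and extend.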
Battle-Tested Across Real Projects
This isn't a proof-of-concept we built for a blog post. Over the past year, Konductor has been the coordination layer on:
- eCommerce ERPs — multi-tenant inventory management, order processing pipelines, fulfilment tracking
- Payment gateway integrations — where agent handoffs between planning and execution sessions happen daily
- Delivery and taxi booking platforms — systems with complex state machines that AI agents need to deeply understand before touching
- POS systems for F&B chains — where architectural decisions made months ago still need to be respected by new agents today
- Internal tooling and client projects — spanning greenfield builds and legacy codebases
Across all of these, the pattern held: when agents had access to the Konductor memory files, they produced better code, fewer regressions, and required less correction. When they didn't, they drifted.
Why We Open-Sourced It
We've been running this internally since 2020 — evolving it as AI tools evolved around us. It started as a shared set of guidelines across IDEs and grew as we adopted Claude, Gemini, and other coding agents at scale.
We open-sourced it for one reason: the problem it solves is universal, and the solution doesn't benefit from being proprietary.
The Claude Code architecture leak in April 2026 crystallised something we'd already built around — transparency in how AI agents reason and remember is not optional. You should be able to inspect exactly what your agent knows, exactly what rules it's following, and exactly what decisions it has made. With Konductor, that's all readable Markdown checked into your repository.
```
.konductor/
├── KONDUCTOR_VERSION.json
└── memory/
    ├── KONDUCTOR_ADR_HISTORY.md
    ├── KONDUCTOR_MEMORY.md
    └── KONDUCTOR_VISION_ROADMAP.md
```
No black boxes. No synced cloud state you don't control.
Who This Is For
Konductor is for developers, indie builders, and small teams who are already using AI coding agents seriously — and who are tired of rebuilding context from scratch every session.
It works best if you:
- Use AI agents in a real development workflow (not just for one-off completions)
- Work across multiple sessions, models, or team members
- Have a codebase with established architectural patterns you want the AI to respect
- Care about reproducibility and don't want AI to hallucinate your own decisions back at you
It is not magic. It requires discipline to maintain the documentation. But that discipline is just good engineering — writing down what you decided and why — and the AI helps you do it.
Get Started
Let the AI agent in your favourite IDE do it by copying and pasting this prompt:

```
Install this package: npx konductor-workflow@latest
then review this codebase and update/compact all project docs
to match progress, strictly following the Konductor workflow
```

Or do it manually yourself:

```
npx konductor-workflow
```
After installing, initialise the framework inside your repo by following the setup in `KONDUCTOR_WORKFLOW.md`. Tag `@KONDUCTOR.md` at the start of every prompt.
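A typical session opener then looks something like this (the wording is illustrative, not a required incantation):

```
@KONDUCTOR.md @docs/CHECK_IN.md
Continue the current task. Load context, confirm the plan with me
before executing, and update CHECK_IN.md when you are done.
```

Tagging the contract file on every prompt is what keeps each new session anchored to the same rules and state.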
The full package, blueprints, and workflow documentation are on npmjs.com/package/konductor-workflow.
Acknowledgments & References
We believe in giving credit where it is due. The core concepts that made this workflow framework possible include:
- Hyperagent & Darwin Gödel Machine (DGM March 2026): Built on concepts for autonomous self-improvement architecture and metacognitive looping to guide our reasoning workflows. (arXiv:2603.19461)
- Architecture Decision Records (ADR 2026): The methodology we use to capture our causal memory and long-term architectural choices. (adr.github.io)
- "Caveman" Token Reduction Technique (Apr 2026): A critical influence on our workflow structures to dramatically reduce token bloat without sacrificing necessary context. (JuliusBrussee/caveman)
- Second Brains "Conductor" Node (March 2024): The conceptual predecessor to this framework. Originally implemented in Node-RED, the "conductor" served as a critical orchestration node for routing flows and managing agents across our entire enterprise architecture. (AlphaBitsCode/second.brains)
Alpha Bits is a technology company based in Vietnam. We build AI-powered systems for businesses and share what we learn in the open. If you want to see Konductor in action at one of our upcoming workshops in Da Nang, register your interest here.