When people hear "20 AI agents," they imagine chaos. Twenty chatbots arguing with each other. In reality, it is the opposite. The structure is what makes it work. Here is the actual mechanics — no hand-waving, no marketing.
The Sprint System
Every project runs on one-week sprints, Monday through Sunday. All five projects share the same cadence. This is not an accident — synchronization reduces context-switching overhead. When Monday comes, every project starts fresh. As the week closes, every project is pushing toward the same sprint boundary.
KABSAT, the scrum master agent, owns the ceremonies. Sprint planning on Monday. Daily standups via the /startup command. Sprint review and retrospective at close. Boris — the governance agent — oversees all of it, but KABSAT runs the actual rituals.
Each project has its own tracker: a markdown table with stories, acceptance criteria, status, and a cross-project dependency column. That dependency column is the secret weapon. When the AHA website needs a component that depends on the EUN dashboard schema, it shows up in both trackers.
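A tracker row might look like this (an illustrative sketch — the story IDs and exact column names are assumptions, not the actual trackers):

```markdown
| Story  | Description             | Acceptance Criteria          | Status      | Depends On               |
|--------|-------------------------|------------------------------|-------------|--------------------------|
| AHA-12 | Dashboard card component| Renders live EUN metrics     | In Progress | EUN-07 (dashboard schema)|
| EUN-07 | Dashboard data schema   | Versioned, documented fields | Done        | —                        |
```

Because AHA-12 names EUN-07 in its dependency column, the same linkage appears from both sides when each tracker is reviewed.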
Parallel Execution Groups
The twenty agents are organized into five groups that mirror a real engineering organization:
Group 1 — Foundation Layer: Datar (platform engineer) and Likoud (backend architect) handle infrastructure. Database schemas, API design, deployment pipelines.
Group 2 — Data & Intelligence: Nagan (data labeling), Roupa (feature engineering), and Tawa (context window architecture) prepare the data layer. What goes into prompts, how context is structured, what gets retrieved when.
Group 3 — AI Application: Wen (prompt engineering) and Sangou (frontend) build what the user sees and what the AI says. Prompts are engineered, not improvised. UI is designed, not hacked.
Group 4 — ML Pipeline & Delivery: Ramitaan converges everything into deployable form. Model serving, pipeline orchestration, the bridge between development and production.
Group 5 — Production Intelligence: Bassit (monitoring), Kitaen (observability), Nakem (ethics), Linteg (compliance), Kwarta (cost), and Ulep (FinOps) keep things running, legal, and affordable.
The critical design choice: Groups 1, 2, and 3 run in parallel. They do not wait for each other. Group 4 converges their outputs. Group 5 runs continuously. This mirrors how modern engineering orgs actually work — parallel streams feeding into integration points.
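The fan-out/fan-in shape can be sketched in a few lines. This is not the actual orchestration code — the function names and return values are invented — but it shows the structural idea: Groups 1–3 run concurrently, and Group 4 converges their outputs at an integration point.

```python
# Illustrative sketch of the parallel-groups pattern (hypothetical names).
from concurrent.futures import ThreadPoolExecutor

def foundation():   return "schemas+APIs"       # Group 1: infrastructure
def data_layer():   return "features+context"   # Group 2: data & intelligence
def application():  return "prompts+UI"         # Group 3: AI application

def converge(outputs):                          # Group 4: integration point
    return " | ".join(outputs)

with ThreadPoolExecutor() as pool:
    # Groups 1-3 are submitted together; none waits for another.
    futures = [pool.submit(f) for f in (foundation, data_layer, application)]
    release = converge([f.result() for f in futures])

print(release)  # schemas+APIs | features+context | prompts+UI
```

Group 5 has no place in this sketch precisely because it is not a stage: it runs continuously alongside every stream rather than sitting in the pipeline.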
Governance That Actually Works
Boris is not a cheerleader. Boris's job is to say no. To flag scope creep. To surface stale backlog items that have been sitting for two sprints without movement. To ensure that plans are followed as approved, and that deviations — even Andrea's own deviations — require team buy-in.
Knnam measures but does not enforce. Friction logs, DORA metrics (deployment frequency, lead time, change failure rate, recovery time), bottleneck analysis. The data goes to Boris for action. This separation — measurement from enforcement — prevents the measurer from becoming the bottleneck.
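The four DORA metrics are straightforward to compute once deployments are logged. A minimal sketch, assuming a hypothetical list of deployment records (field names and timestamps are invented for illustration):

```python
# Hedged sketch: the four DORA metrics over a one-week window.
from datetime import datetime, timedelta

deploys = [
    {"committed": datetime(2024, 1, 1, 9), "deployed": datetime(2024, 1, 2, 9), "failed": False},
    {"committed": datetime(2024, 1, 3, 9), "deployed": datetime(2024, 1, 5, 9), "failed": True,
     "restored": datetime(2024, 1, 5, 12)},
    {"committed": datetime(2024, 1, 6, 9), "deployed": datetime(2024, 1, 7, 9), "failed": False},
]

window_days = 7
# Deployment frequency: deploys per day over the window.
deployment_frequency = len(deploys) / window_days
# Lead time for changes: mean commit-to-deploy duration.
lead_time = sum((d["deployed"] - d["committed"] for d in deploys), timedelta()) / len(deploys)
# Change failure rate: fraction of deploys that failed in production.
failures = [d for d in deploys if d["failed"]]
change_failure_rate = len(failures) / len(deploys)
# Time to restore: mean deploy-to-restore duration for failed deploys.
time_to_restore = sum((d["restored"] - d["deployed"] for d in failures), timedelta()) / len(failures)

print(deployment_frequency, lead_time, change_failure_rate, time_to_restore)
```

The computation is the easy part; the organizational point is that Knnam produces these numbers and Boris acts on them.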
Real Numbers
Andrea averages about six hours per working session. In that time, she touches all five projects — not equally every day, but across a sprint, every project gets attention proportional to its priority.
Sprint velocity is tracked per project, not per agent. This is deliberate. Agents are tools; projects are outcomes. A project that ships three stories in a sprint is performing differently than one that ships eight, and the response is about project-level adjustment, not agent-level blame.
Cycle time — the duration from "in progress" to "done" — is the metric that matters most. Not story points. Not lines of code. How long does it take for a decision to become a deployed feature? That is what governance exists to shorten.
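Cycle time reduces to a subtraction per story plus an aggregate. A minimal sketch, assuming invented story IDs and timestamps (the median is used here because it resists outlier stories; the actual aggregation choice is an assumption):

```python
# Illustrative: cycle time = "in progress" timestamp to "done" timestamp.
from datetime import datetime, timedelta
from statistics import median

stories = [
    ("AHA-12", datetime(2024, 1, 1, 10), datetime(2024, 1, 3, 10)),  # 48h
    ("AHA-14", datetime(2024, 1, 2, 9),  datetime(2024, 1, 2, 17)),  # 8h
    ("AHA-15", datetime(2024, 1, 4, 8),  datetime(2024, 1, 9, 8)),   # 120h
]

cycle_times = [done - started for _, started, done in stories]
median_cycle = median(cycle_times)
print(median_cycle)  # 2 days, 0:00:00
```

Watching this one number per project, sprint over sprint, is a direct readout of whether governance is shortening the path from decision to deployed feature.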
The Human in the Loop
None of this works without Andrea. The AI agents do not self-direct. They do not decide what to build next. They do not resolve priority conflicts between projects. Andrea is the product owner for everything. The agents amplify her capacity — they do not replace her judgment. That distinction matters. AI that runs without human oversight is not an organization. It is a hallucination with a deployment pipeline.