A smart agent is not a workflow. A team is.
The promise of "multi-agent coding" usually arrives in a demo: three AI agents running in parallel, each knocking out a task. What the demo leaves out is the part that makes it work in real projects. An engineer who commits code without tests produces a mess. A team of six engineers with no dispatcher produces six conflicting branches. Without an explicit workflow, more agents means more chaos, not more throughput. The hard part of multi-agent coding is not running multiple agents; it is the coordination layer.
Real teams, human or agent, have role specialization and handoff protocols. Engineers write code. QA validates. A dispatcher assigns work and tracks who has what. Someone decides when to ship. The roles are not decorative: they enforce a sequence (claim, implement, test, review, merge) that catches the failure modes of acting without accountability. An agentic workflow needs the same structure, translated into primitives agents can actually use.
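The claim → implement → test → review → merge sequence is, at heart, a tiny state machine: each stage may only advance to the next, never skip ahead. A minimal Python sketch of that idea (illustrative only, not initech internals):

```python
# Illustrative sketch: the handoff sequence as a linear state machine.
# Refusing to skip stages is the accountability the roles enforce.
SEQUENCE = ["claim", "implement", "test", "review", "merge"]

def advance(stage: str) -> str:
    """Return the stage that must come next; never skips a step."""
    i = SEQUENCE.index(stage)
    if i == len(SEQUENCE) - 1:
        raise ValueError("already merged; nothing left to advance")
    return SEQUENCE[i + 1]

assert advance("claim") == "implement"
assert advance("test") == "review"
```

The point is not the ten lines of code; it is that "review" is unreachable without "test" having happened first, which is exactly the property a team structure buys you.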
initech is not an AI agent. It is the runtime in which a team of agents operates. Four pieces matter:
- **Roles** (`initech.yaml`). Each role preloads a system prompt, workflow expectations, and a boundary (engineer writes code; QA runs tests; shipper cuts releases). See the roles docs.
- **Messaging** (`initech send`). Messages are synchronous with a delivery guarantee, so the coordinator knows the target received the task.
- **A bead tracker** (the `bd` CLI). Agents claim beads, comment plans, post DONE status with a commit hash, and update status through the lifecycle: open → in_progress → ready_for_qa → qa_passed → closed.
- **Observation** (`initech peek` and `initech patrol`). Any participant can check state without interrupting anyone.

Here is what a normal task looks like as it moves through a four-role team (super, eng1, qa1, shipper):
```
# 1. super dispatches a bead to eng1
$ initech send eng1 "ini-87: add retry logic to payments API. bd show ini-87 for AC."

# 2. eng1 claims, plans, implements, tests, commits, pushes
$ bd update ini-87 --status in_progress --assignee eng1
$ bd comments add ini-87 --author eng1 "PLAN: ..."
# ...writes code, runs make test, commits, pushes...
$ bd comments add ini-87 --author eng1 "DONE: commit abc123"
$ bd update ini-87 --status ready_for_qa

# 3. eng1 reports back; super routes to qa1
$ initech send super "[from eng1] ini-87: ready for QA. commit abc123"
$ initech send qa1 "ini-87: validate. commit abc123"

# 4. qa1 runs the acceptance criteria, posts PASS or FAIL
$ bd comments add ini-87 --author qa1 "PASS: all AC verified"
$ bd update ini-87 --status qa_passed

# 5. shipper bundles qa-passed beads and cuts a release
$ initech send shipper "ship v2.4.0 with ini-87 included"
```
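The dispatcher's job in that transcript reduces to one routing rule: DONE goes to QA, PASS goes to the shipper, FAIL goes back to the engineer. A hypothetical Python sketch of that rule (role names are illustrative, taken from the transcript above):

```python
# Hypothetical sketch of the coordinator's routing rule, not initech code.
# Given a status report, name the role that should act next.
def route(report: str) -> str:
    if report.startswith("DONE"):
        return "qa1"      # validate against the acceptance criteria
    if report.startswith("PASS"):
        return "shipper"  # bundle into the next release
    if report.startswith("FAIL"):
        return "eng1"     # back to the engineer with the failure notes
    return "super"        # anything else needs a dispatcher decision

assert route("DONE: commit abc123") == "qa1"
assert route("PASS: all AC verified") == "shipper"
```

Making this rule explicit is what keeps a six-agent team from degenerating into six agents each deciding for themselves what happens next.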
Nothing in that sequence is specific to any AI provider. The agents inside the panes can be Claude Code, Codex, or OpenCode; the workflow is the same. The runtime is what makes the workflow legible to the coordinator and to future-you looking at the bead log.
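One way to see the provider-independence: the coordinator only needs "send a task, get a report" from each pane. A sketch of that interface in Python, with hypothetical class names standing in for whatever CLI actually runs inside each pane:

```python
# Sketch of provider-agnostic dispatch. Class names are hypothetical;
# the real agents are whatever tool runs inside the pane.
from typing import Protocol

class Agent(Protocol):
    def handle(self, task: str) -> str: ...

class ClaudeCodePane:
    def handle(self, task: str) -> str:
        return f"DONE: {task} (via claude)"

class CodexPane:
    def handle(self, task: str) -> str:
        return f"DONE: {task} (via codex)"

def dispatch(agent: Agent, task: str) -> str:
    # The coordinator never branches on which provider sits in the pane.
    return agent.handle(task)

assert dispatch(ClaudeCodePane(), "ini-87").startswith("DONE")
assert dispatch(CodexPane(), "ini-87").startswith("DONE")
```

Swapping providers changes the class behind the interface and nothing in the workflow.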
A workflow like this catches the failure modes named earlier: untested code landing on main, parallel engineers colliding on the same task, and work that stalls because no one is accountable for the next step.
You do not need six agents to benefit. Two (one engineer plus one QA) already force the structure: write code, run tests in a separate pane, and let the test runner tell you whether to ship. Add a dispatcher role the moment you have two engineers working in parallel. Add a shipper the moment releases become a step worth automating. The runtime scales down as well as up. See the getting started guide for a minimal two-agent setup.
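The scaling rule in the paragraph above fits in a few lines. A toy Python sketch (thresholds mirror the text; role names are illustrative):

```python
# Toy sketch of the role-scaling rule, not initech behavior:
# QA from day one, a dispatcher once work is parallel, a shipper
# once releases are a step worth owning.
def roles_for(engineers: int, automated_releases: bool) -> list[str]:
    roles = ["eng"] * engineers + ["qa"]
    if engineers >= 2:
        roles.append("super")      # dispatcher for parallel work
    if automated_releases:
        roles.append("shipper")    # someone owns the release step
    return roles

assert roles_for(1, False) == ["eng", "qa"]
assert "super" in roles_for(2, False)
```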
macOS via Homebrew:
```
$ brew install nmelo/tap/initech
```
Or curl:
```
$ curl -fsSL https://initech.sh/install.sh | sh
```
Then run `initech init && initech`.