March 28, 2026
We built a Go TUI that manages multiple Claude Code agents running in parallel. It replaced tmux as our session runtime.
Give each agent a terminal running Claude Code with role-specific instructions. A supervisor agent coordinates. Engineer agents implement. A QA agent validates. A shipper agent releases. Each agent gets a CLAUDE.md that encodes its identity, constraints, workflow, and communication protocol.
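To make the template idea concrete, here is a minimal sketch of what a role-specific CLAUDE.md might look like. The headings, the `bd claim` command, and the specific rules are invented for illustration; the real templates encode far more than this.

```markdown
# Role: Engineer (eng1)

## Identity
You implement beads assigned by super. You do not self-assign work
and you do not release software.

## Workflow
1. Claim the bead: bd claim <bead-id>
2. Comment a PLAN on the bead before writing code.
3. Implement, test, push.
4. Mark ready: bd update <bead-id> --status ready_for_qa

## Communication
Report completion to super. Never message QA directly.
```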
We ran this across four projects before building initech. It shipped real software: parallel engineering, independent QA, release gating, context that survived session resets.
tmux breaks in three ways that get worse as you add agents.
tmux send-keys has no delivery guarantee. You send a message to an agent's pane and hope it arrives. When it doesn't, there's no error. A completion report from eng to super drops, super doesn't know eng finished, QA never gets dispatched, and the bead stalls for an hour.
A hung agent and a productive one look identical in tmux. Finished and mid-work look identical. The only way to know is to peek each pane. With 9 agents, that's 9 manual checks every 10-15 minutes.
tmux doesn't know what work exists, who's assigned to what, or that an agent just marked a task as ready for QA. All orchestration lives in the supervisor's context window. Context compacts. Messages get lost. The supervisor agent forgets eng finished. The dispatch chain stalls.
tmux is a general-purpose multiplexer. It has no concept of agent state.
Each session listens on a Unix domain socket. initech send writes text to an agent's PTY through the emulator, then confirms delivery. The sender gets an explicit OK or error within seconds.
```
$ initech send eng1 "fix the auth bug in middleware.go"
$ initech peek eng1 -n 20
```
We tried JSONL session log tailing first. Unreliable: Claude writes to JSONL at conversation boundaries, not on a predictable cadence. During a 45-second thinking pause, zero entries appear. Any idle timeout you pick is wrong for some pause length.
What works: track when the PTY last produced output. Claude Code's spinner animates at 10-30 fps during thinking. The only state with zero output is idle-at-prompt. A 2-second recency threshold gives clean binary detection.
Green dot means working. Gray means idle. Yellow means idle with tasks waiting. The overlay shows every agent's state without opening any pane.
JSONL failed for activity detection but works for semantic events. The session logs contain tool use results, assistant messages, and error sequences. The event system parses these for:
* Bead completion: agent ran bd update --status ready_for_qa
* Stalling: no output for 10+ minutes with a bead assigned
* Error loops: 3+ consecutive tool failures
* Bead claims: auto-detected from bd commands, no extra CLI call needed
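The event parser's core is pattern-matching on structured log lines rather than terminal bytes. A sketch under assumptions: the JSON field names here ("type", "command") are invented for illustration, since the real Claude Code session-log schema is richer; the bd patterns come from the list above.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// logLine holds the subset of a session-log entry we care about.
// Field names are illustrative, not the actual log schema.
type logLine struct {
	Type    string `json:"type"`
	Command string `json:"command"`
}

// classify inspects one JSONL line for a semantic event.
func classify(raw string) string {
	var l logLine
	if err := json.Unmarshal([]byte(raw), &l); err != nil {
		return "" // not JSON, not an event
	}
	switch {
	case strings.Contains(l.Command, "bd update") &&
		strings.Contains(l.Command, "ready_for_qa"):
		return "bead_ready_for_qa" // completion: dispatch QA
	case strings.HasPrefix(l.Command, "bd "):
		return "bead_command" // claims auto-detected, no extra CLI call
	}
	return ""
}

func main() {
	line := `{"type":"tool_use","command":"bd update ini-bhk.3 --status ready_for_qa"}`
	fmt.Println(classify(line)) // → bead_ready_for_qa
}
```

Stalling and error-loop detection compose the same way: the tracker supplies "no output for N minutes", the parser supplies "3+ consecutive tool failures", and the event system raises a toast.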
Events show as toast notifications. "eng1 completed ini-bhk.3" appears in green before super even reports it.
Each agent's ribbon shows its current bead. The overlay shows every agent's state at a glance. When an agent goes idle after holding a bead, the TUI tells the supervisor agent directly: "[from initech] eng1 is now idle (bead: ini-bhk.3). Check if work is complete."
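The notify has to fire on the working-to-idle transition, once, not on every poll while the agent sits idle. A minimal sketch of that edge detection; the struct is invented, but the injected message wording mirrors the post.

```go
package main

import "fmt"

// notifier fires a supervisor message exactly once when an agent
// holding a bead transitions from working to idle.
type notifier struct {
	wasWorking map[string]bool
}

// OnTick is called each poll with the agent's current state. It
// returns a message (and true) only on the working→idle edge.
func (n *notifier) OnTick(agent, bead string, working bool) (string, bool) {
	prev := n.wasWorking[agent]
	n.wasWorking[agent] = working
	if prev && !working && bead != "" {
		msg := fmt.Sprintf(
			"[from initech] %s is now idle (bead: %s). Check if work is complete.",
			agent, bead)
		return msg, true
	}
	return "", false
}

func main() {
	n := &notifier{wasWorking: map[string]bool{}}
	n.OnTick("eng1", "ini-bhk.3", true) // spinner active, nothing fires
	if msg, fire := n.OnTick("eng1", "ini-bhk.3", false); fire {
		fmt.Println(msg) // fires once on the edge
	}
}
```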
Every number above was produced by the tool itself. initech managed the agents that built initech.
The TUI is the runtime. The templates are the intelligence. A bad CLAUDE.md produces bad output regardless of how good the TUI is. We rewrote all 11 templates three times, each time codifying lessons from actual failures: engineer agents skipping PLAN comments, QA agents rubber-stamping, the supervisor agent doing work instead of dispatching.
It took three failed approaches before PTY byte recency worked. JSONL tailing: too slow. Terminal output rate: SIGWINCH false positives. Prompt detection: fragile to UI changes. The spinner approach only works because Claude Code always produces output while working; a static "thinking..." display would break it.
The supervisor agent's number one failure: doing implementation instead of dispatching. Dispatching feels slow. But the whole point of multi-agent is specialization. "Not using agents" is now the first critical failure mode in the supervisor template.
Every agent eventually forgets to comment PLAN before coding, or push before marking ready_for_qa, or clear its bead display. Auto-notify exists because agents forget to report completion. Guardrails help but don't eliminate this. Process compliance is a gradient, not a binary.
Session portability is unstarted. Moving a session between machines (MacBook to workbench) requires manual rsync. The PRD describes an initech migrate command that doesn't exist yet.
Resource management is behind a flag. Auto-suspend/resume under memory pressure (the feature that would double effective agent capacity on a 36GB laptop) is implemented but gated behind --auto-suspend because the policy hasn't been tested enough in real sessions.
Onboarding is rough. A new user who runs initech for the first time sees 7 panes with no guidance. Status bar tips cycle at the bottom, but there's no operator guide documenting the full workflow. The tool was built by its own user, and it shows.
Early adoption. initech has been used across multiple projects now, but the user base is still small. More real-world usage will surface gaps the dogfooding hasn't caught.
```
curl -fsSL https://initech.sh/install.sh | bash
mkdir myproject && cd myproject
initech init
initech
```
Source: github.com/nmelo/initech