March 30, 2026
initech now supports cross-machine coordination. Run initech serve on any machine — a workstation, a remote GPU box, a cloud VM — and the local TUI streams all its agent panes live. One terminal. Every agent. Any machine.
A typical setup: MacBook running the TUI, workbench doing the heavy lifting. Before this feature, you needed a separate terminal window per machine, had no unified view, and fell back to manual SSH to coordinate anything across them. Agents on the workbench were invisible to the supervisor on the MacBook unless you opened a separate shell.
The supervisor should see and message every agent regardless of which machine it runs on.
The remote machine runs initech serve — a headless daemon that launches agents without a TUI and listens on TCP for incoming connections. The local TUI connects, and a yamux session multiplexes everything over a single TCP connection: one stream per remote pane for live PTY output, plus a control stream for IPC (send, peek, status).
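yamux carries all of this over one socket by tagging every frame with a stream ID, so pane output and control traffic interleave without blocking each other. A toy Python sketch of the demultiplexing idea (this is not initech's or yamux's actual wire format, which also carries version, type, and flag fields):

```python
import struct

# Toy frame: 4-byte stream ID, 4-byte payload length, then payload.
# Shows only the core idea: many streams interleaved on one connection.

def encode_frame(stream_id: int, payload: bytes) -> bytes:
    return struct.pack(">II", stream_id, len(payload)) + payload

def demux(data: bytes) -> dict[int, bytes]:
    """Split one byte stream back into per-stream buffers."""
    streams: dict[int, bytes] = {}
    offset = 0
    while offset < len(data):
        stream_id, length = struct.unpack_from(">II", data, offset)
        offset += 8
        streams[stream_id] = streams.get(stream_id, b"") + data[offset:offset + length]
        offset += length
    return streams

# Stream 0 as the control stream, stream 1 as a pane's PTY output:
wire = (encode_frame(1, b"pane1: build ok\n")
        + encode_frame(0, b'{"cmd":"status"}')
        + encode_frame(1, b"pane1: tests pass\n"))
print(demux(wire))
```

The payoff is that a chatty pane can't starve the control stream: frames from different streams alternate on the wire, and each side reassembles them independently.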
No SSH tunneling. No port forwarding. Direct TCP, token-authenticated.
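The post doesn't spell out the handshake, but token auth on a direct TCP connection typically means the client presents the shared secret first and the server compares it in constant time before any multiplexing starts. A hypothetical sketch (authenticate is my name, not initech's):

```python
import hmac
import socket

SHARED_TOKEN = b"your-shared-secret"  # same value in both machines' initech.yaml

def authenticate(conn: socket.socket, expected: bytes = SHARED_TOKEN) -> bool:
    """Read a newline-terminated token; compare in constant time."""
    buf = b""
    while not buf.endswith(b"\n"):
        chunk = conn.recv(64)
        if not chunk:  # peer hung up before sending a full token
            return False
        buf += chunk
    # hmac.compare_digest avoids leaking the token via timing differences.
    return hmac.compare_digest(buf.rstrip(b"\n"), expected)

# Demo over a socketpair standing in for the TCP connection:
client, server = socket.socketpair()
client.sendall(b"your-shared-secret\n")
print(authenticate(server))  # True
```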
Add mode: headless, a peer_name, a listen address, and a shared token to the remote machine's initech.yaml:
project: myproject
root: /home/user/myproject
mode: headless
peer_name: workbench
listen: ":7391"
token: "your-shared-secret"
roles:
- eng1
- eng2
- eng3
Then start the daemon:
$ initech serve
That's it. The daemon launches all configured agents and waits for connections.
Add a remotes: block to the local initech.yaml pointing at the remote machine:
project: myproject
root: /Users/you/myproject
token: "your-shared-secret"
roles:
- super
- pm
- qa1
remotes:
workbench:
addr: "192.168.1.100:7391"
Launch the TUI normally:
$ initech
The TUI connects to each configured remote, discovers its agents, and renders them alongside local panes in the grid. Remote panes are indistinguishable from local ones except for the host prefix in the overlay.
All IPC commands accept host:agent syntax:
$ initech send workbench:eng1 "start the API refactor"
$ initech peek workbench:eng2 -n 30
$ initech status
initech status shows a HOST column when remotes are present. initech peers lists every connected machine and its agents:
$ initech peers
PEER        AGENTS
workbench   eng1 eng2 eng3
initech doctor validates connectivity to each configured remote before you start a session.
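At its core, a doctor-style connectivity check is a TCP dial with a timeout. A minimal sketch of that probe, not initech's implementation:

```python
import socket

def probe(addr: str, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to "host:port" succeeds."""
    host, _, port = addr.rpartition(":")
    try:
        with socket.create_connection((host, int(port)), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

# Example against a throwaway local listener:
srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # port 0 lets the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
print(probe(f"127.0.0.1:{port}"))  # True
srv.close()
```

The real command presumably also verifies the token handshake, but reachability alone already catches the common failures: wrong address, firewall, daemon not running.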
If the connection to a remote drops — network blip, machine sleep, daemon restart — the TUI reconnects automatically when the peer comes back. Remote panes show a disconnected state during the gap. No manual intervention needed.
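The post doesn't describe the retry policy, but the usual shape for this kind of automatic reconnection is exponential backoff with a cap, roughly:

```python
import time

def reconnect(connect, base: float = 0.5, cap: float = 30.0, max_tries: int = 10):
    """Retry connect() with exponential backoff until it succeeds."""
    delay = base
    for attempt in range(1, max_tries + 1):
        try:
            return connect()
        except ConnectionError:
            if attempt == max_tries:
                raise  # give up; caller keeps the pane in disconnected state
            time.sleep(delay)
            delay = min(delay * 2, cap)  # 0.5s, 1s, 2s, ... capped at 30s

# Fake connect that fails twice, then succeeds:
calls = {"n": 0}
def fake_connect():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("peer down")
    return "session"

print(reconnect(fake_connect, base=0.01))  # session
print(calls["n"])  # 3
```

Capping the delay matters for the machine-sleep case: the peer may be gone for hours, and you want the TUI to notice within seconds of it coming back, not on the next hour-long backoff tick.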
Right now: run compute-heavy agents (large context, long tool chains) on a workbench with more RAM and CPU, while the supervisor and QA agents run on the laptop. The TUI on the laptop sees everything.
Longer term: a fleet of dedicated agent machines, each running initech serve with a role specialization, all visible and addressable from one terminal. The session lives on the coordinator machine. Agents live wherever makes sense.
curl -fsSL https://initech.sh/install.sh | bash

# On the remote machine:
# Add mode/peer_name/listen/token to initech.yaml, then:
initech serve

# On the local machine:
# Add remotes: block to initech.yaml, then:
initech
Source and operator guide: github.com/nmelo/initech