1. Install
Requirement: Python >= 3.11.
python -m venv .venv
source .venv/bin/activate
pip install -e .[dev]
bash -n examples/demo-agent-repo/scripts/run_agent_task.sh
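Before creating the venv, you can confirm the interpreter meets the Python >= 3.11 requirement with a one-liner. This is a convenience check, not part of the install itself:

```shell
# Verify the Python >= 3.11 requirement before installing.
python3 -c 'import sys; print("Python OK" if sys.version_info >= (3, 11) else "Python too old")'
```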
2. Run a smoke test
Verify the CLI against the example projects registry before starting the multi-terminal flow.
agent-hub version
agent-hub --projects-file examples/agent-driven-projects.example.json list-projects
agent-hub --projects-file examples/agent-driven-projects.example.json list-project-task-templates demo-codex
3. Start the web surface
Use the example projects registry that models repo-agent pairs.
python -m agent_hub --projects-file examples/agent-driven-projects.example.json serve --port 8080
http://127.0.0.1:8080/
http://127.0.0.1:8080/app
http://127.0.0.1:8080/dashboard
4. Start the dispatcher
python -m agent_hub --projects-file examples/agent-driven-projects.example.json dispatch
5. Open your assistant terminal
Open Codex or Claude Code in another terminal, then ask it to use the board for you.
- Create a Codex task in demo-codex to investigate why the local build script is flaky and summarize the likely root cause.
- Create a Claude task in demo-claude to review the proposed fix and call out operator-facing risks.
- Queue the review-then-implement pipeline in demo-codex for "Add a dry-run mode to the deployment helper".
If you want to evaluate the board manually first, you can run:
python -m agent_hub --projects-file examples/agent-driven-projects.example.json run-task-template demo-codex delegate-task --input "Investigate why the local build script is flaky"
python -m agent_hub --projects-file examples/agent-driven-projects.example.json dashboard
Under the hood, the assistant will call commands like:
python -m agent_hub --projects-file examples/agent-driven-projects.example.json run-task-template demo-codex delegate-task --input "Investigate why the local build script is flaky and summarize the likely root cause"
python -m agent_hub --projects-file examples/agent-driven-projects.example.json run-task-template demo-claude delegate-task --input "Review the proposed fix and call out any operator-facing risks"
python -m agent_hub --projects-file examples/agent-driven-projects.example.json run-pipeline demo-codex review-then-implement --input "Add a dry-run mode to the deployment helper"
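If you want to queue several delegate-task inputs at once, a small loop can generate the same commands. This sketch only prints them so you can review before running; the task texts are the ones from this walkthrough, both aimed at demo-codex here for simplicity:

```shell
# Dry-run sketch: print (rather than run) one delegate-task command per input line.
PROJECTS_FILE=examples/agent-driven-projects.example.json
while IFS= read -r input; do
  echo "python -m agent_hub --projects-file $PROJECTS_FILE run-task-template demo-codex delegate-task --input \"$input\""
done <<'EOF'
Investigate why the local build script is flaky and summarize the likely root cause
Review the proposed fix and call out any operator-facing risks
EOF
```

Drop the echo (and the escaped quotes) once the printed commands look right.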
This demo uses wrapper scripts under examples/demo-agent-repo/scripts/ as a stand-in for real tools like Claude Code, Codex, Kimi Code, or Qwen Code.
6. Ask the assistant to inspect handoffs
You can also ask the assistant to inspect the board and report back.
- Show me the human inbox and explain which task needs manual routing.
- If anything failed, tell me whether I should retry it or mark it for manual review.
Under the hood, those checks map to commands like:
TASK_ID=$(python -m agent_hub --projects-file examples/agent-driven-projects.example.json create-task "Manual review: choose agent" --kind noop | python - <<'PY'
import json,sys
print(json.load(sys.stdin)["id"])
PY
)
python -m agent_hub --projects-file examples/agent-driven-projects.example.json mark-needs-human "$TASK_ID" --note "operator must decide whether Codex or Claude should own this task"
python -m agent_hub --projects-file examples/agent-driven-projects.example.json add-task-label "$TASK_ID" routing
python -m agent_hub --projects-file examples/agent-driven-projects.example.json add-task-note "$TASK_ID" "ambiguous ownership; human should choose executor"
python -m agent_hub --projects-file examples/agent-driven-projects.example.json list-human-inbox
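The heredoc above, which pulls the task id out of create-task's JSON output, can be factored into a small reusable helper. The JSON shape (an object with an "id" field) is the same one the heredoc already assumes; the sample payload below is invented for illustration:

```shell
# Helper: read JSON on stdin and print its "id" field.
extract_id() {
  python3 -c 'import json, sys; print(json.load(sys.stdin)["id"])'
}

# Example with a hand-written payload (not real agent-hub output):
echo '{"id": "task-123", "status": "queued"}' | extract_id
# prints: task-123
```

In the flow above you would pipe the create-task output into extract_id instead of the inline heredoc.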
7. Ask the assistant for saved views
Once the board has useful slices, the assistant can use saved queries to answer questions like:
- Show only Codex-owned tasks.
- Show only tasks that need manual review.
The underlying commands look like:
python -m agent_hub --projects-file examples/agent-driven-projects.example.json create-saved-query tasks "Needs Human" --filter status=needs_human
python -m agent_hub --projects-file examples/agent-driven-projects.example.json create-saved-query tasks "Codex Tasks" --filter project_id=demo-codex
python -m agent_hub --projects-file examples/agent-driven-projects.example.json list-saved-queries
python -m agent_hub --projects-file examples/agent-driven-projects.example.json apply-saved-query <query-id>
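To see what filters like status=needs_human or project_id=demo-codex select, here is a standalone illustration over a hand-written task list. The field names mirror the filter keys above; the data itself is invented:

```shell
python3 - <<'PY'
import json

# Invented sample tasks; field names mirror the saved-query filters above.
tasks = [
    {"id": "t1", "project_id": "demo-codex", "status": "queued"},
    {"id": "t2", "project_id": "demo-codex", "status": "needs_human"},
    {"id": "t3", "project_id": "demo-claude", "status": "done"},
]
print([t["id"] for t in tasks if t["status"] == "needs_human"])     # ['t2']
print([t["id"] for t in tasks if t["project_id"] == "demo-codex"])  # ['t1', 't2']
PY
```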
8. Confirm the board view
At any point, the assistant or operator can inspect the board state directly:
python -m agent_hub --projects-file examples/agent-driven-projects.example.json dashboard
curl -s http://127.0.0.1:8080/dashboard | sed -n '1,80p'
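A quick shell-side check for pending handoffs is to grep the dashboard text for needs_human. The exact dashboard output format is an assumption here, so the sketch runs against a canned two-line sample rather than a live server:

```shell
# Count needs_human lines; against a live server, replace printf with:
#   curl -s http://127.0.0.1:8080/dashboard
printf 'task t1 queued\ntask t2 needs_human\n' | grep -c needs_human
# prints: 1
```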
Expected results
- Multiple assistant-task records move through one queue.
- Different repo-agent pairs show up as distinct project_ids.
- A pipeline shows serial assistant work.
- The human inbox shows ambiguous or manual-review cases.
- Saved queries slice the board by assistant or handoff state.
More importantly, the normal operating mode should feel clear: you talk to Codex or Claude Code, the assistant submits work into agent-hub, and the board becomes the shared visibility layer for many code-assistant tasks.