Scenario 1 — Current System
Cascaded text relay
Representative cascaded orchestration (text-dominant; no shared structure assumed). Full NLU inference at every hop. Each agent receives the prior agent's free-text output and must re-parse it. Sequential by necessity. Handoff overhead at every step. No shared kernel.
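A minimal sketch of the cascaded relay, assuming each agent is a text-to-text function (a stand-in for an LLM call): every hop relays the full prior text, and each agent must re-infer structure from it. The agent names and pipeline helper here are hypothetical, for illustration only.

```python
from typing import Callable, List

# Hypothetical stand-in: an "agent" is any text -> text function that
# must re-parse (full NLU) whatever the previous agent emitted.
Agent = Callable[[str], str]

def cascaded_relay(agents: List[Agent], user_input: str) -> str:
    text = user_input
    for agent in agents:      # sequential by necessity: each hop
        text = agent(text)    # depends on the prior agent's text output
    return text

# Toy agents simulating the relay; real agents would be model calls.
agents = [
    lambda t: t + " | parsed",
    lambda t: t + " | planned",
    lambda t: t + " | executed",
]
print(cascaded_relay(agents, "request"))
```

Note that the handoff cost is paid at every arrow in the chain: each agent's output is serialized to text and re-interpreted downstream, with no shared state.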
Scenario 2 — New / Sequential
K-governed, sequential scheduler
Commitment kernel extracted once. All agents receive K (JSON). Sequential scheduler. No re-inference. No text relay between agents.
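The sequential K-governed scheduler can be sketched as follows, under the assumption that the commitment kernel K is a JSON-serializable dict produced by a single extraction pass. The `extract_kernel` function and the kernel fields are hypothetical placeholders, not the system's actual schema.

```python
import json
from typing import Callable, Dict, List

# Hypothetical: an agent here consumes the kernel K (a dict), not
# another agent's free text, so no re-inference is needed.
AgentK = Callable[[Dict], str]

def extract_kernel(user_input: str) -> Dict:
    # Placeholder for the one-time extraction step (single NLU pass).
    return {"intent": "demo", "source": user_input}

def sequential_k(agents: List[AgentK], user_input: str) -> List[str]:
    k = extract_kernel(user_input)   # kernel extracted once
    k_json = json.dumps(k)           # every agent receives K as JSON
    # Sequential scheduler: agents run in order, but each reads the
    # same K; there is no text relay between them.
    return [agent(json.loads(k_json)) for agent in agents]

agents = [
    lambda k: f"plan:{k['intent']}",
    lambda k: f"exec:{k['intent']}",
]
print(sequential_k(agents, "request"))
```

The key difference from Scenario 1 is the fan-out shape: one extraction, then N independent reads of K, rather than N chained text handoffs.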
Scenario 3 — New / Parallel
K-governed, near-parallel scheduler (staggered launch)
Same K. Same agents. Near-parallel execution: agents launched with a brief stagger, all reading K concurrently. Wall-clock ≈ slowest single agent.