Bug Fix April 16, 2026

OpenClaw ACP Bug: Agent Responses Generated But Never Delivered During Long Claude Code Runs

If you're running multi-agent setups with ACP (Claude Code) and your Discord messages sometimes vanish into the void after a long coding task, this is why. A GitHub issue filed today (#67502) documents the exact failure: nested lane congestion causes silent delivery corruption, and responses that appear in the session transcript simply never reach the channel.

What's Happening

OpenClaw uses an internal "nested lane" for ACP (Claude Code) runs. When a coding task takes a long time (7 to 14 minutes is the range in the bug report), that lane stays occupied the entire time. Other agent sessions trying to deliver responses queue behind it.

Here's where it gets nasty: when the queue finally clears, OpenClaw generates the backed-up responses (you can see them in the session transcript with stopReason: "stop"), but the delivery mechanism has already timed out or gotten confused. The response exists; it just never gets sent to Discord.

There's a second failure mode on top of that: after a long ACP run completes, the agent session's channel field can silently flip from discord to webchat. So even if delivery resumes, it's going to the wrong place.
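The lane behavior described above can be modeled as a single-slot queue: one long run holds the slot, and everything else chains behind it. This is an illustrative sketch of that mechanism, not OpenClaw's actual internals; the `Lane` class and its names are hypothetical.

```typescript
// Hypothetical model of a "nested lane": only one task occupies the lane
// at a time, and later tasks queue behind the current tail.
class Lane {
  private tail: Promise<void> = Promise.resolve();

  // Chain a task behind whatever currently occupies the lane.
  run<T>(task: () => Promise<T>): Promise<T> {
    const result = this.tail.then(task);
    // The lane stays occupied until this task settles (success or failure).
    this.tail = result.then(() => undefined, () => undefined);
    return result;
  }
}
```

A 14-minute ACP run submitted through `run` would keep every later `run` call waiting for the full duration, which is exactly the queue buildup the bug report's `lane wait exceeded` warnings describe.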

The Timeline From the Bug Report

T+0:00  ACP Claude Code starts (TASK-RTE-001-A)
        Nested lane: OCCUPIED

T+9:36  WARN lane wait exceeded: waitedMs=576497 queueAhead=1
T+9:37  ACP run completes, but channel flipped to "webchat" ❌

T+9:37  Second agent kicks off ACP run
        Nested lane: OCCUPIED AGAIN

T+14:12 WARN lane wait exceeded: waitedMs=851678 queueAhead=1
T+14:19 All lanes finally release

        Agent A generates summary response
        Response appears in session transcript ✅
        Response never arrives in Discord ❌

Who Gets Hit By This

This affects you if you're running multi-agent setups where long ACP (Claude Code) runs share the nested lane with other sessions that still need to deliver responses.

Single-agent setups doing quick tasks are unlikely to trigger this: you need both long ACP runs and queued responses for the congestion to cause dropped delivery.

Workarounds While Waiting for the Fix

1. Add timeoutSeconds to your sessions_send calls

The bug report was using sessions_send(timeoutSeconds: 0), i.e. fire and forget. Setting a non-zero timeout won't prevent the lane congestion, but it at least makes the caller aware when something is stuck rather than silently moving on.
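The difference between the two modes can be sketched as a timeout wrapper. This is a hedged sketch, assuming your delivery call returns a promise; `send` is a stand-in for whatever your sessions_send invocation resolves to, not OpenClaw's real API.

```typescript
// Wrap a delivery call so a stuck send surfaces as an error instead of
// hanging silently. `send` is a hypothetical stand-in for the delivery call.
async function sendWithTimeout<T>(
  send: () => Promise<T>,
  timeoutSeconds: number,
): Promise<T> {
  // timeoutSeconds: 0 reproduces the fire-and-forget mode from the bug report.
  if (timeoutSeconds <= 0) return send();

  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`delivery timed out after ${timeoutSeconds}s`)),
      timeoutSeconds * 1000,
    );
  });
  try {
    // Whichever settles first wins: the real send, or the timeout error.
    return await Promise.race([send(), timeout]);
  } finally {
    if (timer !== undefined) clearTimeout(timer);
  }
}
```

With a non-zero timeout, a delivery stuck behind the congested lane fails loudly, which is what you want while the underlying bug is unfixed.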

2. Serialize long ACP tasks rather than parallelizing

If you're dispatching multiple long coding tasks simultaneously, try running them sequentially instead. The nested lane congestion is caused by multiple long-running tasks competing for the same lane. One at a time avoids the queue buildup.
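If your orchestrator collects tasks up front, serializing them is a one-loop change. A minimal sketch, assuming each task is an async function you would otherwise dispatch in parallel:

```typescript
// Run long ACP-style tasks one at a time: the next task starts only after
// the previous one finishes, so the lane never has a queue behind it.
async function runSequentially<T>(
  tasks: Array<() => Promise<T>>,
): Promise<T[]> {
  const results: T[] = [];
  for (const task of tasks) {
    results.push(await task());
  }
  return results;
}
```

Compared with `Promise.all(tasks.map((t) => t()))`, this trades wall-clock time for never having more than one long run holding the lane.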

3. Check channel field after ACP runs complete

Until the channel flip bug is fixed, add a verification step after long ACP runs: have your orchestrator agent confirm it's still routing to discord before sending summaries. You can do this with a simple check in your SOUL.md workflow or a post-task prompt.
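If your orchestrator can inspect the session object, the verification step is small. This is a hypothetical sketch: the `AgentSession` shape mirrors the `channel` field described in the bug report, but the type and function names are illustrative, not OpenClaw's API.

```typescript
// Hypothetical session shape, based on the channel field in the bug report.
interface AgentSession {
  id: string;
  channel: "discord" | "webchat";
}

// Verify routing after a long ACP run; if the channel silently flipped to
// "webchat" (the second failure mode), repair it before sending summaries.
function ensureDiscordRouting(session: AgentSession): AgentSession {
  if (session.channel !== "discord") {
    return { ...session, channel: "discord" };
  }
  return session;
}
```

Run this check between "ACP run completes" and "send summary", which is the window where the flip occurs in the reported timeline.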

4. Break long tasks into shorter chunks

Tasks that take 7–14 minutes can often be broken into smaller sub-tasks of 2–3 minutes each. This keeps lane occupation time short and lets the queue clear between tasks. Slower wall-clock, but more reliable delivery.
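If a long task is already a list of steps, the chunking is mechanical. A minimal sketch, assuming you represent the work as an ordered step list; the batch size is a tuning knob, not a fixed rule:

```typescript
// Split an ordered list of steps into short batches so each ACP run holds
// the lane briefly and the queue can clear between batches.
function chunkSteps<T>(steps: T[], batchSize: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < steps.length; i += batchSize) {
    batches.push(steps.slice(i, i + batchSize));
  }
  return batches;
}
```

Dispatch one batch per ACP run, sequentially, and the lane is released between batches instead of being held for the full 7–14 minutes.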

5. Roll back to 4.12 for ACP-heavy setups

If multi-agent ACP workflows are central to your setup and reliability matters more than 4.14's new features, pinning to 4.12 is the safest option:

npm install -g openclaw@2026.4.12
openclaw gateway restart

Status

Issue #67502 is open as of April 16, 2026. No fix timeline confirmed yet; watching for 4.15 stable. This is marked as a regression (worked before 4.14), which typically gets prioritized.

TL;DR

Multi-agent setups are tricky to get right

ClawReady helps configure multi-agent OpenClaw architectures that are actually reliable, with the right lane management, session patterns, and fallback handling. If your agent org is dropping responses, let's fix it.

Book a Free Call →