Lossless Claw: The OpenClaw Plugin That Makes Your Agent Never Forget a Conversation
OpenClaw's default context compaction works like most AI systems: when a conversation gets long enough, older messages get truncated. Information disappears. Your agent loses context it might need later. Lossless Claw replaces that entire system with a DAG-based summarization approach where nothing is ever deleted. Here's how it works and whether you should install it.
The Problem With Default Context Compaction
All AI agents hit the same wall: model context windows have limits. When a session grows beyond that limit, something has to give. OpenClaw's default approach, like Claude Code, Hermes, and most other agents, uses a sliding window that drops the oldest messages first.
For short sessions, this is fine. For long-running agent workflows (multi-day projects, ongoing business operations, persistent assistants that handle the same context week after week) it's a real problem. The agent "forgets" decisions made earlier in the conversation. You find yourself re-explaining context you know you already provided. Tasks fail because the agent lost a detail from 50 turns ago.
How Lossless Claw Solves It
Lossless Claw (built by Martian Engineering, based on Voltropy's LCM paper) replaces the sliding window with a three-layer system:
1. Every message persisted to SQLite
Raw messages are never deleted. They go into a local SQLite database organized by conversation. If the agent ever needs to recall something from 200 turns ago, the data is there.
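The plugin's actual schema isn't published in this post, but the append-only contract is easy to picture. Here is a hypothetical sketch: a SQLite table definition as it might look, plus an in-memory stand-in that enforces the same write-only rule. Everything below (table name, columns, class) is illustrative, not the plugin's real code.

```typescript
// Hypothetical schema sketch; the real Lossless Claw schema may differ.
const SCHEMA = `
  CREATE TABLE IF NOT EXISTS messages (
    id           INTEGER PRIMARY KEY AUTOINCREMENT,
    conversation TEXT    NOT NULL,
    turn         INTEGER NOT NULL,
    role         TEXT    NOT NULL,    -- 'user' | 'assistant' | 'tool'
    content      TEXT    NOT NULL,
    created_at   TEXT    DEFAULT (datetime('now'))
  );
  CREATE INDEX IF NOT EXISTS idx_messages_conv ON messages (conversation, turn);
`;

interface Message {
  conversation: string;
  turn: number;
  role: "user" | "assistant" | "tool";
  content: string;
}

// In-memory stand-in with the same contract: appends only, no updates, no deletes.
class MessageLog {
  private rows: Message[] = [];
  append(msg: Message): void {
    this.rows.push(msg); // a row is never modified or removed after this
  }
  byConversation(conversation: string): Message[] {
    return this.rows.filter((m) => m.conversation === conversation);
  }
}
```

The important property is the absence of any delete path: recall from 200 turns ago is just a read, never a reconstruction.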
2. DAG-based summarization
As conversations grow, older message chunks get summarized by your LLM. Those summaries get condensed into higher-level summary nodes as they accumulate, forming a directed acyclic graph (DAG) of conversation history. Each summary links back to the raw messages it came from.
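The DAG shape described above can be sketched in a few lines. This is an illustrative model, not the plugin's implementation: the chunk size is a made-up constant, and `summarize()` is a stub standing in for the real LLM call. Level-1 nodes point at raw message ids; higher levels point at lower summary nodes.

```typescript
interface SummaryNode {
  id: string;
  level: number;                    // 1 summarizes raw messages; 2+ summarize summaries
  children: Array<number | string>; // raw message ids (level 1) or summary ids (2+)
  text: string;
}

const CHUNK = 4; // hypothetical chunk size; the plugin's real threshold isn't documented here

function summarize(texts: string[]): string {
  return `summary of ${texts.length} items`; // stand-in for the actual LLM call
}

function buildDag(messages: { id: number; content: string }[]): SummaryNode[] {
  const nodes: SummaryNode[] = [];
  let nextId = 0;
  // Level 1: condense raw messages chunk by chunk.
  let frontier: SummaryNode[] = [];
  for (let i = 0; i < messages.length; i += CHUNK) {
    const chunk = messages.slice(i, i + CHUNK);
    frontier.push({
      id: `s${nextId++}`,
      level: 1,
      children: chunk.map((m) => m.id),
      text: summarize(chunk.map((m) => m.content)),
    });
  }
  nodes.push(...frontier);
  // Higher levels: keep condensing summaries until one root remains.
  while (frontier.length > 1) {
    const next: SummaryNode[] = [];
    for (let i = 0; i < frontier.length; i += CHUNK) {
      const chunk = frontier.slice(i, i + CHUNK);
      next.push({
        id: `s${nextId++}`,
        level: chunk[0].level + 1,
        children: chunk.map((n) => n.id),
        text: summarize(chunk.map((n) => n.text)),
      });
    }
    nodes.push(...next);
    frontier = next;
  }
  return nodes;
}
```

Because every node keeps its `children` list, any summary can be walked back down to the exact raw messages it condensed.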
3. Smart context assembly
Each turn, the plugin assembles context by combining recent raw messages with relevant summaries. Your agent gets the full picture of the conversation without blowing the context window, and it has tools to drill into any summary if it needs the original detail.
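One plausible assembly strategy, assuming the plugin budgets context size: keep the most recent raw turns verbatim, then prepend the newest summaries that still fit. The character budget here is a crude stand-in for token counting, and the whole function is a sketch of the pattern rather than the plugin's actual logic.

```typescript
interface Piece {
  kind: "raw" | "summary";
  text: string;
}

function assembleContext(
  summaries: string[], // highest-level summaries, oldest first
  recent: string[],    // most recent raw messages, oldest first
  budget: number,      // character budget standing in for a token budget
): Piece[] {
  // Recent raw turns are always included verbatim.
  const out: Piece[] = recent.map((text) => ({ kind: "raw", text }));
  let used = recent.reduce((n, t) => n + t.length, 0);
  // Prepend summaries newest-first until the budget runs out.
  for (let i = summaries.length - 1; i >= 0; i--) {
    if (used + summaries[i].length > budget) break;
    out.unshift({ kind: "summary", text: summaries[i] });
    used += summaries[i].length;
  }
  return out;
}
```

The agent therefore always sees a compressed-but-complete timeline: condensed history first, verbatim recency last.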
The Recall Tools
Three tools are exposed to your agent:
- lcm_grep: search through compacted conversation history by keyword
- lcm_describe: get a summary of what happened in a specific time/turn range
- lcm_expand: drill into a summary to recover the original raw messages
This means your agent can proactively go looking for context it knows exists, not just hope it was in the active window.
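To make the recall pattern concrete, here is a sketch of what two of these tools do. The names `lcm_grep` and `lcm_expand` come from the plugin's tool list above; the bodies below are illustrative stand-ins, not the plugin's real implementation.

```typescript
interface StoredTurn {
  turn: number;
  text: string;
}

// lcm_grep-style recall: keyword search across the full persisted history.
function lcmGrep(history: StoredTurn[], keyword: string): StoredTurn[] {
  const needle = keyword.toLowerCase();
  return history.filter((h) => h.text.toLowerCase().includes(needle));
}

// lcm_expand-style recall: follow a summary's links back to its raw turns.
function lcmExpand(
  summaryChildren: Map<string, number[]>, // summary id -> raw turn numbers it covers
  history: StoredTurn[],
  summaryId: string,
): StoredTurn[] {
  const turns = new Set(summaryChildren.get(summaryId) ?? []);
  return history.filter((h) => turns.has(h.turn));
}
```

A typical flow: the agent greps for a keyword it remembers, finds which summary covers that era, then expands it to read the original exchange.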
Plugin Commands
Once installed, you get slash commands inside OpenClaw:
- /lcm: status (version, DB path/size, summary counts, health)
- /lcm backup: timestamped backup of the SQLite database
- /lcm rotate: compact the active session transcript without changing session identity
- /lcm doctor: scan for broken or truncated summaries
- /lcm doctor clean: clean up junk left by archived subagents and orphaned runs
- /lossless: alias for /lcm
Installation
openclaw plugins install @martian-engineering/lossless-claw
Requirements:
- OpenClaw with plugin context engine support (4.x)
- Node.js 22+
- An LLM provider configured in OpenClaw (used for summarization; the same model or a cheaper one)
After install, the plugin activates automatically for new sessions. Existing sessions continue with their current compaction behavior until rotated.
Cost Consideration
Lossless Claw uses your LLM to generate summaries as conversations compact. That costs tokens. For most workflows, the summarization calls are much cheaper than the context you'd otherwise re-inject or lose, but it's worth knowing if you're cost-sensitive.
You can configure summarization to use a cheaper model (Sonnet 4.6, Haiku 4.5, or a local Ollama model) rather than your primary Opus model. Most summarization tasks don't need frontier-level intelligence.
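As a rough sketch of what that split might look like in config: the key names below are entirely hypothetical (check the plugin's own README for the real option names); only the model names echo the ones mentioned above.

```typescript
// Hypothetical config sketch, NOT the plugin's documented schema.
const losslessClawConfig = {
  // The agent's primary model stays on your frontier tier.
  primaryModel: "opus",
  summarization: {
    // A cheaper model handles summary generation only.
    model: "haiku-4.5",
    // Alternatively, a local model keeps summarization off the API bill entirely.
    // model: "ollama/llama3",
  },
};
```

The design point is the separation itself: summarization traffic is high-volume and low-difficulty, so routing it to a cheap model captures most of the savings.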
Who Needs This
You'll get real value from Lossless Claw if:
- Your agent handles long-running projects where historical context matters
- You've experienced the agent "forgetting" something important mid-workflow
- You're running persistent sessions that span days or weeks
- You have complex multi-step processes where early decisions affect later steps
You probably don't need it if:
- Your agent handles short, self-contained tasks that reset each session
- You're cost-sensitive and want to minimize API calls
- Your workspace memory files already capture all the context your agent needs
TL;DR
- Lossless Claw = no more context compaction data loss in OpenClaw
- Every message persisted to SQLite; older messages summarized into a DAG; agents can recall via lcm_grep/describe/expand
- Install: openclaw plugins install @martian-engineering/lossless-claw
- Cost: uses LLM tokens for summarization; configure a cheap model for this
- Best for: long-running projects, persistent assistants, multi-week workflows
Context loss killing your agent's effectiveness?
ClawReady can install and configure Lossless Claw alongside a properly structured workspace memory system โ so your agent retains context at the right granularity without runaway costs.
Book a Free Call →