Lossless Claw: The OpenClaw Plugin That Makes Your Agent Never Forget a Conversation

OpenClaw's default context compaction works like most AI systems: when a conversation gets long enough, older messages get truncated. Information disappears. Your agent loses context it might need later. Lossless Claw replaces that entire system with a DAG-based summarization approach where nothing is ever deleted. Here's how it works and whether you should install it.

The Problem With Default Context Compaction

All AI agents hit the same wall: model context windows have limits. When a session grows beyond that limit, something has to give. OpenClaw's default approach, like that of Claude Code, Hermes, and most other agents, is a sliding window that drops the oldest messages first.

For short sessions, this is fine. For long-running agent workflows (multi-day projects, ongoing business operations, persistent assistants that handle the same context week after week), it's a real problem. The agent "forgets" decisions made earlier in the conversation. You find yourself re-explaining context you know you already provided. Tasks fail because the agent lost a detail from 50 turns ago.

How Lossless Claw Solves It

Lossless Claw (built by Martian Engineering, based on Voltropy's LCM paper) replaces the sliding window with a three-layer system:

1. Every message persisted to SQLite

Raw messages are never deleted. They go into a local SQLite database organized by conversation. If the agent ever needs to recall something from 200 turns ago, the data is there.
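
To make that concrete, here's a minimal TypeScript sketch of what append-only persistence could look like, using better-sqlite3. The database path, table layout, and column names are assumptions for illustration, not Lossless Claw's actual schema.

    import Database from "better-sqlite3";

    // Open (or create) the local history database. Path and schema are
    // invented for this sketch, not the plugin's real layout.
    const db = new Database("lossless-claw.sqlite");

    db.exec(`
      CREATE TABLE IF NOT EXISTS messages (
        id           INTEGER PRIMARY KEY AUTOINCREMENT,
        conversation TEXT    NOT NULL,
        turn         INTEGER NOT NULL,
        role         TEXT    NOT NULL,  -- 'user' | 'assistant' | 'tool'
        content      TEXT    NOT NULL,
        created_at   TEXT    DEFAULT (datetime('now'))
      );
      CREATE INDEX IF NOT EXISTS idx_messages_conversation
        ON messages (conversation, turn);
    `);

    // Append-only: messages are inserted, never updated or deleted.
    const insertMessage = db.prepare(
      "INSERT INTO messages (conversation, turn, role, content) VALUES (?, ?, ?, ?)"
    );
    insertMessage.run("session-42", 1, "user", "Deploy the staging branch.");

Because writes are append-only and indexed by conversation and turn, recalling turn 1 of a 500-turn session is just a cheap lookup.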

2. DAG-based summarization

As conversations grow, older message chunks get summarized by your LLM. Those summaries get condensed into higher-level summary nodes as they accumulate, forming a directed acyclic graph (DAG) of conversation history. Each summary links back to the raw messages it came from.
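
Here's a rough TypeScript sketch of that roll-up step, assuming a summarize() function that wraps your LLM call. The node shape and ID scheme are illustrative, not the plugin's real data model.

    import { randomUUID } from "node:crypto";

    interface SummaryNode {
      id: string;
      level: number;       // 0 = raw message chunk, 1+ = summary of summaries
      text: string;
      children: string[];  // ids of the nodes this summary condenses
    }

    // Roll a batch of same-level nodes up into one higher-level summary.
    // Each parent keeps edges back to its sources, so the DAG stays
    // lossless: you can always walk children links down to raw messages.
    async function rollUp(
      batch: SummaryNode[],
      summarize: (text: string) => Promise<string>
    ): Promise<SummaryNode> {
      const combined = batch.map((n) => n.text).join("\n---\n");
      return {
        id: randomUUID(),
        level: batch[0].level + 1,
        text: await summarize(combined),
        children: batch.map((n) => n.id),
      };
    }

Applied repeatedly as chunks accumulate, this produces summaries of summaries, which is how a very long history can stay representable in a handful of compact nodes.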

3. Smart context assembly

Each turn, the plugin assembles context by combining recent raw messages with relevant summaries. Your agent gets the full picture of the conversation without blowing the context window, and it has tools to drill into any summary if it needs the original detail.
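
As a simplified sketch of that assembly step, with an invented token budget and a crude four-characters-per-token estimate:

    const CONTEXT_BUDGET = 8_000; // tokens reserved for history (invented)

    const estimateTokens = (text: string) => Math.ceil(text.length / 4);

    function assembleContext(
      recentMessages: string[], // newest raw messages, oldest first
      summaries: string[]       // highest-level summaries, oldest first
    ): string[] {
      const context: string[] = [];
      let used = 0;

      // Recent raw messages get priority: keep them verbatim, newest first.
      for (const msg of [...recentMessages].reverse()) {
        const cost = estimateTokens(msg);
        if (used + cost > CONTEXT_BUDGET) break;
        context.unshift(msg);
        used += cost;
      }

      // Fill the remaining budget with summaries of older history,
      // placed before the raw messages to preserve chronology.
      for (const summary of [...summaries].reverse()) {
        const cost = estimateTokens(summary);
        if (used + cost > CONTEXT_BUDGET) break;
        context.unshift(`[Summary of earlier conversation] ${summary}`);
        used += cost;
      }
      return context;
    }

The real plugin presumably weighs relevance as well as recency; this sketch only shows the budget mechanics.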

The Recall Tools

The plugin exposes three recall tools that let your agent query its stored history directly.

This means your agent can proactively go looking for context it knows exists, not just hope it was in the active window.
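
As a purely hypothetical illustration of the shape such tools might take (every name and parameter below is invented; check the plugin's docs for the real ones):

    // Hypothetical tool definitions, invented for illustration only.
    const recallTools = [
      {
        name: "expand_summary", // hypothetical name
        description: "Return the child nodes behind a given summary node",
        parameters: { summaryId: "string" },
      },
      {
        name: "search_history", // hypothetical name
        description: "Keyword search across all persisted raw messages",
        parameters: { query: "string", limit: "number" },
      },
      {
        name: "get_messages", // hypothetical name
        description: "Fetch a range of raw messages by turn number",
        parameters: { fromTurn: "number", toTurn: "number" },
      },
    ];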

Plugin Commands

Once installed, the plugin also adds its own slash commands inside OpenClaw.

Installation

    openclaw plugins install @martian-engineering/lossless-claw

Requirements: a working OpenClaw install, local disk space for the SQLite history database, and an LLM endpoint the plugin can use for summarization.

After install, the plugin activates automatically for new sessions. Existing sessions continue with their current compaction behavior until rotated.

Cost Consideration

Lossless Claw uses your LLM to generate summaries as conversations compact, and that costs tokens. For most workflows the summarization calls are far cheaper than the context you'd otherwise re-inject or lose: condensing a 4,000-token chunk into a short summary is a one-time cost, while carrying those 4,000 raw tokens in the window costs you again on every subsequent turn. Still, it's worth knowing about if you're cost-sensitive.

You can configure summarization to use a cheaper model (Sonnet 4.6, Haiku 4.5, or a local Ollama model) rather than your primary Opus model. Most summarization tasks don't need frontier-level intelligence.
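
Hypothetically, that routing might be a small config change, something like the sketch below; the key names here are invented, so consult the plugin's documentation for the real options.

    // Hypothetical configuration sketch; key names are assumptions.
    const losslessClawConfig = {
      summarization: {
        model: "claude-haiku-4-5", // cheaper model used only for summaries
        maxChunkTokens: 4_000,     // assumed chunk size before summarizing
      },
    };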

Who Needs This

You'll get real value from Lossless Claw if you run long-lived sessions: multi-day projects, ongoing business operations, or a persistent assistant that carries the same context week after week.

You probably don't need it if your sessions are short and self-contained; the default sliding window rarely drops anything that still matters there.

TL;DR

Lossless Claw swaps OpenClaw's lossy sliding-window compaction for SQLite persistence plus DAG summarization: nothing is ever deleted, summaries keep the active window small, and recall tools let the agent drill back into raw history on demand.

Context loss killing your agent's effectiveness?

ClawReady can install and configure Lossless Claw alongside a properly structured workspace memory system, so your agent retains context at the right granularity without runaway costs.

Book a Free Call →