A busy thread on r/openclaw this week: "OpenClaw removed browser relay extension - how are you guys handling real website interactions now?" Hundreds of upvotes, a lot of frustrated replies, not many clean answers.

Here's the situation and what actually works.

What Happened

The browser relay extension, which allowed OpenClaw to control a browser session on your local machine via a Chrome/Firefox extension, was removed in the v2026.3.x release cycle. The official reason: the extension's communication model created a security surface that conflicted with the new callerScopes enforcement introduced in v2026.3.28.

In plain English: the way it passed commands between the agent and your browser was too easy to abuse once the privilege escalation CVE was fixed. So it got cut.

Affected workflows: Anything that required your agent to log into a website on your behalf, fill out web forms, scrape authenticated pages, interact with web apps that don't have APIs, or automate browser-based tasks.

The 3 Working Alternatives

Option 1: OpenClaw's Built-in Browser Tool (Recommended; easiest)

OpenClaw now ships a native browser tool that uses a headless Chromium instance controlled directly by the agent, no extension required. It's more capable than the old relay for most tasks because it runs server-side and doesn't depend on your local machine being online.

How to enable it:

// In openclaw.json, under tools:
{
  "tools": {
    "browser": {
      "enabled": true,
      "headless": true,
      "profile": "default"
    }
  }
}

You'll need Chromium installed on your server: sudo apt install chromium-browser (Ubuntu/Debian) or brew install chromium (macOS).
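Before pointing the agent at it, you can sanity-check the config and the Chromium install with a few lines of Python. This is a generic sketch, not an official OpenClaw tool; the key names are the ones from the snippet above:

```python
import shutil

REQUIRED_KEYS = ("enabled", "headless", "profile")

def config_problems(cfg: dict) -> list:
    """List issues with the tools.browser section of a parsed openclaw.json."""
    browser = cfg.get("tools", {}).get("browser")
    if browser is None:
        return ["tools.browser section is missing"]
    problems = [f"tools.browser.{k} is not set" for k in REQUIRED_KEYS if k not in browser]
    if browser.get("enabled") is not True:
        problems.append("tools.browser.enabled is not true")
    return problems

def chromium_on_path() -> bool:
    """The built-in browser tool needs a Chromium binary on the server's PATH."""
    return any(shutil.which(b) is not None for b in ("chromium", "chromium-browser"))
```

Run it on the server next to your config: load openclaw.json with json.load and pass the dict to config_problems(). An empty list plus chromium_on_path() returning True means the basics are in place.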

Limitation: Headless mode can't handle sites that detect and block headless browsers (some banking sites, Cloudflare-protected pages). For those, use Option 2.

Option 2: OpenClaw Node + Host Browser (medium effort)

If you need a real (non-headless) browser, because the site detects headless or you need an existing logged-in session, OpenClaw's node architecture lets you route browser control through your local machine while keeping the agent on the server.

Set up an OpenClaw node on your local machine (the one with the real browser), then in your agent config point browser actions to that node:

// In openclaw.json:
{
  "tools": {
    "browser": {
      "enabled": true,
      "target": "node",
      "node": "your-local-node-name",
      "headless": false,
      "profile": "user"
    }
  }
}

This requires your local machine to be running and the OpenClaw node to be connected. Works best for scheduled tasks where you're okay with your machine being on.
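The two configs differ in only a handful of keys. As a sketch, here's one way to generate either variant programmatically; the key names are taken from the snippets in this post and should be treated as illustrative:

```python
from typing import Optional

def browser_tool_config(mode: str, node_name: Optional[str] = None) -> dict:
    """Build the tools.browser section for the two setups described here.

    'headless' -> Option 1: server-side headless Chromium
    'node'     -> Option 2: route control to a real browser on a local node
    """
    if mode == "headless":
        return {"enabled": True, "headless": True, "profile": "default"}
    if mode == "node":
        if not node_name:
            raise ValueError("node mode needs the name of a connected node")
        return {
            "enabled": True,
            "target": "node",
            "node": node_name,
            "headless": False,  # real browser window on the local machine
            "profile": "user",  # reuse the existing logged-in profile
        }
    raise ValueError(f"unknown mode: {mode}")
```

Merging the result into the "tools" section of openclaw.json gives you the same JSON shown above, which makes it easy to switch modes from a deploy script.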

Option 3: Replace Browser Scraping with APIs (most work, most reliable)

For any service that has an official API, the right long-term answer is to stop scraping the web UI and call the API directly. It's faster, more reliable, and immune to UI changes.

Common replacements for browser-based workflows:

  • Google Workspace (Gmail, Calendar, Sheets) → Google APIs via MCP or direct HTTP tool
  • Stripe, QuickBooks, HubSpot → official APIs, all have OpenClaw-compatible MCP servers
  • LinkedIn, Twitter/X → use official API (rate-limited but stable)
  • Custom web apps → if you own it, add an API endpoint; if you don't, use Option 1 or 2

This takes more upfront work but gives you automation that doesn't break when someone redesigns their website.
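As a concrete pattern: one authenticated API call plus a small parser replaces an entire browser session. The endpoint, bearer-token auth, and response shape below are hypothetical; substitute the real service's API and auth scheme:

```python
import json
import urllib.request

def fetch_json(url: str, token: str) -> dict:
    """GET a JSON API endpoint with bearer-token auth (scheme varies by service)."""
    req = urllib.request.Request(url, headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/json",
    })
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

def contact_emails(payload: dict) -> list:
    """Extract emails from a hypothetical {'contacts': [{'email': ...}]} response."""
    return [c["email"] for c in payload.get("contacts", []) if "email" in c]
```

Compare this to a browser workflow that has to log in, navigate, wait for renders, and hope the DOM hasn't changed: the API version is one request and a list comprehension.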

Which Option Should You Use?

In short: Option 1 for most browser tasks, Option 2 when the site blocks headless browsers or you need an existing logged-in session, Option 3 whenever the service offers an official API.

Quick test after switching: ask your agent to fetch the title of a webpage using the browser tool. If it responds with the correct title, browser control is working. If it errors, check that Chromium is installed and the config is set correctly.
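If you want to verify outside the agent that the page itself is reachable, the same title check can be sketched in plain Python. Note this is a bare HTTP fetch plus a regex; it won't see titles set by JavaScript, which is exactly what the browser tool exists for:

```python
import re
import urllib.request

TITLE_RE = re.compile(r"<title[^>]*>(.*?)</title>", re.IGNORECASE | re.DOTALL)

def page_title(html: str):
    """Return the <title> text of an HTML document, or None if there isn't one."""
    match = TITLE_RE.search(html)
    return match.group(1).strip() if match else None

def fetch_title(url: str):
    """Fetch a page over plain HTTP and extract its title."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        return page_title(resp.read().decode("utf-8", errors="replace"))
```

If fetch_title works but the agent's browser tool errors on the same URL, the problem is in the tool config rather than the network.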

Still Broken After Trying These?

The most common issues we see:

  • Chromium isn't installed on the server (or isn't on the PATH), so the built-in browser tool can't launch
  • The browser tool isn't enabled in openclaw.json, or the config has a typo
  • For Option 2, the local machine is off or its OpenClaw node isn't connected
  • The target site blocks headless browsers, so Option 1 fails where Option 2 would work

If you're still stuck after this, book a free call. We'll look at your specific config and get browser control working. ClawReady setup includes browser tool configuration as standard โ€” it's on our 32-point checklist.