website-enchun-mgr/.opencode/skills/agent-browser/references/session-management.md
pkupuk ad8e2e313e chore(agent): configure AI agents and tools
Add configuration for BMad, Claude, OpenCode, and other AI agent tools and workflows.
2026-02-11 11:51:23 +08:00

# Session Management

Run multiple isolated browser sessions concurrently with state persistence.

## Named Sessions

Use the `--session` flag to isolate browser contexts:

```bash
# Session 1: Authentication flow
agent-browser --session auth open https://app.example.com/login

# Session 2: Public browsing (separate cookies, storage)
agent-browser --session public open https://example.com

# Commands are isolated by session
agent-browser --session auth fill @e1 "user@example.com"
agent-browser --session public get text body
```

## Session Isolation Properties

Each session has independent:

- Cookies
- `localStorage` / `sessionStorage`
- IndexedDB
- Cache
- Browsing history
- Open tabs

## Session State Persistence

### Save Session State

```bash
# Save cookies, storage, and auth state
agent-browser state save /path/to/auth-state.json
```

### Load Session State

```bash
# Restore saved state
agent-browser state load /path/to/auth-state.json

# Continue with the authenticated session
agent-browser open https://app.example.com/dashboard
```

### State File Contents

```json
{
  "cookies": [...],
  "localStorage": {...},
  "sessionStorage": {...},
  "origins": [...]
}
```
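
Since the state file is plain JSON, standard tools can inspect it before loading — for example, to check which cookies were captured without printing their values. A minimal sketch; the sample file and its values below are fabricated for illustration:

```bash
# Create a sample state file (fabricated values, for illustration only)
cat > /tmp/sample-state.json <<'EOF'
{
  "cookies": [
    {"name": "session_id", "value": "abc123", "domain": "app.example.com"},
    {"name": "csrf_token", "value": "xyz789", "domain": "app.example.com"}
  ],
  "localStorage": {},
  "sessionStorage": {},
  "origins": []
}
EOF

# List cookie names and domains without exposing the values
python3 -c '
import json
state = json.load(open("/tmp/sample-state.json"))
for cookie in state["cookies"]:
    print(cookie["name"], cookie["domain"], sep="\t")
'
```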

## Common Patterns

### Authenticated Session Reuse

```bash
#!/bin/bash
# Save login state once, reuse many times

STATE_FILE="/tmp/auth-state.json"

# Check if we have saved state
if [[ -f "$STATE_FILE" ]]; then
    agent-browser state load "$STATE_FILE"
    agent-browser open https://app.example.com/dashboard
else
    # Perform login
    agent-browser open https://app.example.com/login
    agent-browser snapshot -i
    agent-browser fill @e1 "$USERNAME"
    agent-browser fill @e2 "$PASSWORD"
    agent-browser click @e3
    agent-browser wait --load networkidle

    # Save for future use
    agent-browser state save "$STATE_FILE"
fi
```
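
Cached auth state eventually goes stale once the server-side session expires. One way to bound reuse is to treat state files older than a cutoff as missing; the sketch below uses a 12-hour cutoff (an arbitrary assumption) and `echo` in place of the real load/login calls:

```bash
STATE_FILE="/tmp/auth-state.json"
MAX_AGE_MIN=$((12 * 60))   # assumed cutoff: 12 hours

state_is_fresh() {
    # find -mmin -N matches files modified less than N minutes ago
    [[ -f "$STATE_FILE" ]] &&
        [[ -n "$(find "$STATE_FILE" -mmin "-$MAX_AGE_MIN" 2>/dev/null)" ]]
}

if state_is_fresh; then
    echo "reusing saved state"     # agent-browser state load "$STATE_FILE"
else
    echo "state missing or stale"  # perform the login flow, then state save
fi
```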

### Concurrent Scraping

```bash
#!/bin/bash
# Scrape multiple sites concurrently

# Start all sessions
agent-browser --session site1 open https://site1.com &
agent-browser --session site2 open https://site2.com &
agent-browser --session site3 open https://site3.com &
wait

# Extract from each
agent-browser --session site1 get text body > site1.txt
agent-browser --session site2 get text body > site2.txt
agent-browser --session site3 get text body > site3.txt

# Cleanup
agent-browser --session site1 close
agent-browser --session site2 close
agent-browser --session site3 close
```
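
With more than a few sites, a loop avoids repeating each line per session. One way to keep session names semantic is to derive them from the hostname; the sketch below uses `echo` in place of the real `agent-browser` invocations, and the site list is illustrative:

```bash
SITES=("https://site1.com" "https://site2.com/docs" "https://site3.com")

# Derive a session name from a URL's hostname,
# e.g. https://site2.com/docs -> site2.com
session_name() {
    local host="${1#*://}"       # strip the scheme
    printf '%s\n' "${host%%/*}"  # strip any path
}

for url in "${SITES[@]}"; do
    name="$(session_name "$url")"
    echo "agent-browser --session $name open $url"
done
```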

### A/B Testing Sessions

```bash
# Test different user experiences
agent-browser --session variant-a open "https://app.com?variant=a"
agent-browser --session variant-b open "https://app.com?variant=b"

# Compare
agent-browser --session variant-a screenshot /tmp/variant-a.png
agent-browser --session variant-b screenshot /tmp/variant-b.png
```

## Default Session

When `--session` is omitted, commands use the default session:

```bash
# These use the same default session
agent-browser open https://example.com
agent-browser snapshot -i
agent-browser close  # Closes default session
```

## Session Cleanup

```bash
# Close a specific session
agent-browser --session auth close

# List active sessions
agent-browser session list
```

## Best Practices

### 1. Name Sessions Semantically

```bash
# GOOD: Clear purpose
agent-browser --session github-auth open https://github.com
agent-browser --session docs-scrape open https://docs.example.com

# AVOID: Generic names
agent-browser --session s1 open https://github.com
```

### 2. Always Clean Up

```bash
# Close sessions when done
agent-browser --session auth close
agent-browser --session scrape close
```

### 3. Handle State Files Securely

```bash
# Don't commit state files (they contain auth tokens!)
echo "*.auth-state.json" >> .gitignore

# Delete after use
rm /tmp/auth-state.json
```
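
Beyond `.gitignore`, restricting file permissions and deleting the file automatically on exit narrows the window in which tokens sit on disk. A sketch using `mktemp` and a `trap`; the placeholder JSON stands in for a real `state save`:

```bash
#!/bin/bash
# Create the state file with a random name, readable only by the owner
STATE_FILE="$(mktemp /tmp/auth-state.XXXXXX)"
chmod 600 "$STATE_FILE"

# Remove the file when the script exits, even on error or Ctrl-C
trap 'rm -f "$STATE_FILE"' EXIT

# agent-browser state save "$STATE_FILE" would write here
echo '{"cookies": []}' > "$STATE_FILE"

stat -c '%a' "$STATE_FILE"   # owner-only permissions: 600
```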

### 4. Timeout Long Sessions

```bash
# Set a timeout for automated scripts
timeout 60 agent-browser --session long-task get text body
```
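
GNU `timeout` exits with status 124 when it kills the command, which lets a wrapper distinguish a hang from an ordinary failure. Demonstrated with `sleep` standing in for a long-running `agent-browser` call:

```bash
# "sleep 5" stands in for a long agent-browser command
if timeout 1 sleep 5; then
    echo "completed in time"
elif [[ $? -eq 124 ]]; then
    echo "timed out after 1s"
else
    echo "failed for another reason"
fi
```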