WuJerry committed · Commit aa56ce5 · verified · 1 Parent(s): 9df536b

Update README.md

Files changed (1)
  1. README.md +0 -133
README.md CHANGED
@@ -1,134 +1 @@
See: https://github.com/Halluminate/browserbench

# BrowserBench

BrowserBench exercises multiple hosted Chromium providers against a shared set of autonomous web-browsing tasks. It wraps the provider-specific session bootstrap in `browser_test.py` and coordinates parallel task execution in `run_browserbench.py`, emitting timestamped CSV reports so you can compare reliability and latency provider by provider.

## Repository Layout
- `run_browserbench.py` – asynchronous benchmark runner; loads tasks from CSV, fans them out with bounded concurrency, and writes aggregated results.
- `browser_test.py` – single-task harness that spins up a provider session, runs the `browser_use` agent, captures the final message, and performs provider-specific teardown.
- `providers/` – lightweight adapters for Anchor, Browserbase, SteelBrowser, and Hyperbrowser. Each exposes `create_session(...)` and `cleanup_session(...)` so the runner never touches SDK details (see the sketch after this list).
- `browserbench.csv` / `test_tasks.csv` – canonical and sandbox task lists. Each row describes the start URL, natural-language instruction, and ground-truth expectation.
- `results/` – auto-created folder containing `browserbench_results_<provider>_<timestamp>.csv` exports for every run.
- `pyproject.toml` – project configuration and dependencies for uv package management.
- `requirements.txt` – Python dependencies (maintained for pip compatibility).
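
The adapter contract is intentionally small. A minimal sketch of a provider module is below; the module path, env var, and return fields are assumptions for illustration, and only the `create_session`/`cleanup_session` shape mirrors the real adapters:
```python
# providers/example.py – illustrative only, not a shipped adapter.
import os


async def create_session(stealth: bool = True) -> dict:
    """Provision a remote browser session and return its connection details."""
    api_key = os.environ["EXAMPLE_API_KEY"]  # hypothetical credential
    session_id = "sess_123"  # placeholder for the provider API's response
    return {
        "session_id": session_id,
        "cdp_url": f"wss://connect.example.com?apiKey={api_key}&sessionId={session_id}",
        "session_url": f"https://example.com/sessions/{session_id}",
    }


async def cleanup_session(session: dict) -> None:
    """Release the remote session; the runner calls this even after failures."""
    # e.g. call the provider's release/stop endpoint with session["session_id"]
```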

## Prerequisites

### Installation with uv (Recommended)
[uv](https://github.com/astral-sh/uv) is a fast Python package installer and resolver. If you don't have it installed:
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

Then install dependencies:
```bash
# Install all dependencies
uv sync

# Run commands with uv
uv run python run_browserbench.py --help
uv run python browser_test.py --help
```

### Installation with pip (Alternative)
If you prefer traditional pip:
```bash
# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt
```

### Environment Variables
Create a `.env` file in the project root with the following variables:
```bash
# OpenAI API Key (required for all providers)
OPENAI_API_KEY=your_openai_api_key_here

# Anchor Browser API Key
ANCHOR_API_KEY=your_anchor_api_key_here

# Browserbase API credentials
BROWSERBASE_API_KEY=your_browserbase_api_key_here
BROWSERBASE_PROJECT_ID=your_browserbase_project_id_here

# Steel Browser API Key
STEEL_API_KEY=your_steel_api_key_here

# Hyperbrowser API Key
HYPERBROWSER_API_KEY=your_hyperbrowser_api_key_here
```

Alternatively, export them manually in your shell. Both `run_browserbench.py` and `browser_test.py` call `load_dotenv()` from `python-dotenv`, so a local `.env` file is respected automatically.
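
As a minimal sketch of that startup pattern (assuming only `python-dotenv` is installed; the exact validation logic in `run_browserbench.py` may differ):
```python
import os

from dotenv import load_dotenv

load_dotenv()  # picks up a local .env file if one exists

# Illustrative check; the real runner validates the keys for the chosen provider.
missing = [k for k in ("OPENAI_API_KEY", "BROWSERBASE_API_KEY") if not os.getenv(k)]
if missing:
    raise SystemExit(f"Missing required environment variables: {', '.join(missing)}")
```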

## Running the Benchmark Suite

### With uv:
```bash
uv run python run_browserbench.py \
  --provider browserbase \
  --concurrency 5 \
  --tasks 20 \
  --csv-file browserbench.csv
```

### With pip/virtualenv:
```bash
python run_browserbench.py \
  --provider browserbase \
  --concurrency 5 \
  --tasks 20 \
  --csv-file browserbench.csv
```

Key flags:
- `--provider {anchor|browserbase|steelbrowser|hyperbrowser}` – choose which adapter to exercise. Each run targets a single provider.
- `--concurrency <int>` – number of simultaneous browser sessions. The runner uses an `asyncio.Semaphore` to cap parallelism (a minimal sketch follows this list).
- `--tasks <int>` – optionally limit the number of rows pulled from the CSV.
- `--csv-file <path>` – alternate task list.
- `--output <filename>` – custom name for the result CSV (otherwise auto-generated).
- `--no-stealth` – disable provider-specific stealth settings where available.
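
The bounded-concurrency dispatch is the standard `asyncio.Semaphore` pattern. A minimal sketch, with `run_task` standing in for the call into `browser_test.main(...)`:
```python
import asyncio


async def run_task(task: dict) -> dict:
    # Stand-in for browser_test.main(...), which drives a real provider session.
    await asyncio.sleep(0)
    return {"task": task, "success": True}


async def run_all(tasks: list[dict], concurrency: int) -> list[dict]:
    semaphore = asyncio.Semaphore(concurrency)  # caps simultaneous sessions

    async def bounded(task: dict) -> dict:
        async with semaphore:  # at most `concurrency` tasks run at once
            return await run_task(task)

    return await asyncio.gather(*(bounded(t) for t in tasks))
```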
94
-
95
- The runner validates that required environment variables exist, loads tasks, dispatches them through `browser_test.main(...)`, and writes a CSV report under `results/`. Each row includes:
96
- - task metadata (ID, prompt, URLs, ground truth)
97
- - provider + configuration fields (`provider`, `timestamp`, `success`, `error_message`)
98
- - completion info (`agent_result`, `session_url`, `execution_time`)

## Running a Single Task
Use `browser_test.py` when you need to debug prompts or provider wiring:

### With uv:
```bash
uv run python browser_test.py --provider steelbrowser --task "Find the latest pricing for the Oculus Quest 3" --no-stealth
```

### With pip/virtualenv:
```bash
python browser_test.py --provider steelbrowser --task "Find the latest pricing for the Oculus Quest 3" --no-stealth
```

This script spins up the requested provider, launches the `browser_use.Agent`, streams intermediate logging, and returns both the final natural-language answer and any provider session URL/recording. Cleanup is performed automatically even on failure.
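
The failure-safe cleanup is a plain `try`/`finally`. A sketch of the control flow, reusing the hypothetical adapter functions from the layout section (`run_agent` is likewise illustrative, not the real harness):
```python
async def run_agent(task: str, session: dict) -> str:
    # Stand-in for driving browser_use.Agent over the session's CDP URL.
    return f"final answer for: {task}"


async def run_single_task(task: str, stealth: bool = True) -> dict:
    session = await create_session(stealth=stealth)
    try:
        result = await run_agent(task, session)
        return {"agent_result": result, "session_url": session.get("session_url")}
    finally:
        await cleanup_session(session)  # runs even if the agent raises
```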

## Provider Behaviors
All adapters follow the same two-function contract but expose slightly different features:
- **Anchor** – provisions a mobile proxy with CAPTCHA solving and returns a CDP URL alongside recording links.
- **Browserbase** – can enable `advanced_stealth` + proxies; session URLs follow `https://www.browserbase.com/sessions/<id>`.
- **SteelBrowser** – REST API for session creation/release with optional stealth payload (`useProxy`, `solveCaptcha`, `stealthConfig`).
- **Hyperbrowser** – REST API for session start/stop, optional stealth/captcha solving, and direct session playback URLs.

Because the runner calls `browser_test.main(...)`, any provider enhancements made there automatically propagate to batch runs.

## Customising Task Sets
The benchmark CSV expects four columns: `starting_url`, `Task`, `ground_truth_url`, and `Ground Truth`. Add rows, duplicate the file under a new name, and supply it via `--csv-file`. For quick smoke tests, trim the dataset or point to `test_tasks.csv` with a small subset of records.
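
For example, a one-row task file might look like the following (the row values are illustrative, not taken from the shipped dataset):
```csv
starting_url,Task,ground_truth_url,Ground Truth
https://example.com,Find the link labelled 'More information' and open it,https://www.iana.org/domains/example,Example Domains page on iana.org
```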

## Operational Notes
- Logging is configured at `INFO` level in `run_browserbench.py`; per-task start/stop messages stream to stdout (a minimal equivalent appears after this list).
- Result files are overwritten only when you supply the same `--output` name. The default timestamped filenames are unique.
- Failures are captured with the raised exception stored in `error_message`; the row still appears in the CSV, so aggregate success rates remain accurate.
- Session teardown happens in adapter-specific `cleanup_session(...)` calls. We still attempt to return human-usable session URLs even if the cleanup API raises.
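
The logging setup is the standard-library pattern; a minimal equivalent of the `INFO`-level configuration (the real format string and output stream may differ):
```python
import logging

logging.basicConfig(level=logging.INFO)  # default handler; the runner may target stdout
logging.getLogger("browserbench").info("task 1 started")
```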

## Next Steps
- Integrate the produced CSVs with your analytics tooling to visualise latency and success deltas per provider.
- Extend `providers/` with additional adapters by mirroring the `create_session`/`cleanup_session` contract and adding the provider name to the CLI choices in both scripts.
 