---
pipeline_tag: image-text-to-text
library_name: transformers
language:
- en
base_model:
- Qwen/Qwen3-VL-30B-A3B-Instruct
tags:
- browser_use
---
# BU-30B-A3B-Preview
<picture>
<source media="(prefers-color-scheme: light)" srcset="https://github.com/user-attachments/assets/2ccdb752-22fb-41c7-8948-857fc1ad7e24">
<source media="(prefers-color-scheme: dark)" srcset="https://github.com/user-attachments/assets/774a46d5-27a0-490c-b7d0-e65fcbbfa358">
<img alt="Shows a black Browser Use Logo in light color mode and a white one in dark color mode." src="https://github.com/user-attachments/assets/2ccdb752-22fb-41c7-8948-857fc1ad7e24" width="full">
</picture>
Meet BU-30B-A3B-Preview: SoTA browser-use capabilities in a small model that can be hosted on a single GPU.
This model is trained specifically for use with the [browser-use OSS library](https://github.com/browser-use/browser-use) and provides comprehensive browsing capabilities with strong DOM understanding and visual reasoning.
## Quickstart (BU Cloud)
You can use this model directly on BU Cloud:
1. Get your API key from [BU Cloud](https://cloud.browser-use.com/new-api-key)
2. Set the environment variable: `export BROWSER_USE_API_KEY="your-key"`
3. Install the browser-use library following the instructions [here](https://github.com/browser-use/browser-use) and run
```python
from dotenv import load_dotenv

from browser_use import Agent, ChatBrowserUse

load_dotenv()

llm = ChatBrowserUse(
    model='browser-use/bu-30b-a3b-preview',  # BU Open Source Model!!
)
agent = Agent(
    task='Find the number of stars of browser-use and stagehand. Tell me which one has more stars :)',
    llm=llm,
    flash_mode=True,
)
agent.run_sync()
```
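The `load_dotenv()` call reads variables from a local `.env` file, so as an alternative to exporting the variable in your shell you can keep the key there (the value below is a placeholder):

```
BROWSER_USE_API_KEY="your-key"
```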
## Quickstart (vLLM)
We recommend using this model with [vLLM](https://github.com/vllm-project/vllm).
#### Installation
Make sure to install **vllm >= 0.12.0**:
```bash
pip install vllm --upgrade
```
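If you want to check the installed version programmatically, here is a minimal sketch using only the standard library (the `version_at_least` helper is illustrative, not part of vLLM):

```python
from importlib.metadata import PackageNotFoundError, version

def version_at_least(installed: str, minimum: str) -> bool:
    """Compare dotted release versions numerically (pre-release tags ignored)."""
    def parts(v: str) -> list[int]:
        out = []
        for piece in v.split("."):
            digits = "".join(ch for ch in piece if ch.isdigit())
            out.append(int(digits) if digits else 0)
        return out
    return parts(installed) >= parts(minimum)

def vllm_meets_minimum(minimum: str = "0.12.0") -> bool:
    """True if vLLM is installed and its version is at least `minimum`."""
    try:
        return version_at_least(version("vllm"), minimum)
    except PackageNotFoundError:
        return False
```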
#### Serve
A simple launch command is:
```bash
vllm serve browser-use/bu-30b-a3b-preview \
--max-model-len 65536 \
--host 0.0.0.0 \
--port 8000
```
which creates an OpenAI-compatible endpoint at `localhost:8000` that you can point browser-use at:
```python
from dotenv import load_dotenv

from browser_use import Agent, ChatOpenAI

load_dotenv()

llm = ChatOpenAI(
    base_url='http://localhost:8000/v1',
    model='browser-use/bu-30b-a3b-preview',
    temperature=0.6,
    top_p=0.95,
    dont_force_structured_output=True,  # speed up by disabling structured output
)
agent = Agent(
    task='Find the number of stars of browser-use and stagehand. Tell me which one has more stars :)',
    llm=llm,
)
agent.run_sync()
```
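The served endpoint also speaks the standard OpenAI chat-completions protocol, so any OpenAI-compatible client works. A minimal sketch of a request body matching the sampling settings above (the prompt is just an example):

```python
import json

# Request body for POST http://localhost:8000/v1/chat/completions
payload = {
    "model": "browser-use/bu-30b-a3b-preview",
    "messages": [{"role": "user", "content": "Summarize what browser-use does."}],
    "temperature": 0.6,
    "top_p": 0.95,
}
body = json.dumps(payload)
```

Send it with any HTTP client, e.g. `curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d "$BODY"`.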
## Model Details
| Property | Value |
|----------|-------|
| **Base Model** | Qwen/Qwen3-VL-30B-A3B-Instruct |
| **Parameters** | 30B total, 3B active (MoE) |
| **Context Length** | 65,536 tokens |
| **Architecture** | Vision-Language Model (Mixture of Experts) |
## Links
- [Browser Use Cloud](https://cloud.browser-use.com)
- [Documentation](https://docs.browser-use.com)
- [GitHub](https://github.com/browser-use/browser-use)
- [Discord](https://link.browser-use.com/discord)