---
license: mit
task_categories:
- reinforcement-learning
- question-answering
language:
- en
tags:
- web-navigation
- preference-learning
- reward-modeling
size_categories:
- 1K<n<10K
---
# Web Agent Grouped Graph Dataset

This dataset contains web-navigation tasks in a grouped graph format: each step entry pairs the full trajectory history with all candidate actions from that state, for training reward models.
## Dataset Description
- Format: JSON Lines (JSONL), one entry per step with grouped candidates
- Size: ~2.8K step entries from 2.8K tasks
- Domains: GitLab, OpenStreetMap, Reddit, Shopping, Shopping Admin
## Data Format

Each line in `graph_dataset.jsonl` represents a single step with all candidate actions grouped together:
```json
{
  "task_id": "...",
  "goal": "Find product X and add to cart",
  "domain": "shopping",
  "step_index": 3,
  "history": [
    {"state_id": "S0", "screenshot": "...", "url": "...", "obs": "..."},
    {"state_id": "S1", "screenshot": "...", "url": "...", "obs": "..."},
    {"state_id": "S2", "screenshot": "...", "url": "...", "obs": "..."},
    {"state_id": "S3", "screenshot": "...", "url": "...", "obs": "..."}
  ],
  "current_state": {
    "state_id": "S3",
    "screenshot": "path/to/current.png",
    "url": "http://...",
    "obs": "accessibility tree..."
  },
  "candidates": [
    {
      "label": "gold",
      "action": "click('153')",
      "next_state": "S4",
      "next_screenshot": "path/to/next.png",
      "next_url": "http://...",
      "next_obs": "..."
    },
    {
      "label": "negative",
      "action": "click('88')",
      "negative_type": "hard_negative",
      "reason": "Leads to wrong page",
      "next_state": "S_bad1",
      "next_screenshot": "path/to/bad1.png",
      "next_url": "http://...",
      "next_obs": "..."
    }
  ]
}
```
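When consuming these entries, it can help to validate the grouped structure up front. Below is a minimal sketch, assuming each step entry carries exactly one gold candidate, as the example above implies (`gold_candidate` is an illustrative helper, not part of the dataset tooling):

```python
def gold_candidate(entry):
    """Return the entry's single gold candidate, raising if the entry is malformed."""
    golds = [c for c in entry['candidates'] if c['label'] == 'gold']
    if len(golds) != 1:
        raise ValueError(f"expected exactly 1 gold candidate, got {len(golds)}")
    return golds[0]

# Tiny in-memory entry mirroring the schema above
entry = {
    "candidates": [
        {"label": "gold", "action": "click('153')"},
        {"label": "negative", "action": "click('88')", "negative_type": "hard_negative"},
    ]
}
print(gold_candidate(entry)['action'])  # click('153')
```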
## Key Features

- Full History: The complete trajectory of states up to the current state
- Grouped Candidates: All candidate actions (gold + negatives) from the same state
- Next-State Screenshots: Screenshot paths for every candidate's outcome
- Rich Metadata: Task ID, domain, goal, and step index
- Negative Types: Each negative is classified as `easy_negative`, `hard_negative`, or `detour_negative`
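A quick way to see how the negative types are distributed is to tally them across the file. A minimal sketch (the `count_negative_types` helper is hypothetical, written against the schema documented above):

```python
import json
from collections import Counter

def count_negative_types(entries):
    """Tally negative_type labels across an iterable of step entries."""
    counts = Counter()
    for entry in entries:
        for cand in entry['candidates']:
            if cand['label'] == 'negative':
                counts[cand['negative_type']] += 1
    return counts

# On the real file: counts = count_negative_types(map(json.loads, open('graph_dataset.jsonl')))
sample = [{
    'candidates': [
        {'label': 'gold', 'action': "click('153')"},
        {'label': 'negative', 'action': "click('88')", 'negative_type': 'hard_negative'},
        {'label': 'negative', 'action': "click('12')", 'negative_type': 'easy_negative'},
    ]
}]
print(count_negative_types(sample))  # Counter({'hard_negative': 1, 'easy_negative': 1})
```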
## Usage

```python
import json

# Load the dataset one step entry at a time
with open('graph_dataset.jsonl', 'r') as f:
    for line in f:
        entry = json.loads(line)

        # Access task info
        goal = entry['goal']
        step_idx = entry['step_index']

        # Access history
        history = entry['history']
        current_state = entry['current_state']

        # Process candidates
        for candidate in entry['candidates']:
            if candidate['label'] == 'gold':
                print(f"Gold action: {candidate['action']}")
            else:
                print(f"Negative: {candidate['action']} ({candidate['negative_type']})")
```
## Training Reward Models

This format is ideal for:

- Preference Learning: Compare gold vs. negative actions from the same state
- Reward Modeling: Predict which action leads to goal completion
- Action Ranking: Rank all candidates by predicted reward
Example:

```python
# For each step entry, split the candidates
gold = [c for c in entry['candidates'] if c['label'] == 'gold'][0]
negatives = [c for c in entry['candidates'] if c['label'] == 'negative']

# Train the model to rank gold higher than all negatives
reward_gold = model(entry['current_state'], gold)
rewards_neg = [model(entry['current_state'], neg) for neg in negatives]
```
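One common way to turn those scores into a training signal is a pairwise hinge (margin ranking) loss that pushes the gold reward above every negative by a margin. A minimal sketch on plain floats; the margin of 1.0 is an assumption, not something prescribed by the dataset:

```python
def margin_ranking_loss(reward_gold, rewards_neg, margin=1.0):
    """Average hinge loss over negatives: penalize any negative
    whose score comes within `margin` of the gold score."""
    losses = [max(0.0, margin - (reward_gold - r)) for r in rewards_neg]
    return sum(losses) / len(losses)

# Gold at 2.0: the 0.5 negative is safely below, the 1.5 negative violates the margin
print(margin_ranking_loss(2.0, [0.5, 1.5]))  # 0.25
```

In practice you would compute the same quantity with your framework's tensors (e.g. a ranking loss over `reward_gold` and `rewards_neg`) so gradients flow back into the reward model.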
## Statistics

See `conversion_stats.json` for detailed statistics.
## License
MIT License