processbench committed on
Commit 8ca373b · verified · 1 Parent(s): 0acf8ea

Update README.md

Files changed (1): README.md +72 -0
README.md CHANGED
@@ -1,3 +1,75 @@
---
license: other
language:
- en
pretty_name: ProcessBench
task_categories:
- visual-question-answering
tags:
- robotics
- vision-language-models
- embodied-ai
- benchmark
- process-understanding
- manipulation
size_categories:
- 10K<n<100K
configs:
- config_name: default
  data_files:
  - split: eval
    path: data/processbench_eval.parquet
  - split: train
    path: data/processdata_sft.parquet
---

# ProcessBench

## Dataset Summary
ProcessBench is a process-aware vision-language model (VLM) benchmark for robotic manipulation. It contains 57,892 QA items across 12 task families: 48,841 SFT items and 9,051 evaluation items.

## What is included
- Frozen evaluation QA items
- SFT QA items or manifests
- Split metadata
- Task distribution statistics
- Human audit summary
- ProcessEval-7B bootstrap CI
- Rendered task cards
- Reconstruction metadata

## What is not included
We do not redistribute the full upstream raw videos or full frame dumps. Full visual reconstruction requires users to obtain the upstream datasets under their original licenses.
43
+
44
+ ## Data Sources
45
+ GM-100, RH20T, REASSEMBLE, and AIST-Bimanual.
46
+
47
+ ## Task Families
48
+ T1--T12 with short descriptions.
49
+
50
+ ## Data Fields
51
+ Explain item_id, source, task_id, input_type, question, choices, answer, visual_ref, source_episode_ref, builder_version.
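
For illustration, a single multiple-choice item could be represented as below; the field names follow the list above, but every value is invented, not drawn from the dataset.

```python
# Hypothetical ProcessBench item; field names follow the card above,
# all values are invented for illustration only.
example_item = {
    "item_id": "pb_eval_000001",      # unique identifier (format is an assumption)
    "source": "RH20T",                # upstream dataset the item derives from
    "task_id": "T3",                  # task family, T1-T12
    "input_type": "video",            # modality of the visual input
    "question": "Which step of the pick-and-place process has just completed?",
    "choices": ["grasp", "lift", "transport", "release"],
    "answer": "lift",
    "visual_ref": "rh20t/episode_0012/clip_000123",  # pointer into upstream visuals
    "source_episode_ref": "rh20t/episode_0012",      # originating episode
    "builder_version": "1.0.0",       # version of the dataset builder
}

# Basic well-formedness check for a multiple-choice item.
assert example_item["answer"] in example_item["choices"]
```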

## Splits
Splits enforce strict episode-, recording-, and scene-level isolation between train and eval: 48,841 SFT items and 9,051 evaluation items.

## Metrics
We report accuracy together with a random-choice baseline, a majority-class baseline, and bootstrap confidence intervals.
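
On toy data, these quantities can be sketched as follows (stdlib only; the bootstrap parameters are arbitrary choices for the sketch, not the benchmark's official settings):

```python
import random
from collections import Counter

def accuracy(preds, golds):
    """Fraction of items answered correctly."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

def bootstrap_ci(preds, golds, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval over item-level accuracy."""
    rng = random.Random(seed)
    n = len(golds)
    stats = sorted(
        accuracy([preds[i] for i in idx], [golds[i] for i in idx])
        for idx in ([rng.randrange(n) for _ in range(n)] for _ in range(n_boot))
    )
    return stats[int(alpha / 2 * n_boot)], stats[int((1 - alpha / 2) * n_boot) - 1]

# Toy predictions: 100 four-way items, 80 answered correctly.
golds = ["A", "B", "C", "D"] * 25
preds = golds[:80] + ["E"] * 20

acc = accuracy(preds, golds)                                          # 0.80
random_baseline = 1 / 4                                               # four answer choices
majority_baseline = Counter(golds).most_common(1)[0][1] / len(golds)  # 0.25 here
lo, hi = bootstrap_ci(preds, golds)                                   # 95% CI around acc
```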

## Human Audit
For the human audit, 50 items were sampled per task and independently labeled by two annotators.
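
Chance-corrected agreement between the two annotators could be summarized with Cohen's kappa; the audit labels below are toy stand-ins, not the actual audit data.

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators labeling the same items."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    # Expected chance agreement from each annotator's marginal label frequencies.
    expected = sum(ca[label] * cb[label] for label in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy audit labels for one task's 50 sampled items.
ann1 = ["valid"] * 45 + ["invalid"] * 5
ann2 = ["valid"] * 43 + ["invalid"] * 7

kappa = cohens_kappa(ann1, ann2)   # ~0.81 on this toy data
```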

## Intended Use
Diagnostic evaluation of VLM-side process understanding, and process-aware VLM adaptation.

## Out-of-Scope Use
ProcessBench is not a closed-loop robot safety benchmark, not a deployment certificate, and not a generic VLM leaderboard.

## Licenses and Terms
The code is released under the MIT license; the derived data remains subject to the upstream datasets' terms.

## Croissant Metadata
This release includes Croissant metadata with core and Responsible AI fields.

## Citation
Anonymous during review.