ProcessBench contains 12 diagnostic question families.
## SFT-Eval Split Reconstruction

### Relevant Assets

- Public QA rows for `eval` and `sft`.
- Split manifests and split counts.
- Public release-relative references for each item.

### How to use the public references

Each row exposes three reconstruction-oriented fields:

- `visual_ref`
- `source_episode_ref`
- `reconstruction_key_json`

These are designed to be stable, path-free references. They are sufficient to map a released item back to the corresponding source episode or recording once you have obtained the upstream dataset under its original terms.

### Source-specific units

- `GM-100`: grouped by `(task_id, episode_id)`.
- `RH20T`: grouped by `recording_id`.
- `REASSEMBLE`: grouped by `recording_id`.
- `AIST-Bimanual`: grouped by `recording_id`.

### Expected workflow

1. Obtain the upstream datasets under their original terms.
2. Use `source`, `source_task_id`, `source_unit_id`, `camera`, and `frame_indices_json` from the public release.
3. Reconstruct the required visual inputs with the source-specific evaluation scripts in the main codebase.
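The reference fields and workflow above can be sketched in code. The row literal below is illustrative (invented values in the release schema's field names), and `split_unit` is a hypothetical helper, not part of the release:

```python
import json

# Illustrative public QA row; field names follow the release schema,
# values are invented for the example.
row = {
    "item_id": "demo-0001",
    "source": "RH20T",
    "source_task_id": "task_0001",
    "source_unit_type": "recording",
    "source_unit_id": "rec_042",
    "camera": "cam_front",
    "frame_indices_json": "[12, 48, 96]",
}

# Decode the JSON-encoded frame reference.
frame_indices = json.loads(row["frame_indices_json"])

def split_unit(row):
    """Hypothetical helper: the source-specific split-isolation unit.

    GM-100 groups by (task_id, episode_id); the other sources group by
    recording_id. Here the public `source_task_id` / `source_unit_id`
    fields stand in for those identifiers.
    """
    if row["source"] == "GM-100":
        return (row["source_task_id"], row["source_unit_id"])
    return row["source_unit_id"]

print(frame_indices)    # [12, 48, 96]
print(split_unit(row))  # rec_042
```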
## Public QA Fields Schema

- `item_id`: Stable public item identifier.
- `split`: `eval` or `sft`.
- `source`: `GM-100`, `RH20T`, `REASSEMBLE`, or `AIST-Bimanual`.
- `source_slug`: Short source slug used in release-relative references.
- `source_task_id`: Source-side task identifier without local paths.
- `source_unit_type`: Split-isolation unit type, e.g. `episode` or `recording`.
- `source_unit_id`: Public unit identifier without local paths.
- `task_id`: Canonical paper task ID `T1` ... `T12`.
- `task_name`: Canonical paper task name.
- `task_type_legacy`: Legacy engineering task name when it differs from the paper naming.
- `input_type`: Public input format description.
- `question`: Public question text.
- `choice_A` ... `choice_F`: Flattened answer choices.
- `answer`: Public answer label in the release schema.
- `answer_text`: Public answer text mapped from `answer`.
- `num_choices`: Number of non-null answer choices.
- `num_frames`: Number of referenced frames.
- `frame_indices_json`: JSON-encoded frame indices.
- `display_labels_json`: JSON-encoded panel labels such as `["X", "Y"]` or `["X", "Z", "Y"]`.
- `camera`: Source camera identifier when available.
- `arm_type`: Source-side arm configuration tag when available.
- `visual_ref`: Release-relative visual reference string. This is not a local filesystem path.
- `source_episode_ref`: Release-relative source unit reference.
- `reconstruction_key_json`: Minimal JSON payload needed to reconstruct the sample with upstream data and the source-specific scripts.
- `task_meta_in_source`: Whether source-side task metadata/context exists upstream.
- `task_meta_public`: Always `false` in this release build. Separate structured task-meta fields are intentionally withheld.
- `split_version`: Current split version.
- `split_group_id`: Source-side split group identifier.
- `builder_version`: Public builder version tag.
- `prompt_version`: Public prompt-serialization version tag.
- `sft_target`: SFT-only supervised target string. Present only in `processdata_sft.*`.

### Notes

- `T5` is equivalent to `T_progress`.
- `T8` is equivalent to `T_temporal`.
- `T9` is equivalent to `T_binary`.
- `T8` is normalized into six explicit permutation choices so the public table remains flat and Parquet-friendly.
- `T9` is normalized to `A/B` choices even when the source data originally used `X/Y`.
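The `T8` flattening described in the notes can be illustrated with a short sketch; the exact label-to-permutation assignment below is an assumption for illustration, not the release builder's code:

```python
from itertools import permutations

# Six orderings of the frame labels X, Y, Z become the flattened
# choices choice_A ... choice_F, keeping the table Parquet-friendly.
# (The A -> XYZ assignment here is illustrative, not the release's.)
perms = ["".join(p) for p in permutations("XYZ")]
choices = {f"choice_{label}": perm for label, perm in zip("ABCDEF", perms)}

# num_choices counts non-null choice columns, as in the public schema.
num_choices = sum(v is not None for v in choices.values())
print(num_choices)          # 6
print(choices["choice_A"])  # XYZ
```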
## Prompt Templates

This file documents the public prompt serialization used by the current release build. Separate source-side task-meta prompting is intentionally disabled in this public release.

### Standard multiple-choice template

```text
(The standard multiple-choice template text was not recoverable from this dump.)
```

### `T8` Temporal Ordering template
```text
You are shown 3 frames from a robot manipulation task.
The frames are labeled X, Y, Z (these labels are arbitrary identifiers, not positional, and not time-ordered).
Determine the correct chronological order of these frames (from earliest to latest).
Choose exactly one 3-letter permutation, for example: YXZ means Y happened first, then X, then Z.

Output protocol:
- You may include brief reasoning before the final answer.
- The final line must be exactly: <ANSWER>XYZ</ANSWER>
- Do not output anything after </ANSWER>.
```
### `T9` Temporal Priority template

```text
A single comparison image shows two labeled robot-manipulation panels from the same episode.
The left-right placement of the panels and the labels are arbitrary identifiers and do not indicate temporal order.
Which labeled panel happened earlier in the real manipulation sequence, X or Y?
Choose exactly one label: X or Y.

Output protocol:
- You may include brief reasoning before the final answer.
- The final line must be exactly: <ANSWER>X</ANSWER>
- Do not output anything after </ANSWER>.
```
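Both templates share the same output protocol, so scoring can key on the trailing `<ANSWER>...</ANSWER>` tag. A minimal extraction sketch (the function name is illustrative, not from the release tooling):

```python
import re

def extract_answer(completion: str):
    """Return the label inside a trailing <ANSWER>...</ANSWER> tag.

    Allows optional whitespace after </ANSWER>; returns None when the
    tag is missing or followed by other text, per the output protocol.
    """
    m = re.search(r"<ANSWER>([A-Z]{1,3})</ANSWER>\s*$", completion)
    return m.group(1) if m else None

print(extract_answer("Frame Y shows the grasp first.\n<ANSWER>YXZ</ANSWER>"))  # YXZ
print(extract_answer("<ANSWER>X</ANSWER> extra trailing text"))                # None
```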
## Repository Contents

The release is organized as a compact, reviewable benchmark package:

```text
ProcessBench-Anom/
├── splits/                          # SFT and Eval QA entries, split summaries
│   ├── processdata_sft.jsonl
│   ├── processdata_sft.parquet
│   ├── sft_manifest.jsonl
│   ├── processdata_eval.jsonl
│   ├── processdata_eval.parquet
│   └── eval-manifest.jsonl
│
├── metadata/                        # statistics, schema, licenses, prompt templates, reconstruction notes
│   ├── split_summary.json
│   ├── task_distribution.csv
│   ├── asset_licenses.csv
│   ├── schema.md
│   ├── prompt_templates.md
│   └── reconstruction.md
│
├── examples/                        # rendered representative benchmark cards
│   └── task_cards/
│       ├── T1_phase_recognition.png
│       ├── T2_contact_detection.png
│       ├── ...
│       └── T12_primitive_chain_restoration.png
│
├── ProcessData-SFT-Qwen/            # LoRA adapter weights and training configuration
│   ├── adapter_config.json
│   ├── adapter_model.safetensors
│   ├── ...
│   └── training_config.json
│
├── ProcessData-SFT-Qwen_results/    # predictions and summary of the post-trained model
│   ├── ProcessData-SFT-Qwen_predictions.json
│   └── ProcessData-SFT-Qwen_summary.json
│
├── benchmark_card.md                # benchmark-level documentation
├── croissant.json                   # Croissant core + Responsible AI metadata
└── README.md
```

Full upstream videos and full frame dumps are not redistributed in this release.
## License and Terms