# FactoryBench

FactoryBench is a benchmark for evaluating machine-behavior reasoning in time-series models and LLMs over industrial robotic telemetry. Question-answer pairs are organised into four levels inspired by Pearl's causal hierarchy:
| Level | Capability | Example |
|---|---|---|
| L1 — State | Identify the operational state from raw signals | "Which fault, if any, is occurring in this episode?" |
| L2 — Intervention | Predict the effect of an intervention | "How would the joint torques change if the payload were doubled?" |
| L3 — Counterfactual | Reason about alternative histories | "Would the collision still have occurred if the speed had been 50% lower?" |
| L4 — Decision | Engineering decision-making (troubleshooting + optimisation) | "Given this anomaly, what is the most likely root cause and remediation?" |
The benchmark is grounded in FactoryWave, a dense multivariate telemetry dataset collected from a UR3 collaborative robot (125 Hz) and a KUKA KR10 industrial arm (83 Hz), supplemented with the AURSAD and voraus-AD open-source datasets.
## Dataset summary
- 70,918 Q&A pairs across four causal levels and three splits (train/validation/test).
- 5 answer formats: single-select MCQ, multi-select MCQ, ranking, tensor/numerical, free-form (judged by an LLM-as-judge voting protocol).
- Telemetry from real industrial robots with systematic fault injection (27 atomic mechanisms across pick-and-place, screwing, and peg-in-hole tasks).
## Repository layout

```
FactoryBench/
├── factorybench_qa/                   # Question-answer pairs
│   ├── level_1/{train,validation,test}.jsonl
│   ├── level_2/{train,validation,test}.jsonl
│   ├── level_3/{train,validation,test}.jsonl
│   └── level_4/{train,validation,test}.jsonl
├── knowledge_graph/                   # Combined knowledge graph
│   ├── knowledge_graph.json           # Machines, grippers, tasks, events, faults, anomalies, error→protocol map, relevance specs
│   └── SCHEMA.md                      # Field-level schema documentation
└── factorywave/                       # Underlying telemetry & metadata
    ├── episodes.parquet               # Episode-level metadata (9,728 episodes)
    ├── flow.parquet                   # Task flow definitions
    ├── kuka_signals.parquet           # KUKA KR10 signals (~83 Hz, 1,428 episodes)
    ├── ur_signals.parquet             # UR3 signals (~125 Hz, 3,076 episodes)
    ├── ur_signals_10hz.parquet        # UR3 signals (10 Hz, 3,984 episodes — disjoint from ur_signals)
    └── ur_screwdriver_signals.parquet # UR3 screwdriver subset (~125 Hz, 1,240 episodes)
```
**Note on UR coverage.** `ur_signals.parquet` (~125 Hz) and `ur_signals_10hz.parquet` (10 Hz) cover disjoint UR3 episode subsets — no episode appears in both files. Together they span 7,060 distinct UR3 episodes; `ur_screwdriver_signals.parquet` adds another 1,240 screwdriver-task episodes. Each row in `episodes.parquet` corresponds to exactly one signal table.
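The disjointness claim above can be checked directly. A minimal sketch using pandas, which assumes each signal table carries an `episode` column matching `episodes.parquet` — the column name is an assumption for illustration, not confirmed by this card:

```python
import pandas as pd

def episode_overlap(df_a: pd.DataFrame, df_b: pd.DataFrame, key: str = "episode") -> set:
    """Return episode ids present in both signal tables (expected to be empty)."""
    return set(df_a[key].unique()) & set(df_b[key].unique())

# Usage against the real files (requires network access):
# root = "hf://datasets/FactoryBench/FactoryBench/factorywave"
# ur_125 = pd.read_parquet(f"{root}/ur_signals.parquet", columns=["episode"])
# ur_10 = pd.read_parquet(f"{root}/ur_signals_10hz.parquet", columns=["episode"])
# assert not episode_overlap(ur_125, ur_10), "episode subsets should be disjoint"
```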
## Knowledge graph

`knowledge_graph/knowledge_graph.json` is the structured "world model" that grounds FactoryBench Q&A items:

- machine and gripper capability tables,
- the task/event vocabulary,
- the fault root-cause catalogue,
- the observable-anomaly catalogue,
- the error→protocol mapping used to derive ground-truth answers for L4 troubleshooting items,
- an anomaly severity ranking,
- the relevance specs that drive fault-aware sub-series sampling.

See `knowledge_graph/SCHEMA.md` for field-level documentation.
```python
import json, urllib.request

url = "https://huggingface.co/datasets/FactoryBench/FactoryBench/resolve/main/knowledge_graph/knowledge_graph.json"
kg = json.loads(urllib.request.urlopen(url).read())

# Machine spec lookup
machines_by_id = {m["machine_id"]: m for m in kg["machines"]}

# Error → operator protocol (the ground truth for L4 troubleshooting answers)
protocol_for = {e["root_cause"]: e["ur3_protocol"] for e in kg["root_cause_error_mapping"]}
print(protocol_for["collision_rigid_object"])
```
## Q&A pair counts
| Level | Train | Val | Test | Total |
|---|---|---|---|---|
| L1 | 12,674 | 1,338 | 1,309 | 15,321 |
| L2 | 33,311 | 3,428 | 3,487 | 40,226 |
| L3 | 2,353 | 265 | 321 | 2,939 |
| L4 | 9,949 | 1,251 | 1,232 | 12,432 |
| Total | 58,287 | 6,282 | 6,349 | 70,918 |
## Q&A fields

Each line in `factorybench_qa/level_*/*.jsonl` is a single Q&A item:
| Field | Description |
|---|---|
| `id` | Unique item identifier |
| `level` | Causal level (1–4) |
| `template_id` | Question template the item was generated from |
| `template_type` | Answer format (`single_choice`, `multi_choice`, `ranking`, `tensor`, `free_form`) |
| `hides` | Channels/fields hidden from the model in this item |
| `question` | Natural-language question |
| `options` | Answer options (for MCQ/ranking templates) |
| `answer` | Ground-truth answer |
| `root_cause` | Underlying fault/cause (Level 4 only) |
| `acceptance_bounds` | Tolerance for numerical answers |
| `provenance` | Source episode(s) and channels used to derive the item |
| `context` | Time-series and metadata context exposed to the model |
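Numerical (`tensor`) answers are graded against `acceptance_bounds` rather than exact match. A simplified grading sketch, assuming the bounds form an inclusive [min, max] interval and that the choice/ranking formats reduce to set or list equality — this is an illustration, not the benchmark's official scorer (free-form items are judged by the LLM-as-judge voting protocol and are omitted here):

```python
def grade(item: dict, prediction) -> bool:
    """Simplified per-format grader; free_form items require the LLM judge."""
    t = item["template_type"]
    if t == "tensor":
        # Accept any value inside the inclusive tolerance window (assumption)
        bounds = item.get("acceptance_bounds") or {}
        lo = bounds.get("min", item["answer"])
        hi = bounds.get("max", item["answer"])
        return lo <= prediction <= hi
    if t == "multi_choice":
        return set(prediction) == set(item["answer"])
    if t == "ranking":
        return list(prediction) == list(item["answer"])
    if t == "single_choice":
        return prediction == item["answer"]
    raise NotImplementedError(f"{t}: judged by the LLM-as-judge protocol")

# Example: a numeric item with a tolerance window
item = {"template_type": "tensor", "answer": 0,
        "acceptance_bounds": {"min": 0, "max": 296}}
print(grade(item, 150))  # True: inside bounds
print(grade(item, 500))  # False: outside bounds
```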
## Loading the data

```python
import json, urllib.request

import pandas as pd
from datasets import load_dataset

# Load a single level/split
ds = load_dataset(
    "FactoryBench/FactoryBench",
    data_files="factorybench_qa/level_1/test.jsonl",
    split="train",
)

# Load underlying telemetry
root = "hf://datasets/FactoryBench/FactoryBench/factorywave"
episodes = pd.read_parquet(f"{root}/episodes.parquet")
ur_125 = pd.read_parquet(f"{root}/ur_signals.parquet")
ur_10 = pd.read_parquet(f"{root}/ur_signals_10hz.parquet")  # different episodes
kuka = pd.read_parquet(f"{root}/kuka_signals.parquet")
screw = pd.read_parquet(f"{root}/ur_screwdriver_signals.parquet")

# Load the combined knowledge graph (machines, faults, error→protocol, ...)
kg = json.loads(urllib.request.urlopen(
    "https://huggingface.co/datasets/FactoryBench/FactoryBench/resolve/main/knowledge_graph/knowledge_graph.json"
).read())
```
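To pull all twelve level/split files in one call, a `data_files` mapping can be built from the repository layout. A minimal sketch — the paths follow the layout documented above; adjust if the layout changes:

```python
def qa_files(root: str = "factorybench_qa") -> dict:
    """Map 'level_N/split' keys to JSONL paths, following the repository layout."""
    return {
        f"level_{lvl}/{split}": f"{root}/level_{lvl}/{split}.jsonl"
        for lvl in (1, 2, 3, 4)
        for split in ("train", "validation", "test")
    }

# Feed the mapping to load_dataset to fetch everything at once:
# from datasets import load_dataset
# ds = load_dataset("FactoryBench/FactoryBench", data_files=qa_files())
```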
## Citation
If you use FactoryBench, please cite the dataset and the two upstream open-source datasets it incorporates (AURSAD and voraus-AD):
```bibtex
@misc{anonymous2026factorybench,
  title  = {FactoryBench: Evaluating Industrial Machine Understanding},
  author = {Anonymous},
  year   = {2026},
  note   = {Submission under double-blind review}
}

@article{leporowski2022aursad,
  title   = {{AURSAD}: Universal Robot Screwdriving Anomaly Detection Dataset},
  author  = {Leporowski, B{\l}a{\.z}ej and Tola, Daniella and Hansen, Christian and Iosifidis, Alexandros},
  journal = {arXiv preprint arXiv:2202.03211},
  year    = {2022}
}

@misc{brockmann2024vorausad,
  title         = {voraus-{AD}: A New Dataset for Anomaly Detection in Robot Applications},
  author        = {Brockmann, Jan Thie{\ss} and Rudolph, Marco and Rosenhahn, Bodo and Wandt, Bastian},
  year          = {2024},
  eprint        = {2311.04153},
  archivePrefix = {arXiv},
  primaryClass  = {cs.RO}
}
```
## Intended use & limitations

**Intended use.** Benchmark evaluation of LLMs and time-series models on structured industrial Q&A reasoning tasks (state, intervention, counterfactual, decision-making).

**Limitations.** Domain-specific to factory and industrial robotic scenarios; may not generalise to open-domain Q&A. Faults are atomic and drawn from a closed catalogue of 27 physically injected mechanisms, unlike the compound or gradual faults encountered in the wild. Q&A counts are also imbalanced across levels: Level 2 has roughly 14× as many items as Level 3.

**Out of scope.** Not intended for deployment in safety-critical, medical, legal, or financial decision systems without further validation by domain experts.
## License
Released under the MIT License.