PCCA Industrial CKG Audit Benchmark
This dataset contains the benchmark cases used by the PCCA framework:
PCCA: A Hypothesis-Driven Prompt-Conditioned Causal Auditing Framework for Industrial Knowledge Graphs
The benchmark is designed for auditing industrial causal knowledge graphs (CKGs). Each case provides an initial CKG, generated sensor observations, maintenance-ticket style records, expert-knowledge documents, and ground-truth audit targets.
Dataset Summary
- Domain: industrial causal knowledge graph auditing
- Data type: semi-synthetic graph, time-series, ticket, and text artifacts
- Number of cases: 225
- Scales: smallcase, mediumcase, bigcase
- Scenario groups: BCSA, CEDA, Mixed
- Cases per scale/group: 25
The dataset is semi-synthetic. It is intended to preserve realistic industrial structure without releasing private production records.
Files
The main archive is:
pcca_full_benchmark_cases.zip
Recommended companion files:
benchmark_manifest.csv
dataset_summary.json
After extraction, the expected layout is:
.
|-- bigcase/
| |-- BCSA/
| |-- CEDA/
| `-- Mixed/
|-- mediumcase/
| |-- BCSA/
| |-- CEDA/
| `-- Mixed/
`-- smallcase/
|-- BCSA/
|-- CEDA/
`-- Mixed/
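Given this layout, a small sanity check can confirm the extraction matches the advertised case counts. The helper below is a sketch, not part of the benchmark code; it only assumes the directory names shown above and that each case is a folder.

```python
from pathlib import Path

SCALES = ["smallcase", "mediumcase", "bigcase"]
GROUPS = ["BCSA", "CEDA", "Mixed"]

def count_cases(root):
    """Count case folders under each scale/group directory of the layout above."""
    root = Path(root)
    counts = {}
    for scale in SCALES:
        for group in GROUPS:
            case_dir = root / scale / group
            # Each scale/group directory is expected to hold 25 case folders.
            counts[f"{scale}/{group}"] = (
                sum(1 for p in case_dir.iterdir() if p.is_dir())
                if case_dir.is_dir()
                else 0
            )
    return counts
```

After a full extraction, `count_cases("data/full_benchmark")` should report 25 cases for every scale/group pair, 225 in total.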
Download
pip install huggingface_hub
huggingface-cli download Yukko-kurisu/PCCA pcca_full_benchmark_cases.zip --repo-type dataset --local-dir data
python -m zipfile -e data/pcca_full_benchmark_cases.zip data/full_benchmark
Python API:
from huggingface_hub import hf_hub_download
zip_path = hf_hub_download(
repo_id="Yukko-kurisu/PCCA",
repo_type="dataset",
filename="pcca_full_benchmark_cases.zip",
)
print(zip_path)
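The downloaded archive can then be unpacked with the standard library. A minimal sketch (the output directory name is an arbitrary choice, mirroring the CLI example above):

```python
import zipfile
from pathlib import Path

def extract_benchmark(zip_path, out_dir="data/full_benchmark"):
    """Unpack the benchmark archive and return the extraction root."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(out)
    return out
```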
Case Schema
Each case may contain:
- causal_knowledge_graph.json: initial CKG nodes, edges, mappings, and blind-spot markers
- case_data.json: scenario and case metadata
- sensor_data.csv: generated time-series sensor observations
- fault_tickets.json: generated maintenance-ticket style records
- expert_summary.json: document coverage summary
- ground_truth.json: raw ground-truth audit targets
- processed_ground_truth.json: evaluation-ready evidence zones and target edges
- expert_knowledge/: generated expert documents in text, markdown, and JSON formats
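A case folder following this schema could be loaded roughly as below. This is an illustrative sketch: since each case "may contain" these files, every artifact is treated as optional, and no assumption is made about the JSON field names inside them.

```python
import csv
import json
from pathlib import Path

def load_case(case_dir):
    """Load whichever of the per-case artifacts listed above are present."""
    case_dir = Path(case_dir)
    case = {}
    for name in (
        "causal_knowledge_graph.json",
        "case_data.json",
        "fault_tickets.json",
        "expert_summary.json",
        "ground_truth.json",
        "processed_ground_truth.json",
    ):
        path = case_dir / name
        if path.exists():
            # Keyed by file stem, e.g. case["ground_truth"].
            case[path.stem] = json.loads(path.read_text(encoding="utf-8"))
    sensors = case_dir / "sensor_data.csv"
    if sensors.exists():
        with sensors.open(newline="", encoding="utf-8") as f:
            case["sensor_data"] = list(csv.DictReader(f))
    return case
```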
Splits
The benchmark does not define train/validation/test splits. Methods are evaluated by case scale and scenario group. The companion code repository provides the evaluation scripts and paper-result tables.
Code repository:
https://github.com/Yuuko-kurisu/PCCA
Intended Uses
This dataset is intended for:
- reproducing PCCA paper experiments
- evaluating industrial CKG auditing methods
- developing graph-based blind-spot localization and ranking algorithms
- studying hypothesis-conditioned graph neural audit models
Limitations
The benchmark is semi-synthetic and should not be interpreted as raw production data. Results on this dataset should be used for method comparison and reproducibility checks, not as direct evidence of deployed industrial causal validity.
Privacy
The released cases do not contain real personal emails, phone numbers, passwords, or raw partner production logs. Maintenance-ticket and expert-document contents are generated for benchmark use.
Citation
If you use this dataset, please cite:
@article{jiang2026pcca,
title = {PCCA: A Hypothesis-Driven Prompt-Conditioned Causal Auditing Framework for Industrial Knowledge Graphs},
author = {Jiang, Hongwei and You, Jiapeng and Shi, Jiayu and Chen, Zhiyang and Gou, Huaxing and Ming, Xinguo and Sun, Poly Z. H.},
journal = {IEEE Transactions on Industrial Informatics},
year = {2026}
}