
# dotcausal - HuggingFace Dataset Loader

Load .causal binary knowledge graph files as HuggingFace Datasets.

## What is .causal?

The .causal format is a binary knowledge graph with embedded deterministic inference. It solves the fundamental problem of AI-assisted discovery: LLMs hallucinate, databases don't reason.

| Technology | What it does | What's missing |
|------------|--------------------|--------------------|
| SQLite | Stores facts | No reasoning |
| Vector RAG | Finds similar text | No logic |
| LLMs | Reasons creatively | Hallucination risk |
| .causal | Stores + reasons | Zero hallucination |

## Key Features

- 30-40x faster queries than SQLite
- 50-200% fact amplification through transitive chains
- Zero hallucination - pure deterministic logic
- Full provenance - trace every inference
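
Transitive amplification is easy to picture: if A causes B and B causes C, the new fact "A causes C" can be derived deterministically. The sketch below is plain Python, not dotcausal's actual implementation; it only illustrates how chaining with confidence propagation and provenance tracking could work, using the record fields from the schema in this card:

```python
# Illustrative sketch of transitive-chain inference; NOT the dotcausal internals.

def amplify(triplets):
    """Derive new facts by chaining A->B and B->C into A->C.

    An inferred fact's confidence is the product of its parents'
    confidences, and its provenance lists the explicit facts it
    was derived from.
    """
    inferred = []
    for a in triplets:
        for b in triplets:
            if a["outcome"] == b["trigger"]:
                inferred.append({
                    "trigger": a["trigger"],
                    "mechanism": f'{a["mechanism"]}->{b["mechanism"]}',
                    "outcome": b["outcome"],
                    "confidence": a["confidence"] * b["confidence"],
                    "is_inferred": True,
                    "source": "",
                    "provenance": [
                        f'{a["trigger"]} {a["mechanism"]} {a["outcome"]}',
                        f'{b["trigger"]} {b["mechanism"]} {b["outcome"]}',
                    ],
                })
    return inferred

facts = [
    {"trigger": "SARS-CoV-2", "mechanism": "damages", "outcome": "mitochondria",
     "confidence": 0.9, "is_inferred": False, "source": "paper_A.pdf", "provenance": []},
    {"trigger": "mitochondria", "mechanism": "regulates", "outcome": "apoptosis",
     "confidence": 0.8, "is_inferred": False, "source": "paper_B.pdf", "provenance": []},
]
new_facts = amplify(facts)
# one inferred fact: SARS-CoV-2 -> apoptosis, confidence 0.9 * 0.8 = 0.72
```

Multiplying confidences means an inferred fact is never more confident than any fact in its chain, which keeps long chains from inflating certainty.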

## Installation

```
pip install datasets dotcausal
```

## Usage

### Load from local .causal file

```python
from datasets import load_dataset

# Load your .causal file
ds = load_dataset("chkmie/dotcausal", data_files="knowledge.causal")

print(ds["train"][0])
# {'trigger': 'SARS-CoV-2', 'mechanism': 'damages', 'outcome': 'mitochondria',
#  'confidence': 0.9, 'is_inferred': False, 'source': 'paper_A.pdf', 'provenance': []}
```

### With configuration

```python
# Only explicit triplets (no inferred)
ds = load_dataset(
    "chkmie/dotcausal",
    "explicit_only",
    data_files="knowledge.causal",
)

# High confidence only (>= 0.8)
ds = load_dataset(
    "chkmie/dotcausal",
    "high_confidence",
    data_files="knowledge.causal",
)
```
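
Conceptually, the named configurations are just row filters over the record schema documented below. The predicates here are illustrative (the names are mine, not part of the dotcausal API), but they capture the same selections:

```python
# Hypothetical predicates mirroring the named configs above;
# the real selection logic lives inside the dotcausal loader.

def explicit_only(row):
    return not row["is_inferred"]

def high_confidence(row, threshold=0.8):
    return row["confidence"] >= threshold

rows = [
    {"trigger": "A", "mechanism": "causes", "outcome": "B",
     "confidence": 0.95, "is_inferred": False},
    {"trigger": "A", "mechanism": "causes", "outcome": "C",
     "confidence": 0.6, "is_inferred": True},
]
explicit = [r for r in rows if explicit_only(r)]
confident = [r for r in rows if high_confidence(r)]
```

On an already-loaded dataset, the same selections can be applied after the fact with the `datasets` library's `ds.filter(...)`, e.g. `ds.filter(explicit_only)`.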

### Multiple files / splits

```python
ds = load_dataset(
    "chkmie/dotcausal",
    data_files={
        "train": "train_knowledge.causal",
        "test": "test_knowledge.causal",
    },
)
```

## Dataset Schema

| Field | Type | Description |
|---------------|----------------|--------------------------------------|
| `trigger` | string | The cause/trigger entity |
| `mechanism` | string | The relationship type |
| `outcome` | string | The effect/outcome entity |
| `confidence` | float32 | Confidence score (0-1) |
| `is_inferred` | bool | Whether derived or explicit |
| `source` | string | Original source (e.g., paper) |
| `provenance` | list[string] | Source triplets for inferred facts |
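
If you assemble records by hand before writing them, a quick structural check against this schema catches mistakes early. A minimal validator sketch in plain Python (the field names and types come from the table above; the validator itself is illustrative, not part of dotcausal):

```python
# Minimal structural check for the record schema above (illustrative only).
SCHEMA = {
    "trigger": str,
    "mechanism": str,
    "outcome": str,
    "confidence": float,
    "is_inferred": bool,
    "source": str,
    "provenance": list,
}

def validate(record):
    """Raise if a record is missing a field, has a wrong type,
    or has a confidence outside [0, 1]."""
    for field, typ in SCHEMA.items():
        if field not in record:
            raise ValueError(f"missing field: {field}")
        if not isinstance(record[field], typ):
            raise TypeError(f"{field} should be {typ.__name__}")
    if not 0.0 <= record["confidence"] <= 1.0:
        raise ValueError("confidence must be in [0, 1]")
    return True

record = {"trigger": "SARS-CoV-2", "mechanism": "damages",
          "outcome": "mitochondria", "confidence": 0.9,
          "is_inferred": False, "source": "paper_A.pdf", "provenance": []}
```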

## Creating .causal Files

```python
from dotcausal import CausalWriter

writer = CausalWriter()
writer.add_triplet(
    trigger="SARS-CoV-2",
    mechanism="damages",
    outcome="mitochondria",
    confidence=0.9,
    source="paper_A.pdf",
)
writer.save("knowledge.causal")
```
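
The actual .causal on-disk layout is defined by the dotcausal library and not reproduced here. Purely as an illustration of what "binary knowledge graph file" means in practice, here is a made-up encoding that packs one triplet into a compact record: three length-prefixed UTF-8 strings followed by a float32 confidence. Every detail of this layout is hypothetical:

```python
# Made-up binary encoding for a causal triplet; NOT the real .causal format.
import struct

def pack_triplet(trigger, mechanism, outcome, confidence):
    """Pack three length-prefixed UTF-8 strings and a float32 confidence."""
    parts = b""
    for s in (trigger, mechanism, outcome):
        data = s.encode("utf-8")
        parts += struct.pack("<H", len(data)) + data  # 2-byte length prefix
    return parts + struct.pack("<f", confidence)

def unpack_triplet(buf):
    """Inverse of pack_triplet: read three strings, then the confidence."""
    fields, offset = [], 0
    for _ in range(3):
        (n,) = struct.unpack_from("<H", buf, offset)
        offset += 2
        fields.append(buf[offset:offset + n].decode("utf-8"))
        offset += n
    (conf,) = struct.unpack_from("<f", buf, offset)
    return (*fields, conf)

blob = pack_triplet("SARS-CoV-2", "damages", "mitochondria", 0.9)
```

A real format would also need a header with a version and triplet count, plus the inference and provenance data described above.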

## Citation

```bibtex
@article{foss2026causal,
  author = {Foss, David Tom},
  title = {The .causal Format: Deterministic Inference for AI-Assisted Hypothesis Amplification},
  journal = {Zenodo},
  year = {2026},
  doi = {10.5281/zenodo.18326222}
}
```

## License

MIT
