---
license: mit
language:
- en
tags:
- knowledge-graph
- causal-inference
- rag
- zero-hallucination
- triplets
pretty_name: dotcausal Dataset Loader
size_categories:
- n<1K
---
# dotcausal - HuggingFace Dataset Loader
Load `.causal` binary knowledge graph files as HuggingFace Datasets.
## What is .causal?
The `.causal` format is a binary knowledge graph with **embedded deterministic inference**. It solves the fundamental problem of AI-assisted discovery: **LLMs hallucinate, databases don't reason**.
| Technology | What it does | What's missing |
|------------|--------------|----------------|
| **SQLite** | Stores facts | No reasoning |
| **Vector RAG** | Finds similar text | No logic |
| **LLMs** | Reasons creatively | Hallucination risk |
| **.causal** | Stores + Reasons | **Zero hallucination** |
### Key Features
- **30-40x faster queries** than SQLite
- **50-200% fact amplification** through transitive chains
- **Zero hallucination** - pure deterministic logic
- **Full provenance** - trace every inference
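
The fact-amplification and provenance ideas can be illustrated with a minimal, self-contained sketch. This is plain Python, not the `dotcausal` engine itself: chaining two explicit triplets through a shared entity yields a new inferred triplet whose confidence is the product of its sources and whose provenance records the chain.

```python
# Illustrative transitive-chain inference (a sketch of the concept,
# not the actual dotcausal binary-format implementation).

def infer_transitive(triplets):
    """Derive A -> C whenever A -> B and B -> C exist, with provenance."""
    derived = []
    for a in triplets:
        for b in triplets:
            if a["outcome"] == b["trigger"]:
                derived.append({
                    "trigger": a["trigger"],
                    "mechanism": f'{a["mechanism"]}->{b["mechanism"]}',
                    "outcome": b["outcome"],
                    # Confidence decays multiplicatively along the chain.
                    "confidence": a["confidence"] * b["confidence"],
                    "is_inferred": True,
                    "source": "",
                    # Provenance: which explicit facts produced this one.
                    "provenance": [
                        f'{a["trigger"]}-{a["mechanism"]}-{a["outcome"]}',
                        f'{b["trigger"]}-{b["mechanism"]}-{b["outcome"]}',
                    ],
                })
    return derived

explicit = [
    {"trigger": "SARS-CoV-2", "mechanism": "damages", "outcome": "mitochondria",
     "confidence": 0.9, "is_inferred": False, "source": "paper_A.pdf", "provenance": []},
    {"trigger": "mitochondria", "mechanism": "regulates", "outcome": "ATP production",
     "confidence": 0.8, "is_inferred": False, "source": "paper_B.pdf", "provenance": []},
]

inferred = infer_transitive(explicit)
print(inferred[0]["trigger"], "->", inferred[0]["outcome"])  # SARS-CoV-2 -> ATP production
print(round(inferred[0]["confidence"], 2))                   # 0.72
```

Because every derived fact carries the explicit triplets it came from, each inference can be traced back to its sources deterministically.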
## Installation
```bash
pip install datasets dotcausal
```
## Usage
### Load from local .causal file
```python
from datasets import load_dataset
# Load your .causal file
ds = load_dataset("chkmie/dotcausal", data_files="knowledge.causal")
print(ds["train"][0])
# {'trigger': 'SARS-CoV-2', 'mechanism': 'damages', 'outcome': 'mitochondria',
# 'confidence': 0.9, 'is_inferred': False, 'source': 'paper_A.pdf', 'provenance': []}
```
### With configuration
```python
# Only explicit triplets (no inferred)
ds = load_dataset(
    "chkmie/dotcausal",
    "explicit_only",
    data_files="knowledge.causal",
)

# High confidence only (>= 0.8)
ds = load_dataset(
    "chkmie/dotcausal",
    "high_confidence",
    data_files="knowledge.causal",
)
```
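
Conceptually, the two named configurations are row filters over the triplet fields. A minimal sketch of the equivalent behavior in plain Python (illustrative only, not the loader's internals):

```python
# Sketch of what the named configurations filter for,
# using rows shaped like the dataset schema.

rows = [
    {"trigger": "A", "mechanism": "m", "outcome": "B",
     "confidence": 0.9, "is_inferred": False},
    {"trigger": "A", "mechanism": "m", "outcome": "C",
     "confidence": 0.6, "is_inferred": True},
]

# "explicit_only": drop inferred triplets.
explicit_only = [r for r in rows if not r["is_inferred"]]

# "high_confidence": keep triplets with confidence >= 0.8.
high_confidence = [r for r in rows if r["confidence"] >= 0.8]

print(len(explicit_only), len(high_confidence))  # 1 1
```

The same filters can also be applied after loading with the `datasets` library's built-in `Dataset.filter`.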
### Multiple files / splits
```python
ds = load_dataset(
    "chkmie/dotcausal",
    data_files={
        "train": "train_knowledge.causal",
        "test": "test_knowledge.causal",
    },
)
```
## Dataset Schema
| Field | Type | Description |
|-------|------|-------------|
| `trigger` | string | The cause/trigger entity |
| `mechanism` | string | The relationship type |
| `outcome` | string | The effect/outcome entity |
| `confidence` | float32 | Confidence score (0-1) |
| `is_inferred` | bool | `True` if derived by inference, `False` if stated explicitly |
| `source` | string | Original source (e.g., paper) |
| `provenance` | list[string] | Source triplets for inferred facts |
## Creating .causal Files
```python
from dotcausal import CausalWriter
writer = CausalWriter()

writer.add_triplet(
    trigger="SARS-CoV-2",
    mechanism="damages",
    outcome="mitochondria",
    confidence=0.9,
    source="paper_A.pdf",
)

writer.save("knowledge.causal")
```
## References
- **PyPI**: https://pypi.org/project/dotcausal/
- **GitHub**: https://github.com/DT-Foss/dotcausal
- **Whitepaper**: https://doi.org/10.5281/zenodo.18326222
## Citation
```bibtex
@article{foss2026causal,
  author  = {Foss, David Tom},
  title   = {The .causal Format: Deterministic Inference for AI-Assisted Hypothesis Amplification},
  journal = {Zenodo},
  year    = {2026},
  doi     = {10.5281/zenodo.18326222}
}
```
## License
MIT