---

license: cc-by-4.0
task_categories:
- graph-ml
language:
- en
tags:
- semantic-network
- knowledge-graph
- wikidata
- graph-ml
- semantic-inference
- token-compression
pretty_name: 'zelph Binaries: Semantic Networks from Knowledge Bases (e.g., Wikidata)'
size_categories:
- 100M<n<1B
---


# zelph Binaries Dataset

This dataset provides pre-compiled binary files (.bin) for use with [zelph](https://zelph.org), a semantic network and inference system.
These binaries are derived from large knowledge bases such as Wikidata and are optimized for fast loading and efficient querying.

## Dataset Description

zelph binaries enable users to work with semantic networks without the need to import raw dumps (e.g., JSON files), which can take hours.
Instead, these .bin files load in minutes, though they require substantial RAM.

The dataset includes both full and pruned variants:

- **Full binaries**: Contain the complete network, suitable for comprehensive use but demanding high RAM (Wikidata: ~210 GB).
- **Pruned binaries**: Reduced versions with some domains removed (e.g., biology, chemistry, astronomy) to lower RAM requirements (~16 GB) while preserving core connections.

For detailed information on each binary, including sizes, creation dates, pruning details, and updates, visit [https://zelph.org/binaries](https://zelph.org/binaries).

## How to Use

1. Download the desired .bin file from this dataset.
2. In zelph interactive mode, load it with:
   ```
   .load /path/to/your-file.bin
   ```
3. Run queries, define rules, perform inference, or execute complete scripts (see [zelph on GitHub](https://github.com/acrion/zelph?tab=readme-ov-file#performing-inference) for details).
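Step 1 can be done with the `huggingface-cli` tool from the `huggingface_hub` package; the file name below is a placeholder, so check the dataset's file listing for actual names:

```shell
# Download one binary from the dataset repo (file name is illustrative)
huggingface-cli download acrion/zelph wikidata-pruned.bin \
  --repo-type dataset --local-dir .
```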

## LLM-Friendly Outputs

zelph can generate rule-based inferences in a compressed text format optimized for LLM training or processing.
This uses a token encoder that maps Wikidata IDs (Q/P) to compact UTF-8 symbols (CJK range), reducing input length while preserving structure.

This feature currently targets Wikidata but can be adapted to similar use cases.
Use it to export inferences from loaded binaries for LLM datasets; see the [command documentation on GitHub](https://github.com/acrion/zelph?tab=readme-ov-file#exporting-deduced-facts-to-file) for details.
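The idea behind such an encoder can be illustrated with a small sketch. Note that this is an illustration only, not zelph's actual implementation: the base-20992 packing into the CJK Unified Ideographs block and the parity trick for the Q/P prefix are assumptions made for the example.

```python
# Illustrative sketch of Q/P-ID-to-CJK compression -- NOT zelph's actual encoder.
CJK_BASE = 0x4E00                     # start of the CJK Unified Ideographs block
CJK_SIZE = 0x9FFF - CJK_BASE + 1      # 20,992 usable codepoints

def encode_id(wikidata_id: str) -> str:
    """Map an ID like 'Q42' or 'P31' to a short CJK string."""
    # Fold the Q/P prefix into the number via parity: Q -> even, P -> odd.
    n = 2 * int(wikidata_id[1:]) + (wikidata_id[0] == "P")
    chars = []
    while True:
        n, rem = divmod(n, CJK_SIZE)  # base-20992 digits, least significant first
        chars.append(chr(CJK_BASE + rem))
        if n == 0:
            break
    return "".join(reversed(chars))

def decode_id(token: str) -> str:
    """Invert encode_id."""
    n = 0
    for ch in token:
        n = n * CJK_SIZE + (ord(ch) - CJK_BASE)
    return f"{'P' if n % 2 else 'Q'}{n // 2}"
```

Under this scheme the 10-character ID `Q123456789` round-trips through a 2-character token, which is how long IDs shrink while the graph structure around them stays intact.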

## Citation
If you use this dataset, cite as:
```
@dataset{zelph,
  author = {Stefan Zipproth},
  title  = {zelph Binaries Dataset},
  year   = {2026},
  url    = {https://huggingface.co/datasets/acrion/zelph}
}
```