---
pretty_name: DBpediaOntoTrain
license: cc-by-4.0
language:
  - en
tags:
  - ontology
  - owl
  - turtle
  - llm
  - pretraining
  - dbpedia
size_categories:
  - 1B<n<10B
dataset_info:
  features:
    - name: file_name
      dtype: string
    - name: text
      dtype: string
    - name: PD
      dtype: float64
    - name: NTR
      dtype: float64
    - name: SC
      dtype: float64
    - name: PD_norm
      dtype: float64
    - name: NTR_norm
      dtype: float64
    - name: SC_norm
      dtype: float64
    - name: QS
      dtype: float64
    - name: token_count
      dtype: int64
    - name: token_count_acum
      dtype: int64
    - name: percent_token_acum
      dtype: float64
---


# 🧠 DBpediaOntoTrain: A Quality-Segmented Ontology Dataset for LLM Pretraining

## 📘 Overview

**DBpediaOntoTrain** is a dataset of **1,766 OWL ontologies in Turtle format**, extracted from [DBpedia Archivo](https://archivo.dbpedia.org/) and prepared for the **continual pretraining of Large Language Models (LLMs)** on ontology generation and completion tasks.

Each ontology is analyzed using a set of **semantic quality metrics**, tokenized using the **LLaMA 3.2 tokenizer**, and sorted by **Quality Score (QS)**. The dataset includes **cumulative token counts and percentages**, allowing precise and reproducible slicing for quality-aware training.

---

## 📦 Dataset Contents

- `data.json`: A JSON file where each entry contains:
  - `File Name`: name of the ontology file (`.ttl`)
  - `plain_text`: raw ontology content in Turtle syntax
  - `PD`: Property Density by Class
  - `NTR`: Non-Taxonomic Relations per Class
  - `SC`: Subclasses per Class
  - `PD_norm`, `NTR_norm`, `SC_norm`: min-max normalized versions of the above metrics
  - `QS`: Quality Score (`PD_norm + NTR_norm + SC_norm`)
  - `Token Count`: number of tokens computed using the **LLaMA 3.2 tokenizer**
  - `Token Count Accumulation`: running total of tokens over the entries, which are sorted by descending QS
  - `Percentage of Token Count Accumulation`: running percentage of total tokens across all ontologies

The entries are sorted in descending order of Quality Score (`QS`), enabling easy extraction of quality-based subsets (e.g., Q1 alone, or Q1 and Q2 combined); see the slicing sketch in the loading section below.

---

## ⚠️ Loading the Dataset

The standard `datasets.load_dataset()` function from the Hugging Face `datasets` library **does not work with this dataset**, likely due to format or hosting issues.

However, you can easily load it using Python's built-in `json` module:

```python
import json

# data.json holds one entry per ontology, pre-sorted by descending
# Quality Score (QS).
with open('path/to/data.json', 'r', encoding='utf-8') as f:
    data = json.load(f)
```

This will give you a list of dictionary entries, each representing one ontology and its associated quality metrics, ready for filtering or slicing based on your training needs.
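
For example, a quality-aware slice keeping the highest-QS ontologies that together cover the first 25% of all tokens could look like this (a minimal sketch, assuming the snake_case keys from the `dataset_info` block above, e.g. `percent_token_acum` and `text`; adjust if your copy of `data.json` uses the longer field labels listed earlier):

```python
# Minimal sketch of quality-aware slicing. Entries are pre-sorted by
# descending QS, so the cumulative token percentage is monotonic and
# a simple filter keeps the highest-quality prefix of the corpus.
TOKEN_BUDGET_PERCENT = 25.0  # e.g. roughly a top-quartile token budget

subset = [e for e in data if e["percent_token_acum"] <= TOKEN_BUDGET_PERCENT]
texts = [e["text"] for e in subset]  # raw Turtle, ready for pretraining

print(f"kept {len(subset)} of {len(data)} ontologies")
```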

---

## 📊 Quality Metrics

Each ontology is scored with:

| Metric | Description |
|--------|-------------|
| **PD** | Property Density: properties per class |
| **NTR** | Non-Taxonomic Relations: domain-specific relations per class |
| **SC** | Subclass Count: subclasses per class, reflecting hierarchical depth |
| **QS** | Quality Score: sum of the min-max normalized PD, NTR, and SC |

These metrics reflect **semantic modeling richness** rather than raw size.
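
The exact metric implementations live in the repository linked under Reproducibility below; as a rough, hypothetical sketch of how such raw counts can be derived with [`rdflib`](https://github.com/RDFLib/rdflib) (the triple patterns here are illustrative choices, not necessarily the authors' definitions):

```python
# Hypothetical sketch only: one plausible way to derive raw PD, NTR
# and SC counts with rdflib. The authors' exact definitions are in
# the scripts linked under Reproducibility.
from rdflib import Graph
from rdflib.namespace import OWL, RDF, RDFS

g = Graph()
g.parse("ontology.ttl", format="turtle")

# Avoid division by zero for files that declare no owl:Class.
n_classes = max(len(set(g.subjects(RDF.type, OWL.Class))), 1)

# PD: declared properties (object + datatype) per class.
n_props = len(set(g.subjects(RDF.type, OWL.ObjectProperty))) + \
          len(set(g.subjects(RDF.type, OWL.DatatypeProperty)))
pd = n_props / n_classes

# NTR: object properties per class, i.e. relations outside the
# rdfs:subClassOf taxonomy.
ntr = len(set(g.subjects(RDF.type, OWL.ObjectProperty))) / n_classes

# SC: rdfs:subClassOf assertions per class.
sc = len(list(g.triples((None, RDFS.subClassOf, None)))) / n_classes

print(f"PD={pd:.2f}  NTR={ntr:.2f}  SC={sc:.2f}")
```

QS is then obtained by min-max normalizing each metric across all 1,766 ontologies and summing the three normalized values.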

---

## 🧪 Intended Use

- Continual pretraining of LLMs on semantic data
- Research in ontology learning, alignment, enrichment
- Studying the effect of data quality on model generalization and reasoning

This dataset supports the research study:

> **Enhancing LLM Ontology Generation: The Role of Quality Semantic Data**  
> Miquel Canal-Esteve, Yoan Gutiérrez, José Abreu-Salas (submitted to *ICT Express*, 2025)

---

## 🛠️ Tokenization

- Tokenized using the **LLaMA 3.2-1B tokenizer** (see the sketch after this list)
- Total tokens: **1.25 billion**
- Cumulative token fields allow extracting top-N% token subsets based on QS
- Token overlap and LLM input chunking are described in the accompanying paper
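
To recompute per-ontology counts, something along these lines should work (a minimal sketch assuming the gated `meta-llama/Llama-3.2-1B` checkpoint on the Hub; whether the published counts include special tokens is an assumption to verify against the paper):

```python
# Sketch: recompute token counts with the LLaMA 3.2-1B tokenizer.
# meta-llama/Llama-3.2-1B is gated on the Hugging Face Hub, so
# request access and authenticate before running this.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")

def count_tokens(text: str) -> int:
    # add_special_tokens=False counts only the raw Turtle text.
    return len(tokenizer.encode(text, add_special_tokens=False))

# Key name per the dataset_info block; adjust if your copy of
# data.json uses the longer field labels.
total = sum(count_tokens(entry["text"]) for entry in data)
print(f"total tokens: {total:,}")
```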

---

## 💡 Reproducibility

The repository includes:
- Metric calculation scripts using [`rdflib`](https://github.com/RDFLib/rdflib)
- Tokenization scripts with Hugging Face libraries
- Pretraining configs and logs

Repository:  
👉 [https://github.com/miquelcanalesteve/LLM4Onto/](https://github.com/miquelcanalesteve/LLM4Onto/)

---

## 📄 Citation

```bibtex
@misc{canal2025dbpediaontotrain,
  author    = {Miquel Canal-Esteve and Yoan Gutiérrez and José Abreu-Salas},
  title     = {DBpediaOntoTrain: A Quality-Segmented Ontology Dataset for LLM Pretraining},
  year      = {2025},
  url       = {https://github.com/miquelcanalesteve/LLM4Onto/}
}
```