---
pretty_name: DBpediaOntoTrain
license: cc-by-4.0
language:
- en
tags:
- ontology
- owl
- turtle
- llm
- pretraining
- dbpedia
size_categories:
- 1B<n<10B
dataset_info:
  features:
  - name: file_name
    type: string
  - name: text
    type: string
  - name: PD
    type: float
  - name: NTR
    type: float
  - name: SC
    type: float
  - name: PD_norm
    type: float
  - name: NTR_norm
    type: float
  - name: SC_norm
    type: float
  - name: QS
    type: float
  - name: token_count
    type: int
  - name: token_count_acum
    type: int
  - name: percent_token_acum
    type: float
---

# 🧠 DBpediaOntoTrain: A Quality-Segmented Ontology Dataset for LLM Pretraining

## 📘 Overview

**DBpediaOntoTrain** is a dataset of **1,766 OWL ontologies in Turtle format**, extracted from [DBpedia Archivo](https://archivo.dbpedia.org/) and prepared for **continual pretraining of Large Language Models (LLMs)** on ontology generation and completion tasks.

Each ontology is analyzed with a set of **semantic quality metrics**, tokenized with the **LLaMA 3.2 tokenizer**, and sorted by **Quality Score (QS)**. The dataset includes **cumulative token counts and percentages**, allowing precise and reproducible slicing for quality-aware training.

---

## 📦 Dataset Contents

- `data.json`: a JSON file in which each entry contains:
  - `File Name`: name of the ontology file (`.ttl`)
  - `plain_text`: raw ontology content in Turtle syntax
  - `PD`: Property Density per class
  - `NTR`: Non-Taxonomic Relations per class
  - `SC`: Subclasses per class
  - `PD_norm`, `NTR_norm`, `SC_norm`: min-max normalized versions of the metrics above
  - `QS`: Quality Score (`PD_norm + NTR_norm + SC_norm`)
  - `Token Count`: number of tokens, computed with the **LLaMA 3.2 tokenizer**
  - `Token Count Accumulation`: cumulative token count (with entries sorted by descending QS)
  - `Percentage of Token Count Accumulation`: running percentage of the total tokens across all ontologies

The dataset is sorted in descending order of Quality Score (`QS`), enabling easy extraction of quality-based subsets (e.g., Q1, Q1+Q2, etc.).
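
The cumulative-percentage field makes such slicing straightforward. A minimal sketch (a hypothetical helper, not part of the released scripts, assuming the field names listed above and that `data.json` is already sorted by descending QS):

```python
import json

def quality_subset(entries, max_percent=25.0):
    """Keep the ontologies covering the top `max_percent`% of tokens.

    `entries` is the list loaded from data.json; because it is pre-sorted
    by descending QS, the running percentage is monotonically increasing,
    so a simple threshold selects the highest-quality token budget.
    """
    return [
        e for e in entries
        if e["Percentage of Token Count Accumulation"] <= max_percent
    ]

# Usage: entries = json.load(open("data.json", encoding="utf-8"))
#        q1 = quality_subset(entries, max_percent=25.0)
```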

---

## 📊 Quality Metrics

Each ontology is scored with:

| Metric | Description |
|--------|-------------|
| **PD** | Property Density: properties per class |
| **NTR** | Non-Taxonomic Relations: domain-specific relations per class |
| **SC** | Subclass Count: hierarchical depth |
| **QS** | Sum of the normalized PD, NTR, and SC |

These metrics reflect **semantic modeling richness** rather than raw size.
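
The scoring step can be sketched in a few lines of plain Python (a hypothetical reimplementation for illustration; the released `rdflib` scripts compute the raw PD, NTR, and SC values):

```python
def min_max(values):
    """Min-max normalize a list of metric values into [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:  # degenerate case: all ontologies share the same value
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def quality_scores(pd_vals, ntr_vals, sc_vals):
    """QS = PD_norm + NTR_norm + SC_norm, computed per ontology."""
    return [
        p + n + s
        for p, n, s in zip(min_max(pd_vals), min_max(ntr_vals), min_max(sc_vals))
    ]
```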

---

## 🧪 Intended Use

- Continual pretraining of LLMs on semantic data
- Research in ontology learning, alignment, and enrichment
- Studying the effect of data quality on model generalization and reasoning

This dataset supports the research study:

> **Enhancing LLM Ontology Generation: The Role of Quality Semantic Data**
> Miquel Canal-Esteve, Yoan Gutiérrez, José Abreu-Salas (submitted to *ICT Express*, 2025)

---

## 🛠️ Tokenization

- Tokenized with the **LLaMA 3.2-1B tokenizer**
- Total tokens: **1.25 billion**
- The cumulative token fields allow extracting top-N% token subsets based on QS
- Token overlap and LLM input chunking are described in the accompanying paper
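
The two cumulative fields can be reproduced from the per-ontology token counts with a short sketch (a hypothetical helper, assuming the entries are already sorted by descending QS and carry the field names described earlier):

```python
def add_cumulative_fields(entries):
    """Fill in cumulative token counts and running percentages in place.

    Each entry must have a "Token Count" field; `entries` must already be
    sorted by descending QS so the running percentage tracks quality rank.
    """
    total = sum(e["Token Count"] for e in entries)
    running = 0
    for e in entries:
        running += e["Token Count"]
        e["Token Count Accumulation"] = running
        e["Percentage of Token Count Accumulation"] = 100.0 * running / total
    return entries
```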

---

## 💡 Reproducibility

The repository includes:
- Metric calculation scripts using [`rdflib`](https://github.com/RDFLib/rdflib)
- Tokenization scripts built with Hugging Face libraries
- Pretraining configs and logs

Repository:
👉 [https://github.com/miquelcanalesteve/LLM4Onto/](https://github.com/miquelcanalesteve/LLM4Onto/)

---

## 📄 Citation

```bibtex
@misc{canal2025dbpediaontotrain,
  author = {Miquel Canal-Esteve and Yoan Gutiérrez and José Abreu-Salas},
  title  = {DBpediaOntoTrain: A Quality-Segmented Ontology Dataset for LLM Pretraining},
  year   = {2025},
  url    = {https://github.com/miquelcanalesteve/LLM4Onto/}
}
```