---
pretty_name: DBpediaOntoTrain
license: cc-by-4.0
language:
- en
tags:
- ontology
- owl
- turtle
- llm
- pretraining
- dbpedia
size_categories:
- 1B<n<10B
---

# DBpediaOntoTrain

> **Enhancing LLM Ontology Generation: The Role of Quality Semantic Data**
> Miquel Canal-Esteve, Yoan Gutiérrez, José Abreu-Salas (submitted to *ICT Express*, 2025)

---

## 🛠️ Tokenization

- Tokenized using the **LLaMA 3.2-1B tokenizer**
- Total tokens: **1.25 billion**
- Cumulative token fields allow extracting top-N% token subsets based on QS
- Token overlap and LLM input chunking are described in the accompanying paper

---

## 💡 Reproducibility

The repository includes:

- Metric calculation scripts using [`rdflib`](https://github.com/RDFLib/rdflib)
- Tokenization scripts with Hugging Face libraries
- Pretraining configs and logs

Repository:
👉 [https://github.com/miquelcanalesteve/LLM4Onto/](https://github.com/miquelcanalesteve/LLM4Onto/)

---

## 📄 Citation

```bibtex
@misc{canal2025dbpediaontotrain,
  author = {Miquel Canal-Esteve and Yoan Gutiérrez and José Abreu-Salas},
  title  = {DBpediaOntoTrain: A Quality-Segmented Ontology Dataset for LLM Pretraining},
  year   = {2025},
  url    = {https://github.com/miquelcanalesteve/LLM4Onto/}
}
```
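
---

## 🔍 Example: Extracting a Top-N% Token Subset

The cumulative token fields make it possible to select the highest-quality slice of the corpus up to a token budget. The sketch below is a minimal, self-contained illustration of that idea; the field names `qs` and `tokens` are assumptions for illustration and may differ from the actual dataset schema.

```python
# Hedged sketch: select a top-N% token subset ranked by quality score (QS).
# NOTE: the record fields "qs" and "tokens" are hypothetical placeholders,
# not the confirmed DBpediaOntoTrain column names.

def top_n_percent_by_tokens(records, n_percent):
    """Rank records by QS (descending) and keep those whose cumulative
    token count fits within n_percent of the corpus's total tokens."""
    ranked = sorted(records, key=lambda r: r["qs"], reverse=True)
    total_tokens = sum(r["tokens"] for r in records)
    budget = total_tokens * n_percent / 100.0
    subset, running = [], 0
    for r in ranked:
        if running + r["tokens"] > budget:
            break  # adding this record would exceed the token budget
        running += r["tokens"]
        subset.append(r)
    return subset

# Toy records (made-up values) to demonstrate the selection logic
records = [
    {"id": "a", "qs": 0.9, "tokens": 40},
    {"id": "b", "qs": 0.7, "tokens": 30},
    {"id": "c", "qs": 0.5, "tokens": 30},
]
# A 70% token budget over 100 total tokens keeps "a" (40) and "b" (30).
print([r["id"] for r in top_n_percent_by_tokens(records, 70)])  # → ['a', 'b']
```

In practice the same cumulative logic can be applied directly to the precomputed cumulative token fields, avoiding a full re-sort of the corpus.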