|
|
|
|
|
--- |
|
|
license: mit |
|
|
language: |
|
|
- en |
|
|
tags: |
|
|
- OntoLearner |
|
|
- ontology-learning |
|
|
- upper-ontology |
|
|
pretty_name: Upper Ontology |
|
|
--- |
|
|
<div align="center"> |
|
|
<img src="https://raw.githubusercontent.com/sciknoworg/OntoLearner/main/images/logo.png" alt="OntoLearner" |
|
|
style="display: block; margin: 0 auto; width: 500px; height: auto;"> |
|
|
<h1 style="text-align: center; margin-top: 1em;">Upper Ontology Domain Ontologies</h1> |
|
|
<a href="https://github.com/sciknoworg/OntoLearner"><img src="https://img.shields.io/badge/GitHub-OntoLearner-blue?logo=github" /></a> |
|
|
</div> |
|
|
|
|
|
|
|
|
## Overview |
|
|
The upper ontology, also known as a foundational ontology, encompasses a set of highly abstract, domain-independent concepts that serve as the building blocks for more specialized ontologies. These ontologies provide a structured framework for representing fundamental entities such as objects, processes, and relations, facilitating interoperability and semantic integration across diverse domains. By establishing a common vocabulary and set of principles, upper ontologies play a crucial role in enhancing the consistency and coherence of knowledge representation systems. |
|
|
|
|
|
## Ontologies |
|
|
| Ontology ID | Full Name | Classes | Properties | Last Updated |
|-------------|-----------|---------|------------|--------------|
| BFO | Basic Formal Ontology (BFO) | 84 | 40 | 2020 |
| DOLCE | Descriptive Ontology for Linguistic and Cognitive Engineering (DOLCE) | 44 | 70 | — |
| FAIR | FAIR Vocabulary (FAIR) | 7 | 1 | — |
| GFO | General Formal Ontology (GFO) | 94 | 67 | 2024-11-18 |
| SIO | Semanticscience Integrated Ontology (SIO) | 1726 | 212 | 2024-03-25 |
| SUMO | Suggested Upper Merged Ontology (SUMO) | 4525 | 587 | 2025-02-17 |
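For quick programmatic reference, the statistics above can be captured as a plain Python mapping (a minimal sketch; the counts are transcribed from the table, and the `upper_ontologies` name is illustrative, not part of the OntoLearner API):

```python
# Class and property counts per ontology, transcribed from the table above.
upper_ontologies = {
    "BFO":   {"classes": 84,   "properties": 40},
    "DOLCE": {"classes": 44,   "properties": 70},
    "FAIR":  {"classes": 7,    "properties": 1},
    "GFO":   {"classes": 94,   "properties": 67},
    "SIO":   {"classes": 1726, "properties": 212},
    "SUMO":  {"classes": 4525, "properties": 587},
}

# Example: find the largest ontology in this domain by class count.
largest = max(upper_ontologies, key=lambda k: upper_ontologies[k]["classes"])
print(largest)  # SUMO
```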
|
|
|
|
|
## Dataset Files |
|
|
Each ontology directory contains the following files: |
|
|
1. `<ontology_id>.<format>` - The original ontology file |
|
|
2. `term_typings.json` - Dataset of term-to-type mappings
|
|
3. `taxonomies.json` - Dataset of taxonomic relations |
|
|
4. `non_taxonomic_relations.json` - Dataset of non-taxonomic relations |
|
|
5. `<ontology_id>.rst` - Documentation describing the ontology |
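The JSON files can be inspected with standard tooling before running any pipeline. The sketch below round-trips a single hypothetical entry through a file; the `term`/`types` field names are an assumption for illustration — inspect the downloaded `term_typings.json` for the actual schema:

```python
import json
import os
import tempfile

# Hypothetical entry — check the real file for the actual field names.
example = [{"term": "continuant", "types": ["independent continuant"]}]

# Round-trip through a file, as you would with the dataset's own JSON files.
path = os.path.join(tempfile.mkdtemp(), "term_typings.json")
with open(path, "w") as f:
    json.dump(example, f)

with open(path) as f:
    term_typings = json.load(f)

print(f"{len(term_typings)} term-typing entries")
```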
|
|
|
|
|
## Usage |
|
|
These datasets are intended for ontology learning research and applications. Here's how to use them with OntoLearner: |
|
|
|
|
|
First, install the `OntoLearner` library via pip:
|
|
|
|
|
```bash |
|
|
pip install ontolearner |
|
|
``` |
|
|
|
|
|
**How to load an ontology and its LLMs4OL paradigm task datasets**
|
|
```python
|
|
from ontolearner import BFO |
|
|
|
|
|
ontology = BFO() |
|
|
|
|
|
# Load an ontology. |
|
|
ontology.load() |
|
|
|
|
|
# Load (or extract) LLMs4OL Paradigm tasks datasets |
|
|
data = ontology.extract() |
|
|
``` |
|
|
|
|
|
**How to use the loaded datasets in LLMs4OL paradigm task settings**
|
|
```python
|
|
# Import core modules from the OntoLearner library |
|
|
from ontolearner import BFO, LearnerPipeline, train_test_split |
|
|
|
|
|
# Load the BFO ontology, which provides abstract, domain-independent foundational concepts
|
|
ontology = BFO() |
|
|
ontology.load() # Load entities, types, and structured term annotations from the ontology |
|
|
data = ontology.extract() |
|
|
|
|
|
# Split into train and test sets |
|
|
train_data, test_data = train_test_split(data, test_size=0.2, random_state=42) |
|
|
|
|
|
# Initialize a multi-component learning pipeline (retriever + LLM) |
|
|
# This configuration enables a Retrieval-Augmented Generation (RAG) setup |
|
|
pipeline = LearnerPipeline( |
|
|
retriever_id='sentence-transformers/all-MiniLM-L6-v2', # Dense retriever model for nearest neighbor search |
|
|
llm_id='Qwen/Qwen2.5-0.5B-Instruct', # Lightweight instruction-tuned LLM for reasoning |
|
|
hf_token='...', # Hugging Face token for accessing gated models |
|
|
batch_size=32, # Batch size for training/prediction if supported |
|
|
top_k=5 # Number of top retrievals to include in RAG prompting |
|
|
) |
|
|
|
|
|
# Run the pipeline: training, prediction, and evaluation in one call |
|
|
outputs = pipeline( |
|
|
train_data=train_data, |
|
|
test_data=test_data, |
|
|
evaluate=True, # Compute metrics like precision, recall, and F1 |
|
|
task='term-typing' # Specifies the task |
|
|
# Other options: "taxonomy-discovery" or "non-taxonomy-discovery" |
|
|
) |
|
|
|
|
|
# Print final evaluation metrics |
|
|
print("Metrics:", outputs['metrics']) |
|
|
|
|
|
# Print the total time taken for the full pipeline execution |
|
|
print("Elapsed time:", outputs['elapsed_time']) |
|
|
|
|
|
# Print all outputs (including predictions) |
|
|
print(outputs) |
|
|
``` |
|
|
|
|
|
For more detailed documentation, see the [OntoLearner documentation](https://ontolearner.readthedocs.io).
|
|
|
|
|
|
|
|
## Citation |
|
|
|
|
|
If you find our work helpful, feel free to cite us.
|
|
|
|
|
|
|
|
```bibtex |
|
|
@inproceedings{babaei2023llms4ol, |
|
|
title={LLMs4OL: Large language models for ontology learning}, |
|
|
author={Babaei Giglou, Hamed and D’Souza, Jennifer and Auer, S{\"o}ren}, |
|
|
booktitle={International Semantic Web Conference}, |
|
|
pages={408--427}, |
|
|
year={2023}, |
|
|
organization={Springer} |
|
|
} |
|
|
``` |
|
|
|
|
|
|