cosmetic changes to docs

README.md CHANGED

tags:
- web_and_internet
pretty_name: Web And Internet
---

<div align="center">
<img src="https://raw.githubusercontent.com/sciknoworg/OntoLearner/main/images/logo.png" alt="OntoLearner"
style="display: block; margin: 0 auto; width: 500px; height: auto;">
<h1 style="text-align: center; margin-top: 1em;">Web And Internet Domain Ontologies</h1>
<a href="https://github.com/sciknoworg/OntoLearner"><img src="https://img.shields.io/badge/GitHub-OntoLearner-blue?logo=github" /></a>
</div>

## Overview
The "web_and_internet" domain covers ontologies that model the structure and semantics of web technologies, including the relationships and protocols that underpin linked data, web services, and online communication standards. By modeling web semantics precisely, these ontologies enable integration and interoperability across diverse data sources, provide a foundation for robust data-exchange frameworks, and strengthen the semantic web's ability to deliver contextually relevant information.

## Dataset Files
Each ontology directory contains the following files:
1. `<ontology_id>.<format>` - The original ontology file
2. `term_typings.json` - Dataset of term-to-type mappings
3. `taxonomies.json` - Dataset of taxonomic relations
4. `non_taxonomic_relations.json` - Dataset of non-taxonomic relations
5. `<ontology_id>.rst` - Documentation describing the ontology
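
For a quick look at what these files contain, the JSON datasets can be read with the standard library. The sketch below is illustrative only: the `SAREF` directory name and the record layout are assumptions, so inspect the actual files for the exact schema.

```python
import json
from pathlib import Path

# Hypothetical path; point this at an ontology directory from the dataset.
path = Path("SAREF/term_typings.json")

with path.open(encoding="utf-8") as f:
    term_typings = json.load(f)

print(f"{len(term_typings)} term-typing records")
print(term_typings[0])  # print one record to see the actual fields
```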

## Usage
These datasets are intended for ontology learning research and applications. Here's how to use them with OntoLearner:

First, install the `OntoLearner` library via pip:

```bash
pip install ontolearner
```
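
To confirm the installation, you can query the installed package metadata; a small sketch, assuming the distribution name matches the pip name:

```python
from importlib.metadata import version

# Print the installed OntoLearner version.
print(version("ontolearner"))
```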

**How to load an ontology and the LLMs4OL paradigm task datasets**

```python
from ontolearner import SAREF

ontology = SAREF()

# Load the ontology.
ontology.load()

# Load (or extract) the LLMs4OL paradigm task datasets.
data = ontology.extract()
```
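
Before moving on, it can help to sanity-check what `extract()` returned. The attribute names in this sketch mirror the dataset file names above but are assumptions, not confirmed OntoLearner API:

```python
# Hypothetical attribute names, mirroring the dataset files listed earlier.
print(len(data.term_typings), "term typings")
print(len(data.type_taxonomies), "taxonomic relations")
print(len(data.type_non_taxonomic_relations), "non-taxonomic relations")
```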

**How to use the loaded datasets for the LLMs4OL paradigm task settings**

```python
from ontolearner import SAREF, LearnerPipeline, train_test_split

ontology = SAREF()
ontology.load()
data = ontology.extract()

# Split into train and test sets
train_data, test_data = train_test_split(data, test_size=0.2)

# Create a learning pipeline (for RAG-based learning)
pipeline = LearnerPipeline(
    task="term-typing",  # Other options: "taxonomy-discovery" or "non-taxonomy-discovery"
    retriever_id="sentence-transformers/all-MiniLM-L6-v2",
    llm_id="mistralai/Mistral-7B-Instruct-v0.1",
    hf_token="your_huggingface_token"  # Only needed for gated models
)

# Train and evaluate
results, metrics = pipeline.fit_predict_evaluate(
    ...
)
```
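
Once training and evaluation finish, persisting the outputs makes runs easier to compare. A minimal sketch, assuming `results` and `metrics` are JSON-serializable (with `default=str` as a fallback for anything that is not):

```python
import json

# Save predictions and evaluation metrics for later analysis.
with open("term_typing_run.json", "w", encoding="utf-8") as f:
    json.dump({"metrics": metrics, "results": results}, f, indent=2, default=str)

print(metrics)
```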

For more detailed documentation, see the [OntoLearner documentation](https://ontolearner.readthedocs.io).

## Citation

If you find our work helpful, feel free to cite us:

```bibtex
@inproceedings{babaei2023llms4ol,
  title={LLMs4OL: Large language models for ontology learning},
  author={Babaei Giglou, Hamed and D'Souza, Jennifer and Auer, S{\"o}ren},
  booktitle={International Semantic Web Conference},
  pages={408--427},
  year={2023},
  organization={Springer}
}
```