tags:
- OntoLearner
- ontology-learning
- web-and-internet
pretty_name: Web And Internet
---
<div align="center">
## Overview

The web and internet domain covers ontologies that describe the structure and semantics of web technologies, including the relationships and protocols that underpin linked data, web services, and online communication standards. It advances knowledge representation by enabling the integration and interoperability of diverse data sources, supporting more intelligent and dynamic web interactions. Precise modeling of web semantics provides robust frameworks for data exchange and strengthens the semantic web's capacity to deliver contextually relevant information.

## Ontologies

| Ontology ID | Full Name | Classes | Properties | Last Updated |
|-------------|-----------|---------|------------|--------------|
**How to use the loaded dataset for LLM4OL Paradigm task settings?**

```python
# Import core modules from the OntoLearner library
from ontolearner import SAREF, LearnerPipeline, train_test_split

# Load the SAREF (Smart Applications REFerence) ontology
ontology = SAREF()
ontology.load()  # Load entities, types, and structured term annotations from the ontology
data = ontology.extract()

# Split into train and test sets
train_data, test_data = train_test_split(data, test_size=0.2, random_state=42)

# Initialize a multi-component learning pipeline (retriever + LLM)
# This configuration enables a Retrieval-Augmented Generation (RAG) setup
pipeline = LearnerPipeline(
    retriever_id='sentence-transformers/all-MiniLM-L6-v2',  # Dense retriever model for nearest-neighbor search
    llm_id='Qwen/Qwen2.5-0.5B-Instruct',  # Lightweight instruction-tuned LLM for reasoning
    hf_token='...',  # Hugging Face token for accessing gated models
    batch_size=32,   # Batch size for training/prediction if supported
    top_k=5          # Number of top retrievals to include in RAG prompting
)

# Run the pipeline: training, prediction, and evaluation in one call
outputs = pipeline(
    train_data=train_data,
    test_data=test_data,
    evaluate=True,      # Compute metrics like precision, recall, and F1
    task='term-typing'  # Other options: "taxonomy-discovery" or "non-taxonomy-discovery"
)

# Print final evaluation metrics
print("Metrics:", outputs['metrics'])

# Print the total time taken for the full pipeline execution
print("Elapsed time:", outputs['elapsed_time'])

# Print all outputs (including predictions)
print(outputs)
```
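In the RAG setup above, the retriever selects the `top_k` training examples most similar to each test term, and those examples ground the LLM's prediction as few-shot context. The retrieval step can be sketched in plain Python, with a toy bag-of-words cosine similarity standing in for the sentence-transformers model — all names and data below are illustrative, not part of the OntoLearner API:

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' standing in for a dense retriever model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k_examples(query: str, pairs: list, k: int = 2) -> list:
    """Return the k (term, type) training pairs most similar to the query term."""
    q = embed(query)
    ranked = sorted(pairs, key=lambda pair: cosine(q, embed(pair[0])), reverse=True)
    return ranked[:k]

# Hypothetical (term, type) pairs in the style of a term-typing split
train_pairs = [
    ("temperature sensor", "Sensor"),
    ("smart thermostat", "Device"),
    ("humidity sensor", "Sensor"),
]

# The retrieved pairs would then be formatted into the LLM prompt as few-shot context
examples = top_k_examples("pressure sensor", train_pairs, k=2)
```

This is only the retrieval half of the pipeline; the real `LearnerPipeline` additionally formats the retrieved pairs into a prompt and queries the configured LLM.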

For more detailed documentation, see the [OntoLearner documentation](https://ontolearner.readthedocs.io).