minor update to readme
README.md
tags:
- OntoLearner
- ontology-learning
- general-knowledge
pretty_name: General Knowledge
---
<div align="center">

</div>

## Overview

The general knowledge domain encompasses broad-scope ontologies and upper vocabularies designed for cross-disciplinary semantic modeling and knowledge representation. This domain is pivotal in facilitating interoperability and data integration across diverse fields by providing a foundational framework for organizing and linking information. Its significance lies in enabling the seamless exchange and understanding of knowledge across varied contexts, thereby supporting advanced data analysis, information retrieval, and decision-making processes.

## Ontologies

| Ontology ID | Full Name | Classes | Properties | Last Updated |
|-------------|-----------|---------|------------|--------------|

**How to use the loaded dataset for LLM4OL paradigm task settings?**

```python
# Import core modules from the OntoLearner library
from ontolearner import CCO, LearnerPipeline, train_test_split

# Instantiate the CCO ontology wrapper
ontology = CCO()
ontology.load()  # Load entities, types, and structured term annotations from the ontology
data = ontology.extract()

# Split into train and test sets
train_data, test_data = train_test_split(data, test_size=0.2, random_state=42)

# Initialize a multi-component learning pipeline (retriever + LLM).
# This configuration enables a Retrieval-Augmented Generation (RAG) setup.
pipeline = LearnerPipeline(
    retriever_id='sentence-transformers/all-MiniLM-L6-v2',  # Dense retriever model for nearest-neighbor search
    llm_id='Qwen/Qwen2.5-0.5B-Instruct',  # Lightweight instruction-tuned LLM for reasoning
    hf_token='...',  # Hugging Face token for accessing gated models
    batch_size=32,  # Batch size for training/prediction if supported
    top_k=5  # Number of top retrievals to include in RAG prompting
)

# Run the pipeline: training, prediction, and evaluation in one call
outputs = pipeline(
    train_data=train_data,
    test_data=test_data,
    evaluate=True,  # Compute metrics like precision, recall, and F1
    task='term-typing'  # Other options: 'taxonomy-discovery' or 'non-taxonomy-discovery'
)

# Print final evaluation metrics
print("Metrics:", outputs['metrics'])

# Print the total time taken for the full pipeline execution
print("Elapsed time:", outputs['elapsed_time'])

# Print all outputs (including predictions)
print(outputs)
```
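The example above assumes `train_test_split(data, test_size=0.2, random_state=42)` produces a deterministic, shuffled 80/20 split. A plain-Python sketch of those semantics (the `split_sketch` helper is hypothetical, not OntoLearner's implementation):

```python
import random

def split_sketch(items, test_size=0.2, random_state=None):
    # Shuffle a copy so the caller's list is left untouched;
    # seeding the RNG makes the split reproducible.
    rng = random.Random(random_state)
    shuffled = items[:]
    rng.shuffle(shuffled)
    # Reserve the last `test_size` fraction for evaluation
    n_test = round(len(shuffled) * test_size)
    return shuffled[n_test:], shuffled[:n_test]

train, test = split_sketch(list(range(100)), test_size=0.2, random_state=42)
print(len(train), len(test))  # 80 20
```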

For more detailed documentation, see the [OntoLearner documentation](https://ontolearner.readthedocs.io).
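The `retriever_id` and `top_k` settings in the pipeline above configure a dense-retrieval step: embed texts, score candidates against a query, and keep the k best matches for the RAG prompt. A minimal sketch of that top-k selection over toy vectors (the embeddings here are hypothetical; real ones would come from the sentence-transformers model):

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def top_k(query_vec, corpus, k=5):
    # Score every candidate against the query and keep the k best
    scored = sorted(corpus.items(),
                    key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Hypothetical toy embeddings standing in for model outputs
corpus = {
    "dog": [0.9, 0.1, 0.0],
    "cat": [0.85, 0.2, 0.05],
    "car": [0.05, 0.9, 0.3],
}
print(top_k([1.0, 0.0, 0.0], corpus, k=2))  # ['dog', 'cat']
```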