hamedbabaeigiglou committed
Commit 84e1166 · verified · 1 Parent(s): 86c5047

cosmetic changes to docs

Files changed (1): README.md +48 -23

README.md CHANGED
@@ -9,18 +9,11 @@ tags:
   - general_knowledge
 pretty_name: General Knowledge
 ---
-<div>
+<div align="center">
 <img src="https://raw.githubusercontent.com/sciknoworg/OntoLearner/main/images/logo.png" alt="OntoLearner"
 style="display: block; margin: 0 auto; width: 500px; height: auto;">
 <h1 style="text-align: center; margin-top: 1em;">General Knowledge Domain Ontologies</h1>
-</div>
-
-<div align="center">
-
-[![GitHub](https://img.shields.io/badge/GitHub-OntoLearner-blue?logo=github)](https://github.com/sciknoworg/OntoLearner)
-[![PyPI](https://img.shields.io/badge/PyPI-OntoLearner-blue?logo=pypi)](https://pypi.org/project/OntoLearner/)
-[![Documentation](https://img.shields.io/badge/Docs-ReadTheDocs-blue)](https://ontolearner.readthedocs.io/benchmarking/benchmark.html)
-
+<a href="https://github.com/sciknoworg/OntoLearner"><img src="https://img.shields.io/badge/GitHub-OntoLearner-blue?logo=github" /></a>
 </div>
 
 ## Overview
@@ -44,7 +37,7 @@ The general_knowledge domain encompasses broad-scope ontologies and upper vocabu
 ## Dataset Files
 Each ontology directory contains the following files:
 1. `<ontology_id>.<format>` - The original ontology file
-2. `term_typings.json` - Dataset of term to type mappings
+2. `term_typings.json` - A dataset of term-to-type mappings
 3. `taxonomies.json` - Dataset of taxonomic relations
 4. `non_taxonomic_relations.json` - Dataset of non-taxonomic relations
 5. `<ontology_id>.rst` - Documentation describing the ontology
@@ -52,15 +45,31 @@ Each ontology directory contains the following files:
 ## Usage
 These datasets are intended for ontology learning research and applications. Here's how to use them with OntoLearner:
 
-```python
-from ontolearner.ontology import Wine
-from ontolearner.utils.train_test_split import train_test_split
-from ontolearner.learner_pipeline import LearnerPipeline
+First, install the `OntoLearner` library via pip:
+
+```bash
+pip install ontolearner
+```
+
+**How to load an ontology or LLMs4OL paradigm task datasets?**
+```python
+from ontolearner import CCO
+
+ontology = CCO()
 
-ontology = Wine()
-ontology.load()  # Automatically downloads from Hugging Face
-
-# Extract the dataset
+# Load an ontology.
+ontology.load()
+
+# Load (or extract) LLMs4OL paradigm task datasets
+data = ontology.extract()
+```
+
+**How to use the loaded dataset for LLMs4OL paradigm task settings?**
+```python
+from ontolearner import CCO, LearnerPipeline, train_test_split
+
+ontology = CCO()
+ontology.load()
 data = ontology.extract()
 
 # Split into train and test sets
@@ -68,10 +77,10 @@ train_data, test_data = train_test_split(data, test_size=0.2)
 
 # Create a learning pipeline (for RAG-based learning)
 pipeline = LearnerPipeline(
-    task="term-typing",  # Other options: "taxonomy-discovery" or "non-taxonomy-discovery"
-    retriever_id="sentence-transformers/all-MiniLM-L6-v2",
-    llm_id="mistralai/Mistral-7B-Instruct-v0.1",
-    hf_token="your_huggingface_token"  # Only needed for gated models
+    task = "term-typing",  # Other options: "taxonomy-discovery" or "non-taxonomy-discovery"
+    retriever_id = "sentence-transformers/all-MiniLM-L6-v2",
+    llm_id = "mistralai/Mistral-7B-Instruct-v0.1",
+    hf_token = "your_huggingface_token"  # Only needed for gated models
 )
 
 # Train and evaluate
@@ -83,5 +92,21 @@ results, metrics = pipeline.fit_predict_evaluate(
 )
 ```
 
-For more detailed examples, see the [OntoLearner documentation](https://ontolearner.readthedocs.io/).
-
+For more detailed documentation, see the [![Documentation](https://img.shields.io/badge/Documentation-ontolearner.readthedocs.io-blue)](https://ontolearner.readthedocs.io)
+
+## Citation
+
+If you find our work helpful, feel free to cite it.
+
+```bibtex
+@inproceedings{babaei2023llms4ol,
+  title={LLMs4OL: Large language models for ontology learning},
+  author={Babaei Giglou, Hamed and D’Souza, Jennifer and Auer, S{\"o}ren},
+  booktitle={International Semantic Web Conference},
+  pages={408--427},
+  year={2023},
+  organization={Springer}
+}
+```
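
As context for the usage example in the diff above: the `train_test_split(data, test_size=0.2)` call amounts to a shuffled 80/20 split of the extracted records. The sketch below illustrates that idea on term-typing entries; `split_term_typings` is a hypothetical stand-in, and the record layout is made up for illustration (the real `term_typings.json` schema and OntoLearner's own splitter may differ).

```python
import random

def split_term_typings(entries, test_size=0.2, seed=42):
    """Shuffle entries deterministically, then split them into train/test lists."""
    items = list(entries)
    random.Random(seed).shuffle(items)
    n_test = max(1, int(len(items) * test_size))
    return items[n_test:], items[:n_test]

# Illustrative records only -- the actual term_typings.json files on the
# Hugging Face Hub may use a different schema.
entries = [
    {"term": "merlot", "types": ["Wine"]},
    {"term": "champagne", "types": ["SparklingWine"]},
    {"term": "bordeaux", "types": ["Region"]},
    {"term": "chardonnay", "types": ["Grape"]},
    {"term": "riesling", "types": ["Grape"]},
]

train, test = split_term_typings(entries, test_size=0.2)
print(len(train), len(test))  # 4 1
```

Fixing the seed makes the split reproducible across runs, which matters when comparing learners on the same benchmark partition.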