hamedbabaeigiglou committed on
Commit 27a9782 · verified · 1 Parent(s): b2f2d5f

minor update to readme

Files changed (1):
  1. README.md +57 -27
README.md CHANGED
@@ -18,7 +18,7 @@ pretty_name: Industry
 </div>

 ## Overview
- The "industry" domain encompasses ontologies that systematically represent and model the complex structures, processes, and interactions within industrial settings, including manufacturing systems, smart buildings, and equipment. This domain is pivotal in advancing knowledge representation by enabling the integration, interoperability, and automation of industrial processes, thereby facilitating improved efficiency, innovation, and decision-making. Through precise semantic frameworks, it supports the digital transformation and intelligent management of industrial operations.

 | Ontology ID | Full Name | Classes | Properties | Individuals |
 |-------------|-----------|---------|------------|-------------|
@@ -41,50 +41,80 @@ Each ontology directory contains the following files:
 ## Usage
 These datasets are intended for ontology learning research and applications. Here's how to use them with OntoLearner:

- ```python
- from ontolearner import LearnerPipeline, AutoLearnerLLM, Wine, train_test_split

- # Load ontology (automatically downloads from Hugging Face)
- ontology = Wine()
- ontology.load()

- # Extract the dataset
 data = ontology.extract()

 # Split into train and test sets
- train_data, test_data = train_test_split(data, test_size=0.2)

- # Create a learning pipeline (for RAG-based learning)
 pipeline = LearnerPipeline(
-     task="term-typing",  # Other options: "taxonomy-discovery" or "non-taxonomy-discovery"
-     retriever_id="sentence-transformers/all-MiniLM-L6-v2",
-     llm_id="mistralai/Mistral-7B-Instruct-v0.1",
-     hf_token="your_huggingface_token"  # Only needed for gated models
 )

- # Train and evaluate
- results, metrics = pipeline.fit_predict_evaluate(
     train_data=train_data,
     test_data=test_data,
-     top_k=3,
-     test_limit=10
 )
 ```
 For more detailed examples, see the [OntoLearner documentation](https://ontolearner.readthedocs.io/).

 ## Citation
- If you use these ontologies in your research, please cite:

 ```bibtex
- @software{babaei_giglou_2025,
-   author = {Babaei Giglou, Hamed and D'Souza, Jennifer and Aioanei, Andrei and Mihindukulasooriya, Nandana and Auer, Sören},
-   title = {OntoLearner: A Modular Python Library for Ontology Learning with LLMs},
-   month = may,
-   year = 2025,
-   publisher = {Zenodo},
-   version = {v1.0.1},
-   doi = {10.5281/zenodo.15399783},
-   url = {https://doi.org/10.5281/zenodo.15399783},
 }
 ```
 
 </div>

 ## Overview
+ The industry domain encompasses ontologies that systematically represent and model the complex structures, processes, and interactions within industrial settings, including manufacturing systems, smart buildings, and equipment. This domain is pivotal in advancing knowledge representation by enabling the integration, interoperability, and automation of industrial processes, thereby facilitating improved efficiency, innovation, and decision-making. Through precise semantic frameworks, it supports the digital transformation and intelligent management of industrial operations.

 | Ontology ID | Full Name | Classes | Properties | Individuals |
 |-------------|-----------|---------|------------|-------------|
 
 ## Usage
 These datasets are intended for ontology learning research and applications. Here's how to use them with OntoLearner:

+ First, install the `OntoLearner` library via pip:
+
+ ```bash
+ pip install ontolearner
+ ```
+
+ **How to load an ontology and the LLMs4OL paradigm task datasets?**
+ ```python
+ from ontolearner import AUTO
+
+ ontology = AUTO()
+
+ # Load the ontology.
+ ontology.load()
+
+ # Load (or extract) the LLMs4OL paradigm task datasets
+ data = ontology.extract()
+ ```
+
+ **How to use the loaded dataset for LLMs4OL paradigm task settings?**
+ ```python
+ # Import core modules from the OntoLearner library
+ from ontolearner import AUTO, LearnerPipeline, train_test_split

+ # Load the AUTO ontology
+ ontology = AUTO()
+ ontology.load()  # Load entities, types, and structured term annotations from the ontology
 data = ontology.extract()

 # Split into train and test sets
+ train_data, test_data = train_test_split(data, test_size=0.2, random_state=42)

+ # Initialize a multi-component learning pipeline (retriever + LLM)
+ # This configuration enables a Retrieval-Augmented Generation (RAG) setup
 pipeline = LearnerPipeline(
+     retriever_id='sentence-transformers/all-MiniLM-L6-v2',  # Dense retriever model for nearest-neighbor search
+     llm_id='Qwen/Qwen2.5-0.5B-Instruct',  # Lightweight instruction-tuned LLM for reasoning
+     hf_token='...',  # Hugging Face token for accessing gated models
+     batch_size=32,  # Batch size for training/prediction if supported
+     top_k=5  # Number of top retrievals to include in RAG prompting
 )

+ # Run the pipeline: training, prediction, and evaluation in one call
+ outputs = pipeline(
     train_data=train_data,
     test_data=test_data,
+     evaluate=True,  # Compute metrics like precision, recall, and F1
+     task='term-typing'  # Other options: "taxonomy-discovery" or "non-taxonomy-discovery"
 )
+
+ # Print the final evaluation metrics
+ print("Metrics:", outputs['metrics'])
+
+ # Print the total time taken for the full pipeline execution
+ print("Elapsed time:", outputs['elapsed_time'])
+
+ # Print all outputs (including predictions)
+ print(outputs)
 ```
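The `train_test_split(data, test_size=0.2, random_state=42)` call above performs a seeded 80/20 shuffle split. As a rough, stdlib-only illustration of that behavior (using plain Python lists rather than OntoLearner's actual implementation or data objects; the helper name `split_dataset` is hypothetical):

```python
import random

def split_dataset(items, test_size=0.2, seed=42):
    """Seeded shuffle-and-slice split, mimicking an 80/20 train/test split."""
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)  # shuffle a copy; the original order is untouched
    n_test = int(len(shuffled) * test_size)
    # The first n_test shuffled items form the test set; the rest is training data.
    return shuffled[n_test:], shuffled[:n_test]

train, test = split_dataset(range(100))
print(len(train), len(test))  # 80 20
```

Fixing the seed (`random_state=42` above) makes the split deterministic, so repeated runs evaluate on the same held-out examples.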

 For more detailed examples, see the [OntoLearner documentation](https://ontolearner.readthedocs.io/).

 ## Citation
+
+ If you find our work helpful, please cite it as follows.

 ```bibtex
+ @inproceedings{babaei2023llms4ol,
+   title={LLMs4OL: Large language models for ontology learning},
+   author={Babaei Giglou, Hamed and D'Souza, Jennifer and Auer, S{\"o}ren},
+   booktitle={International Semantic Web Conference},
+   pages={408--427},
+   year={2023},
+   organization={Springer}
 }
 ```