PeterGordon committed on
Commit
ac6321a
·
verified ·
1 Parent(s): c5bed78

Update README.md

Files changed (1):
  1. README.md +29 -3
README.md CHANGED
@@ -1,3 +1,29 @@
- ---
- license: apache-2.0
- ---
+ # Model Card for Nexa Temp Mapping
+
+ ## Model Description
+ This model, Nexa Temp Mapping, is fine-tuned from Mistral-7B-Instruct-v0.2 for the specialized task of creating test cases for temperature mapping of storage areas. It was adapted using PEFT (Parameter-Efficient Fine-Tuning) techniques to optimize performance for this application.
+
+ ## Training Data
+ Describe the dataset used for training the model:
+ - **Source:** [Specify the source of the training data]
+ - **Size:** 50 datapoints
+ - **Details:** Brief description of the dataset characteristics.
+
+ ## Intended Use
+ This model is intended for use in creating test cases to qualify equipment such as fridges, freezers, autoclaves, and ovens. It is designed to improve on the base model by incorporating domain knowledge from Supplement 8, "Temperature mapping of storage areas", a technical supplement to WHO Technical Report Series, No. 961, 2011, Annex 9: "Model guidance for the storage and transport of time- and temperature-sensitive pharmaceutical products".
+
+ ## How to Use
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("PeterGordon/nexa-temp-mapping")
+ model = AutoModelForCausalLM.from_pretrained("PeterGordon/nexa-temp-mapping")
+
+ text = "Your input text here"
+ encoded_input = tokenizer(text, return_tensors='pt')
+ output = model.generate(**encoded_input)
+ print(tokenizer.decode(output[0], skip_special_tokens=True))
+ ```
+
+ ---
+ license: apache-2.0
+ ---