arham-15 committed · verified · commit e16daa8 · parent af76e5a

Update README.md

Files changed (1): README.md (+35 −6)
 
- en
---
### Llama 2 7B Physics

A large language model specialized for quantum-physics queries. It was fine-tuned from `unsloth/llama-2-7b-chat-bnb-4bit`, a chat variant of Llama 2 7B, using the [Unsloth](https://github.com/unslothai/unsloth) library in Python, and is released under the Apache-2.0 license.
### Usage

You can load the model with Unsloth:

```python
from unsloth import FastLanguageModel

max_seq_length = 2048  # maximum context length

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "arham-15/llama2_7B_qphysics",
    max_seq_length = max_seq_length,
    dtype = None,         # auto-detect dtype (float16 or bfloat16)
    load_in_4bit = True,  # load in 4-bit to reduce memory usage
)
```
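Since the base checkpoint is a Llama 2 *chat* model, prompts will likely work best in Llama 2's `[INST]` chat format. This template is the standard Llama 2 chat layout, not something stated on this model card, so treat it as an assumption; the `build_prompt` helper and the system message are illustrative:

```python
def build_prompt(question: str,
                 system: str = "You are a helpful quantum physics assistant.") -> str:
    """Wrap a user question in the standard Llama 2 chat template (assumed)."""
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{question} [/INST]"

prompt = build_prompt("State the Heisenberg uncertainty principle.")
# Pass `prompt` to tokenizer(...) and model.generate(...) as usual.
```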
Alternatively, you can use the Hugging Face Transformers library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "arham-15/llama2_7B_qphysics"

model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
### Results

The model was evaluated against its base model using perplexity. It shows a marked improvement on quantum-physics queries: on 126 of 200 test questions, it achieved a lower perplexity than the base model.
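For reference, perplexity is the exponential of the average per-token negative log-likelihood, so a lower score means the model finds the text less "surprising". A minimal sketch of the metric (the NLL values below are made up for illustration; the actual evaluation data is not published here):

```python
import math

def perplexity(token_nlls):
    """Perplexity = exp(mean negative log-likelihood per token)."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# Hypothetical per-token NLLs for one test question
base_nlls  = [2.1, 1.8, 2.4, 2.0]
tuned_nlls = [1.6, 1.4, 1.9, 1.5]

# The fine-tuned model "wins" a question when its perplexity is lower
assert perplexity(tuned_nlls) < perplexity(base_nlls)
```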