---
base_model:
- meta-llama/Llama-2-7b-chat-hf
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

### Llama 2 7B Physics

A large language model specialized for quantum-physics queries. It was fine-tuned from meta-llama/Llama-2-7b-chat-hf (the chat variant of Llama 2 7B) using the Unsloth library in Python.

### Usage

You can load and run the model with Unsloth:

```python
from unsloth import FastLanguageModel

max_seq_length = 2048

# dtype=None lets Unsloth auto-detect the best precision for your hardware;
# load_in_4bit=True loads the weights in 4-bit to reduce VRAM usage.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "arham-15/llama2_7B_qphysics",
    max_seq_length = max_seq_length,
    dtype = None,
    load_in_4bit = True,
)
```
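
Once loaded, you can generate text with the model. The snippet below is a minimal sketch: it assumes a CUDA GPU and uses the Llama 2 chat-style `[INST] ... [/INST]` prompt format, which may differ from the exact template used during fine-tuning.

```python
# Switch Unsloth into its faster inference mode
FastLanguageModel.for_inference(model)

# Prompt format is an assumption (Llama 2 chat style); adjust if needed
prompt = "[INST] Explain the Heisenberg uncertainty principle in simple terms. [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```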

Alternatively, you can load the model with the Hugging Face Transformers library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "arham-15/llama2_7B_qphysics"

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
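
Generation then works the same way as with the Unsloth path. The example below is a sketch using the Transformers `pipeline` helper; the chat-style prompt is an assumption.

```python
from transformers import pipeline

# Wrap the loaded model and tokenizer in a text-generation pipeline
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

result = generator(
    "[INST] What is quantum entanglement? [/INST]",
    max_new_tokens=256,
)
print(result[0]["generated_text"])
```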

### Results

The model was evaluated against its base model using perplexity. It shows a clear improvement on quantum-physics queries: out of 200 test questions, the fine-tuned model achieved a lower perplexity than the base model on 126.
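
For reference, per-question perplexity for a causal language model is the exponential of the mean token-level cross-entropy loss. The snippet below is a generic sketch of that calculation, not necessarily the exact evaluation script used here.

```python
import torch

def perplexity(model, tokenizer, text, device="cuda"):
    # Perplexity = exp(mean cross-entropy over the tokens of the text)
    inputs = tokenizer(text, return_tensors="pt").to(device)
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    return torch.exp(outputs.loss).item()

# Example: compare the fine-tuned model against the base model on one question
# ppl = perplexity(model, tokenizer, "What is a qubit?")
```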