---
library_name: peft
---
## Training procedure


The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
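
For reference, these settings map onto a `BitsAndBytesConfig` object roughly as follows (a sketch of the training-time config, not the exact object used):

```python
import torch
from transformers import BitsAndBytesConfig

# Approximate reconstruction of the 4-bit NF4 config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
```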

### Framework versions

- PEFT 0.5.0

# Inference Code

### Install required libraries

```python
!pip install transformers peft torch bitsandbytes accelerate
```
### Login
```python
from huggingface_hub import login

token = "Your Key"  # paste your Hugging Face access token here
login(token)
```
### Import necessary modules
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel, PeftConfig
```
### Load the PEFT model and configuration
```python
# Fetch the adapter config; it records the base model the adapter was trained on
config = PeftConfig.from_pretrained("Shreyas45/Llama2_Text-to-SQL_Fintuned")

# Load the base model (meta-llama/Llama-2-7b-hf), then attach the LoRA adapter
base_model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
peft_model = PeftModel.from_pretrained(base_model, "Shreyas45/Llama2_Text-to-SQL_Fintuned")
```
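
Loading the base model in full precision takes around 28 GB of memory in float32. Since the adapter was trained against 4-bit NF4 weights, an alternative is to quantize the base model at load time; a minimal sketch, assuming a CUDA GPU and the `bitsandbytes` and `accelerate` packages are installed:

```python
# Sketch: load the base model 4-bit quantized, mirroring the training config above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    quantization_config=bnb_config,
    device_map="auto",  # requires accelerate; places layers on the available GPU(s)
)
peft_model = PeftModel.from_pretrained(base_model, "Shreyas45/Llama2_Text-to-SQL_Fintuned")
```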
### Load the tokenizer
```python
# Load the tokenizer for the base model and use EOS as the padding token
trained_model_tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path, trust_remote_code=True)
trained_model_tokenizer.pad_token = trained_model_tokenizer.eos_token
```

### Define the question and schema
```python
# The input combines the table schema with the natural-language question
query = '''In the table named management with columns (department_id VARCHAR, temporary_acting VARCHAR); 
            CREATE TABLE department (name VARCHAR, num_employees VARCHAR, department_id VARCHAR), 
            Show the name and number of employees for the departments managed by heads whose temporary acting value is 'Yes'?'''
```

### Construct prompt
```python
prompt = f'''### Instruction: Below is an instruction that describes a task and the schema of the table in the database. 
            Write a response that generates a request in the form of a SQL query. 
            Here the schema of the table is mentioned first followed by the question for which the query needs to be generated. 
            And the question is: {query}
###Output: '''
```
### Tokenize the prompt
```python
encodings = trained_model_tokenizer(prompt, return_tensors='pt')
```
### Configure generation parameters
```python
generation_config = peft_model.generation_config
generation_config.max_new_tokens = 1024
generation_config.do_sample = True  # sampling must be enabled for temperature/top_p to apply
generation_config.temperature = 0.7
generation_config.top_p = 0.7
generation_config.num_return_sequences = 1
generation_config.pad_token_id = trained_model_tokenizer.pad_token_id
generation_config.eos_token_id = trained_model_tokenizer.eos_token_id
```
### Generate SQL query using the model
```python
with torch.inference_mode():
    outputs = peft_model.generate(
        input_ids=encodings.input_ids.to(peft_model.device),
        attention_mask=encodings.attention_mask.to(peft_model.device),
        generation_config=generation_config,
        max_new_tokens=100  # overrides the 1024 set above; raise it for longer queries
    )
```
### Decode and print the generated SQL query
```python
generated_query = trained_model_tokenizer.decode(outputs[0], skip_special_tokens=True)
print("Generated SQL Query:")
print(generated_query)
```
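
The decoded text echoes the whole prompt, so the SQL itself is the part after the `###Output:` marker from the prompt template. A small extraction sketch:

```python
# Keep only the text that follows the "###Output:" marker
sql = generated_query.split("###Output:")[-1].strip()
print(sql)
```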