# aaa-2-sql
This is a fine-tuned version of [Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3), trained with LoRA using LitGPT.
## Training Details
- **Base Model:** mistralai/Mistral-7B-Instruct-v0.3
- **Framework:** LitGPT
- **Finetuning Method:** Low-Rank Adaptation (LoRA)
- **LoRA Parameters:**
- Rank (r): 16
- Alpha: 32
- Dropout: 0.05
- **Quantization:** bnb.nf4
- **Context Length:** 4098 tokens
- **Training Steps:** 2000
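
The parameters above roughly correspond to a LitGPT LoRA fine-tuning invocation along these lines (a sketch only: the exact subcommand and flag names vary between LitGPT versions, and the dataset path and output directory are illustrative assumptions, not part of this card):

```shell
# Sketch of the LoRA fine-tuning run; dataset path and out_dir are placeholders.
litgpt finetune_lora mistralai/Mistral-7B-Instruct-v0.3 \
  --lora_r 16 \
  --lora_alpha 32 \
  --lora_dropout 0.05 \
  --quantize bnb.nf4 \
  --train.max_steps 2000 \
  --out_dir out/aaa-2-sql
```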
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("exaler/aaa-2-sql")
tokenizer = AutoTokenizer.from_pretrained("exaler/aaa-2-sql")

# Create prompt
prompt = "Your prompt here"

# Generate text
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=1024)
response = tokenizer.decode(output[0], skip_special_tokens=True)
print(response)
```
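
Since the base model is Mistral-Instruct, prompts generally work best wrapped in its `[INST] … [/INST]` chat format. The helper below sketches one way to build a text-to-SQL prompt; the schema/question layout is an illustrative assumption, not the format documented for this fine-tune:

```python
def build_sql_prompt(schema: str, question: str) -> str:
    """Wrap a schema and question in Mistral's [INST] chat format.

    The schema/question layout here is illustrative; adapt it to
    whatever prompt structure the model was fine-tuned on.
    """
    return (
        "<s>[INST] Given the following database schema:\n"
        f"{schema}\n\n"
        f"Write a SQL query to answer: {question} [/INST]"
    )

prompt = build_sql_prompt(
    "CREATE TABLE users (id INT, name TEXT);",
    "How many users are there?",
)
print(prompt)
```

The resulting string can be passed directly as `prompt` in the generation snippet above.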