
# aaa-2-sql

**aaa-2-sql** is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3), trained with LoRA adapters using [LitGPT](https://github.com/Lightning-AI/litgpt).

## Training Details

- **Base Model:** mistralai/Mistral-7B-Instruct-v0.3
- **Framework:** LitGPT
- **Finetuning Method:** Low-Rank Adaptation (LoRA)
- **LoRA Parameters:**
  - Rank (r): 16
  - Alpha: 32
  - Dropout: 0.05
- **Quantization:** bnb.nf4
- **Context Length:** 4098 tokens
- **Training Steps:** 2000
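For reference, a run with the hyperparameters above could be launched with the LitGPT CLI roughly as follows. This is a sketch, not the exact command used for this checkpoint: flag names follow recent LitGPT releases and may differ in your version, and the data arguments are placeholders.

```shell
# Sketch of a LitGPT LoRA finetuning run matching the settings above.
# Flag names are from recent LitGPT releases and may vary; the data
# module and JSON path are placeholders -- substitute your own dataset.
litgpt finetune_lora mistralai/Mistral-7B-Instruct-v0.3 \
  --lora_r 16 \
  --lora_alpha 32 \
  --lora_dropout 0.05 \
  --quantize bnb.nf4 \
  --train.max_seq_length 4098 \
  --train.max_steps 2000 \
  --data JSON --data.json_path my_dataset.json
```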

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the model and tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("exaler/aaa-2-sql")
tokenizer = AutoTokenizer.from_pretrained("exaler/aaa-2-sql")

# Create a prompt
prompt = "Your prompt here"

# Tokenize, generate, and decode
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=1024)
response = tokenizer.decode(output[0], skip_special_tokens=True)
print(response)
```
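The model name suggests a text-to-SQL use case. Assuming the standard Mistral `[INST] ... [/INST]` instruct format, a concrete prompt could be built as in the sketch below; the helper name, schema, and question are illustrative, and the exact prompt template this checkpoint was trained on is not documented here.

```python
# Sketch: build a text-to-SQL prompt in Mistral's [INST] chat format.
# The schema and question are illustrative examples only.
def build_sql_prompt(schema: str, question: str) -> str:
    instruction = (
        f"Given the following database schema:\n{schema}\n\n"
        f"Write a SQL query that answers: {question}"
    )
    # The tokenizer adds the BOS token itself, so it is not included here.
    return f"[INST] {instruction} [/INST]"

prompt = build_sql_prompt(
    "CREATE TABLE users (id INT, name TEXT, signup_date DATE);",
    "How many users signed up in 2024?",
)
print(prompt)
```

The resulting string can be passed as `prompt` in the snippet above.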