Uploaded model

  • Developed by: Majipa
  • License: apache-2.0
  • Finetuned from model: unsloth/phi-3-mini-4k-instruct-bnb-4bit

This Phi-3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
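
For reference, below is a minimal sketch of what an Unsloth + TRL finetuning setup for this base model typically looks like. The dataset, LoRA hyperparameters, and training arguments shown are illustrative assumptions only; the actual training configuration is not published on this card.

# Hypothetical Unsloth + TRL finetuning sketch (not the exact recipe used)
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/phi-3-mini-4k-instruct-bnb-4bit",  # base model listed above
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Any text-to-SQL dataset with question/context/answer fields works;
# this particular dataset is an assumption, not a statement of what was used
dataset = load_dataset("b-mc2/sql-create-context", split="train")

def to_text(example):
    # Format each example the same way the inference prompt below is written
    return {"text": f"question: {example['question']} "
                    f"context: {example['context']} "
                    f"answer: {example['answer']}"}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(per_device_train_batch_size=2,
                           num_train_epochs=1,
                           output_dir="outputs"),
)
trainer.train()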

Using the model

from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    pipeline,
)

# Load the finetuned model in 4-bit to keep GPU memory usage low
quantization_config = BitsAndBytesConfig(load_in_4bit=True)

model = AutoModelForCausalLM.from_pretrained(
    "Majipa/text-to-SQL",
    device_map="cuda",
    torch_dtype="auto",
    quantization_config=quantization_config,
)

# The tokenizer comes from the base Phi-3 Mini model
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

messages = [ 
    {"role": "system", "content": "You are a helpful text-to-SQL assistant."}, 
    {"role": "user", "content": "question: How many heads of the departments are older than 56 ? context: CREATE TABLE head (age INTEGER)"}, 
] 

pipe = pipeline( 
    "text-generation", 
    model=model, 
    tokenizer=tokenizer, 
) 

generation_args = {
    "max_new_tokens": 500,
    "do_sample": True,   # temperature only takes effect when sampling is enabled
    "temperature": 0.7,
}

output = pipe(messages, **generation_args)
# With chat-style input the pipeline returns the whole conversation;
# the last message holds the generated SQL
print(output[0]["generated_text"][-1]["content"])
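
For the example question above, the model is expected to reply with a single SQL statement, roughly SELECT COUNT(*) FROM head WHERE age > 56.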