Instructions for using Sharpaxis/Llama-2-7b-chat-finetune_TEXT2SQL with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use Sharpaxis/Llama-2-7b-chat-finetune_TEXT2SQL with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Sharpaxis/Llama-2-7b-chat-finetune_TEXT2SQL")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Sharpaxis/Llama-2-7b-chat-finetune_TEXT2SQL")
model = AutoModelForCausalLM.from_pretrained("Sharpaxis/Llama-2-7b-chat-finetune_TEXT2SQL")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Sharpaxis/Llama-2-7b-chat-finetune_TEXT2SQL with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Sharpaxis/Llama-2-7b-chat-finetune_TEXT2SQL"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Sharpaxis/Llama-2-7b-chat-finetune_TEXT2SQL",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
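Once the server above is running, the same OpenAI-compatible endpoint can be called from Python instead of curl. A minimal sketch using the official `openai` client, assuming the server is reachable at `http://localhost:8000/v1` and that no API key was set when launching it (any placeholder string is accepted in that case):

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server (OpenAI-compatible API).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="Sharpaxis/Llama-2-7b-chat-finetune_TEXT2SQL",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```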
Use Docker
```shell
docker model run hf.co/Sharpaxis/Llama-2-7b-chat-finetune_TEXT2SQL
```
- SGLang
How to use Sharpaxis/Llama-2-7b-chat-finetune_TEXT2SQL with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Sharpaxis/Llama-2-7b-chat-finetune_TEXT2SQL" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Sharpaxis/Llama-2-7b-chat-finetune_TEXT2SQL",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Sharpaxis/Llama-2-7b-chat-finetune_TEXT2SQL" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Sharpaxis/Llama-2-7b-chat-finetune_TEXT2SQL",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
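Whichever way the SGLang server is launched, its completions endpoint can also be queried programmatically. A minimal sketch with the `requests` library, assuming the server is listening on `localhost:30000` as configured above:

```python
import requests

# POST to the SGLang server's OpenAI-compatible completions endpoint.
response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "Sharpaxis/Llama-2-7b-chat-finetune_TEXT2SQL",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```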
- Docker Model Runner
How to use Sharpaxis/Llama-2-7b-chat-finetune_TEXT2SQL with Docker Model Runner:
```shell
docker model run hf.co/Sharpaxis/Llama-2-7b-chat-finetune_TEXT2SQL
```
```yaml
datasets:
- ekshat/text-2-sql-with-context
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- text-2-sql
- text-generation
- text2sql
```
Inference
```
!pip install transformers accelerate xformers bitsandbytes
```
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("ekshat/Llama-2-7b-chat-finetune-for-text2sql")

# Loading model in 4-bit precision
model = AutoModelForCausalLM.from_pretrained("ekshat/Llama-2-7b-chat-finetune-for-text2sql", load_in_4bit=True)

context = "CREATE TABLE head (name VARCHAR, born_state VARCHAR, age VARCHAR)"
question = "List the name, born state and age of the heads of departments ordered by age."

prompt = f"""Below is an context that describes a sql query, paired with an question that provides further information. Write an answer that appropriately completes the request.
### Context:
{context}
### Question:
{question}
### Answer:"""

pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200)
result = pipe(prompt)
print(result[0]['generated_text'])
```
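The pipeline echoes the prompt followed by the completion, so the SQL statement itself can be pulled out by splitting on the `### Answer:` marker. A minimal sketch, assuming the output keeps the prompt format shown above:

```python
# Keep only the text the model generated after the "### Answer:" marker.
generated = result[0]["generated_text"]
sql_query = generated.split("### Answer:")[-1].strip()
print(sql_query)
```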
Model Information
```python
model_name = "NousResearch/Llama-2-7b-chat-hf"
dataset_name = "ekshat/text-2-sql-with-context"
```
QLoRA parameters
```python
lora_r = 64
lora_alpha = 16
lora_dropout = 0.1
```
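These values correspond to a `peft` LoraConfig. A minimal sketch of how they would typically be assembled; the `bias` and `task_type` settings are assumptions based on common Llama-2 QLoRA recipes and are not stated in the card:

```python
from peft import LoraConfig

# Hypothetical reconstruction of the LoRA configuration from the values above.
peft_config = LoraConfig(
    r=64,              # lora_r
    lora_alpha=16,     # lora_alpha
    lora_dropout=0.1,  # lora_dropout
    bias="none",       # assumption: common default in QLoRA recipes
    task_type="CAUSAL_LM",
)
```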
BitsAndBytes parameters
```python
use_4bit = True
bnb_4bit_compute_dtype = "float16"
bnb_4bit_quant_type = "nf4"
use_nested_quant = False
```
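These flags map onto the `transformers` BitsAndBytesConfig used when loading the base model in 4-bit precision. A minimal sketch of that mapping; the exact wiring is an assumption based on the standard QLoRA setup:

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # use_4bit
    bnb_4bit_compute_dtype=torch.float16,  # bnb_4bit_compute_dtype = "float16"
    bnb_4bit_quant_type="nf4",             # bnb_4bit_quant_type
    bnb_4bit_use_double_quant=False,       # use_nested_quant
)
```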
Training Arguments parameters
```python
num_train_epochs = 1
fp16 = False
bf16 = False
per_device_train_batch_size = 8
per_device_eval_batch_size = 4
gradient_accumulation_steps = 1
gradient_checkpointing = True
max_grad_norm = 0.3
learning_rate = 2e-4
weight_decay = 0.001
optim = "paged_adamw_32bit"
lr_scheduler_type = "cosine"
max_steps = -1
warmup_ratio = 0.03
group_by_length = True
save_steps = 0
logging_steps = 25
```
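Collected into a `transformers.TrainingArguments` object, these values look as follows. A minimal sketch; the output directory is an assumed placeholder, not something the card specifies:

```python
from transformers import TrainingArguments

training_arguments = TrainingArguments(
    output_dir="./results",  # assumption: placeholder, not stated in the card
    num_train_epochs=1,
    fp16=False,
    bf16=False,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=1,
    gradient_checkpointing=True,
    max_grad_norm=0.3,
    learning_rate=2e-4,
    weight_decay=0.001,
    optim="paged_adamw_32bit",
    lr_scheduler_type="cosine",
    max_steps=-1,
    warmup_ratio=0.03,
    group_by_length=True,
    save_steps=0,
    logging_steps=25,
)
```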
SFT parameters
```python
max_seq_length = None
packing = False
```
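Putting the pieces together, a run like this is typically driven by `trl`'s SFTTrainer. A minimal sketch under that assumption, reusing `bnb_config`, `peft_config`, and `training_arguments` from the sketches above; the `text` column name and the older trl (~0.7) constructor signature are assumptions, since newer trl releases move these arguments into SFTConfig:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTTrainer

dataset = load_dataset("ekshat/text-2-sql-with-context", split="train")

tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-chat-hf")
tokenizer.pad_token = tokenizer.eos_token  # assumption: Llama-2 tokenizer has no pad token by default
model = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Llama-2-7b-chat-hf",
    quantization_config=bnb_config,  # from the BitsAndBytes sketch above
)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,        # from the QLoRA sketch above
    dataset_text_field="text",      # assumption: name of the formatted prompt column
    max_seq_length=None,            # max_seq_length
    tokenizer=tokenizer,
    args=training_arguments,        # from the TrainingArguments sketch above
    packing=False,                  # packing
)
trainer.train()
```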