Instructions for using budecosystem/sql-millennials-7b with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use budecosystem/sql-millennials-7b with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="budecosystem/sql-millennials-7b")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("budecosystem/sql-millennials-7b")
model = AutoModelForCausalLM.from_pretrained("budecosystem/sql-millennials-7b")
```
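As a quick sanity check, you can call the pipeline directly. A minimal sketch, using the chat-style USER/ASSISTANT prompt format documented in the model card below; the token budget and decoding settings are illustrative choices, not prescribed by the model card:

```python
# Minimal sketch: query the pipeline with the model's chat-style prompt.
# Prompt format comes from the model card; max_new_tokens is illustrative.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Create SQL query for the given table schema and question ASSISTANT:"
)
result = pipe(prompt, max_new_tokens=128, do_sample=False)
print(result[0]["generated_text"])
```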
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use budecosystem/sql-millennials-7b with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "budecosystem/sql-millennials-7b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "budecosystem/sql-millennials-7b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker
```bash
docker model run hf.co/budecosystem/sql-millennials-7b
```
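However you start it, the vLLM server exposes an OpenAI-compatible API, so you can also call it from Python. A minimal sketch, assuming the server above is running on localhost:8000 and the `openai` client package is installed:

```python
# Minimal sketch: query the vLLM server through its OpenAI-compatible API.
# Assumes `pip install openai` and a server running on localhost:8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="budecosystem/sql-millennials-7b",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```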
- SGLang
How to use budecosystem/sql-millennials-7b with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "budecosystem/sql-millennials-7b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "budecosystem/sql-millennials-7b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
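Since the SGLang server also speaks the OpenAI-compatible API, you can query it from Python as well. A minimal sketch using the `requests` package, assuming the server above is running on localhost:30000:

```python
# Minimal sketch: call the SGLang server's OpenAI-compatible completions
# endpoint. Assumes `pip install requests` and a server on localhost:30000.
import requests

response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "budecosystem/sql-millennials-7b",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```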
Use Docker images

```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "budecosystem/sql-millennials-7b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "budecosystem/sql-millennials-7b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use budecosystem/sql-millennials-7b with Docker Model Runner:
```bash
docker model run hf.co/budecosystem/sql-millennials-7b
```
---
license: apache-2.0
language:
- en
library_name: transformers
---

## Introduction 🎉

A model fine-tuned specifically for text-to-SQL tasks. It is built on Mistral 7B and fine-tuned on a curated dataset of 100k SQL query generation instructions.

## Generate responses

Now that your model is fine-tuned, you're ready to generate responses. You can do this using our generate.py script, which loads the model from the Hugging Face model hub and runs inference on a specified input. Here's an example of usage:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("budecosystem/sql-millennials-7b")
model = AutoModelForCausalLM.from_pretrained("budecosystem/sql-millennials-7b")

# The model expects a chat-style prompt with USER and ASSISTANT turns
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Create SQL query for the given table schema and question ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```
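To give the model something concrete to work with, embed the table schema and the question in the USER turn. A hypothetical sketch reusing the tokenizer and model loaded above; the `employees` schema and the question are made-up illustrations, not from the original card:

```python
# Hypothetical example: the schema and question below are illustrative only.
schema = "CREATE TABLE employees (id INT, name TEXT, department TEXT, salary INT);"
question = "What is the average salary per department?"

prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Create SQL query for the given table schema and question. "
    f"Schema: {schema} Question: {question} ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt")
sample = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(sample[0], skip_special_tokens=True))
```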
## Training details

The model was trained on 4 A100 80GB GPUs for approximately 30 hours. With a per-device batch size of 4 across 4 GPUs (an effective batch size of 16), 3 epochs over the ~100k-instruction dataset works out to roughly 19k optimizer steps, consistent with the step count below.

| Hyperparameter               |    Value     |
| :--------------------------- | :----------: |
| per_device_train_batch_size  | 4            |
| gradient_accumulation_steps  | 1            |
| epochs                       | 3            |
| steps                        | 19206        |
| learning_rate                | 2e-5         |
| lr scheduler type            | cosine       |
| warmup steps                 | 2000         |
| optimizer                    | AdamW        |
| fp16                         | True         |
| GPU                          | 4× A100 80GB |

### Acknowledgments

We'd like to thank the open-source community and the researchers whose foundational work laid the path to this model.