---
title: SQLGPT
sdk: docker
emoji: 🚀
colorFrom: yellow
colorTo: indigo
short_description: Give the table context and ask the model a question.
---
# SQLGPT
SQLGPT is a powerful model designed to generate SQL queries based on your table information and specific questions. Simply provide the context of your table, ask a question, and SQLGPT will generate the corresponding SQL query for you.
## Live

You can interact with it live here: https://sqlgpt-hazel.vercel.app/

Since it is deployed on a Hugging Face Space with a single thread and runs on CPU, please be patient; a response can take up to a minute.
## Features

- SQL Query Generation: Input table details and your question; the model generates the appropriate SQL command.
- Fine-Tuning: The model is fine-tuned from Google's Gemma 2B using a dataset available on Hugging Face.
- Model Availability: The model is available on both Kaggle and Hugging Face.
- Quantization: The fine-tuned model is quantized to 4-bit in GGUF format using llama.cpp.
## Getting Started

### Running the UI Interface on Unix Systems (Linux, macOS)
1. Clone the repository:

   ```bash
   git clone https://github.com/awaistahseen009/SQLGPT
   ```

2. Install the requirements:

   ```bash
   pip install -r requirements.txt
   ```

3. Download the quantized model from Hugging Face.

4. Update the API request URL in `App.jsx`:

   ```jsx
   // Change this line in App.jsx
   const apiUrl = "http://localhost:8000/query";
   ```

5. Start the server:

   ```bash
   uvicorn main:app
   ```

6. Launch the UI: run `npm run dev` in the `ui` folder's terminal, then open the address it prints in your browser to interact with the model (the API itself listens on http://localhost:8000).
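Once the server is running, you can also exercise the endpoint from a script. The sketch below is illustrative only: the JSON field names (`question`, `context`) are my assumption, so check the request schema in `main.py` before relying on them.

```python
# Minimal client sketch for the local SQLGPT API.
# ASSUMPTION: the payload field names ("question", "context") are guesses;
# verify them against the endpoint definition in main.py.
import json
import urllib.request

API_URL = "http://localhost:8000/query"


def build_payload(question: str, context: str) -> dict:
    """Bundle the table context and the question for the /query endpoint."""
    return {"question": question, "context": context}


def ask(question: str, context: str) -> str:
    """POST the payload to the local server and return the raw response body."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(question, context)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")


# Example usage (requires the server to be running):
# print(ask("List employees earning more than 50000",
#           "CREATE TABLE employees (id INT, name TEXT, salary INT)"))
```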
### Windows Users

If you're using Windows, the llama-cpp package may not be readily available, so run the model with llama.cpp directly:

1. Clone the llama.cpp repository:

   ```bash
   git clone https://github.com/ggerganov/llama.cpp
   ```

2. Download the quantized model from Hugging Face.

3. Run the model. In your terminal, execute the following command:

   ```bash
   ./llama.cpp/llama-cli -m ./quantized_model/sql_gpt_quantized.gguf -n 256 -p "### QUESTION:\n{question_here}\n\n### CONTEXT:\n{context_here}\n\n### [RESPONSE]:\n"
   ```
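To avoid escaping the prompt by hand in the shell, the same invocation can be scripted. This is a sketch, not part of the project: the function name is mine, and the binary and model paths simply mirror the command above, so adjust them for your setup.

```python
# Build the llama-cli invocation as an argument list so the multi-line
# prompt needs no shell escaping. Paths mirror the README command above.
import os
import subprocess


def llama_cli_command(question: str, context: str,
                      binary: str = "./llama.cpp/llama-cli",
                      model: str = "./quantized_model/sql_gpt_quantized.gguf",
                      n_predict: int = 256) -> list:
    """Return the argv list for running SQLGPT through llama-cli."""
    prompt = (f"### QUESTION:\n{question}\n\n"
              f"### CONTEXT:\n{context}\n\n"
              f"### [RESPONSE]:\n")
    return [binary, "-m", model, "-n", str(n_predict), "-p", prompt]


cmd = llama_cli_command("Count the users", "CREATE TABLE users (id INT)")
if os.path.exists(cmd[0]):  # only run when the llama-cli binary is built
    subprocess.run(cmd, capture_output=True, text=True)
```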
### Prompt Template

Use the following prompt template when interacting with the model:

```
### QUESTION:
{question_here}

### CONTEXT:
{context_here}

### [RESPONSE]:
```
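When scripting against the model, the template can be filled programmatically. The helper name below is my own; the template string itself is taken from this README.

```python
# Fill SQLGPT's prompt template with a question and table context.
# The template matches the one documented above; build_prompt is illustrative.
PROMPT_TEMPLATE = (
    "### QUESTION:\n{question}\n\n"
    "### CONTEXT:\n{context}\n\n"
    "### [RESPONSE]:\n"
)


def build_prompt(question: str, context: str) -> str:
    return PROMPT_TEMPLATE.format(question=question, context=context)


# Example:
prompt = build_prompt(
    "How many orders were placed in 2023?",
    "CREATE TABLE orders (id INT, placed_at DATE)",
)
print(prompt)
```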
## Fine-Tuned and Quantization Files
You can download the fine-tuned model and quantization files from the SQLGPT Fine Tune Material Repository.
## Contributing
Contributions are welcome! Feel free to fork the project, make improvements, and submit a pull request.
Happy querying with SQLGPT!