Instructions for using AIAT/EXP-mindblow with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use AIAT/EXP-mindblow with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="AIAT/EXP-mindblow")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("AIAT/EXP-mindblow")
model = AutoModelForCausalLM.from_pretrained("AIAT/EXP-mindblow")
```

- Notebooks
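By default the text-generation pipeline returns the prompt plus the continuation; a minimal sketch of wrapping it so only the newly generated SQL comes back (the helper name `generate_sql` is illustrative, not part of the model card):

```python
def generate_sql(pipe, prompt, max_new_tokens=256):
    """Call a transformers text-generation pipeline and return only the new text.

    return_full_text=False asks the pipeline to strip the prompt from the
    output, so the result is just the model's continuation (the SQL query).
    """
    outputs = pipe(
        prompt,
        max_new_tokens=max_new_tokens,
        do_sample=False,  # deterministic decoding suits SQL generation
        return_full_text=False,
    )
    return outputs[0]["generated_text"].strip()
```

Call it as `sql = generate_sql(pipe, formatted_prompt)` with the pipeline created above.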
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use AIAT/EXP-mindblow with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "AIAT/EXP-mindblow"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "AIAT/EXP-mindblow",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker:

```shell
docker model run hf.co/AIAT/EXP-mindblow
```
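The curl call above can also be made from Python; a sketch assuming the vLLM server from the previous step is running on localhost:8000 (the helper name and the choice of the stdlib `urllib` are mine, not from the card):

```python
import json
import urllib.request

def completion_request(model, prompt, base_url="http://localhost:8000",
                       max_tokens=512, temperature=0.5):
    """Build the OpenAI-compatible /v1/completions request for a vLLM server."""
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# With the server running, send it like this:
# with urllib.request.urlopen(completion_request("AIAT/EXP-mindblow", "Once upon a time,")) as r:
#     print(json.loads(r.read())["choices"][0]["text"])
```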
- SGLang
How to use AIAT/EXP-mindblow with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "AIAT/EXP-mindblow" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "AIAT/EXP-mindblow",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "AIAT/EXP-mindblow" \
    --host 0.0.0.0 \
    --port 30000
```

- Docker Model Runner
How to use AIAT/EXP-mindblow with Docker Model Runner:
```shell
docker model run hf.co/AIAT/EXP-mindblow
```
EXP-mindblow 13b 1.0.0
Overview:
This model is a fine-tuned version of openthaigpt/openthaigpt-1.0.0-13b-chat, trained on text-to-SQL datasets to generate SQL queries.
Recommendation:
Use this model together with other LLMs to summarize or synthesize the final response: because it was fine-tuned heavily on SQL, it may hallucinate when generating Thai and English text.
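The recommended two-step flow (this model writes the SQL, a second LLM phrases the final answer from the query results) might look like the sketch below; `generate_sql` and `summarize` stand in for calls to the two models and are hypothetical names, not part of the card:

```python
import sqlite3

def answer_question(conn, generate_sql, summarize, question):
    """Two-model flow: the text-to-SQL model writes the query,
    then a second LLM turns the fetched rows into a natural-language answer."""
    sql = generate_sql(question)          # this model: question -> SQLite query
    rows = conn.execute(sql).fetchall()   # run the query against the database
    return summarize(question, rows)      # second LLM: rows -> Thai/English answer
```

The same pattern works with any SQL backend; `sqlite3` is used here because the model targets SQLite.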
Prompting Guide:
The model was trained with the following context:
```
[INST]<<SYS>> You are a question answering assistant. Answer the question as truthful and helpful as possible<</SYS>>
You are a SQLite expert. Given an input question, create a syntactically correct SQLite query to run. You can order the results to return the most informative data in the database. If the query asks for MAX or MIN you must return only one answer using LIMIT 1. Never query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in double quotes (") to denote them as delimited identifiers. Pay attention to use only the column names you can see in the table below. Be careful to not query for columns that do not exist. DO WRAP EVERY COLUMN NAME WITH ("). For example: DO: "Market Cap". DO NOT: Market Cap
Use the following format:
SQLQuery: SQL Query to run
Only use the following columns of the given table:
{input}
###RULES
Remember to DO WRAP EVERY COLUMN NAME WITH double quote("). For example: DO: "Market Cap". DO NOT: Market Cap
IF the question is not related to the columns or table. Just say I don't know.
Question: {instruction} \n
This is example of the table : {head of dataframe}
SQL Query: [/INST]
```
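To reproduce the training-time context at inference, the template can be filled programmatically; a minimal sketch, where `build_prompt` and its argument names are hypothetical, and the `<<SYS>>`/`<</SYS>>` tags are assumed from the Llama-2 chat format the base model uses:

```python
# Template copied from the training context in the card; the <<SYS>> tags
# are an assumption (they appear garbled in the card's rendering).
TEMPLATE = """[INST]<<SYS>> You are a question answering assistant. Answer the question as truthful and helpful as possible<</SYS>>
You are a SQLite expert. Given an input question, create a syntactically correct SQLite query to run. You can order the results to return the most informative data in the database. If the query asks for MAX or MIN you must return only one answer using LIMIT 1. Never query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in double quotes (") to denote them as delimited identifiers. Pay attention to use only the column names you can see in the table below. Be careful to not query for columns that do not exist. DO WRAP EVERY COLUMN NAME WITH ("). For example: DO: "Market Cap". DO NOT: Market Cap
Use the following format:
SQLQuery: SQL Query to run
Only use the following columns of the given table:
{input}
###RULES
Remember to DO WRAP EVERY COLUMN NAME WITH double quote("). For example: DO: "Market Cap". DO NOT: Market Cap
IF the question is not related to the columns or table. Just say I don't know.
Question: {instruction}
This is example of the table : {head of dataframe}
SQL Query: [/INST]"""

def build_prompt(columns: str, question: str, table_head: str) -> str:
    """Substitute the card's placeholders. str.replace is used instead of
    str.format because the "{head of dataframe}" placeholder contains spaces."""
    return (TEMPLATE
            .replace("{input}", columns)
            .replace("{instruction}", question)
            .replace("{head of dataframe}", table_head))
```

The resulting string is what you pass to the pipeline or the inference server as the prompt.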