Instructions for using nobodynosql/sql_codellama with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use nobodynosql/sql_codellama with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="nobodynosql/sql_codellama")

# Load the model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("nobodynosql/sql_codellama")
model = AutoModelForCausalLM.from_pretrained("nobodynosql/sql_codellama")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use nobodynosql/sql_codellama with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "nobodynosql/sql_codellama"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "nobodynosql/sql_codellama",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker
```shell
docker model run hf.co/nobodynosql/sql_codellama
```
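The curl call above can also be made from Python with only the standard library; a minimal sketch, assuming the vLLM server from the previous step is running on localhost:8000 (the SQL-style prompt here is illustrative):

```python
import json
from urllib import request, error

# Request body matching the curl example above.
payload = {
    "model": "nobodynosql/sql_codellama",
    "prompt": "-- Question: list the names of all singers\n-- SQL:\n",
    "max_tokens": 256,
    "temperature": 0.0,
}

def complete(url: str = "http://localhost:8000/v1/completions") -> dict:
    # POST the JSON payload to the OpenAI-compatible completions endpoint.
    req = request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

try:
    print(complete()["choices"][0]["text"])
except error.URLError:
    print("vLLM server not reachable; start it with `vllm serve` as shown above.")
```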
- SGLang
How to use nobodynosql/sql_codellama with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "nobodynosql/sql_codellama" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "nobodynosql/sql_codellama",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "nobodynosql/sql_codellama" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "nobodynosql/sql_codellama",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use nobodynosql/sql_codellama with Docker Model Runner:
```shell
docker model run hf.co/nobodynosql/sql_codellama
```
About sql_codellama
SQL-Codellama is a model for text2SQL.
Base model
It is built on the CodeLlama model and fine-tuned with QLoRA.
Training data
The training data includes datasets such as Spider and StarCoder. The model's goal is to convert natural-language queries into SQL queries.
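A minimal sketch of prompting the model for this task. The card does not document the prompt format used in training, so the schema-plus-question layout below is an assumption, and `build_sql_prompt` is a hypothetical helper:

```python
# Hypothetical prompt builder for text2SQL; the schema-plus-question
# layout is an assumption, not the documented training format.
def build_sql_prompt(question: str, schema: str) -> str:
    return (
        f"-- Database schema:\n{schema}\n"
        f"-- Question: {question}\n"
        "-- SQL:\n"
    )

prompt = build_sql_prompt(
    "How many singers do we have?",
    "CREATE TABLE singer (singer_id INT, name TEXT, age INT);",
)
print(prompt)
# Feed `prompt` to the Transformers pipeline from the section above, e.g.:
# pipe(prompt, max_new_tokens=128, do_sample=False)
```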
Features
Text to SQL (hereafter Text2SQL) is the process of converting natural-language text into the structured query language SQL; it is a subtask of semantic parsing in natural language processing. Its purpose can be summarized as "breaking down the barrier between people and structured data": ordinary users can query a complex database simply by describing what they want in natural language. The model understands a user's question by learning syntax, semantics, and query intent, and generates the corresponding SQL statement for the given database schema. SQL-Codellama's training involved extensive data preprocessing, feature extraction, and model training to improve its accuracy and performance. It can be applied in areas such as data analysis and database query optimization. SQL-Codellama is designed and trained to handle complex queries and produce high-quality SQL, with the goal of providing accurate, efficient text-to-SQL conversion that makes database querying and data analysis easier for users.
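To make the task concrete, here is an illustrative Spider-style question/SQL pair executed against a toy SQLite schema. The SQL is hand-written for illustration, not verbatim model output:

```python
import sqlite3

# Illustrative Spider-style pair (hand-written, not model output):
# question: "What is the average age of singers from France?"
sql = "SELECT AVG(age) FROM singer WHERE country = 'France';"

# Build a toy in-memory database to run the query against.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE singer (singer_id INT, name TEXT, age INT, country TEXT);")
conn.executemany(
    "INSERT INTO singer VALUES (?, ?, ?, ?)",
    [(1, "Alice", 30, "France"), (2, "Bruno", 40, "France"), (3, "Cara", 25, "USA")],
)
avg_age = conn.execute(sql).fetchone()[0]
print(avg_age)  # 35.0
```

Executing candidate queries against a reference database like this is also how benchmarks such as Spider check execution accuracy.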
```shell
git clone https://www.modelscope.cn/tomatoModelScope/sql_codellama.git
```
- Downloads last month: 18