# mlx-community/defog-sqlcoder-7b-2
This model was converted to MLX format from [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2).
Refer to the [original model card](https://huggingface.co/defog/sqlcoder-7b-2) for more details on the model.
## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the MLX-converted model and tokenizer from the Hugging Face Hub
model, tokenizer = load("mlx-community/defog-sqlcoder-7b-2")

# Generate a completion; verbose=True streams the output and prints generation stats
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
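Since defog-sqlcoder-7b-2 is a text-to-SQL model, a prompt containing a question and a database schema is more representative than `"hello"`. Below is a minimal sketch with an illustrative prompt layout and table definition; the exact prompt template the model was trained with is documented in the original defog/sqlcoder-7b-2 model card, so treat this layout as an assumption.

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/defog-sqlcoder-7b-2")

# Illustrative text-to-SQL prompt; the schema and section headers here are
# examples, not the model's official prompt template.
prompt = """### Task
Generate a SQL query to answer the following question:
How many customers signed up in 2023?

### Database Schema
CREATE TABLE customers (
    id INTEGER PRIMARY KEY,
    name TEXT,
    signup_date DATE
);

### SQL
"""

# Allow enough tokens for a full query
response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
```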
## Use with Docker

Run the model with Docker Model Runner:

```bash
docker model run hf.co/mlx-community/defog-sqlcoder-7b-2
```

## Use with vLLM

Install vLLM from pip, start an OpenAI-compatible server, then call it with curl:

```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server with the model:
vllm serve "mlx-community/defog-sqlcoder-7b-2"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "mlx-community/defog-sqlcoder-7b-2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
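Because the vLLM server exposes an OpenAI-compatible API, it can also be called from Python with the official `openai` client instead of curl. A minimal sketch, assuming the default local server address from the command above (vLLM does not validate the API key, but the client requires a non-empty value):

```python
from openai import OpenAI

# Point the client at the local vLLM server started above
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Mirrors the curl request: a plain text completion against the served model
completion = client.completions.create(
    model="mlx-community/defog-sqlcoder-7b-2",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```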