How to use tmnam20/codellama-13b-text2sql-gguf with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="tmnam20/codellama-13b-text2sql-gguf",
    filename="ggml-model-q4_k_m.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True,
)
print(output)
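Since this is a text-to-SQL model, a storytelling prompt is not representative. Below is a minimal sketch of a text-to-SQL call reusing the llm object above; the prompt layout (schema followed by a question) and the stop token are assumptions, not the model's documented training format:

# A hypothetical text-to-SQL prompt; adjust to the model's actual prompt format
prompt = """Given the following schema:
CREATE TABLE employees (id INT, name TEXT, salary INT, department TEXT);

Write a SQL query that answers the question:
What is the average salary per department?

SQL:"""

output = llm(
    prompt,
    max_tokens=256,
    temperature=0.0,  # deterministic decoding suits SQL generation
    stop=[";"],       # stop after the first statement (assumption)
)
print(output["choices"][0]["text"])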
How to use tmnam20/codellama-13b-text2sql-gguf with llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf tmnam20/codellama-13b-text2sql-gguf:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf tmnam20/codellama-13b-text2sql-gguf:Q4_K_M
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf tmnam20/codellama-13b-text2sql-gguf:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf tmnam20/codellama-13b-text2sql-gguf:Q4_K_M
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf tmnam20/codellama-13b-text2sql-gguf:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf tmnam20/codellama-13b-text2sql-gguf:Q4_K_M
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf tmnam20/codellama-13b-text2sql-gguf:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf tmnam20/codellama-13b-text2sql-gguf:Q4_K_M
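Whichever install path you use, llama-server exposes an OpenAI-compatible API, by default on http://localhost:8080. A minimal sketch with the openai Python client; the text-to-SQL prompt is an assumption, and chat formatting depends on the chat template embedded in the GGUF:

# pip install openai
from openai import OpenAI

# llama-server listens on port 8080 by default; no real API key is required
client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key-required")

response = client.chat.completions.create(
    model="tmnam20/codellama-13b-text2sql-gguf",  # informational for llama-server
    messages=[
        {
            "role": "user",
            "content": "Schema: CREATE TABLE users (id INT, email TEXT);\n"
                       "Question: How many users are there?\nSQL:",
        },
    ],
    temperature=0.0,
)
print(response.choices[0].message.content)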
How to use tmnam20/codellama-13b-text2sql-gguf with Ollama:
ollama run hf.co/tmnam20/codellama-13b-text2sql-gguf:Q4_K_M
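Once the model is pulled, Ollama also serves a local HTTP API on port 11434. A minimal sketch using its /api/generate endpoint (the prompt format is an assumption):

# pip install requests
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "hf.co/tmnam20/codellama-13b-text2sql-gguf:Q4_K_M",
        "prompt": "Schema: CREATE TABLE orders (id INT, total REAL);\n"
                  "Question: What is the total revenue?\nSQL:",
        "stream": False,  # return one JSON object instead of a stream
    },
    timeout=300,
)
print(resp.json()["response"])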
How to use tmnam20/codellama-13b-text2sql-gguf with Unsloth Studio:
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for tmnam20/codellama-13b-text2sql-gguf to start chatting
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for tmnam20/codellama-13b-text2sql-gguf to start chatting
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for tmnam20/codellama-13b-text2sql-gguf to start chatting
How to use tmnam20/codellama-13b-text2sql-gguf with Docker Model Runner:

docker model run hf.co/tmnam20/codellama-13b-text2sql-gguf:Q4_K_M
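Docker Model Runner can also expose an OpenAI-compatible endpoint to the host. A minimal sketch, assuming host-side TCP access is enabled in Docker Desktop and the default base URL http://localhost:12434/engines/v1 (both are configuration-dependent assumptions; check your Docker settings):

# pip install openai
from openai import OpenAI

# Base URL is an assumption; enable host-side TCP support in Docker Desktop first
client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="hf.co/tmnam20/codellama-13b-text2sql-gguf:Q4_K_M",
    messages=[{"role": "user", "content": "Write SQL to count rows in a table named events."}],
)
print(response.choices[0].message.content)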
How to use tmnam20/codellama-13b-text2sql-gguf with Lemonade:
# Download Lemonade from https://lemonade-server.ai/
lemonade pull tmnam20/codellama-13b-text2sql-gguf:Q4_K_M
lemonade run user.codellama-13b-text2sql-gguf-Q4_K_M
lemonade list
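Lemonade runs a local server with an OpenAI-compatible API. A minimal sketch, assuming the base URL http://localhost:8000/api/v1 (port and path may differ by install; verify against the server's startup output):

# pip install openai
from openai import OpenAI

# Default Lemonade Server address is an assumption; check your install
client = OpenAI(base_url="http://localhost:8000/api/v1", api_key="lemonade")

response = client.chat.completions.create(
    model="user.codellama-13b-text2sql-gguf-Q4_K_M",  # name as shown by `lemonade list`
    messages=[{"role": "user", "content": "Write SQL to list all table names in SQLite."}],
)
print(response.choices[0].message.content)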