Tags: Text Generation, Transformers, Safetensors, English, llama, code, text-generation-inference, 4-bit precision, awq
Instructions for using TheBloke/Code-290k-13B-AWQ with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use TheBloke/Code-290k-13B-AWQ with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="TheBloke/Code-290k-13B-AWQ")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("TheBloke/Code-290k-13B-AWQ")
model = AutoModelForCausalLM.from_pretrained("TheBloke/Code-290k-13B-AWQ")
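A minimal usage sketch, continuing from the pipeline defined above. The prompt and sampling parameters here are illustrative assumptions, not values from the model card; note that loading AWQ checkpoints through Transformers generally requires the autoawq package to be installed as well.

# Generate from the pipeline defined above; the prompt and sampling
# parameters are illustrative, not model-card defaults.
outputs = pipe(
    "Write a Python function that reverses a string.",
    max_new_tokens=256,
    do_sample=True,
    temperature=0.5,
)
print(outputs[0]["generated_text"])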
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use TheBloke/Code-290k-13B-AWQ with vLLM:
Install vLLM from pip and serve the model:
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "TheBloke/Code-290k-13B-AWQ"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "TheBloke/Code-290k-13B-AWQ",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
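Because the server speaks the OpenAI-compatible API, you can also call it from Python. A minimal sketch, assuming the vLLM server from the snippet above is running on localhost:8000 and the openai client package is installed:

from openai import OpenAI

# Point the client at the local vLLM server; the API key is required
# by the client but ignored by vLLM.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="TheBloke/Code-290k-13B-AWQ",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)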
- SGLang
How to use TheBloke/Code-290k-13B-AWQ with SGLang:
Install SGLang from pip and serve the model:
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "TheBloke/Code-290k-13B-AWQ" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "TheBloke/Code-290k-13B-AWQ",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
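The same endpoint can also be called from Python with the requests library. A minimal sketch mirroring the curl call above (the prompt is an illustrative assumption):

import requests

# Assumes the SGLang server from the snippet above is listening on port 30000.
resp = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "TheBloke/Code-290k-13B-AWQ",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(resp.json()["choices"][0]["text"])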
Use Docker images

docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "TheBloke/Code-290k-13B-AWQ" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "TheBloke/Code-290k-13B-AWQ",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'

- Docker Model Runner
How to use TheBloke/Code-290k-13B-AWQ with Docker Model Runner:
docker model run hf.co/TheBloke/Code-290k-13B-AWQ
Noob's question
#1
by kekawia - opened
Can I run this?
-- RTX 3060, 12 GB VRAM / 32 GB RAM --
@kekawia
Yeah, you can 100% run this, but probably not at 290k context. AWQ relies mostly on VRAM, so you won't get a lot of context (maybe 8k?). GGUF quants with llama.cpp should help you run much more context, since llama.cpp can also use the CPU and system RAM; it might reach something like 60k context, definitely much more than 8k.
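For a rough sense of why context is VRAM-bound with AWQ, here is a back-of-envelope sketch. All figures are assumptions for illustration (llama-13B architecture with 40 layers, 40 KV heads, head dim 128, fp16 KV cache, roughly 4-bit weights), not measurements:

# Rough VRAM estimate for a 13B llama model quantized to ~4 bits (AWQ).
# All figures are assumptions for illustration, not measurements.
params = 13e9
weights_gb = params * 0.5 / 1e9               # ~4 bits/param -> ~6.5 GB of weights

# KV cache per token (llama-13B: 40 layers, 40 KV heads, head_dim 128, fp16):
layers, kv_heads, head_dim = 40, 40, 128
kv_bytes_per_token = 2 * layers * kv_heads * head_dim * 2   # K and V, 2 bytes each

vram_gb = 12                                  # RTX 3060
free_gb = vram_gb - weights_gb - 1.5          # headroom for activations/CUDA
max_context = int(free_gb * 1e9 / kv_bytes_per_token)
print(f"weights ~{weights_gb:.1f} GB, KV ~{kv_bytes_per_token / 1e6:.2f} MB/token, "
      f"context fits ~{max_context} tokens")  # on the order of a few thousand

With a GGUF quant, llama.cpp can keep part of the weights in system RAM instead, freeing VRAM for the KV cache, which is why it can reach a much larger context on the same card.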