How to use Phind/Phind-CodeLlama-34B-v2 with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Phind/Phind-CodeLlama-34B-v2")
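A minimal sketch of calling the pipeline; the prompt text and sampling settings below are illustrative assumptions rather than values from the model card (check the card for the model's expected prompt format):

# Generate a short completion; parameters are placeholders
output = pipe(
    "Write a Python function that reverses a string.",
    max_new_tokens=128,
    do_sample=True,
    temperature=0.5,
)
print(output[0]["generated_text"])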
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Phind/Phind-CodeLlama-34B-v2")
model = AutoModelForCausalLM.from_pretrained("Phind/Phind-CodeLlama-34B-v2")
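Continuing from the load above, a generation call might look like the sketch below; the prompt and sampling parameters are illustrative assumptions, and in practice the 34B weights typically need fp16/bf16 and something like device_map="auto" to fit on GPU:

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sampling settings are placeholders; adjust to taste
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))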
How to use Phind/Phind-CodeLlama-34B-v2 with vLLM:
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Phind/Phind-CodeLlama-34B-v2"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Phind/Phind-CodeLlama-34B-v2",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
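Because the vLLM server exposes an OpenAI-compatible API, the same request can also be made from Python with the openai client; the base URL and placeholder API key below are assumptions matching the curl example:

from openai import OpenAI

# Point the client at the local vLLM server (no real key is needed)
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="Phind/Phind-CodeLlama-34B-v2",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)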
How to use Phind/Phind-CodeLlama-34B-v2 with SGLang:
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "Phind/Phind-CodeLlama-34B-v2" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Phind/Phind-CodeLlama-34B-v2",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
# Or run the SGLang server in Docker:
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "Phind/Phind-CodeLlama-34B-v2" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Phind/Phind-CodeLlama-34B-v2",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
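The SGLang server is likewise OpenAI-compatible, so an equivalent Python client call could look like this; port 30000 matches the launch commands above and the API key is a placeholder:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="Phind/Phind-CodeLlama-34B-v2",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)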
How to use Phind/Phind-CodeLlama-34B-v2 with Docker Model Runner:

docker model run hf.co/Phind/Phind-CodeLlama-34B-v2
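Once the model is running under Docker Model Runner, it can be queried over its OpenAI-compatible endpoint. The port and path below are assumptions based on Docker Model Runner's default host TCP port (12434), which may need to be enabled in your Docker settings first:

curl -X POST "http://localhost:12434/engines/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "hf.co/Phind/Phind-CodeLlama-34B-v2",
        "messages": [{"role": "user", "content": "Write a Python function that reverses a string."}]
    }'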
This is excellent, really impressive! Are there any plans to release smaller model sizes? For example, a 3B or 7B model trained on the same dataset could be useful for speculative decoding. TIA