Instructions for using aws-neuron/CodeLlama-7b-hf-neuron-8xlarge with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use aws-neuron/CodeLlama-7b-hf-neuron-8xlarge with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="aws-neuron/CodeLlama-7b-hf-neuron-8xlarge")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("aws-neuron/CodeLlama-7b-hf-neuron-8xlarge")
model = AutoModelForCausalLM.from_pretrained("aws-neuron/CodeLlama-7b-hf-neuron-8xlarge")
```
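Note that this repository holds a checkpoint pre-compiled for AWS Neuron, so on an Inferentia2 instance the optimum-neuron loader is likely the intended path. A minimal sketch, assuming optimum-neuron is installed (for example via `pip install optimum[neuronx]`; the exact package extra and instance setup are assumptions, check the Neuron documentation) and the instance exposes the required Neuron cores:

```python
# Minimal sketch (assumption: running on an inf2/trn1 instance with optimum-neuron installed).
from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForCausalLM

model_id = "aws-neuron/CodeLlama-7b-hf-neuron-8xlarge"

# The tokenizer is a standard CodeLlama tokenizer.
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load the pre-compiled Neuron artifacts from the repository instead of recompiling.
model = NeuronModelForCausalLM.from_pretrained(model_id)

# Generate a code completion.
inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```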
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use aws-neuron/CodeLlama-7b-hf-neuron-8xlarge with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "aws-neuron/CodeLlama-7b-hf-neuron-8xlarge"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "aws-neuron/CodeLlama-7b-hf-neuron-8xlarge",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker
```shell
docker model run hf.co/aws-neuron/CodeLlama-7b-hf-neuron-8xlarge
```
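However the server is started, it exposes an OpenAI-compatible API, so it can also be called from Python instead of curl. A minimal sketch, assuming the `openai` client package is installed and the server is listening on localhost:8000:

```python
# Minimal sketch: query the vLLM OpenAI-compatible endpoint with the openai client.
# Assumes `pip install openai` and a server started as shown above.
from openai import OpenAI

# Placeholder key; it is only checked if the server was started with an API key configured.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.completions.create(
    model="aws-neuron/CodeLlama-7b-hf-neuron-8xlarge",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(response.choices[0].text)
```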
- SGLang
How to use aws-neuron/CodeLlama-7b-hf-neuron-8xlarge with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "aws-neuron/CodeLlama-7b-hf-neuron-8xlarge" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "aws-neuron/CodeLlama-7b-hf-neuron-8xlarge",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "aws-neuron/CodeLlama-7b-hf-neuron-8xlarge" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "aws-neuron/CodeLlama-7b-hf-neuron-8xlarge",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
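The same OpenAI-compatible endpoint can be queried from Python instead of curl. A minimal sketch, assuming the `requests` package is installed and the SGLang server is listening on port 30000:

```python
# Minimal sketch: post the same completion request to the SGLang server with requests.
import requests

payload = {
    "model": "aws-neuron/CodeLlama-7b-hf-neuron-8xlarge",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}
response = requests.post("http://localhost:30000/v1/completions", json=payload)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```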
- Docker Model Runner
How to use aws-neuron/CodeLlama-7b-hf-neuron-8xlarge with Docker Model Runner:
```shell
docker model run hf.co/aws-neuron/CodeLlama-7b-hf-neuron-8xlarge
```
Upload folder using huggingface_hub
Multi commit ID: 72c77c89ebd8a4c9e276fa7c6a6f003bbe4a100d74841daf11ea949d873482a8
Scheduled commits:
- Upload 21 file(s) totalling 2.0G (56f1ef29f3bed83fa1ea8accdfdc2df54607937df4d16f96c4a212d271bc719e)
- Upload 24 file(s) totalling 2.1G (e8ed5629da0a4861487b1bf8c5385fe71e808a2cb0f849ead36f407bb0fa0610)
- Upload 22 file(s) totalling 2.1G (8f0df90739d90402b4a18c252948f915a3c0b46c5fdfd6d510d7915159500a0e)
- Upload 23 file(s) totalling 2.1G (5ea8ae4b66c2b96996e5272ae2b832b94ce3e60aa5e3a41d71612240888e78be)
- Upload 25 file(s) totalling 2.1G (511604691ce5583b8f7941ce59eaab055fa54d147e23411a3435d4ce979dc2b4)
- Upload 25 file(s) totalling 2.1G (d1be15f80ba9319496ee91217b3199fcef40726c0c0ec2a17342dd088bf3b9c4)
- Upload 20 file(s) totalling 2.0G (717c1316d0a17e651b6e09c8b79ff91fe04b3b2e3cf78e60b7f6f24996af85e7)
- Upload 20 file(s) totalling 2.0G (2ef9e0e6fa270c62ec588ba8830c331f39bd830de3ccfb2cd683582006384a6c)
- Upload 26 file(s) totalling 2.1G (2af91306f926c3e84b6effb1fcb8d9bcfe67dfce5560bffa96cf3a1ef76c24da)
- Upload 20 file(s) totalling 2.1G (defab67865fba11805e36a141c23ae2c30c9d68a48bf3cd8627425abe6e6ec40)
- Upload 23 file(s) totalling 2.1G (0b13ca37028a01945c617c1322d4f935ba0ccd820a899ae964fdd8f4631872ee)
- Upload 20 file(s) totalling 2.0G (74148c1005566e92c07022708b0cd9ee4c09b74c8c197fc829ef2a218973f201)
- Upload 22 file(s) totalling 2.0G (f30dfef0b32e688f7a1e8c8e728e734c9276ca18e5a205a6ef21bad7ea9bd541)
- Upload 19 file(s) totalling 208.1M (78a62f058d9e53fcf90e101a9bb780dac1b53f59e8e22c33eefc0fbfb8171323)
This is a PR opened using the huggingface_hub library in the context of a multi-commit. The PR can be commented on like a usual PR. However, please be aware that manually updating the PR description, changing the PR status, or pushing new commits is not recommended, as it might corrupt the commit process. Learn more about multi-commits in this guide.
The multi-commit is now complete! You can ping the repo owner to review the changes. This PR can now be commented on or modified without risk of corrupting it.
This is a comment posted using the huggingface_hub library in the context of a multi-commit. Learn more about multi-commits in this guide.
Hey @CyranoB, these NEFFs are for a 12-core compilation, while this model card is for a 2-core compilation. The 12-core version is https://huggingface.co/aws-neuron/CodeLlama-7b-hf-neuron-24xlarge
However, I think the files are already there based on the file names.