Instructions for using RedHatAI/Meta-Llama-3-8B-Instruct-FP8 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use RedHatAI/Meta-Llama-3-8B-Instruct-FP8 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="RedHatAI/Meta-Llama-3-8B-Instruct-FP8")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("RedHatAI/Meta-Llama-3-8B-Instruct-FP8")
model = AutoModelForCausalLM.from_pretrained("RedHatAI/Meta-Llama-3-8B-Instruct-FP8")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
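To stream tokens to the terminal as they are generated instead of waiting for the full output, here is a minimal sketch reusing the `tokenizer`, `model`, and `inputs` objects from the example above (the `TextStreamer` settings shown are one reasonable configuration, not the only one):

```python
# Stream generated tokens to stdout as they are produced.
# Assumes `tokenizer`, `model`, and `inputs` from the "Load model directly" example above.
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**inputs, max_new_tokens=40, streamer=streamer)
```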
- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use RedHatAI/Meta-Llama-3-8B-Instruct-FP8 with vLLM:
Install from pip and serve model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "RedHatAI/Meta-Llama-3-8B-Instruct-FP8"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "RedHatAI/Meta-Llama-3-8B-Instruct-FP8",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

Use Docker
```bash
docker model run hf.co/RedHatAI/Meta-Llama-3-8B-Instruct-FP8
```
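The server's OpenAI-compatible endpoint can also be called from Python instead of curl. A minimal sketch using the `openai` client, assuming the vLLM server started above is listening on localhost:8000 (the dummy API key is just a placeholder; vLLM does not check it unless configured to):

```python
# Query the local vLLM server through its OpenAI-compatible API.
# Assumes the server started above is listening on http://localhost:8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="RedHatAI/Meta-Llama-3-8B-Instruct-FP8",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```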
- SGLang
How to use RedHatAI/Meta-Llama-3-8B-Instruct-FP8 with SGLang:
Install from pip and serve model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "RedHatAI/Meta-Llama-3-8B-Instruct-FP8" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "RedHatAI/Meta-Llama-3-8B-Instruct-FP8",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

Use Docker images

```bash
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "RedHatAI/Meta-Llama-3-8B-Instruct-FP8" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "RedHatAI/Meta-Llama-3-8B-Instruct-FP8",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```
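Since the SGLang server also speaks the OpenAI-compatible chat completions API, the same request can be issued from Python. A minimal sketch using `requests`, assuming the server launched above is reachable on localhost:30000:

```python
# Call the SGLang server's OpenAI-compatible endpoint from Python.
# Assumes the server launched above is reachable on http://localhost:30000.
import requests

payload = {
    "model": "RedHatAI/Meta-Llama-3-8B-Instruct-FP8",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}
resp = requests.post("http://localhost:30000/v1/chat/completions", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```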
- Docker Model Runner
How to use RedHatAI/Meta-Llama-3-8B-Instruct-FP8 with Docker Model Runner:
```bash
docker model run hf.co/RedHatAI/Meta-Llama-3-8B-Instruct-FP8
```
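The command above starts an interactive chat with the model. Docker Model Runner can also expose an OpenAI-compatible HTTP endpoint; the sketch below assumes host-side TCP access has been enabled on port 12434 (for example via `docker desktop enable model-runner --tcp 12434`). The port, path, and enable command may differ with your Docker Desktop version, so treat them as assumptions to verify:

```python
# Hedged sketch: call Docker Model Runner's OpenAI-compatible API from the host.
# Assumes TCP access was enabled on port 12434; adjust the URL to your setup.
import requests

payload = {
    "model": "hf.co/RedHatAI/Meta-Llama-3-8B-Instruct-FP8",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}
resp = requests.post("http://localhost:12434/engines/v1/chat/completions", json=payload, timeout=120)
print(resp.json()["choices"][0]["message"]["content"])
```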
Fails to run with nm-vllm
Hello,
Python version: 3.11
OS: WSL
The only other dependencies are those specified by nm-vllm. I followed the instructions here:
https://github.com/neuralmagic/nm-vllm
I ran the model as follows, after installing all of the dependencies in a fresh conda env:
(nm-vllm) unix@rog-zephyrus:/code/structure$ pip install nm-vllm[sparse]
(nm-vllm) unix@rog-zephyrus:/code/structure$ python -m vllm.entrypoints.openai.api_server --model neuralmagic/Meta-Llama-3-8B-Instruct-FP8 --sparsity sparse_w16a16
INFO 05-11 23:41:26 api_server.py:149] vLLM API server version 0.2.0
warnings.warn(
Traceback (most recent call last):
File "", line 198, in _run_module_as_main
File "", line 88, in _run_code
File "/home/unix/miniconda3/envs/nm-vllm/lib/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 157, in
engine = AsyncLLMEngine.from_engine_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/unix/miniconda3/envs/nm-vllm/lib/python3.11/site-packages/vllm/engine/async_llm_engine.py", line 331, in from_engine_args
engine_configs = engine_args.create_engine_configs()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/unix/miniconda3/envs/nm-vllm/lib/python3.11/site-packages/vllm/engine/arg_utils.py", line 405, in create_engine_configs
model_config = ModelConfig(
^^^^^^^^^^^^
File "/home/unix/miniconda3/envs/nm-vllm/lib/python3.11/site-packages/vllm/config.py", line 133, in init
self._verify_quantization()
File "/home/unix/miniconda3/envs/nm-vllm/lib/python3.11/site-packages/vllm/config.py", line 234, in _verify_quantization
raise ValueError(
ValueError: Unknown quantization method: fp8. Must be one of ['awq', 'gptq', 'squeezellm', 'marlin'].
Am I doing something wrong here? I also tried running the model in standard vLLM, v0.4.2. Performance was great, but about 30% of responses were bizarre, with many containing long runs of "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!".
FP8 will be supported in our next release of nm-vllm.
There is a bug in v0.4.2 for FP8 static quantization. It is resolved on the latest main and will be fixed in v0.4.3.
So I would suggest installing vLLM from source.
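For reference, a from-source install follows vLLM's standard build steps, roughly as sketched below (building compiles the CUDA kernels and needs a working CUDA toolchain, so it can take a while):

```bash
# Sketch of installing vLLM from source (standard build steps from the vLLM repo):
git clone https://github.com/vllm-project/vllm.git
cd vllm
pip install -e .  # compiles CUDA kernels; requires a CUDA toolchain
```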