Tags: Text Generation · Transformers · Safetensors · English · gemma2 · gemma · fp8 · vllm · conversational · text-generation-inference
Instructions to use RedHatAI/gemma-2-9b-it-FP8 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use RedHatAI/gemma-2-9b-it-FP8 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="RedHatAI/gemma-2-9b-it-FP8")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("RedHatAI/gemma-2-9b-it-FP8")
model = AutoModelForCausalLM.from_pretrained("RedHatAI/gemma-2-9b-it-FP8")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
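Decoding settings can be passed straight through the pipeline call; a minimal sketch (the parameter values below are illustrative, not tuned recommendations):

```python
# A minimal sketch: passing decoding parameters through the pipeline.
from transformers import pipeline

pipe = pipeline("text-generation", model="RedHatAI/gemma-2-9b-it-FP8")
messages = [
    {"role": "user", "content": "Who are you?"},
]
out = pipe(
    messages,
    max_new_tokens=128,  # upper bound on generated tokens
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,     # soften the next-token distribution
    top_p=0.9,           # nucleus-sampling cutoff
)
print(out[0]["generated_text"][-1])  # last message is the assistant reply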
- Local Apps
- vLLM
How to use RedHatAI/gemma-2-9b-it-FP8 with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "RedHatAI/gemma-2-9b-it-FP8"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "RedHatAI/gemma-2-9b-it-FP8",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker

```bash
docker model run hf.co/RedHatAI/gemma-2-9b-it-FP8
```
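The served endpoint speaks the OpenAI chat-completions protocol, so it can also be called from Python. A minimal sketch using the official openai client (the base URL and port assume the `vllm serve` defaults above):

```python
# A minimal sketch: calling the vLLM server through its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",  # vLLM does not check the key by default
)
resp = client.chat.completions.create(
    model="RedHatAI/gemma-2-9b-it-FP8",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(resp.choices[0].message.content)
```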
- SGLang
How to use RedHatAI/gemma-2-9b-it-FP8 with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "RedHatAI/gemma-2-9b-it-FP8" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "RedHatAI/gemma-2-9b-it-FP8",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images

```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "RedHatAI/gemma-2-9b-it-FP8" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "RedHatAI/gemma-2-9b-it-FP8",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
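The same request can be issued from Python against either launch method; a minimal sketch using requests (it assumes the server above is listening on localhost:30000):

```python
# A minimal sketch: calling SGLang's OpenAI-compatible chat endpoint.
import requests

payload = {
    "model": "RedHatAI/gemma-2-9b-it-FP8",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}
resp = requests.post("http://localhost:30000/v1/chat/completions", json=payload)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```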
- Docker Model Runner
How to use RedHatAI/gemma-2-9b-it-FP8 with Docker Model Runner:
```bash
docker model run hf.co/RedHatAI/gemma-2-9b-it-FP8
```
AttributeError: 'Gemma2Config' object has no attribute 'interleaved_sliding_window'
#3 by samos123 - opened
Getting the error below when trying to use this model in vLLM v0.7.1:
```
INFO 02-03 18:25:17 llm_engine.py:232] Initializing a V0 LLM engine (v0.7.1) with config: model='neuralmagic/gemma-2-9b-it-FP8', speculative_config=None, tokenizer='neuralmagic/gemma-2-9b-it-FP8', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=4096, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=fp8, enforce_eager=False, kv_cache_dtype=fp8, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=gemma-2-9b-it-fp8-l4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=False, chunked_prefill_enabled=False, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":256}, use_cached_outputs=True,
INFO 02-03 18:25:19 cuda.py:169] Using FlashInfer backend.
INFO 02-03 18:25:19 model_runner.py:1111] Starting to load model neuralmagic/gemma-2-9b-it-FP8...
ERROR 02-03 18:25:20 engine.py:387] 'Gemma2Config' object has no attribute 'interleaved_sliding_window'
ERROR 02-03 18:25:20 engine.py:387] Traceback (most recent call last):
ERROR 02-03 18:25:20 engine.py:387] File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 378, in run_mp_engine
ERROR 02-03 18:25:20 engine.py:387] engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
ERROR 02-03 18:25:20 engine.py:387] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Process SpawnProcess-1:
ERROR 02-03 18:25:20 engine.py:387] File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 121, in from_engine_args
ERROR 02-03 18:25:20 engine.py:387] return cls(ipc_path=ipc_path,
ERROR 02-03 18:25:20 engine.py:387] ^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-03 18:25:20 engine.py:387] File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 73, in __init__
ERROR 02-03 18:25:20 engine.py:387] self.engine = LLMEngine(*args, **kwargs)
ERROR 02-03 18:25:20 engine.py:387] ^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-03 18:25:20 engine.py:387] File "/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py", line 271, in __init__
ERROR 02-03 18:25:20 engine.py:387] self.model_executor = executor_class(vllm_config=vllm_config, )
ERROR 02-03 18:25:20 engine.py:387] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-03 18:25:20 engine.py:387] File "/usr/local/lib/python3.12/dist-packages/vllm/executor/executor_base.py", line 49, in __init__
ERROR 02-03 18:25:20 engine.py:387] self._init_executor()
ERROR 02-03 18:25:20 engine.py:387] File "/usr/local/lib/python3.12/dist-packages/vllm/executor/uniproc_executor.py", line 40, in _init_executor
ERROR 02-03 18:25:20 engine.py:387] self.collective_rpc("load_model")
ERROR 02-03 18:25:20 engine.py:387] File "/usr/local/lib/python3.12/dist-packages/vllm/executor/uniproc_executor.py", line 49, in collective_rpc
ERROR 02-03 18:25:20 engine.py:387] answer = run_method(self.driver_worker, method, args, kwargs)
ERROR 02-03 18:25:20 engine.py:387] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-03 18:25:20 engine.py:387] File "/usr/local/lib/python3.12/dist-packages/vllm/utils.py", line 2208, in run_method
ERROR 02-03 18:25:20 engine.py:387] return func(*args, **kwargs)
ERROR 02-03 18:25:20 engine.py:387] ^^^^^^^^^^^^^^^^^^^^^
ERROR 02-03 18:25:20 engine.py:387] File "/usr/local/lib/python3.12/dist-packages/vllm/worker/worker.py", line 182, in load_model
ERROR 02-03 18:25:20 engine.py:387] self.model_runner.load_model()
ERROR 02-03 18:25:20 engine.py:387] File "/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner.py", line 1113, in load_model
ERROR 02-03 18:25:20 engine.py:387] self.model = get_model(vllm_config=self.vllm_config)
ERROR 02-03 18:25:20 engine.py:387] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-03 18:25:20 engine.py:387] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/__init__.py", line 12, in get_model
ERROR 02-03 18:25:20 engine.py:387] return loader.load_model(vllm_config=vllm_config)
ERROR 02-03 18:25:20 engine.py:387] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-03 18:25:20 engine.py:387] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/loader.py", line 377, in load_model
ERROR 02-03 18:25:20 engine.py:387] model = _initialize_model(vllm_config=vllm_config)
ERROR 02-03 18:25:20 engine.py:387] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-03 18:25:20 engine.py:387] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/loader.py", line 119, in _initialize_model
ERROR 02-03 18:25:20 engine.py:387] return model_class(vllm_config=vllm_config, prefix=prefix)
ERROR 02-03 18:25:20 engine.py:387] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-03 18:25:20 engine.py:387] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/gemma2.py", line 412, in __init__
ERROR 02-03 18:25:20 engine.py:387] self.model = Gemma2Model(vllm_config=vllm_config,
ERROR 02-03 18:25:20 engine.py:387] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-03 18:25:20 engine.py:387] File "/usr/local/lib/python3.12/dist-packages/vllm/compilation/decorators.py", line 149, in __init__
ERROR 02-03 18:25:20 engine.py:387] old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs)
ERROR 02-03 18:25:20 engine.py:387] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/gemma2.py", line 261, in __init__
ERROR 02-03 18:25:20 engine.py:387] self.start_layer, self.end_layer, self.layers = make_layers(
ERROR 02-03 18:25:20 engine.py:387] ^^^^^^^^^^^^
ERROR 02-03 18:25:20 engine.py:387] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/utils.py", line 556, in make_layers
ERROR 02-03 18:25:20 engine.py:387] maybe_offload_to_cpu(layer_fn(prefix=f"{prefix}.{idx}"))
ERROR 02-03 18:25:20 engine.py:387] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-03 18:25:20 engine.py:387] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/gemma2.py", line 263, in <lambda>
ERROR 02-03 18:25:20 engine.py:387] lambda prefix: Gemma2DecoderLayer(
ERROR 02-03 18:25:20 engine.py:387] ^^^^^^^^^^^^^^^^^^^
ERROR 02-03 18:25:20 engine.py:387] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/gemma2.py", line 187, in __init__
ERROR 02-03 18:25:20 engine.py:387] self.self_attn = Gemma2Attention(
ERROR 02-03 18:25:20 engine.py:387] ^^^^^^^^^^^^^^^^
ERROR 02-03 18:25:20 engine.py:387] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/gemma2.py", line 148, in __init__
ERROR 02-03 18:25:20 engine.py:387] config.interleaved_sliding_window is not None)
ERROR 02-03 18:25:20 engine.py:387] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-03 18:25:20 engine.py:387] File "/usr/local/lib/python3.12/dist-packages/transformers/configuration_utils.py", line 211, in __getattribute__
ERROR 02-03 18:25:20 engine.py:387] return super().__getattribute__(key)
ERROR 02-03 18:25:20 engine.py:387] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-03 18:25:20 engine.py:387] AttributeError: 'Gemma2Config' object has no attribute 'interleaved_sliding_window'
Traceback (most recent call last):
  File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
```
Maybe you need to update your version of transformers? I was able to load this checkpoint with `vllm serve neuralmagic/gemma-2-9b-it-FP8` with these deps:

```
transformers==4.48.2
vllm==0.7.1
```
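For reference, a pinned install that reproduces that environment (version pins taken from this thread):

```bash
pip install "vllm==0.7.1" "transformers==4.48.2"
vllm serve neuralmagic/gemma-2-9b-it-FP8
```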
I was just using the upstream vLLM container image for v0.7.1. I can try building a custom image with a specific transformers version.
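A minimal Dockerfile sketch for that approach (base image tag and version pin taken from this thread; whether pinning changes anything is exactly what's in question below):

```dockerfile
# Build on the upstream vLLM image and pin transformers explicitly.
FROM vllm/vllm-openai:v0.7.1
RUN pip install --no-cache-dir "transformers==4.48.2"
```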
The upstream image is already using `transformers==4.48.2`, so that probably wouldn't help much. Weird that you can't reproduce it.
Can you try using this image?

```bash
docker run -ti --entrypoint /bin/bash vllm/vllm-openai:v0.7.1
```
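Once inside, the installed versions can be checked directly; a quick sketch:

```bash
# Run inside the container started above.
python3 -c "import transformers, vllm; print(transformers.__version__, vllm.__version__)"
```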