Instructions for using NopenAI/gemma-4-31B with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use NopenAI/gemma-4-31B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="NopenAI/gemma-4-31B")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("NopenAI/gemma-4-31B")
model = AutoModelForImageTextToText.from_pretrained("NopenAI/gemma-4-31B")
```
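The pipeline accepts chat-style messages that mix image and text content. A minimal usage sketch (the image URL, prompt, and generation settings are placeholders, not part of the original snippet):

```python
# Minimal sketch: run an image + text prompt through the pipeline.
# The image URL, prompt, and max_new_tokens are illustrative placeholders.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="NopenAI/gemma-4-31B")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/some-image.png"},
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    },
]

outputs = pipe(text=messages, max_new_tokens=64)
print(outputs[0]["generated_text"])
```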
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use NopenAI/gemma-4-31B with vLLM:
Install from pip and serve the model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "NopenAI/gemma-4-31B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "NopenAI/gemma-4-31B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
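Because the server exposes an OpenAI-compatible API, you can also query it from Python with the `openai` client. A minimal sketch, assuming `pip install openai` and the vLLM server above running on port 8000 (for the SGLang server in the next section, point `base_url` at port 30000 instead):

```python
# Sketch: call the local OpenAI-compatible vLLM server.
# Assumes `pip install openai` and the server started above on port 8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # local servers ignore the key

completion = client.completions.create(
    model="NopenAI/gemma-4-31B",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```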
- SGLang
How to use NopenAI/gemma-4-31B with SGLang:
Install from pip and serve the model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "NopenAI/gemma-4-31B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "NopenAI/gemma-4-31B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images
```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "NopenAI/gemma-4-31B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "NopenAI/gemma-4-31B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use NopenAI/gemma-4-31B with Docker Model Runner:
```sh
docker model run hf.co/NopenAI/gemma-4-31B
```
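For reference, the tokenizer configuration shipped with the model lists its special tokens and processor settings: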
```json
{
  "audio_token": "<|audio|>",
  "backend": "tokenizers",
  "boa_token": "<|audio>",
  "boi_token": "<|image>",
  "bos_token": "<bos>",
  "eoa_token": "<audio|>",
  "eoc_token": "<channel|>",
  "eoi_token": "<image|>",
  "eos_token": "<eos>",
  "eot_token": "<turn|>",
  "escape_token": "<|\"|>",
  "etc_token": "<tool_call|>",
  "etd_token": "<tool|>",
  "etr_token": "<tool_response|>",
  "extra_special_tokens": [
    "<|video|>"
  ],
  "image_token": "<|image|>",
  "mask_token": "<mask>",
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": "<pad>",
  "padding_side": "left",
  "processor_class": "Gemma4Processor",
  "soc_token": "<|channel>",
  "sot_token": "<|turn>",
  "stc_token": "<|tool_call>",
  "std_token": "<|tool>",
  "str_token": "<|tool_response>",
  "think_token": "<|think|>",
  "tokenizer_class": "GemmaTokenizer",
  "unk_token": "<unk>"
}
```
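To check how these tokens are wired into the loaded tokenizer, you can inspect them from Python. A small sketch, assuming the processor loads as in the Transformers section above:

```python
# Sketch: load the processor and inspect a few of the special tokens listed above.
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("NopenAI/gemma-4-31B")
tokenizer = processor.tokenizer

print(tokenizer.bos_token, tokenizer.eos_token, tokenizer.pad_token)   # <bos> <eos> <pad>
print(tokenizer.convert_tokens_to_ids(["<bos>", "<eos>", "<pad>", "<|image|>"]))
print(tokenizer.model_max_length, tokenizer.padding_side)
```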