Instructions for using alwaysgood/gemma4-it with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use alwaysgood/gemma4-it with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="alwaysgood/gemma4-it")

# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("alwaysgood/gemma4-it")
model = AutoModelForImageTextToText.from_pretrained("alwaysgood/gemma4-it")
```
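A minimal sketch of calling the pipeline, assuming a recent transformers release that accepts chat-style messages for the image-text-to-text task; the image URL and prompt are placeholders, not part of the original instructions:

```python
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="alwaysgood/gemma4-it")

# Chat-style message mixing an image and a text question (placeholder URL).
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/photo.jpg"},
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]

output = pipe(text=messages, max_new_tokens=64)
# With chat-style input, generated_text holds the conversation with the
# assistant reply appended as the last message.
print(output[0]["generated_text"][-1]["content"])
```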
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use alwaysgood/gemma4-it with vLLM:
Install from pip and serve the model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "alwaysgood/gemma4-it"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "alwaysgood/gemma4-it",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
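The running vLLM server can also be called from Python with any OpenAI-compatible client. A minimal sketch using the openai package; the api_key value is a placeholder, since a local server does not check it by default:

```python
from openai import OpenAI

# Point the client at the local vLLM server (OpenAI-compatible API).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="alwaysgood/gemma4-it",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```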
Use Docker

```sh
docker model run hf.co/alwaysgood/gemma4-it
```
- SGLang
How to use alwaysgood/gemma4-it with SGLang:
Install from pip and serve the model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "alwaysgood/gemma4-it" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "alwaysgood/gemma4-it",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images

```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "alwaysgood/gemma4-it" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "alwaysgood/gemma4-it",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
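Either way of starting SGLang exposes the same OpenAI-compatible API on port 30000, including the chat endpoint. A minimal sketch using the openai package; the prompt and sampling parameters are illustrative:

```python
from openai import OpenAI

# The SGLang server speaks the OpenAI-compatible API on port 30000.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="alwaysgood/gemma4-it",
    messages=[{"role": "user", "content": "Tell me a short story."}],
    max_tokens=256,
    temperature=0.5,
)
print(response.choices[0].message.content)
```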
- Unsloth Studio
How to use alwaysgood/gemma4-it with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for alwaysgood/gemma4-it to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for alwaysgood/gemma4-it to start chatting
```
Use Hugging Face Spaces for Unsloth
```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for alwaysgood/gemma4-it to start chatting
```
Load model with FastModel
```python
# Install first: pip install unsloth
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="alwaysgood/gemma4-it",
    max_seq_length=2048,
)
```
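A minimal text-only generation sketch after loading; it assumes the returned tokenizer behaves like a standard Hugging Face tokenizer and that the model supports generate (for image inputs, the model's processor and chat template would apply instead). Prompt and parameters are illustrative:

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="alwaysgood/gemma4-it",
    max_seq_length=2048,
)

# Plain text prompt; move inputs to the model's device before generating.
inputs = tokenizer("Once upon a time,", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```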
- Docker Model Runner
How to use alwaysgood/gemma4-it with Docker Model Runner:
```sh
docker model run hf.co/alwaysgood/gemma4-it
```
```json
{
  "best_global_step": null,
  "best_metric": null,
  "best_model_checkpoint": null,
  "epoch": 1.0,
  "eval_steps": 500,
  "global_step": 90,
  "is_hyper_param_search": false,
  "is_local_process_zero": true,
  "is_world_process_zero": true,
  "log_history": [
    {
      "epoch": 0.11204481792717087,
      "grad_norm": 29.25,
      "learning_rate": 1e-05,
      "loss": 1.3767690658569336,
      "step": 10
    },
    {
      "epoch": 0.22408963585434175,
      "grad_norm": 23.0,
      "learning_rate": 9.628619846344453e-06,
      "loss": 1.287515354156494,
      "step": 20
    },
    {
      "epoch": 0.33613445378151263,
      "grad_norm": 18.0,
      "learning_rate": 8.569648672789496e-06,
      "loss": 1.2844683647155761,
      "step": 30
    },
    {
      "epoch": 0.4481792717086835,
      "grad_norm": 14.3125,
      "learning_rate": 6.980398830195785e-06,
      "loss": 1.2283113479614258,
      "step": 40
    },
    {
      "epoch": 0.5602240896358543,
      "grad_norm": 14.8125,
      "learning_rate": 5.096956658859122e-06,
      "loss": 1.1659849166870118,
      "step": 50
    },
    {
      "epoch": 0.6722689075630253,
      "grad_norm": 15.125,
      "learning_rate": 3.1991113759764493e-06,
      "loss": 1.2485424995422363,
      "step": 60
    },
    {
      "epoch": 0.7843137254901961,
      "grad_norm": 12.875,
      "learning_rate": 1.5687918106563326e-06,
      "loss": 1.2000310897827149,
      "step": 70
    },
    {
      "epoch": 0.896358543417367,
      "grad_norm": 17.5,
      "learning_rate": 4.481852951692672e-07,
      "loss": 1.219639205932617,
      "step": 80
    },
    {
      "epoch": 1.0,
      "grad_norm": 53.0,
      "learning_rate": 3.760237478849793e-09,
      "loss": 1.250431442260742,
      "step": 90
    },
    {
      "epoch": 1.0,
      "eval_loss": 1.125481128692627,
      "eval_runtime": 7.4107,
      "eval_samples_per_second": 9.716,
      "eval_steps_per_second": 1.214,
      "step": 90
    },
    {
      "epoch": 1.0,
      "step": 90,
      "total_flos": 1.054823986446048e+16,
      "train_loss": 1.251299254099528,
      "train_runtime": 241.5055,
      "train_samples_per_second": 14.753,
      "train_steps_per_second": 0.373
    }
  ],
  "logging_steps": 10,
  "max_steps": 90,
  "num_input_tokens_seen": 0,
  "num_train_epochs": 1,
  "save_steps": 200,
  "stateful_callbacks": {
    "TrainerControl": {
      "args": {
        "should_epoch_stop": false,
        "should_evaluate": false,
        "should_log": false,
        "should_save": true,
        "should_training_stop": true
      },
      "attributes": {}
    }
  },
  "total_flos": 1.054823986446048e+16,
  "train_batch_size": 10,
  "trial_name": null,
  "trial_params": null
}
```
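The JSON above is the Hugging Face Trainer state written alongside the fine-tuned checkpoint (conventionally trainer_state.json). A minimal sketch for inspecting the logged training curve from such a file; the file path is an assumption:

```python
import json

# Load the trainer state saved next to the checkpoint (path is an assumption).
with open("trainer_state.json") as f:
    state = json.load(f)

# Print the training loss logged every `logging_steps` (10) optimizer steps.
for entry in state["log_history"]:
    if "loss" in entry:
        print(f"step {entry['step']:>3}  loss {entry['loss']:.4f}  lr {entry['learning_rate']:.2e}")

# The final log entries also carry eval_loss and aggregate training statistics.
```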