Instructions to use ariG23498/mod-g4-e2b with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use ariG23498/mod-g4-e2b with Transformers:
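The `image-text-to-text` pipeline accepts chat-style messages that mix image and text parts. A minimal sketch of that input format — the image URL and question are placeholders, and `pipe` refers to a pipeline instance built as shown in the snippet that follows:

```python
# Chat-style input for an image-text-to-text pipeline.
# The URL and question below are placeholders, not part of the model card:
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/cat.png"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# With a pipeline instance, generation would look like:
# out = pipe(text=messages, max_new_tokens=64)
# print(out[0]["generated_text"])
print(messages[0]["role"])  # → user
```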
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="ariG23498/mod-g4-e2b")

# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("ariG23498/mod-g4-e2b")
model = AutoModelForImageTextToText.from_pretrained("ariG23498/mod-g4-e2b")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use ariG23498/mod-g4-e2b with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "ariG23498/mod-g4-e2b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ariG23498/mod-g4-e2b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker
```shell
docker model run hf.co/ariG23498/mod-g4-e2b
```
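The same completions request the curl example sends can be built from Python. A minimal sketch using only the standard library; the request is constructed but only sent if the vLLM server from the steps above is actually running, so that part is left commented out:

```python
import json
from urllib import request

# Same payload as the curl example above:
payload = {
    "model": "ariG23498/mod-g4-e2b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}

req = request.Request(
    "http://localhost:8000/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Requires the vLLM server to be up on localhost:8000:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
print(req.get_method(), req.full_url)
```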
- SGLang
How to use ariG23498/mod-g4-e2b with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "ariG23498/mod-g4-e2b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ariG23498/mod-g4-e2b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "ariG23498/mod-g4-e2b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ariG23498/mod-g4-e2b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use ariG23498/mod-g4-e2b with Docker Model Runner:
```shell
docker model run hf.co/ariG23498/mod-g4-e2b
```
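Both the vLLM and SGLang servers above expose the same OpenAI-compatible `/v1/completions` schema, so one client works for either by pointing at the right port (8000 vs. 30000). A sketch of decoding a response body; the JSON below is illustrative, not real output from this model:

```python
import json

# Illustrative /v1/completions response in the OpenAI-compatible
# schema both servers return (field values are made up):
raw = """{
  "object": "text_completion",
  "model": "ariG23498/mod-g4-e2b",
  "choices": [
    {"index": 0, "text": " there was a tiny model.", "finish_reason": "stop"}
  ],
  "usage": {"prompt_tokens": 5, "completion_tokens": 7, "total_tokens": 12}
}"""

resp = json.loads(raw)
# The generated continuation lives under choices[0].text:
completion = resp["choices"][0]["text"]
print(completion)  # →  there was a tiny model.
```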
Upload Gemma4ForConditionalGeneration
Files changed:
- config.json (+7 -7)
- model.safetensors (+2 -2)
config.json

```diff
@@ -15,7 +15,7 @@
   "dtype": "bfloat16",
   "gradient_clipping": 10000000000.0,
   "hidden_act": "silu",
-  "hidden_size":
+  "hidden_size": 8,
   "id2label": {
     "0": "LABEL_0",
     "1": "LABEL_1"
@@ -27,7 +27,7 @@
     "LABEL_1": 1
   },
   "model_type": "gemma4_audio",
-  "num_attention_heads":
+  "num_attention_heads": 1,
   "num_hidden_layers": 1,
   "output_attentions": false,
   "output_hidden_states": false,
@@ -65,7 +65,7 @@
   "global_head_dim": 512,
   "head_dim": 256,
   "hidden_activation": "gelu_pytorch_tanh",
-  "hidden_size":
+  "hidden_size": 8,
   "hidden_size_per_layer_input": 256,
   "initializer_range": 0.02,
   "intermediate_size": 6144,
@@ -109,10 +109,10 @@
   "max_position_embeddings": 131072,
   "model_type": "gemma4_text",
   "moe_intermediate_size": null,
-  "num_attention_heads":
+  "num_attention_heads": 1,
   "num_experts": null,
   "num_global_key_value_heads": null,
-  "num_hidden_layers":
+  "num_hidden_layers": 35,
   "num_key_value_heads": 1,
   "num_kv_shared_layers": 20,
   "pad_token_id": 0,
@@ -151,7 +151,7 @@
   "global_head_dim": 64,
   "head_dim": 64,
   "hidden_activation": "gelu_pytorch_tanh",
-  "hidden_size":
+  "hidden_size": 8,
   "id2label": {
     "0": "LABEL_0",
     "1": "LABEL_1"
@@ -165,7 +165,7 @@
   },
   "max_position_embeddings": 131072,
   "model_type": "gemma4_vision",
-  "num_attention_heads":
+  "num_attention_heads": 1,
   "num_hidden_layers": 1,
   "num_key_value_heads": 12,
   "output_attentions": false,
```
model.safetensors

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:ca8f8870c7dc56af8f1d333201bfa9585a107eab0f1b662edac405ea5a267507
+size 4719736990
```