Instructions to use Daizee/Gemma3-Callous-Calla-4B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Daizee/Gemma3-Callous-Calla-4B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="Daizee/Gemma3-Callous-Calla-4B")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("Daizee/Gemma3-Callous-Calla-4B")
model = AutoModelForImageTextToText.from_pretrained("Daizee/Gemma3-Callous-Calla-4B")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Daizee/Gemma3-Callous-Calla-4B with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Daizee/Gemma3-Callous-Calla-4B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Daizee/Gemma3-Callous-Calla-4B",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/Daizee/Gemma3-Callous-Calla-4B
```
- SGLang
How to use Daizee/Gemma3-Callous-Calla-4B with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Daizee/Gemma3-Callous-Calla-4B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Daizee/Gemma3-Callous-Calla-4B",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Daizee/Gemma3-Callous-Calla-4B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Daizee/Gemma3-Callous-Calla-4B",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

- Docker Model Runner
How to use Daizee/Gemma3-Callous-Calla-4B with Docker Model Runner:
```shell
docker model run hf.co/Daizee/Gemma3-Callous-Calla-4B
```
merged-model-output
This is a merge of pre-trained language models created using mergekit.
Merge Details
Merge Method
This model was merged using the TIES merge method, with google/gemma-3-4b-it as the base.
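In outline, TIES merging trims each model's task vector (its delta from the base weights) to the highest-magnitude entries, elects a per-parameter sign across models, and averages only the contributions that agree with the elected sign. A simplified numpy sketch of the idea (this is not mergekit's implementation; the function and variable names are illustrative):

```python
import numpy as np

def ties_merge(base, deltas, weights, density):
    """Toy TIES merge over one parameter tensor.

    base    : base-model tensor
    deltas  : list of task vectors (finetuned model minus base)
    weights : per-model merge weights
    density : fraction of entries kept per task vector (0..1]
    """
    # Trim: keep only the top-density fraction of each delta by magnitude.
    trimmed = []
    for d in deltas:
        k = max(1, int(density * d.size))
        thresh = np.sort(np.abs(d).ravel())[-k]
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))

    # Elect sign: per-parameter sign of the weighted sum of trimmed deltas.
    stacked = np.stack([w * t for w, t in zip(weights, trimmed)])
    elected = np.sign(stacked.sum(axis=0))

    # Merge: average only the contributions whose sign matches the election.
    mask = np.sign(stacked) == elected
    contrib = np.where(mask, stacked, 0.0)
    counts = np.maximum(mask.sum(axis=0), 1)
    return base + contrib.sum(axis=0) / counts
```

The `density` and `weight` values in the configuration below play exactly these roles: each model's delta is trimmed to `density: 0.5` of its entries and scaled by its `weight` before the sign election.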
Models Merged
The following models were included in the merge:
- mlabonne/gemma-3-4b-it-abliterated
- soob3123/Veiled-Calla-4B
Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: ties
base_model: google/gemma-3-4b-it
models:
  - model: mlabonne/gemma-3-4b-it-abliterated
    parameters:
      density: 0.5
      weight: 0.6
  - model: soob3123/Veiled-Calla-4B
    parameters:
      density: 0.5
      weight: 0.7
dtype: bfloat16
tokenizer_source: union
chat_template: auto
```
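To reproduce the merge, a configuration like the one above can be passed to mergekit's `mergekit-yaml` CLI; a minimal sketch, assuming mergekit is installed from PyPI, the YAML is saved as `config.yaml`, and the output directory name is illustrative:

```shell
# Install mergekit, the toolkit used to produce this model
pip install mergekit

# Run the TIES merge described by config.yaml,
# writing the merged weights to ./merged-model-output
mergekit-yaml config.yaml ./merged-model-output
```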
SAMPLE

Elara shifted, pulling her woolen shawl tighter around her shoulders. The heat from the hearth was welcome after the damp chill of the pass. Her fingers curled around the rim of her tankard, the amber liquid within doing little to warm her skin. She watched Rhys across the small table – a study in contained energy. His dark hair was ruffled by the wind that still snaked through the gaps in the shutters, and his jaw was set, a stubborn line etched across his face. She remembered the way his eyes had narrowed when a bandit had tried to relieve them of their horses; the quick, brutal efficiency with which he'd dispatched the man. He looks like a predator, all coiled muscle and watchful intent.
“Tell me about the city,” he said, his voice low and gravelly, like stones tumbling down a hillside.
Elara took a slow sip of wine. "It's crowded. People push and shove for every scrap of space. There are fountains filled with spiced water, and merchants hawking silks and jewels until your ears ring." She let her gaze drift over to his hand, resting casually on the table - long, slender fingers