spoomplesmaxx-27b-4500-mlx-6bit
MLX 6-bit quantized version of aimeri/spoomplesmaxx-27b-4500, converted with mlx-vlm.
About the base model
spoomplesmaxx-27b-4500 is a continued-pretraining (CPT) run of Google's Gemma 3 27B PT, trained for 4,500 steps on text data using Unsloth. The SigLIP vision encoder is preserved from the original Gemma 3 architecture, so the model retains multimodal (text + image) capability. See the base model card for full training details.
Quantization details
| Parameter | Value |
|---|---|
| Bits | 6 |
| Group size | 64 |
| Mode | Affine |
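Repositories converted with mlx tooling typically record these settings in the repo's `config.json` under a `quantization` key; the sketch below parses an illustrative excerpt (the inline JSON is an assumption mirroring the table above, not the actual file contents):

```python
import json

# Illustrative excerpt of the kind of "quantization" block written into
# config.json during conversion (values mirror the table above).
config_excerpt = json.loads("""
{
  "quantization": {
    "bits": 6,
    "group_size": 64,
    "mode": "affine"
  }
}
""")

q = config_excerpt["quantization"]
print(f"{q['bits']}-bit, group size {q['group_size']}, {q['mode']} mode")
# → 6-bit, group size 64, affine mode
```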
Usage
```shell
pip install -U mlx-vlm
```
Text + image:
```shell
python -m mlx_vlm.generate \
  --model aimeri/spoomplesmaxx-27b-4500-mlx-6bit \
  --prompt "Describe this image." \
  --image path/to/image.jpg \
  --max-tokens 200
```
Text only:
```shell
python -m mlx_vlm.generate \
  --model aimeri/spoomplesmaxx-27b-4500-mlx-6bit \
  --prompt "Once upon a time" \
  --max-tokens 200
```
Python:
```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

# Load the model and its config
model, processor = load("aimeri/spoomplesmaxx-27b-4500-mlx-6bit")
config = load_config("aimeri/spoomplesmaxx-27b-4500-mlx-6bit")

# Prepare input and apply the chat template
prompt = "Describe this image."
image = ["path/to/image.jpg"]
formatted_prompt = apply_chat_template(processor, config, prompt, num_images=1)

# Generate output
output = generate(model, processor, formatted_prompt, image, max_tokens=200)
print(output)
```
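As a back-of-envelope check (an estimate, not a measured figure), the on-disk weight size implied by the quantization table can be computed by assuming roughly 27B parameters with an fp16 scale and bias stored per 64-weight group:

```python
params = 27e9        # approximate parameter count (assumption)
bits = 6             # quantized weight width
group_size = 64      # weights per quantization group

# Affine quantization stores an fp16 scale and bias per group,
# adding 2 * 16 / 64 = 0.5 extra bits per weight.
overhead_bits = 2 * 16 / group_size

bytes_total = params * (bits + overhead_bits) / 8
print(f"~{bytes_total / 1e9:.1f} GB")
# → ~21.9 GB
```

Actual memory use at inference time will be higher, since activations and the KV cache are kept in BF16.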
Model tree for aimeri/spoomplesmaxx-27b-4500-mlx-6bit
- Base model: aimeri/spoomplesmaxx-base-gemma3-27b-4500
- Finetuned: aimeri/spoomplesmaxx-27b-4500