---
license: apache-2.0
base_model: google/functiongemma-270m-it
library_name: mlx
language:
- en
tags:
- quantllm
- mlx
- mlx-lm
- apple-silicon
- transformers
- q4_k_m
---
<div align="center">
# 🍎 functiongemma-270m-it-4bit-mlx
**google/functiongemma-270m-it** converted to **MLX** format
[![QuantLLM](https://img.shields.io/badge/πŸš€_Made_with-QuantLLM-orange?style=for-the-badge)](https://github.com/codewithdark-git/QuantLLM)
[![Format](https://img.shields.io/badge/Format-MLX-blue?style=for-the-badge)]()
[![Quantization](https://img.shields.io/badge/Quant-Q4_K_M-green?style=for-the-badge)]()
<a href="https://github.com/codewithdark-git/QuantLLM">⭐ Star QuantLLM on GitHub</a>
</div>
---
## πŸ“– About This Model
This model is **[google/functiongemma-270m-it](https://huggingface.co/google/functiongemma-270m-it)** converted to **MLX** format, optimized for Apple Silicon (M1/M2/M3/M4) Macs with native acceleration.
| Property | Value |
|----------|-------|
| **Base Model** | [google/functiongemma-270m-it](https://huggingface.co/google/functiongemma-270m-it) |
| **Format** | MLX |
| **Quantization** | Q4_K_M |
| **License** | apache-2.0 |
| **Created With** | [QuantLLM](https://github.com/codewithdark-git/QuantLLM) |
## πŸš€ Quick Start
### Generate Text with mlx-lm
```python
from mlx_lm import load, generate
# Load the model
model, tokenizer = load("QuantLLM/functiongemma-270m-it-4bit-mlx")
# Simple generation
prompt = "Explain quantum computing in simple terms"
messages = [{"role": "user", "content": prompt}]
prompt_formatted = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True
)
# Generate response
text = generate(model, tokenizer, prompt=prompt_formatted, verbose=True)
print(text)
```
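### Sampling Parameters (optional)

Generation can also be tuned with a sampler. The snippet below is a minimal sketch assuming a recent mlx-lm release, where `generate` accepts a `sampler` built via `mlx_lm.sample_utils.make_sampler`; older releases expose temperature/top-p settings differently.

```python
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler  # available in recent mlx-lm releases

model, tokenizer = load("QuantLLM/functiongemma-270m-it-4bit-mlx")

messages = [{"role": "user", "content": "Summarize MLX in two sentences"}]
prompt_formatted = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True
)

# Higher temperature = more varied output; top_p trims the sampling pool
sampler = make_sampler(temp=0.7, top_p=0.9)

text = generate(
    model,
    tokenizer,
    prompt=prompt_formatted,
    max_tokens=256,
    sampler=sampler,
)
print(text)
```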
### Streaming Generation
```python
from mlx_lm import load, stream_generate
model, tokenizer = load("QuantLLM/functiongemma-270m-it-4bit-mlx")
prompt = "Write a haiku about coding"
messages = [{"role": "user", "content": prompt}]
prompt_formatted = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True
)
# Stream tokens as they're generated
# (recent mlx-lm versions yield GenerationResponse objects, so print .text)
for response in stream_generate(model, tokenizer, prompt=prompt_formatted, max_tokens=200):
    print(response.text, end="", flush=True)
```
### Command Line Interface
```bash
# Install mlx-lm
pip install mlx-lm
# Generate text
python -m mlx_lm.generate --model QuantLLM/functiongemma-270m-it-4bit-mlx --prompt "Hello!"
# Interactive chat
python -m mlx_lm.chat --model QuantLLM/functiongemma-270m-it-4bit-mlx
```
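mlx-lm also ships an OpenAI-compatible HTTP server, which is handy for pointing existing chat clients at this model. A minimal invocation is sketched below; flag names can vary between mlx-lm versions, so check `python -m mlx_lm.server --help` first.

```bash
# Optional: serve an OpenAI-compatible API on localhost (flags may vary by version)
python -m mlx_lm.server --model QuantLLM/functiongemma-270m-it-4bit-mlx --port 8080
```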
### System Requirements
| Requirement | Minimum |
|-------------|---------|
| **Chip** | Apple Silicon (M1/M2/M3/M4) |
| **macOS** | 13.0 (Ventura) or later |
| **Python** | 3.10+ |
| **RAM** | 8GB+ (16GB recommended) |
```bash
# Install dependencies
pip install mlx-lm
```
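To confirm the install picked up Metal acceleration, a quick sanity check:

```python
import mlx.core as mx

# On Apple Silicon the default device should be the GPU
print(mx.default_device())
```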
## πŸ“Š Model Details
| Property | Value |
|----------|-------|
| **Original Model** | [google/functiongemma-270m-it](https://huggingface.co/google/functiongemma-270m-it) |
| **Format** | MLX |
| **Quantization** | Q4_K_M |
| **License** | `apache-2.0` |
| **Export Date** | 2025-12-21 |
| **Exported By** | [QuantLLM v2.0](https://github.com/codewithdark-git/QuantLLM) |
---
## πŸš€ Created with QuantLLM
<div align="center">
[![QuantLLM](https://img.shields.io/badge/πŸš€_QuantLLM-Ultra--fast_LLM_Quantization-orange?style=for-the-badge)](https://github.com/codewithdark-git/QuantLLM)
**Convert any model to GGUF, ONNX, or MLX in one line!**
```python
from quantllm import turbo
# Load any HuggingFace model
model = turbo("google/functiongemma-270m-it")
# Export to any format
model.export("mlx", quantization="Q4_K_M")
# Push to HuggingFace
model.push("your-repo", format="mlx")
```
<a href="https://github.com/codewithdark-git/QuantLLM">
<img src="https://img.shields.io/github/stars/codewithdark-git/QuantLLM?style=social" alt="GitHub Stars">
</a>
**[πŸ“š Documentation](https://github.com/codewithdark-git/QuantLLM#readme)** Β·
**[πŸ› Report Issue](https://github.com/codewithdark-git/QuantLLM/issues)** Β·
**[πŸ’‘ Request Feature](https://github.com/codewithdark-git/QuantLLM/issues)**
</div>