---
license: gemma
tags:
- gemma3
- gemma
- google
- functiongemma
- mlx
pipeline_tag: text-generation
library_name: mlx
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access FunctionGemma on Hugging Face, you’re required to review
  and agree to Google’s usage license. To do this, please ensure you’re logged in
  to Hugging Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/functiongemma-270m-it
---
# arthurcollet/functiongemma-270m-it

This model [arthurcollet/functiongemma-270m-it](https://huggingface.co/arthurcollet/functiongemma-270m-it) was
converted to MLX format from [google/functiongemma-270m-it](https://huggingface.co/google/functiongemma-270m-it)
using mlx-lm version **0.30.6**.
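A conversion like this can be reproduced with the `convert` helper that mlx-lm exposes from Python. The snippet below is a minimal sketch rather than the exact command used for this repository: the output directory is a placeholder, and the keyword arguments assume a recent mlx-lm release.

```python
from mlx_lm import convert

# Convert the original Hugging Face checkpoint to MLX format.
# `mlx_path` is a placeholder output directory; pass `quantize=True`
# to also produce a quantized copy.
convert(
    hf_path="google/functiongemma-270m-it",
    mlx_path="functiongemma-270m-it-mlx",
    quantize=False,
)
```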
## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("arthurcollet/functiongemma-270m-it")

prompt = "hello"

# Apply the model's chat template when one is available.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_dict=False,
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
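For longer or sampled outputs, `generate` takes additional decoding options. The sketch below assumes a recent mlx-lm release, where `max_tokens` bounds the response length and a sampler built with `make_sampler` switches from the default greedy decoding to temperature/top-p sampling; older versions may use different argument names.

```python
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler

model, tokenizer = load("arthurcollet/functiongemma-270m-it")

messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Temperature/top-p sampling and an explicit token budget
# (argument names assume a recent mlx-lm release).
sampler = make_sampler(temp=0.7, top_p=0.95)
response = generate(
    model,
    tokenizer,
    prompt=prompt,
    max_tokens=256,
    sampler=sampler,
    verbose=True,
)
```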