---
license: gemma
library_name: transformers
tags:
- mlx
extra_gated_heading: Access CodeGemma on Hugging Face
extra_gated_prompt: To access CodeGemma on Hugging Face, you’re required to review
  and agree to Google’s usage license. To do this, please ensure you’re logged in
  to Hugging Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
pipeline_tag: text-generation
widget:
- text: '<start_of_turn>user Write a Python function to calculate the nth fibonacci
    number.<end_of_turn> <start_of_turn>model

    '
inference:
  parameters:
    max_new_tokens: 200
license_link: https://ai.google.dev/gemma/terms
---
# mlx-community/codegemma-7b-it-8bit
This model was converted to MLX format from [`google/codegemma-7b-it`](https://huggingface.co/google/codegemma-7b-it) using mlx-lm version **0.8.0**.

Refer to the [original model card](https://huggingface.co/google/codegemma-7b-it) for more details on the model.
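For reference, a conversion like this can be reproduced with mlx-lm's converter. The sketch below is an assumption based on the `mlx_lm.convert` API, not the exact command used to produce this checkpoint; the output path is hypothetical.

```python
from mlx_lm import convert

# Hypothetical reconstruction of the conversion step: download the original
# weights and quantize them to 8-bit MLX format. mlx_path is an arbitrary
# local output directory.
convert(
    "google/codegemma-7b-it",
    mlx_path="codegemma-7b-it-8bit",
    quantize=True,
    q_bits=8,
)
```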
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download the quantized weights and tokenizer from the Hugging Face Hub.
model, tokenizer = load("mlx-community/codegemma-7b-it-8bit")

# Generate a completion; verbose=True streams tokens as they are produced.
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
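Since this is an instruction-tuned checkpoint, prompts should use the Gemma turn format shown in the widget configuration above (`<start_of_turn>user ... <end_of_turn><start_of_turn>model`). A minimal sketch, assuming the tokenizer returned by `load` exposes the standard `apply_chat_template` method of its underlying Hugging Face tokenizer:

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/codegemma-7b-it-8bit")

# Build the prompt via the chat template rather than hand-writing the
# <start_of_turn> markers.
messages = [
    {"role": "user", "content": "Write a Python function to calculate the nth fibonacci number."}
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# max_tokens mirrors the max_new_tokens: 200 setting used by the hosted widget.
response = generate(model, tokenizer, prompt=prompt, max_tokens=200, verbose=True)
```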