---
license: gemma
library_name: transformers
tags:
  - mlx
  - mlx-my-repo
extra_gated_heading: Access CodeGemma on Hugging Face
extra_gated_prompt: >-
  To access CodeGemma on Hugging Face, you’re required to review and agree to
  Google’s usage license. To do this, please ensure you’re logged-in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
pipeline_tag: text-generation
widget:
  - text: >
      <start_of_turn>user Write a Python function to calculate the nth fibonacci
      number.<end_of_turn> <start_of_turn>model
inference:
  parameters:
    max_new_tokens: 200
license_link: https://ai.google.dev/gemma/terms
base_model: mlx-community/codegemma-7b-it-8bit
---

# introvoyz041/codegemma-7b-it-8bit-mlx-4Bit

The model [introvoyz041/codegemma-7b-it-8bit-mlx-4Bit](https://huggingface.co/introvoyz041/codegemma-7b-it-8bit-mlx-4Bit) was converted to MLX format from [mlx-community/codegemma-7b-it-8bit](https://huggingface.co/mlx-community/codegemma-7b-it-8bit) using mlx-lm version **0.28.3**.
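
For reference, conversions like this are typically produced with mlx-lm's `mlx_lm.convert` tool. A minimal sketch; the exact flags used for this repo are not recorded here, so the quantization settings below are assumptions inferred from the "4Bit" suffix of the repo name:

```bash
# Quantize the source model to 4-bit MLX weights (assumed settings, not the
# recorded conversion command) and write them to a local directory.
mlx_lm.convert --hf-path mlx-community/codegemma-7b-it-8bit \
  --mlx-path codegemma-7b-it-8bit-mlx-4Bit \
  -q --q-bits 4
```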

## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (if needed) and load the 4-bit model and tokenizer from the Hub.
model, tokenizer = load("introvoyz041/codegemma-7b-it-8bit-mlx-4Bit")

prompt = "hello"

# Wrap the raw prompt in the model's chat template when one is defined,
# so the instruction-tuned model sees the turn markers it was trained on.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
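
If you prefer not to write any Python, mlx-lm also ships a command-line generator. A minimal sketch, assuming the `mlx_lm.generate` entry point installed with the package; the prompt and token budget mirror this card's widget settings:

```bash
# One-off generation from the terminal; the model is fetched from the Hub
# on first use and cached locally.
mlx_lm.generate --model introvoyz041/codegemma-7b-it-8bit-mlx-4Bit \
  --prompt "Write a Python function to calculate the nth fibonacci number." \
  --max-tokens 200
```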