---
base_model: google/gemma-2-2b-it
library_name: transformers
license: gemma
pipeline_tag: text-generation
tags:
- conversational
- mlc-ai
- MLC-Weight-Conversion
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
  agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# AMKCode/gemma-2-2b-it-q4f32_1-MLC

This model was compiled with MLC-LLM using q4f32_1 quantization from [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it). The conversion was done using the [MLC-Weight-Conversion](https://huggingface.co/spaces/mlc-ai/MLC-Weight-Conversion) space.

To run this model, first install [MLC-LLM](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages).

To chat with the model on your terminal:
```bash
mlc_llm chat HF://AMKCode/gemma-2-2b-it-q4f32_1-MLC
```
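You can also use the model programmatically via MLC-LLM's Python `MLCEngine`, which exposes an OpenAI-style chat-completions interface. A minimal sketch (assuming `mlc-llm` is installed and the weights download on first use; the prompt text is illustrative):

```python
from mlc_llm import MLCEngine

# Weights are fetched from Hugging Face on first run and cached locally.
model = "HF://AMKCode/gemma-2-2b-it-q4f32_1-MLC"
engine = MLCEngine(model)

# Stream a chat completion, printing tokens as they arrive.
for response in engine.chat.completions.create(
    messages=[{"role": "user", "content": "What is quantization?"}],
    model=model,
    stream=True,
):
    for choice in response.choices:
        print(choice.delta.content, end="", flush=True)
print()

engine.terminate()
```

Note that the first run may take a while, since the quantized weights are downloaded and the model is JIT-compiled for your device.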
For more information on how to use MLC-LLM, please visit the MLC-LLM [documentation](https://llm.mlc.ai/docs/index.html).
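MLC-LLM can also serve this model behind an OpenAI-compatible REST endpoint. A sketch, assuming the default server address of `127.0.0.1:8000` (the request body is illustrative):

```bash
# Start an OpenAI-compatible server for this model
mlc_llm serve HF://AMKCode/gemma-2-2b-it-q4f32_1-MLC

# In another terminal, query the chat-completions endpoint
curl -X POST http://127.0.0.1:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "HF://AMKCode/gemma-2-2b-it-q4f32_1-MLC",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```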