---
license: apache-2.0
inference: false
base_model: ibm-granite/granite-4-micro
base_model_relation: quantized
tags: [green, llmware-chat, p2, ov]
---

# granite-4-micro-ov

**granite-4-micro-ov** is an OpenVINO int4 quantized version of [granite-4-micro](https://www.huggingface.co/ibm-granite/granite-4-micro), providing a fast, small inference implementation optimized for AI PCs.

### Model Description

- **Developed by:** ibm-granite
- **Quantized by:** llmware
- **Model type:** granitemoehybrid
- **Parameters:** 2 billion
- **Model Parent:** ibm-granite/granite-4-micro
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Uses:** Chat, general-purpose LLM
- **Quantization:** int4

|
## Model Card Contact

[llmware on hf](https://www.huggingface.co/llmware)

[llmware website](https://www.llmware.ai)