---
license: apache-2.0
language:
- en
base_model: janhq/Jan-v3-4B-base-instruct
base_model_relation: quantized
pipeline_tag: text-generation
library_name: mnn
tags:
- code
- mnn
---
This model, [DeProgrammer/Jan-v3-4B-base-instruct-MNN](https://huggingface.co/DeProgrammer/Jan-v3-4B-base-instruct-MNN), was converted to MNN format from [janhq/Jan-v3-4B-base-instruct](https://huggingface.co/janhq/Jan-v3-4B-base-instruct) using [llmexport.py](https://github.com/alibaba/MNN/issues/4153#issuecomment-3866182869) in [MNN version **3.4.0**](https://github.com/alibaba/MNN/commit/a874b302f094599e2838a9186e5ce2cf6a81a7a7) with default settings (4-bit quantization).
Inference can be run with any MNN-based runtime, e.g., the MNN Chat app on Android.
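A conversion like this one can be reproduced with MNN's LLM exporter. The commands below are a sketch, not the exact invocation used for this repository; the flag names (`--path`, `--export`, `--quant_bit`) follow the `llmexport.py` script in the MNN repository and should be verified against your MNN checkout, since options can change between versions.

```shell
# Sketch: export the base model to MNN format with 4-bit quantization.
# Assumes the Hugging Face model is available locally or downloadable,
# and that llmexport.py's flags match the MNN 3.4.x exporter.
git clone https://github.com/alibaba/MNN
cd MNN/transformers/llm/export
python llmexport.py \
    --path janhq/Jan-v3-4B-base-instruct \
    --export mnn \
    --quant_bit 4
```

The exporter writes the quantized `.mnn` weights plus a `config.json` that MNN runtimes such as MNN Chat load directly.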