---
license: apache-2.0
language:
- en
base_model: janhq/Jan-v3-4B-base-instruct
base_model_relation: quantized
pipeline_tag: text-generation
library_name: mnn
tags:
- code
- mnn
---

This model, [DeProgrammer/Jan-v3-4B-base-instruct-MNN-Q8](https://huggingface.co/DeProgrammer/Jan-v3-4B-base-instruct-MNN-Q8), was
converted to MNN format from [janhq/Jan-v3-4B-base-instruct](https://huggingface.co/janhq/Jan-v3-4B-base-instruct)
using [llmexport.py](https://github.com/alibaba/MNN/issues/4153#issuecomment-3866182869) from [MNN version **3.4.0**](https://github.com/alibaba/MNN/commit/a874b302f094599e2838a9186e5ce2cf6a81a7a7), with `--quant_bit 8` and otherwise default settings.
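For reference, a conversion along these lines can be reproduced roughly as sketched below. This is an assumption-laden sketch, not the exact command used: the flag names (`--path`, `--export`, `--quant_bit`) and the local directory names are taken from typical llmexport.py usage and may differ in the linked revision.

```shell
# Sketch only — verify flags against the llmexport.py revision linked above.
# Assumes the source weights were downloaded locally first, e.g.:
#   huggingface-cli download janhq/Jan-v3-4B-base-instruct --local-dir ./Jan-v3-4B-base-instruct
python llmexport.py \
    --path ./Jan-v3-4B-base-instruct \  # local copy of the source model (hypothetical path)
    --export mnn \                      # emit MNN-format weights plus config
    --quant_bit 8                       # 8-bit weight quantization, matching this repo
```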

Inference can be run with any MNN-based LLM runtime, e.g., the MNN Chat app on Android.
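On desktop, one possible route is MNN's bundled `llm_demo` binary. The sketch below assumes the `MNN_BUILD_LLM` and `MNN_LOW_MEMORY` CMake options and the `llm_demo` invocation style of recent MNN releases; check the MNN repository's LLM documentation for the exact build flags and binary location in your version.

```shell
# Assumption: building MNN with LLM support produces an llm_demo binary.
git clone https://github.com/alibaba/MNN && cd MNN
mkdir build && cd build
cmake .. -DMNN_BUILD_LLM=ON -DMNN_LOW_MEMORY=ON  # flag names may vary by MNN version
make -j4
# Point the demo at the config.json shipped in this repo (hypothetical local path):
./llm_demo /path/to/Jan-v3-4B-base-instruct-MNN-Q8/config.json
```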