---
license: apache-2.0
base_model:
- intfloat/multilingual-e5-small
---

The original model is `OriginalModel.mlpackage` in float32.

# Quantized model info

## Quantization and input max length

Core ML: linear weight quantization with nbit = 8

Input max length = 128

## Note

I tried converting the model to float16, but that changed its predictions too much. With linear quantization at nbit = 8, it behaves almost identically to the original.
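
To illustrate why 8-bit linear quantization stays so close to the float32 original, here is a minimal NumPy sketch of per-tensor linear (affine) quantization, the general idea behind Core ML's linear quantize mode. This is an assumption-laden illustration, not the coremltools implementation; the tensor shape and value range are made up for the demo.

```python
import numpy as np

# Per-tensor linear (affine) 8-bit quantization sketch.
# NOT the coremltools code path -- just the underlying arithmetic.

def linear_quantize(w, nbits=8):
    """Map float32 weights to unsigned nbits integers with a scale and zero point."""
    qmin, qmax = 0, 2 ** nbits - 1
    scale = (w.max() - w.min()) / (qmax - qmin)
    zero_point = qmin - w.min() / scale
    q = np.clip(np.round(w / scale + zero_point), qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximation of the original float32 weights."""
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
# 384 is the hidden size of multilingual-e5-small; the weight values are random.
w = rng.normal(size=(384, 384)).astype(np.float32)

q, scale, zp = linear_quantize(w)
w_hat = dequantize(q, scale, zp)

# Rounding error is bounded by one quantization step (the scale),
# which is why nbit=8 predictions track the float32 model closely.
print("max reconstruction error:", np.abs(w - w_hat).max())
print("quantization step (scale):", scale)
```

Each weight's error is at most one quantization step, so with 256 levels spread over the tensor's range the per-weight perturbation is tiny; at float16 the change comes from reduced exponent/mantissa precision across the whole network instead, which in this case shifted the predictions more.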