Add/update the quantized ONNX model files and README.md for Transformers.js v3

#7 · opened by whitphx (HF Staff)

Applied Quantizations

✅ Based on sentence_transformers.onnx with slimming

↳ ✅ fp16: sentence_transformers_fp16.onnx (added)
↳ ✅ int8: sentence_transformers_int8.onnx (added)
↳ ✅ uint8: sentence_transformers_uint8.onnx (added)
↳ ✅ q4: sentence_transformers_q4.onnx (added)
↳ ✅ q4f16: sentence_transformers_q4f16.onnx (added)
↳ ✅ bnb4: sentence_transformers_bnb4.onnx (added)

✅ Based on model.onnx with slimming

↳ ✅ int8: model_int8.onnx (added)
↳ ✅ uint8: model_uint8.onnx (added)
↳ ✅ q4: model_q4.onnx (added)
↳ ✅ q4f16: model_q4f16.onnx (added)
↳ ✅ bnb4: model_bnb4.onnx (added)

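In Transformers.js v3, the quantization variant is selected at load time via the `dtype` option, which resolves to one of the file suffixes listed above (e.g. `model_q4.onnx`). The helper below is a minimal sketch of that filename mapping, mirroring the pattern visible in this PR; it is an illustration, not the library's internal code, and the model ID in the usage comment is a placeholder.

```javascript
// Sketch: map a Transformers.js v3 `dtype` value to the ONNX file name
// produced by this PR (fp32 keeps the unsuffixed base file).
function quantizedFileName(base, dtype) {
  return dtype === "fp32" ? `${base}.onnx` : `${base}_${dtype}.onnx`;
}

console.log(quantizedFileName("model", "q4"));                 // model_q4.onnx
console.log(quantizedFileName("sentence_transformers", "fp16")); // sentence_transformers_fp16.onnx

// Typical usage in Transformers.js v3 (requires @huggingface/transformers):
// const extractor = await pipeline("feature-extraction", "<model-id>", { dtype: "q4" });
```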
Xenova changed pull request status to merged
