all-MiniLM-L6-v2 (INT8, ONNX)
This repository contains an INT8-quantized version of all-MiniLM-L6-v2, exported to ONNX. The model was quantized with dynamic quantization (ONNX Runtime's `quantize_dynamic`) for maximum cross-platform compatibility.
Based on the original model: https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2
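To make the quantization step concrete, here is a minimal numpy sketch of what dynamic INT8 weight quantization does to a single weight tensor (symmetric per-tensor scaling, as ONNX Runtime applies to weights; the toy matrix and names are illustrative, not taken from this repository):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8)).astype(np.float32)  # toy stand-in for a weight matrix

# Symmetric per-tensor INT8: one scale maps the largest |w| to 127.
scale = np.abs(W).max() / 127.0
W_int8 = np.clip(np.round(W / scale), -127, 127).astype(np.int8)

# At inference the INT8 weights are dequantized (or the scale is fused into the matmul).
W_dq = W_int8.astype(np.float32) * scale
max_err = np.abs(W - W_dq).max()
assert max_err <= scale / 2 + 1e-6  # rounding error is bounded by half a quantization step
```

Storing `W_int8` plus one float scale is what shrinks the model to roughly a quarter of its FP32 size; activations are quantized on the fly at runtime, which is why no calibration dataset is needed.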
- Post-training INT8 quantization
- Optimized for cross-platform compatibility
- Suitable for embeddings, semantic search, and text classification
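Like the base model, this checkpoint produces sentence embeddings by mean pooling the token embeddings (weighted by the attention mask) and then L2-normalizing. A self-contained numpy sketch of that post-processing step, using toy arrays in place of real encoder outputs:

```python
import numpy as np

# Toy stand-ins: token_embeddings from the encoder, attention_mask from the tokenizer.
token_embeddings = np.array([[[1.0, 3.0], [3.0, 1.0], [9.0, 9.0]]])  # (batch, seq, dim)
attention_mask = np.array([[1, 1, 0]])  # last position is padding

mask = attention_mask[..., None].astype(np.float32)  # (batch, seq, 1)
summed = (token_embeddings * mask).sum(axis=1)       # sum only non-padded tokens
counts = np.clip(mask.sum(axis=1), 1e-9, None)       # avoid division by zero
sentence_embedding = summed / counts                 # mean pooling
sentence_embedding /= np.linalg.norm(sentence_embedding, axis=1, keepdims=True)
# Unit-norm vectors: cosine similarity reduces to a dot product.
```

Because the embeddings are unit-normalized, semantic search over a corpus is just a matrix-vector dot product against the query embedding.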
Note: This is a derivative work with quantization only.