all-MiniLM-L6-v2 (INT8, ONNX)

This repository contains an INT8-quantized version of all-MiniLM-L6-v2, exported to ONNX. Dynamic quantization (ONNX Runtime's `quantize_dynamic`) was used for maximum cross-platform compatibility.

Based on the original model: https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2

- Post-training INT8 quantization
- Optimized for cross-platform compatibility
- Suitable for embeddings, semantic search, and text classification
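To turn the model's token-level outputs into sentence embeddings, sentence-transformers models in this family use attention-mask-aware mean pooling followed by L2 normalization. A minimal NumPy sketch of that post-processing (the hidden states here are synthetic stand-ins for the ONNX session's `last_hidden_state` output):

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token embeddings over the sequence, ignoring padding positions."""
    mask = attention_mask[..., None].astype(np.float32)  # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(axis=1)       # (batch, dim)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)       # avoid division by zero
    return summed / counts

def l2_normalize(embeddings: np.ndarray) -> np.ndarray:
    """L2-normalize so cosine similarity reduces to a plain dot product."""
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    return embeddings / np.clip(norms, 1e-9, None)

# Synthetic stand-in: batch of 2 sequences, 4 tokens each,
# 384 dimensions (all-MiniLM-L6-v2's hidden width).
hidden = np.random.rand(2, 4, 384).astype(np.float32)
mask = np.array([[1, 1, 1, 0], [1, 1, 0, 0]])
emb = l2_normalize(mean_pool(hidden, mask))  # shape (2, 384), unit-norm rows
```

After this step, semantic search is a dot product between normalized embeddings.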

Note: This is a derivative work with quantization only.
