How to use Intel/albert-base-v2-sst2-int8-dynamic-inc with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Intel/albert-base-v2-sst2-int8-dynamic-inc")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Intel/albert-base-v2-sst2-int8-dynamic-inc")
model = AutoModelForSequenceClassification.from_pretrained("Intel/albert-base-v2-sst2-int8-dynamic-inc")
```

This is an INT8 ONNX model quantized with Intel® Neural Compressor.
The original FP32 model comes from the fine-tuned checkpoint Alireza1044/albert-base-v2-sst2.
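
For reference, here is a minimal sketch of what post-training dynamic quantization with Intel® Neural Compressor looks like. The ONNX file path is a placeholder, and the exact recipe Intel used to produce this model is not part of this card:

```python
from neural_compressor import PostTrainingQuantConfig, quantization

# Post-training dynamic quantization: weights are quantized to INT8 offline,
# while activations are quantized on the fly at inference time,
# so no calibration dataset is required.
config = PostTrainingQuantConfig(approach="dynamic")

# "albert-base-v2-sst2.onnx" is a placeholder for an exported FP32 ONNX model.
q_model = quantization.fit(model="albert-base-v2-sst2.onnx", conf=config)
q_model.save("./albert-base-v2-sst2-int8-dynamic")
```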
|  | INT8 | FP32 |
|---|---|---|
| Accuracy (eval-accuracy) | 0.9186 | 0.9232 |
| Model size (MB) | 59 | 45 |
Load the quantized model with Optimum:

```python
from optimum.onnxruntime import ORTModelForSequenceClassification

model = ORTModelForSequenceClassification.from_pretrained('Intel/albert-base-v2-sst2-int8-dynamic-inc')
```
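
A minimal end-to-end sketch building on the snippet above; the input sentence is illustrative, and the printed label name depends on the model's config:

```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSequenceClassification

model_id = "Intel/albert-base-v2-sst2-int8-dynamic-inc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ORTModelForSequenceClassification.from_pretrained(model_id)

# Tokenize an illustrative sentence and run it through the ONNX Runtime session.
inputs = tokenizer("A charming and heartfelt film.", return_tensors="pt")
logits = model(**inputs).logits
pred_id = int(logits.argmax(dim=-1))
print(model.config.id2label[pred_id])  # label names come from the model config
```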