# Twitter RoBERTa Sentiment (ONNX, int8 Quantized)
Production-ready ONNX conversion of cardiffnlp/twitter-roberta-base-sentiment-latest for in-browser sentiment analysis — no server cost, no network round-trips, complete privacy.
## Highlights
- 3-class sentiment — negative, neutral, positive
- ~124 MB after int8 quantization — trained specifically on social media text
- transformers.js compatible — drop-in `pipeline('text-classification')`
- TweetEval benchmark leader — RoBERTa pre-trained on ~124M tweets, then fine-tuned for sentiment
## Quick Start
```javascript
import { pipeline } from '@huggingface/transformers';

const classifier = await pipeline(
  'text-classification',
  'affectively-ai/twitter-roberta-base-sentiment-onnx',
  { dtype: 'q8' }
);

const result = await classifier('This new feature is absolutely amazing!');
// [{ label: 'positive', score: 0.96 }]
```
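By default the pipeline returns only the top label. A minimal sketch of downstream routing, assuming the `top_k: null` option of transformers.js (which returns scores for all three labels) and an illustrative result array — the scores are made up for the example:

```javascript
// Illustrative shape of classifier(text, { top_k: null }) output;
// these scores are invented for the example.
const allScores = [
  { label: 'positive', score: 0.96 },
  { label: 'neutral',  score: 0.03 },
  { label: 'negative', score: 0.01 },
];

// Route to a sentiment bucket, falling back to 'neutral'
// when no single label is confident enough.
function route(scores, threshold = 0.5) {
  const top = scores.reduce((a, b) => (b.score > a.score ? b : a));
  return top.score >= threshold ? top.label : 'neutral';
}

console.log(route(allScores)); // 'positive'
```

Thresholding like this is useful for triage dashboards, where a low-confidence prediction is safer to treat as neutral than to misfile.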
## Labels
| Label | Description |
|---|---|
| negative | Negative sentiment (criticism, complaints, frustration) |
| neutral | Neutral or factual statements |
| positive | Positive sentiment (praise, excitement, satisfaction) |
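If you bypass the pipeline and run the ONNX graph directly (e.g. with onnxruntime-web), the model emits raw logits in the label order of the table above. A sketch of mapping logits to a label — the logit values here are illustrative, not real model output:

```javascript
// Label order of the base cardiffnlp model: index 0, 1, 2.
const LABELS = ['negative', 'neutral', 'positive'];

// Numerically stable softmax: subtract the max logit before exponentiating.
function softmax(logits) {
  const max = Math.max(...logits);
  const exps = logits.map((x) => Math.exp(x - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// Illustrative logits — real values come from the model's output tensor.
const logits = [-1.8, 0.2, 3.1];
const probs = softmax(logits);
const best = probs.indexOf(Math.max(...probs));

console.log(LABELS[best]); // 'positive'
```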
## Conversion Details
| Property | Value |
|---|---|
| Base model | cardiffnlp/twitter-roberta-base-sentiment-latest |
| Pre-training | ~124M tweets |
| Export | PyTorch → ONNX via Optimum |
| Quantization | int8 dynamic (ORTQuantizer, avx512_vnni) |
| Quantized size | ~124 MB |
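The export and quantization steps in the table can be reproduced with Optimum's CLI. A sketch, assuming a recent `optimum` install with the onnxruntime extras — the output directory names are illustrative:

```shell
# Export the PyTorch checkpoint to ONNX via Optimum
optimum-cli export onnx \
  --model cardiffnlp/twitter-roberta-base-sentiment-latest \
  roberta-sentiment-onnx/

# Dynamic int8 quantization targeting AVX512-VNNI
optimum-cli onnxruntime quantize \
  --onnx_model roberta-sentiment-onnx/ \
  --avx512_vnni \
  -o roberta-sentiment-onnx-q8/
```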
## Use Cases
This model powers sentiment analysis in Edgework.ai — bringing fast, cheap, and private inference as close to the user as possible. Best for:
- Social media monitoring and brand sentiment
- Customer feedback triage (positive/negative/neutral)
- Community health dashboards
- Real-time mood tracking from short-form text
## About
Published by AFFECTIVELY · Managed by @buley
We convert, quantize, and publish production-ready ONNX models for edge and in-browser inference. Every release is tested for correctness and stability before publication.
- All models · GitHub · Edgework.ai