Text Classification

Tags: Transformers · Safetensors · Icelandic · roberta · icelandic · sentiment-analysis · sequence-classification · social-media · text-embeddings-inference
How to use AMBJ24/icelandic-sentiment with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="AMBJ24/icelandic-sentiment")

# Or load the model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("AMBJ24/icelandic-sentiment")
model = AutoModelForSequenceClassification.from_pretrained("AMBJ24/icelandic-sentiment")
```
Task: 3-class sentiment analysis → ["negative", "neutral", "positive"]
Base model: mideind/IceBERT-igc (Icelandic RoBERTa)
TL;DR
A small Icelandic RoBERTa fine-tuned for 3-way sentiment on non-ironic text. It pairs well behind an irony gate: run the irony model first, and only classify sentiment when the text comes back not_ironic.
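The irony-gate flow can be sketched as below. Both classifiers here are hypothetical stubs for illustration; in practice you would plug in a real irony model and this sentiment model, and the label strings (`not_ironic`, `ironic`) are assumptions about the irony model's output.

```python
# Sketch of an irony-gated sentiment flow: run the irony classifier first
# and only score sentiment when the text is judged not ironic.
def irony_gated_sentiment(text, irony_clf, sentiment_clf):
    """Return a sentiment label, or 'ironic' when the gate rejects the text."""
    if irony_clf(text) != "not_ironic":
        return "ironic"  # sentiment labels are unreliable on ironic text
    return sentiment_clf(text)

# Toy stand-ins for the two models (assumed label strings shown above)
irony_stub = lambda t: "ironic" if "yeah right" in t.lower() else "not_ironic"
sentiment_stub = lambda t: "positive" if "great" in t.lower() else "neutral"

print(irony_gated_sentiment("The service was great!", irony_stub, sentiment_stub))
# prints: positive
print(irony_gated_sentiment("Great service, yeah right.", irony_stub, sentiment_stub))
# prints: ironic
```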
How to use

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "AMBJ24/icelandic-sentiment"
tok = AutoTokenizer.from_pretrained(model_id)
mod = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Þjónustan var frábær!"  # "The service was great!"
inputs = tok(text, return_tensors="pt")
with torch.no_grad():  # inference only, no gradients needed
    probs = mod(**inputs).logits.softmax(-1).tolist()[0]

labels = ["negative", "neutral", "positive"]
print(dict(zip(labels, probs)))
```
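To turn the `{label: prob}` dict produced above into a single prediction, a small helper like the following works; `top_label` is a hypothetical post-processing function, not part of the model card.

```python
# Hypothetical helper: pick the top label and its probability from the
# {label: prob} dict produced by the snippet above.
def top_label(probs: dict) -> tuple:
    label = max(probs, key=probs.get)
    return label, probs[label]

example = {"negative": 0.03, "neutral": 0.12, "positive": 0.85}
print(top_label(example))  # prints: ('positive', 0.85)
```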
- Input length: short posts; trained with a max length of ~128 tokens.
- Data: social-media-style Icelandic.
- Domain shift: trained on short, informal posts, so accuracy may drop on long or formal text.
- Labels: negative / neutral / positive; training used only examples judged not ironic.
- Typical setup: 3 epochs, LR ≈ 2e-5, batch size ≈ 16, max length 128.
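The typical setup above, written out as a plain config dict. The key names follow `transformers.TrainingArguments` conventions as an illustrative assumption; this is a sketch of the stated hyperparameters, not the actual training script.

```python
# Assumed hyperparameters mirroring the "typical setup" bullet above.
train_config = {
    "num_train_epochs": 3,
    "learning_rate": 2e-5,               # LR ≈ 2e-5
    "per_device_train_batch_size": 16,   # batch size ≈ 16
    "max_seq_length": 128,               # matches the trained max length
}
print(train_config["learning_rate"])  # prints: 2e-05
```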