Text Emotion Recognition Model
A text-based emotion recognition model trained on the MELD dataset. The model operates on dialogue utterances and serves as both a strong text-only baseline and the text encoder in a multimodal emotion recognition framework.
Model Summary
- Task: Text-based Emotion Recognition
- Dataset: MELD
- Backbone: bert-base-uncased
- Pooling: [CLS] token representation
- Classifier: MLP with class-weighted loss
- Classes: 7 emotion categories
Architecture
Text Encoder
- Pretrained BERT (bert-base-uncased)
- Outputs contextualized token embeddings
Utterance Representation
- [CLS] token embedding
- Represents the full utterance semantics
MLP Classifier
- Fully connected layers
- ReLU activation and dropout
- Softmax output layer
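The classifier head described above can be sketched in PyTorch as follows. This is a minimal illustration, not the released implementation: the intermediate width (256) and dropout rate (0.3) are assumed values, and the BERT hidden size of 768 matches bert-base-uncased.

```python
import torch
import torch.nn as nn

class EmotionMLPHead(nn.Module):
    """MLP classifier over the [CLS] embedding (7 MELD emotion classes).

    Intermediate size and dropout are illustrative assumptions.
    """
    def __init__(self, hidden_size=768, num_classes=7, dropout=0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_size, 256),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(256, num_classes),  # logits; softmax applied afterwards
        )

    def forward(self, cls_embedding):
        return self.net(cls_embedding)

head = EmotionMLPHead().eval()
cls_vec = torch.randn(4, 768)                      # batch of 4 [CLS] embeddings
probs = torch.softmax(head(cls_vec), dim=-1)       # per-class probabilities
```

During training the head would typically emit raw logits, since PyTorch's CrossEntropyLoss applies log-softmax internally; the explicit softmax here is for inference-time probabilities.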
Class Imbalance Handling
The MELD dataset is highly imbalanced across emotion classes. To address this, class weights are applied in the cross-entropy loss function, improving macro-level performance on minority emotions.
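One common way to obtain such class weights is inverse-frequency weighting, sketched below. The label counts are hypothetical stand-ins (MELD is dominated by the neutral class), not the actual dataset statistics.

```python
from collections import Counter
import torch

# Hypothetical label distribution; in MELD, "neutral" dominates.
labels = (["neutral"] * 470 + ["joy"] * 170 + ["surprise"] * 120 +
          ["anger"] * 110 + ["sadness"] * 70 + ["disgust"] * 30 + ["fear"] * 30)

classes = sorted(set(labels))
counts = Counter(labels)
total = len(labels)

# Inverse-frequency weighting: rarer classes receive larger weights.
weights = torch.tensor(
    [total / (len(classes) * counts[c]) for c in classes], dtype=torch.float32
)

# The weights plug directly into the cross-entropy loss.
loss_fn = torch.nn.CrossEntropyLoss(weight=weights)
```

With this scheme, a minority emotion such as fear contributes more per example to the loss than neutral, which is what pushes up macro-level scores.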
Training Details
- Tokenization: BERT tokenizer
- Max sequence length: 128 tokens
- Optimizer: Adam
- Loss: CrossEntropyLoss (with class weights)
- Metrics: Accuracy, Macro F1, Weighted F1
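A single training step under the setup above can be sketched as follows. A plain linear layer stands in for the BERT encoder plus MLP head, the class weights are a uniform placeholder, and the learning rate is an assumed value; only the Adam-plus-weighted-cross-entropy wiring reflects the card.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(768, 7)                      # stand-in for BERT + MLP head
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)  # assumed lr
class_weights = torch.ones(7)                  # placeholder for real weights
loss_fn = nn.CrossEntropyLoss(weight=class_weights)

features = torch.randn(8, 768)                 # batch of [CLS] embeddings
targets = torch.randint(0, 7, (8,))            # emotion labels in [0, 7)

optimizer.zero_grad()
loss = loss_fn(model(features), targets)       # weighted cross-entropy
loss.backward()
optimizer.step()
```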
Usage
- Standalone text emotion classifier
- Text branch for early and late fusion in multimodal emotion recognition