---
language: en
license: apache-2.0
datasets:
- empathic reactions to news stories
model-index:
- name: roberta-base-empathy
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Reaction to News Stories
type: Reaction to News Stories
split: validation
metrics:
- name: MSE loss
type: MSE loss
value: 7.07853364944458
- name: Pearson's R (empathy)
type: Pearson's R (empathy)
value: 0.4336383660597612
- name: Pearson's R (distress)
type: Pearson's R (distress)
value: 0.40006974689041663
---
# RoBERTa base fine-tuned on a dataset of empathic reactions to news stories (Buechel et al., 2018; Tafreshi et al., 2021, 2022)
## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
## Model Details
**Model Description:** This model is a checkpoint of [RoBERTa-base](https://huggingface.co/roberta-base) fine-tuned for Track 1 of the [WASSA 2022 Shared Task](https://aclanthology.org/2022.wassa-1.20.pdf): predicting empathy and distress scores on a dataset of reactions to news stories.
This model attained an average Pearson's correlation (r) of 0.416854 on the dev set (for comparison, the top team had an average r of 0.54 on the test set).
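The average reported above is simply the mean of the two per-dimension correlations listed in this card's metadata, which can be verified directly:

```python
# Pearson's r for each sub-task, taken from the metrics section of this card
r_empathy = 0.4336383660597612
r_distress = 0.40006974689041663

# The shared task ranks systems by the mean of the two correlations
avg_r = (r_empathy + r_distress) / 2
print(round(avg_r, 6))  # 0.416854
```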
## Training
### Training Data
An extended version of the [empathic reactions to news stories dataset](https://codalab.lisn.upsaclay.fr/competitions/834#learn_the_details-datasets)
### Fine-tuning hyper-parameters
- learning_rate = 1e-5
- batch_size = 32
- warmup = 600
- max_seq_length = 128
- num_train_epochs = 3.0
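As a sketch, these hyper-parameters would map onto a 🤗 Transformers fine-tuning setup roughly as follows. The `TrainingArguments` mapping is an assumption (the card does not state the training code used), and `max_seq_length` is applied at tokenization time rather than in `TrainingArguments`:

```python
# Fine-tuning hyper-parameters from this card, collected as a plain config dict
hparams = {
    "learning_rate": 1e-5,
    "batch_size": 32,
    "warmup_steps": 600,
    "max_seq_length": 128,
    "num_train_epochs": 3.0,
}

# Hypothetical mapping onto transformers.TrainingArguments (an assumption,
# not taken from the card):
# TrainingArguments(
#     learning_rate=hparams["learning_rate"],
#     per_device_train_batch_size=hparams["batch_size"],
#     warmup_steps=hparams["warmup_steps"],
#     num_train_epochs=hparams["num_train_epochs"],
# )
#
# max_seq_length would instead be passed to the tokenizer, e.g.:
# tokenizer(texts, max_length=hparams["max_seq_length"],
#           truncation=True, padding=True)
```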