Jiyog committed
Commit 9721043 · verified · Parent(s): d109557

Update README.md

Files changed (1):
  1. README.md (+16 −6)
README.md CHANGED
@@ -4,6 +4,10 @@ license: mit
 base_model: FacebookAI/roberta-base
 tags:
 - generated_from_trainer
+- debagreement
+- stance
+- debate
+- disagreement
 metrics:
 - accuracy
 - f1
@@ -25,17 +29,23 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-More information needed
 
 ## Intended uses & limitations
 
-More information needed
+This model is intended for detecting stance (agreement, disagreement, neutrality) in Reddit comment reply pairs. It was trained on political subreddit data from the DEBAGREEMENT dataset and may not generalize well to other domains or platforms.
 
 ## Training and evaluation data
 
-More information needed
-
-## Training procedure
+- **Base model**: [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) (125M parameters)
+- **Dataset**: [Jiyog/debagreement-cp](https://huggingface.co/datasets/Jiyog/debagreement-cp) (DEBAGREEMENT)
+- **Task**: 3-class sequence classification (sentence-pair input)
+- **Input format**: `body_parent` (premise) + `body_child` (hypothesis)
+- **Epochs**: 3
+- **Batch size**: 16
+- **Max sequence length**: 512 tokens
+- **Optimizer**: AdamW (default HuggingFace Trainer)
+- **Weight decay**: 0.01
+- **Best model selected by**: Weighted F1
 
 ### Training hyperparameters
 
@@ -63,4 +73,4 @@ The following hyperparameters were used during training:
 - Transformers 5.0.0
 - Pytorch 2.10.0+cu128
 - Datasets 4.0.0
-- Tokenizers 0.22.2
+- Tokenizers 0.22.2
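The new card describes sentence-pair inference: `body_parent` as the first segment, `body_child` as the second, with an argmax over three class logits picking the stance. A minimal sketch of that flow, assuming a hypothetical repo id `Jiyog/roberta-base-debagreement` and an agree/neutral/disagree label order — neither is confirmed by this commit, so check the model's `config.json` `id2label` before relying on them:

```python
# Sketch of stance prediction for a parent/child Reddit comment pair.
# ASSUMPTIONS: the repo id below is hypothetical, and the id -> label
# order is a guess; verify both against the model's config.json.
ID2LABEL = {0: "agree", 1: "neutral", 2: "disagree"}  # assumed order

def stance_from_logits(logits):
    """Argmax over the three class logits -> stance label."""
    best = max(range(len(logits)), key=lambda i: logits[i])
    return ID2LABEL[best]

def predict(body_parent, body_child, repo_id="Jiyog/roberta-base-debagreement"):
    """Sentence-pair inference: parent is segment A, child is segment B,
    truncated to the 512-token limit stated in the card."""
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification
    tok = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForSequenceClassification.from_pretrained(repo_id)
    enc = tok(body_parent, body_child, truncation=True, max_length=512,
              return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits[0].tolist()
    return stance_from_logits(logits)

# Offline demo of the decision step (no model download needed):
print(stance_from_logits([-1.2, 0.3, 2.1]))  # → disagree
```

The decision step is kept separate from the model call so the label mapping can be swapped in one place once the real `id2label` is known.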