# TyDiP: Investigating Politeness in Nine Typologically Diverse Languages

## Data
The TyDiP dataset is licensed under the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.

The `data` folder contains the files we release as part of the TyDiP dataset. TyDiP comprises an English train set and an English test set adapted from the Stanford Politeness Corpus, together with test sets in 9 more languages (Hindi, Korean, Spanish, Tamil, French, Vietnamese, Russian, Afrikaans, Hungarian) that we annotated.

```
data/
├── all
├── binary
└── unlabelled_train_sets
```
`data/all` contains the complete train and test sets.
`data/binary` is a filtered version of the above that keeps only the sentences whose scores fall in the top and bottom 25th percentiles. This is the data we used for training and evaluation in the paper.
`data/unlabelled_train_sets` contains unlabelled training data.
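The percentile filtering that relates `data/binary` to `data/all` can be sketched as follows. This is an illustration only: the column names (`text`, `score`) and the 0/1 label convention are assumptions, not the dataset's actual schema.

```python
import pandas as pd

def to_binary(df: pd.DataFrame, score_col: str = "score") -> pd.DataFrame:
    """Keep only rows in the bottom/top 25th percentiles of politeness
    scores, labelling them 0 (impolite) and 1 (polite) respectively."""
    lo = df[score_col].quantile(0.25)
    hi = df[score_col].quantile(0.75)
    impolite = df[df[score_col] <= lo].assign(label=0)
    polite = df[df[score_col] >= hi].assign(label=1)
    return pd.concat([impolite, polite], ignore_index=True)
```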

If you use the English train or test data, please cite the Stanford Politeness Corpus:
```
@inproceedings{danescu-niculescu-mizil-etal-2013-computational,
    title = "A computational approach to politeness with application to social factors",
    author = "Danescu-Niculescu-Mizil, Cristian and
      Sudhof, Moritz and
      Jurafsky, Dan and
      Leskovec, Jure and
      Potts, Christopher",
    booktitle = "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = aug,
    year = "2013",
    address = "Sofia, Bulgaria",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P13-1025",
    pages = "250--259",
}
```
If you use the test data from the 9 target languages, please cite our paper:
TODO - Replace with actual
```
@inproceedings{srinivasan-choi-2022-tydip,
    title = "TyDiP: Investigating Politeness in Nine Typologically Diverse Languages",
    author = "Srinivasan, Anirudh and
      Choi, Eunsol",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
    month = dec,
    year = "2022",
}
```

## Code
`politeness_regressor.py` is used for training and evaluating transformer models.

To train a model:
```
python politeness_regressor.py --train_file data/binary/en_train_binary.csv --test_file data/binary/en_test_binary.csv --model_save_location model.pt --pretrained_model xlm-roberta-large --gpus 1 --batch_size 4 --accumulate_grad_batches 8 --max_epochs 5 --checkpoint_callback False --logger False --precision 16 --train --test --binary --learning_rate 5e-6
```

To test this trained model on a target language `$lang`:
```
python politeness_regressor.py --test_file data/binary/${lang}_test_binary.csv --load_model model.pt --gpus 1 --batch_size 32 --test --binary
```
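To run this evaluation over all 9 target languages in one pass, a small driver script along these lines works. The two-letter language codes are assumptions about how the CSVs in `data/binary` are named; check them against the actual filenames before running.

```python
import subprocess

# Assumed ISO 639-1 codes for the 9 target languages; verify these
# against the files actually present in data/binary.
LANGS = ["hi", "ko", "es", "ta", "fr", "vi", "ru", "af", "hu"]

def eval_command(lang: str) -> list[str]:
    """Build the per-language evaluation command shown above."""
    return [
        "python", "politeness_regressor.py",
        "--test_file", f"data/binary/{lang}_test_binary.csv",
        "--load_model", "model.pt",
        "--gpus", "1", "--batch_size", "32", "--test", "--binary",
    ]

def run_all() -> None:
    """Evaluate the trained model on every target-language test set."""
    for lang in LANGS:
        subprocess.run(eval_command(lang), check=True)
```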

## Politeness Strategies
`strategies` contains the processed strategy lexicons for different languages. `strategies/learnt_strategies.xlsx` contains the human-edited strategies for 4 languages.

## Annotation Interface
`annotation.html` contains the UI used for conducting data annotation.