---
language:
- de
- en
- es
- fr
configs:
- config_name: Norm_Dup
  data_files:
  - split: train
    path: trainset_norm_dup.csv
  - split: test
    path: testset_norm_dup.csv
- config_name: Norm_Dedup
  data_files:
  - split: train
    path: trainset_norm_dedup.csv
  - split: test
    path: testset_norm_dedup.csv
- config_name: Proc_Dup
  data_files:
  - split: train
    path: trainset_proc_dup.csv
  - split: test
    path: testset_proc_dup.csv
- config_name: Proc_Dedup
  data_files:
  - split: train
    path: trainset_proc_dedup.csv
  - split: test
    path: testset_proc_dedup.csv
---

# MuLVE

The Multi-Language Vocabulary Evaluation Data Set (MuLVE) consists of vocabulary cards and real-life user answers, each labeled to indicate whether the user answer is correct or incorrect. The data source is user learning data from the Phase6 vocabulary trainer. The data set contains vocabulary questions in German, with English, Spanish, and French as target languages, and is available in four variations that differ in pre-processing and deduplication.

It is split into tab-separated train and test files for each of the four variations. The files include the following columns:

- `cardId` - numeric card ID
- `question` - vocabulary card question
- `answer` - vocabulary card answer
- `userAnswer` - the answer the user entered
- `Label` - True if the user answer is correct, False if not
- `language` - target language (English, French, or Spanish)

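Because the files are tab-separated, they can be parsed with the Python standard library alone. The row below is a made-up example (not real MuLVE data) that follows the column layout just listed:

```python
import csv
import io

# A synthetic example row (invented for illustration; not real MuLVE data),
# laid out in the tab-separated format described above.
sample_tsv = (
    "\t".join(["cardId", "question", "answer", "userAnswer", "Label", "language"])
    + "\n"
    + "\t".join(["42", "der Hund", "the dog", "the dog", "True", "English"])
    + "\n"
)

# Parse rows into dicts keyed by column name.
rows = list(csv.DictReader(io.StringIO(sample_tsv), delimiter="\t"))

# The Label column is stored as text and must be converted to a boolean.
for row in rows:
    row["Label"] = row["Label"] == "True"

print(rows[0]["question"], rows[0]["Label"])  # → der Hund True
```

The same approach applies to any of the eight train/test files once downloaded.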
The processed data set variations do not include the **userAnswer** column, but add the following columns:

- `question_norm` - normalized question
- `answer_norm` - normalized answer
- `userAnswer_norm` - normalized user answer

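The exact normalization used to produce the `*_norm` columns is defined by the MuLVE authors and not specified here; the function below is only a hypothetical sketch of typical text normalization (lowercasing, trimming, dropping punctuation, collapsing whitespace), not the actual pre-processing pipeline:

```python
import re
import string

def normalize(text: str) -> str:
    """Hypothetical normalization; the actual MuLVE pre-processing may differ."""
    text = text.strip().lower()
    # Drop punctuation, then collapse runs of whitespace to single spaces.
    text = text.translate(str.maketrans("", "", string.punctuation))
    return re.sub(r"\s+", " ", text).strip()

print(normalize("  The  dog! "))  # → the dog
```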

# Reference

```
@inproceedings{jacobsen-etal-2022-mulve,
    title = "{M}u{LVE}, A Multi-Language Vocabulary Evaluation Data Set",
    author = {Jacobsen, Anik and
      Mohtaj, Salar and
      M{\"o}ller, Sebastian},
    editor = "Calzolari, Nicoletta and
      B{\'e}chet, Fr{\'e}d{\'e}ric and
      Blache, Philippe and
      Choukri, Khalid and
      Cieri, Christopher and
      Declerck, Thierry and
      Goggi, Sara and
      Isahara, Hitoshi and
      Maegaard, Bente and
      Mariani, Joseph and
      Mazo, H{\'e}l{\`e}ne and
      Odijk, Jan and
      Piperidis, Stelios",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.70",
    pages = "673--679",
    abstract = "Vocabulary learning is vital to foreign language learning. Correct and adequate feedback is essential to successful and satisfying vocabulary training. However, many vocabulary and language evaluation systems perform on simple rules and do not account for real-life user learning data. This work introduces Multi-Language Vocabulary Evaluation Data Set (MuLVE), a data set consisting of vocabulary cards and real-life user answers, labeled indicating whether the user answer is correct or incorrect. The data source is user learning data from the Phase6 vocabulary trainer. The data set contains vocabulary questions in German and English, Spanish, and French as target language and is available in four different variations regarding pre-processing and deduplication. We experiment to fine-tune pre-trained BERT language models on the downstream task of vocabulary evaluation with the proposed MuLVE data set. The results provide outstanding results of {\textgreater} 95.5 accuracy and F2-score. The data set is available on the European Language Grid.",
}
```