---
language:
- fra
license: cc-by-nc-4.0
configs:
- config_name: game_annotation
  data_files:
  - split: train
    path: RigorMortis_game_annotation.txt
- config_name: bonus_annotation
  data_files:
  - split: train
    path: RigorMortis_bonus_annotation.txt
---
Dataset origin: https://github.com/gwaps4nlp/rigor-mortis/tree/master/LREC_2020
## Description

This page presents the annotated data described in the paper *Rigor Mortis: Annotating MWEs with a Gamified Platform*, presented at LREC 2020 in Marseille.

The annotations are provided in two files:

- `RigorMortis_game_annotation.txt`, with the 504 sentences of the Annotation part
- `RigorMortis_bonus_annotation.txt`, with the 743 sentences of the Bonus Annotation part

The annotations are presented sentence by sentence, separated by an empty line, in the format below:
```text
# text : Lui et Bonassoli sont férus de science et avides de publicité .
# number of players : 13
# no mwe - 8 players (61.54%)
# 1 : sont férus - 3 players (23.08%)
# 2 : férus de - 2 players (15.38%)
# 3 : avides de - 2 players (15.38%)
1 Lui _
2 et _
3 Bonassoli _
4 sont _ 1
5 férus _ 1;2
6 de _ 2
7 science _
8 et _
9 avides _ 3
10 de _ 3
11 publicité _
12 . _
```
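The format above can be read with a few lines of standard Python. The sketch below is an illustration, not an official loader: it assumes each sentence block consists of `#` comment lines followed by whitespace-separated token lines of the form `index token _ [mwe_ids]`, where the final field, when present, lists semicolon-separated MWE ids.

```python
# Minimal sketch of a parser for the annotation format shown above.
# Assumptions (not from the dataset authors): blocks are separated by a
# blank line, fields are whitespace-separated, and the optional 4th field
# holds semicolon-separated MWE ids.

def parse_sentences(text):
    """Yield one dict per sentence: its comment lines and token rows."""
    for block in text.strip().split("\n\n"):
        comments, tokens = [], []
        for line in block.splitlines():
            if line.startswith("#"):
                # Keep comment content without the leading "# "
                comments.append(line.lstrip("# ").strip())
            elif line.strip():
                parts = line.split()
                idx, form = parts[0], parts[1]
                # The last field holds semicolon-separated MWE ids, or is absent
                mwes = parts[3].split(";") if len(parts) > 3 else []
                tokens.append({"id": int(idx), "form": form, "mwes": mwes})
        yield {"comments": comments, "tokens": tokens}


# Usage on a small excerpt of the example sentence:
example = """\
# text : Lui et Bonassoli sont férus de science et avides de publicité .
# number of players : 13
1 Lui _
4 sont _ 1
5 férus _ 1;2
"""
sent = next(parse_sentences(example))
print(sent["tokens"][2]["mwes"])  # "férus" belongs to MWE candidates 1 and 2
```

Tokens annotated with `_` alone carry no MWE label; a token such as `férus` above is shared by two overlapping candidate MWEs (`1;2`).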
## Citation
```bibtex
@inproceedings{fort-etal-2020-rigor,
    title = "Rigor Mortis: Annotating {MWE}s with a Gamified Platform",
    author = {Fort, Kar{\"e}n and
      Guillaume, Bruno and
      Pilatte, Yann-Alan and
      Constant, Mathieu and
      Lef{\`e}bvre, Nicolas},
    editor = "Calzolari, Nicoletta and
      B{\'e}chet, Fr{\'e}d{\'e}ric and
      Blache, Philippe and
      Choukri, Khalid and
      Cieri, Christopher and
      Declerck, Thierry and
      Goggi, Sara and
      Isahara, Hitoshi and
      Maegaard, Bente and
      Mariani, Joseph and
      Mazo, H{\'e}l{\`e}ne and
      Moreno, Asuncion and
      Odijk, Jan and
      Piperidis, Stelios",
    booktitle = "Proceedings of the Twelfth Language Resources and Evaluation Conference",
    month = may,
    year = "2020",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2020.lrec-1.541",
    pages = "4395--4401",
    abstract = "We present here Rigor Mortis, a gamified crowdsourcing platform designed to evaluate the intuition of the speakers, then train them to annotate multi-word expressions (MWEs) in French corpora. We previously showed that the speakers{'} intuition is reasonably good (65{\%} in recall on non-fixed MWE). We detail here the annotation results, after a training phase using some of the tests developed in the PARSEME-FR project.",
    language = "English",
    ISBN = "979-10-95546-34-4",
}
```