{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:10:36.353374Z"
},
"title": "Overview of the 2020 ALTA Shared Task: Assess Human Behaviour",
"authors": [
{
"first": "Diego",
"middle": [],
"last": "Moll\u00e1",
"suffix": "",
"affiliation": {},
"email": "diego.molla-aliod@mq.edu.au"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The 2020 ALTA shared task is the 11th instance of a series of shared tasks organised by ALTA since 2010. The task is to classify texts posted in social media according to human judgements expressed in them. The data used for this task is a subset of SemEval 2018 AIT DISC, which has been annotated by domain experts for this task. In this paper we introduce the task, describe the data and present the results of participating systems.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The 2020 ALTA shared task is the 11th instance of a series of shared tasks organised by ALTA since 2010. The task is to classify texts posted in social media according to human judgements expressed in them. The data used for this task is a subset of SemEval 2018 AIT DISC, which has been annotated by domain experts for this task. In this paper we introduce the task, describe the data and present the results of participating systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Human behaviour can be negatively or positively assessed based on a reference set of social norms. When judgement is explicitly stated in narratives, e.g., \"They are hard-working and honest.\", we can attempt to encounter appraisal words such as \"hardworking\" and \"honest\" used between interlocutors for advancing their judgement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Attitude positioning plays an important role in Martin and White's (2005) Appraisal framework 1 (AF) for analysing someone's use of evaluative language to negotiate solidarity.",
"cite_spans": [
{
"start": 48,
"end": 73,
"text": "Martin and White's (2005)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To the best of our knowledge, no prior work has attempted to automatically codify text using the AF judgement categories. The goal of the 2020 ALTA shared task is to develop a computational model that can identify and classify judgements expressed in textual segments. Participants are challenged to predict the judgement appraised by classifying each short-text message into one or more label candidates (or none): normality, capacity, tenacity, veracity, propriety.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The 2020 ALTA Shared Task is the 11th of the shared tasks organised by the Australasian Lan-1 https://www.grammatics.com/appraisal/ guage Technology Association (ALTA). As in previous shared tasks, it targets university students with programming experience, but it is also open to graduates and professionals. The general objective of these shared tasks is to introduce interested people to the sort of problems that are the subject of active research in a field of natural language processing. Depending on the availability of data, the tasks have ranged from classic but challenging tasks to tasks linked to very hot topics of research. Details of the 2020 ALTA Shared task and past tasks can be found in the 2020 ALTA Shared Task website. 2 There are no limitations on the size of the teams or the means that they may use to solve the problem. We provide training data but participants are free to use additional data and resources. The only constraint in the approach is that the processing must be fully automatic -there should be no human intervention.",
"cite_spans": [
{
"start": 742,
"end": 743,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The 2020 ALTA Shared Task",
"sec_num": "2"
},
{
"text": "As in past ALTA shared tasks, there are two categories: a student category and an open category.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The 2020 ALTA Shared Task",
"sec_num": "2"
},
{
"text": "\u2022 All the members of teams from the student category must be university students. The teams cannot have members that are full-time employed or that have completed a PhD.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The 2020 ALTA Shared Task",
"sec_num": "2"
},
{
"text": "\u2022 Any other teams fall into the open category.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The 2020 ALTA Shared Task",
"sec_num": "2"
},
{
"text": "The prize is awarded to the team that performs best on the private test set -a subset of the evaluation data for which participant scores are only revealed at the end of the evaluation period.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The 2020 ALTA Shared Task",
"sec_num": "2"
},
{
"text": "The Appraisal framework (AF) is concerned with the use of linguistic markers for identifying and track the ways attitudes are invoked in authored Figure 1 : Overview of appraisal resources (Martin and White, 2005, p38) text. The framework defines three subsystems for evaluative meaning making (1) ATTITUDE; (2) ENGAGEMENT; and (3) GRADUATION. Each of these are further divided in to other subsystems ( Figure 1 ). In particular, The ATTITUDE framework is divided into three subsystems: (1) AFFECT (registering of emotions); (2) APPRECIATION (evaluations of natural and semiotic phenomena); and (3) JUDGEMENT (evaluations of people and their behaviour).",
"cite_spans": [
{
"start": 189,
"end": 218,
"text": "(Martin and White, 2005, p38)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 146,
"end": 154,
"text": "Figure 1",
"ref_id": null
},
{
"start": 403,
"end": 411,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Appraisal Framework",
"sec_num": "3"
},
{
"text": "The judgement subsystem has two regions: social esteem and social sanction. The subcategories of each of these two regions form the target labels for the 2020 ALTA Shared Task. In particular:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Appraisal Framework",
"sec_num": "3"
},
{
"text": "Social esteem tends to function as admiration or criticism and can be subdivided into three subcategories:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Appraisal Framework",
"sec_num": "3"
},
{
"text": "Normality (how unusual one is): \"He is oldfashioned\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Appraisal Framework",
"sec_num": "3"
},
{
"text": "Capacity (how capable one is): \"Self-driven 12 year old is a maths genius\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Appraisal Framework",
"sec_num": "3"
},
{
"text": "Tenacity (how resolute one is): \"They are hardworking and honest\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Appraisal Framework",
"sec_num": "3"
},
{
"text": "Social sanction functions as praise or condemnation and can be subdivided into two subcategories:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Appraisal Framework",
"sec_num": "3"
},
{
"text": "Veracity (how honest/truthful one is): \"They are hard-working and honest\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Appraisal Framework",
"sec_num": "3"
},
{
"text": "Propriety (how ethical one is): \"She is too arrogant to learn the error of her ways\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Appraisal Framework",
"sec_num": "3"
},
{
"text": "The judgement system is used to assess human behaviour and their position on certain social norms. Further details and examples can be found in The Appraisal Website. 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Appraisal Framework",
"sec_num": "3"
},
{
"text": "The source data of the 2020 ALTA Shared Task is a subset of the SemEval 2018 AIT DISC dataset. 4 A total of 300 tweets have been manually annotated in a two-stage process. The annotation was first annotated by two linguists from two Australian universities (University of Wollongong and University of New South Wales) and then double-checked by two other linguists from the same two universities. The data were subsequently split into a training set of 200 tweets, and a test set of 100 tweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4"
},
{
"text": "Each tweet was annotated with one or more (or none) of the following labels: normality, capacity, tenacity, veracity, propriety. Table 1 shows artificial examples of text messages and their annotations.",
"cite_spans": [],
"ref_spans": [
{
"start": 129,
"end": 136,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "4"
},
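For modelling, each tweet's annotation can be represented as a five-dimensional multi-hot vector over the label set described above. This is an editorial sketch, not code from the paper; the example annotations and scores are invented:

```python
# Target labels of the 2020 ALTA Shared Task, in a fixed order.
LABELS = ["normality", "capacity", "tenacity", "veracity", "propriety"]

def encode(labels):
    """Multi-hot encoding of a tweet's judgement labels; all zeros means no judgement."""
    return [1 if label in labels else 0 for label in LABELS]

def decode(scores, threshold=0.5):
    """Map per-label model scores back to a label set; empty set if nothing passes the threshold."""
    return {label for label, s in zip(LABELS, scores) if s >= threshold}

print(encode({"tenacity", "veracity"}))   # [0, 0, 1, 1, 0]
print(decode([0.1, 0.2, 0.8, 0.9, 0.3]))  # the labels scoring >= 0.5: tenacity and veracity
```

An all-zero vector naturally captures the "none" case, which, as discussed later, makes up a large fraction of the test data.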
{
"text": "As in previous ALTA shared tasks, the task was managed as a Kaggle in Class competition. This year's task name was \"ALTA 2020 Challenge\". 5 The Kaggle-in-Class platform enabled the participants to download the data, submit their runs, and observe the results of their submissions in a leaderboard instantly.",
"cite_spans": [
{
"start": 138,
"end": 139,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "As is common in Kaggle competitions, when a participant team submits their results, the public leaderboard shows the evaluation results of part of the test data, and the results of the remaining test data are held for the final ranking. By following the public leaderboard, a team can then gauge the performance of their system in comparison with that of other systems in the same public test set. A team can choose up to two of their runs for the final ranking. If a team chooses runs for the final ranking, the best results on these runs on the private partition of the test data will be used. does not choose any runs, the private evaluation results of the run with the best results on the public partition will be chosen. The systems were evaluated using the mean of the F1 score over the test samples (1), ys,\u0177s) P (ys,\u0177s)+R(ys,\u0177s)",
"cite_spans": [
{
"start": 811,
"end": 817,
"text": "ys,\u0177s)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "F 1 := 1 |S| s\u2208S F \u03b2 (y s ,\u0177 s ) F 1 (y s ,\u0177 s ) := 2 P (ys,\u0177s)\u00d7R(",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "P (y s ,\u0177 s ) := |ys\u2229\u0177s| |ys| R(y s ,\u0177 s ) := |ys\u2229\u0177s| |\u0177s| (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "where y s is the set of predicted labels in sample s, y s ) is the set of true labels in the sample, and S is the set of samples. If there were no true or no predicted labels, F 1 (y s ,\u0177 s ) := 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
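The metric in (1) translates directly into set operations. The following is an illustrative sketch by the editors, not the organisers' evaluation code; the example label sets are invented:

```python
def sample_f1(pred, true):
    """Set-based F1 for one sample; 0 if either label set is empty, per the task definition."""
    if not pred or not true:
        return 0.0
    overlap = len(pred & true)
    precision = overlap / len(pred)  # P(y_s, y-hat_s) = |intersection| / |predicted|
    recall = overlap / len(true)     # R(y_s, y-hat_s) = |intersection| / |true|
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def mean_f1(predictions, truths):
    """Mean of the per-sample F1 scores over all samples, as in Formula (1)."""
    return sum(sample_f1(p, t) for p, t in zip(predictions, truths)) / len(truths)

# Hypothetical predictions and gold labels for three tweets.
preds = [{"capacity", "tenacity"}, {"propriety"}, set()]
golds = [{"capacity"}, {"propriety"}, set()]
print(mean_f1(preds, golds))  # (2/3 + 1 + 0) / 3, about 0.556
```

Note that a sample whose gold label set is empty contributes 0 regardless of the prediction, which matters for the interpretation of the results below.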
{
"text": "In total 5 teams registered for the competitions, all of them in the student category. Of these, 3 teams submitted runs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participating Systems",
"sec_num": "6"
},
{
"text": "Team NLP-CIC experimented with logistic regression and Roberta (Aroyehun and Gelbukh, 2020). Whereas the logistic regression classifier obtained the best results in the public leaderboard, it performed much worse in the private leaderboard. In contrast, the Roberta classifier obtained consistent results in both the public and private leaderboards.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participating Systems",
"sec_num": "6"
},
{
"text": "Team OrangutanV2 designed classifiers using ALBERT and transfer learning (Parameswaran et al., 2020) . After observing that 22 tweets from the test set are also in the training set, they also incorporated a component that performed cosine similarity with the samples from the training data.",
"cite_spans": [
{
"start": 73,
"end": 100,
"text": "(Parameswaran et al., 2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Participating Systems",
"sec_num": "6"
},
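The paper does not detail OrangutanV2's similarity component; a generic way to match a test tweet against near-duplicate training tweets is bag-of-words cosine similarity, sketched here with the standard library only. The tweet texts and the 0.9 threshold are invented for illustration:

```python
import math
from collections import Counter

def cosine_similarity(a, b):
    """Cosine similarity between the bag-of-words count vectors of two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[tok] * vb[tok] for tok in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def copy_labels_from_near_duplicates(test_tweet, train_data, threshold=0.9):
    """If a training tweet is nearly identical to the test tweet, reuse its gold labels."""
    best_text, best_labels = max(train_data, key=lambda pair: cosine_similarity(test_tweet, pair[0]))
    return best_labels if cosine_similarity(test_tweet, best_text) >= threshold else None

train = [("they are hard-working and honest", {"tenacity", "veracity"}),
         ("he is old-fashioned", {"normality"})]
print(copy_labels_from_near_duplicates("They are hard-working and honest", train))
# reuses the gold labels of the near-duplicate training tweet
```

Such a lookup only helps for the overlapping tweets; the remaining test samples still require a trained classifier.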
{
"text": "Team NITS experimented with ensemble approaches (Khilji et al., 2020) . They obtained pretrained word embeddings and incorporated polynomial features. These features were fed to decision tree and Extreme Gradient Boosting (XGBoost) classifiers. The results indicate that this task has been particularly challenging and there is room for improvement. A possible reason for the difficulty of this task is the small number (200) of annotated samples available. Another reason for the low results is the relatively large percentage of samples with empty judgements. In particular, 60% of the test data had empty judgements. According to Formula (1), the F1 score of test samples with no annotations is 0. This means that the upper bound with this test data is 0.4.",
"cite_spans": [
{
"start": 48,
"end": 69,
"text": "(Khilji et al., 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Participating Systems",
"sec_num": "6"
},
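The 0.4 upper bound follows directly from the definition: an oracle that predicts the gold labels perfectly still scores 0 on the 60% of test samples with empty judgements. A quick numerical check (editorial sketch, with a made-up test set of 100 samples):

```python
# 60 samples with no judgement labels, 40 with at least one label, mirroring the 60/40 split.
gold = [set() for _ in range(60)] + [{"propriety"} for _ in range(40)]

def sample_f1(pred, true):
    # Per Formula (1): the per-sample F1 is 0 when either label set is empty.
    if not pred or not true:
        return 0.0
    overlap = len(pred & true)
    p, r = overlap / len(pred), overlap / len(true)
    return 2 * p * r / (p + r)

# An oracle that returns the gold labels themselves reaches only 0.4 mean F1.
oracle_score = sum(sample_f1(g, g) for g in gold) / len(gold)
print(oracle_score)  # 0.4
```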
{
"text": "The aim of the 2020 ALTA shared task was to predict the judgement of short texts according to Martin and White's (2005) Appraisal framework. The task proved challenging, presumably due to the small amount of annotated data and the sparse annotations in the data.",
"cite_spans": [
{
"start": 94,
"end": 119,
"text": "Martin and White's (2005)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
{
"text": "http://www.alta.asn.au/events/ sharedtask2020/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.grammatics.com/ appraisal/appraisalguide/unframed/ stage2-attitude-judgement.htm 4 https://competitions.codalab. org/competitions/17751#learn_the_ details-datasets 5 https://www.kaggle.com/c/ alta-2020-challenge/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the anonymous sponsor who donated the data for this shared task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automatically predicting judgement dimensions of human behaviour",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Segun Taofeek Aroyehun",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gelbukh",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 18th Annual Workshop of the Australasian Language Technology Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Segun Taofeek Aroyehun and Alexander Gelbukh. 2020. Automatically predicting judgement dimen- sions of human behaviour. In Proceedings of the 18th Annual Workshop of the Australasian Language Technology Association.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Human behavior assessment using ensemble models",
"authors": [
{
"first": "Abdullah",
"middle": [],
"last": "Faiz Ur Rahman Khilji",
"suffix": ""
},
{
"first": "Rituparna",
"middle": [],
"last": "Khaund",
"suffix": ""
},
{
"first": "Utkarsh",
"middle": [],
"last": "Sinha",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 18th Annual Workshop of the Australasian Language Technology Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abdullah Faiz Ur Rahman Khilji, Rituparna Khaund, and Utkarsh Sinha. 2020. Human behavior assess- ment using ensemble models. In Proceedings of the 18th Annual Workshop of the Australasian Language Technology Association.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The Language of Evaluation Appraisal in English",
"authors": [
{
"first": "J",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "White",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Martin and P. White. 2005. The Language of Evalua- tion Appraisal in English. Palgrave Macmillan, UK.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Classifying JUDGEMENTS using transfer learning",
"authors": [
{
"first": "Pradeesh",
"middle": [],
"last": "Parameswaran",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Trotman",
"suffix": ""
},
{
"first": "Veronica",
"middle": [],
"last": "Liesaputra",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Eyers",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 18th Annual Workshop of the Australasian Language Technology Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pradeesh Parameswaran, Andrew Trotman, Veronica Liesaputra, and David Eyers. 2020. Classifying JUDGEMENTS using transfer learning. In Pro- ceedings of the 18th Annual Workshop of the Aus- tralasian Language Technology Association.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null,
"text": "Artificial examples of texts and their annotations."
},
"TABREF1": {
"type_str": "table",
"content": "<table><tr><td>Team</td><td>F1</td><td>p</td></tr><tr><td>NLP-CIC</td><td>0.155</td><td/></tr><tr><td colspan=\"3\">OrangutanV2 0.105 0.313</td></tr><tr><td>NITS</td><td colspan=\"2\">0.053 0.010</td></tr></table>",
"num": null,
"html": null,
"text": "shows the results of the systems in the private leaderboard."
},
"TABREF2": {
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null,
"text": "Results of the participating teams according to the private leaderboard. Column p indicates the Wilcoxon Signed Rank test between a team and the top team after removing ties."
}
}
}
}