{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:35:33.905467Z"
},
"title": "Teaching Interactively to Learn Emotions in Natural Language",
"authors": [
{
"first": "Rajesh",
"middle": [],
"last": "Titung",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rochester Institute of Technology New York",
"location": {
"country": "USA"
}
},
"email": ""
},
{
"first": "Cecilia",
"middle": [
"O"
],
"last": "Alm",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Motivated by prior literature, we provide a proof of concept simulation study for an understudied interactive machine learning method, machine teaching (MT), for the text-based emotion prediction task. We compare this method experimentally against a more well-studied technique, active learning (AL). Results show the strengths of both approaches over more resource-intensive offline supervised learning. Additionally, applying AL and MT to fine-tune a pre-trained model offers further efficiency gain. We end by recommending research directions which aim to empower users in the learning process.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Motivated by prior literature, we provide a proof of concept simulation study for an understudied interactive machine learning method, machine teaching (MT), for the text-based emotion prediction task. We compare this method experimentally against a more well-studied technique, active learning (AL). Results show the strengths of both approaches over more resource-intensive offline supervised learning. Additionally, applying AL and MT to fine-tune a pre-trained model offers further efficiency gain. We end by recommending research directions which aim to empower users in the learning process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "We examine Machine Teaching (MT), an understudied interactive machine learning (iML) method under controlled simulation for the task of text-based emotion prediction (Liu et al., 2003; Aman and Szpakowicz, 2007; Alm, 2010; Bellegarda, 2013; Calvo and Mac Kim, 2013; Mohammad and Alm, 2015) . This problem intersects with affective computing (Picard, 1997; Calvo et al., 2015; Poria et al., 2017) , and a family of language inference problems characterized by human subjectivity in learning targets (Alm, 2011) and semantic-pragmatic meaning (Wiebe et al., 2004) . Both subjectivity and the lack of data for learning to recognize affective states motivate iML techniques. Here, we focus on resource efficiency. Our findings from simulations provide directions for user experiments.",
"cite_spans": [
{
"start": 165,
"end": 183,
"text": "(Liu et al., 2003;",
"ref_id": "BIBREF24"
},
{
"start": 184,
"end": 210,
"text": "Aman and Szpakowicz, 2007;",
"ref_id": "BIBREF8"
},
{
"start": 211,
"end": 221,
"text": "Alm, 2010;",
"ref_id": "BIBREF2"
},
{
"start": 222,
"end": 239,
"text": "Bellegarda, 2013;",
"ref_id": "BIBREF13"
},
{
"start": 240,
"end": 264,
"text": "Calvo and Mac Kim, 2013;",
"ref_id": "BIBREF17"
},
{
"start": 265,
"end": 288,
"text": "Mohammad and Alm, 2015)",
"ref_id": "BIBREF28"
},
{
"start": 340,
"end": 354,
"text": "(Picard, 1997;",
"ref_id": "BIBREF30"
},
{
"start": 355,
"end": 374,
"text": "Calvo et al., 2015;",
"ref_id": "BIBREF16"
},
{
"start": 375,
"end": 394,
"text": "Poria et al., 2017)",
"ref_id": "BIBREF32"
},
{
"start": 540,
"end": 560,
"text": "(Wiebe et al., 2004)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Human perception -and thus human annotators' interpretation -is influenced by human factors such as preferences, cultural differences, bias, domain expertise, fatigue, time on task, or mood at annotation time (Alm, 2012; Amidei et al., 2020; Shen and Rose, 2021) . Generally, experts with long-standing practice or in-depth knowledge may also not share consensus (Plank et al., 2014) . Inter-subjective disagreements can reflect invalid noise artifacts (detectable by humans) or ecologically valid differences in interpretation.",
"cite_spans": [
{
"start": 209,
"end": 220,
"text": "(Alm, 2012;",
"ref_id": "BIBREF4"
},
{
"start": 221,
"end": 241,
"text": "Amidei et al., 2020;",
"ref_id": "BIBREF10"
},
{
"start": 242,
"end": 262,
"text": "Shen and Rose, 2021)",
"ref_id": "BIBREF35"
},
{
"start": 363,
"end": 383,
"text": "(Plank et al., 2014)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Holzinger (2016) defines iML methods as algorithmic procedures that \"can interact with agents and can optimize their learning behavior through these interactions [...]\" (p. 119). In our study, the stakeholders in the learning process are models (learners) and humans (agent-users or agent-teachers). Tegen et al. (2020) posit that iML involves either Active Learning (AL) or interactive Machine Teaching (MT), 1 based on humans' role in the learning loop. In AL, the learning algorithm uses query strategies (e.g., triggered by uncertainty) to iteratively select instances from which it learns (Settles, 2009) if licensed by a budget, with a human agent who annotates upon learner request. In contrast, in MT, the teacher (user) who possesses problem knowledge instead selects the instances to be labeled and uses them to train the learner (Zhu, 2015) . Initial, foundational MT research focused on constructing a minimal, ideal set of training data, striving for optimality in the data the learner is presented with to learn from. Interactive MT assumes human agent interaction with the learner (Liu et al., 2017) , for enabling time- and resource-efficient model convergence. Following the training-by-error criterion described in Tegen et al. (2020) , if the learner is unable to predict the right answer, and the budget allows, the human teacher instructs the learner with the label. Thus, AL leverages measures to wisely choose instances for human labeling and subsequent learning, whereas MT capitalizes on the teacher's knowledge to wisely select training instances and proceed to learn when the criterion to teach is met (cf. Figure 1 ). Olsson (2009) discussed AL for NLP tasks, while Schr\u00f6der and Niekler (2020) discussed deep learning with AL. Our study also builds on Tegen et al. (2020)'s use of simulation to study AL query strategies and MT assessment and teaching criteria. Lu and MacNamee (2020) reported on experiments where transformer-based representations performed consistently better than other text representations, taking advantage of the label information that arises in AL. An et al. (2018) also suggested assigning a varying number of instances to label per human oracle, based on their capability/skills and the amount of unlabeled data, which reduced the time required by the deep learner without negatively impacting performance. We comparatively study iML in the fine-tuning stages. Bai et al. (2020) emphasized language-based attributes, like reconstruction of word-based cross-entropy loss across words in sentences, toward instance selection. To ensure improved experimental control and avoid confounding variables, we focus on uncertainty-based strategies for AL.",
"cite_spans": [
{
"start": 298,
"end": 317,
"text": "Tegen et al. (2020)",
"ref_id": "BIBREF37"
},
{
"start": 592,
"end": 607,
"text": "(Settles, 2009)",
"ref_id": "BIBREF34"
},
{
"start": 838,
"end": 849,
"text": "(Zhu, 2015)",
"ref_id": "BIBREF41"
},
{
"start": 1094,
"end": 1112,
"text": "(Liu et al., 2017)",
"ref_id": "BIBREF25"
},
{
"start": 1230,
"end": 1249,
"text": "Tegen et al. (2020)",
"ref_id": "BIBREF37"
},
{
"start": 1643,
"end": 1656,
"text": "Olsson (2009)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 1631,
"end": 1639,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "MT deals with a teacher designing a well-reasoned, ideally optimal, training set to drive the learner to the desired target concept/model (Zhu, 2015; Zhu et al., 2018) . While there has been some progress in the use of MT, its application in NLP remains in its earliest form, with little empirical exploration or refinement. MT has been explored mostly in computing security, where the teacher is a hacker/advisor who selects training data to adjust the behavior of an adaptive, evolving learner (Alfeld et al., 2016, 2017) . Tegen et al. (2020) reported that MT could greatly reduce the number of instances required, and even outperformed most AL strategies. These findings are compelling and motivate exploring MT's potential in NLP, which, however, has some distinct characteristics, including high-dimensional data impacted by scarcity. MT's possibilities in NLP are thus as yet largely unknown. We begin here by focusing on controlled experimental simulations to examine resource-efficiency and performance in text-based emotion prediction, whereas future work will take a step closer to ecological validity in interactive MT with real-time agent-teachers.",
"cite_spans": [
{
"start": 137,
"end": 148,
"text": "(Zhu, 2015;",
"ref_id": "BIBREF41"
},
{
"start": 149,
"end": 166,
"text": "Zhu et al., 2018)",
"ref_id": "BIBREF42"
},
{
"start": 497,
"end": 517,
"text": "(Alfeld et al., 2016",
"ref_id": "BIBREF0"
},
{
"start": 518,
"end": 540,
"text": "(Alfeld et al., , 2017",
"ref_id": "BIBREF1"
},
{
"start": 543,
"end": 562,
"text": "Tegen et al. (2020)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work and Background",
"sec_num": "2"
},
{
"text": "Overall, several prospects can be noted for NLP with interactive Machine Learning (iML):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work and Background",
"sec_num": "2"
},
{
"text": "\u2022 Human knowledge and insights can be leveraged to make the search space substantially smaller by systematic instance selection (Holzinger, 2016) , achieving adequate performance with fewer training instances.",
"cite_spans": [
{
"start": 128,
"end": 145,
"text": "(Holzinger, 2016)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work and Background",
"sec_num": "2"
},
{
"text": "\u2022 In a setting where learning occurs online or continually (Tegen et al., 2019) , iML enables sustained learning over time, with new or updated data offered to the learner. This especially makes sense for natural language tasks which by nature are characterized by linguistic change.",
"cite_spans": [
{
"start": 59,
"end": 79,
"text": "(Tegen et al., 2019)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work and Background",
"sec_num": "2"
},
{
"text": "\u2022 Using iML can enable model customization to specific users or schools of thought, and can enable privacy-preserving models (Bernardo et al., 2017) , e.g., for deploying NLP on edge devices.",
"cite_spans": [
{
"start": 119,
"end": 142,
"text": "(Bernardo et al., 2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work and Background",
"sec_num": "2"
},
{
"text": "\u2022 iML enables users to directly influence the model (Amershi et al., 2014) , and interactive techniques can help agents catch bias or concept drift early in the development process.",
"cite_spans": [
{
"start": 52,
"end": 74,
"text": "(Amershi et al., 2014)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work and Background",
"sec_num": "2"
},
{
"text": "\u2022 The iML paradigm enables an initial state with limited data (or even a cold start), which applies to NLP for under-resourced languages, low-data clinical problems, etc., including NLP for affective computing since many affective states remain understudied (Alm, n.d.).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work and Background",
"sec_num": "2"
},
{
"text": "\u2022 By learning more resource-efficiently, iML has potential to lower NLP's carbon footprint.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work and Background",
"sec_num": "2"
},
{
"text": "While iML is promising, issues include:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work and Background",
"sec_num": "2"
},
{
"text": "\u2022 Human users or teachers are not necessarily willing or available to provide input or feedback to a system (Donmez and Carbonell, 2010).",
"cite_spans": [
{
"start": 109,
"end": 136,
"text": "(Donmez and Carbonell, 2010",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work and Background",
"sec_num": "2"
},
{
"text": "\u2022 The iML setup is not immune to catastrophic forgetting (Holzinger, 2016) in online learning.",
"cite_spans": [
{
"start": 57,
"end": 74,
"text": "(Holzinger, 2016)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work and Background",
"sec_num": "2"
},
{
"text": "\u2022 Human factors introduce technical considerations that may impact interaction and performance success; for instance, the learning set-up should accommodate human fatigue (Darani and Kaedi, 2017; Llor\u00e0 et al., 2005) . ",
"cite_spans": [
{
"start": 196,
"end": 215,
"text": "Llor\u00e0 et al., 2005)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work and Background",
"sec_num": "2"
},
{
"text": "Text-based emotion data are subject to variation and ambiguity, which adds to the difficulty of the annotation process, compounded by data scarcity for capturing many affective states. iML methods can be a means to deal with data limitations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MT/AL for Emotion Prediction",
"sec_num": "3"
},
{
"text": "In this study, we used a subset of the GoEmotions dataset (Demszky et al., 2020) which consists of emotion labels for Reddit comments. We prioritized resource-efficiency as the primary experimental variable over exploring impact on target concept ambiguity. Figure 2 shows the imbalanced distribution of emotion classes in this subset. The training and test sets comprised approximately 2800 and 700 instances, respectively. In all experiments, the learner was trained initially with 10% of the training set while the remaining 90% was reserved as an unlabeled pool of data that was gradually added to the training set in each iteration. 2 The simulated 'user' had access to the labels of the instances from the unlabeled dataset whenever required via dataset lookup.",
"cite_spans": [
{
"start": 58,
"end": 80,
"text": "(Demszky et al., 2020)",
"ref_id": "BIBREF20"
},
{
"start": 639,
"end": 640,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 258,
"end": 266,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "MT/AL for Emotion Prediction",
"sec_num": "3"
},
{
"text": "We compared the effect of AL and MT strategies and further compared to offline supervised machine learning, referred to as all-in-one batch.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AL vs. MT for Emotion Prediction",
"sec_num": "3.1"
},
{
"text": "Motivation In our AL experiment, the learner queried the instances using versions of uncertainty sampling or a random approach. In the least confident strategy, the learner selects instances for query 2 For the Huggingface transformers library, 20% of the training set was held out as a validation set before this 90-10 split. For sklearn, attempts at hyperparameter tuning-for the C parameter, dual/primal problem and tolerance values for stopping criteria-used a genetic algorithm without meaningful performance difference, and results are provided with defaults, with class weights initialized as the inverse of the frequency of each class.",
"cite_spans": [
{
"start": 201,
"end": 202,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "AL vs. MT for Emotion Prediction",
"sec_num": "3.1"
},
{
"text": "for which it has the least probability of prediction in its most probable class; in margin sampling, instances with the smallest difference between the top-two most likely classes; and in entropy, with the largest entropy (Olsson, 2009; Tegen et al., 2020) .",
"cite_spans": [
{
"start": 221,
"end": 235,
"text": "(Olsson, 2009;",
"ref_id": "BIBREF29"
},
{
"start": 236,
"end": 255,
"text": "Tegen et al., 2020)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "AL vs. MT for Emotion Prediction",
"sec_num": "3.1"
},
{
"text": "In MT, the agent-teacher chooses instances (Zhu, 2015) , which are then labeled and used to teach the learner (Tegen et al., 2020) . We simulated the margin sampling-based AL query strategy as a teacher to select a set of instances. Moreover, error-based and state change are two teaching criteria used by Tegen et al. (2020) for initiating teaching. In the error-based method, the teacher proceeds to teach based on the correctness of the learner's estimation, i.e., supplying the learner with the correct label for wrong estimations. We introduce a modification termed error-based training with counting, where the teacher continues to provide labeled instances to the learner even when all estimations are accurate in two consecutive iterations, to ensure periodic model updating. In the state change-based criterion, the teacher provides a label for the instance if the current instance's real class label differs from the prior instance's class label. When no label is given, the learner assumes the instance's label is the same as the last label given by the teacher.",
"cite_spans": [
{
"start": 43,
"end": 54,
"text": "(Zhu, 2015)",
"ref_id": "BIBREF41"
},
{
"start": 110,
"end": 130,
"text": "(Tegen et al., 2020)",
"ref_id": "BIBREF37"
},
{
"start": 306,
"end": 325,
"text": "Tegen et al. (2020)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "AL vs. MT for Emotion Prediction",
"sec_num": "3.1"
},
{
"text": "Methods We focus on transportability and opted for sklearn's Linear SVM with hinge loss, given its lean computational character (Buitinck et al., 2013; Chang and Lin, 2011) . Both setups were trained on CPUs, with MT using state change as the teaching criterion taking the longest time (around 40 min).",
"cite_spans": [
{
"start": 127,
"end": 150,
"text": "(Buitinck et al., 2013;",
"ref_id": "BIBREF15"
},
{
"start": 151,
"end": 171,
"text": "Chang and Lin, 2011)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "AL vs. MT for Emotion Prediction",
"sec_num": "3.1"
},
{
"text": "Results and Discussion Panel (a) in Figure 3 shows the results for AL strategies. With AL, emotion prediction in text is learned more resource-efficiently, using less data. The query strategies achieved performance equivalent to learning with the full batch of training data after using just around half of the data, and all perform better than random selection. A Wilcoxon Rank Sum Test (Wilcoxon, 1992) for independent samples compared random against the other query strategies. This indicated a significant difference in their performance with p < 0.05. Panel (b) shows the MT results for three teaching criteria. State change improves over the error-based approach, while the error-based approach with counting slightly improves on the regular error-based approach because of the modification introduced. We also observe that since we used margin-based AL as a teacher for selecting instances, the result mirrors margin sampling-based AL in panel (a). Moreover, we note that error-based teaching saturates, potentially reflecting that state change-based teaching is more capable of dealing with imbalanced data (Tegen et al., 2020) . Overall, the encouraging results motivate plans to assess utility in a real-time MT scenario with a human teacher and a deeper study of teacher variations for data selection and revised teaching criteria for initiating training.",
"cite_spans": [
{
"start": 412,
"end": 428,
"text": "(Wilcoxon, 1992)",
"ref_id": "BIBREF39"
},
{
"start": 1132,
"end": 1152,
"text": "(Tegen et al., 2020)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [
{
"start": 36,
"end": 45,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "AL vs. MT for Emotion Prediction",
"sec_num": "3.1"
},
{
"text": "Motivation Previous results showed that MT and AL can build better models more efficiently with annotation savings (time and cost). Here, we explore whether fine-tuning a pre-trained model -a frequent and often performance-boosting approach in NLP -using iML concepts can improve results further.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning with AL and MT",
"sec_num": "3.2"
},
{
"text": "Methods We fine-tune a pre-trained BERT model (Devlin et al., 2019) to emotion prediction in text using Huggingface (Wolf et al., 2020) , with a max. sequence length of 80 (since comments tend to be quite short). Based on prior observations, we analyze fine-tuning performance with AL for the least confident and margin sampling strategies, and with MT for the error-based and error-based with counting teaching criteria.",
"cite_spans": [
{
"start": 46,
"end": 67,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF21"
},
{
"start": 116,
"end": 135,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning with AL and MT",
"sec_num": "3.2"
},
{
"text": "Results and Discussion Figure 4 shows the outcomes for fine-tuning BERT interactively. The results show performance close to 96%, which is good for this subjective task. Moreover, AL matched the offline training performance using less than half of the available instances. We note that convergence for fine-tuning also required somewhat less data than in the prior SVM-based experiment, as shown by the steeper slope of performance increment. Yet how to better leverage MT in conjunction with fine-tuning, or transfer techniques generally, remains a key priority in continued study.",
"cite_spans": [],
"ref_spans": [
{
"start": 23,
"end": 31,
"text": "Figure 4",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Fine-tuning with AL and MT",
"sec_num": "3.2"
},
{
"text": "We showed that iML efficiently produces desired results for text-based emotion prediction. MT remains understudied and should be further explored for NLP tasks. Fine-tuning a pre-trained model with AL can leverage the strengths of both approaches with small datasets. In addition to experiments detailed above, we explored training the learner incrementally (online training) versus in a non-incremental setup (the learner is trained using the accumulated training set up to the most recent query). The incremental approach experiences catastrophic forgetting but requires very little time for learner updating and can thus work well under low memory usage, e.g., for a life-long learning setting or edge devices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "Our study on text-based emotion prediction demonstrated the potential of both MT and AL methods. We offered initial experimentation with MT and AL for this problem, and based on promising results under controlled simulation, next steps will focus on real-time user/teacher interactions, a broader set of teaching criteria, and new forms of training instance selection. In addition, we are interested in exploring heavily understudied affective states, which are currently not covered sufficiently or not covered at all in annotated emotion corpora. We also suggest focused research on specialized teachers in NLP tasks toward better selection of training data. Teachers who assess the learner and decide the right time to offer an adequate set of new information may also help create more robust or interpretable learners which evolve over time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "A limitation of this work is that it did not consider linguistic characteristics of the pre-trained models (Bai et al., 2020) . We used an artificial teacher in MT and did not deeply examine hybrid MT-AL strategies, although we used an AL approach as teacher in the MT setup. Still, this work may stimulate NLP researchers to consider the benefits of AL and MT, especially for challenging subjective NLP tasks such as text-based emotion prediction (Alm, 2011). Additionally, continued work can explore how the findings apply in the context of other corpora, including with multimodal data.",
"cite_spans": [
{
"start": 107,
"end": 125,
"text": "(Bai et al., 2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ethics Statement",
"sec_num": null
},
{
"text": "We use conventions from Tegen et al. (2020) where MT means an iterative, interactive implementation of Machine Teaching. MT here is not Machine Translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This material is based upon work supported by the National Science Foundation under Award No. DGE-2125362. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Data poisoning attacks against autoregressive models",
"authors": [
{
"first": "Scott",
"middle": [],
"last": "Alfeld",
"suffix": ""
},
{
"first": "Xiaojin",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Barford",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16",
"volume": "",
"issue": "",
"pages": "1452--1458",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott Alfeld, Xiaojin Zhu, and Paul Barford. 2016. Data poisoning attacks against autoregressive models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16, pages 1452-1458. AAAI Press.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Explicit defense actions against test-set attacks",
"authors": [
{
"first": "Scott",
"middle": [],
"last": "Alfeld",
"suffix": ""
},
{
"first": "Xiaojin",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Barford",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI'17",
"volume": "",
"issue": "",
"pages": "1274--1280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott Alfeld, Xiaojin Zhu, and Paul Barford. 2017. Explicit defense actions against test-set attacks. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI'17, pages 1274-1280. AAAI Press.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Characteristics of high agreement affect annotation in text",
"authors": [
{
"first": "Cecilia",
"middle": [],
"last": "Ovesdotter",
"suffix": ""
},
{
"first": "Alm",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Fourth Linguistic Annotation Workshop",
"volume": "",
"issue": "",
"pages": "118--122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cecilia Ovesdotter Alm. 2010. Characteristics of high agreement affect annotation in text. In Proceedings of the Fourth Linguistic Annotation Workshop, pages 118-122, Uppsala, Sweden. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Subjective natural language problems: Motivations, applications, characterizations, and implications",
"authors": [
{
"first": "Cecilia",
"middle": [],
"last": "Ovesdotter",
"suffix": ""
},
{
"first": "Alm",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers",
"volume": "2",
"issue": "",
"pages": "107--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cecilia Ovesdotter Alm. 2011. Subjective natural language problems: Motivations, applications, characterizations, and implications. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 107-112. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The role of affect in the computational modeling of natural language. Language and Linguistics Compass",
"authors": [
{
"first": "Cecilia",
"middle": [],
"last": "Ovesdotter",
"suffix": ""
},
{
"first": "Alm",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "6",
"issue": "",
"pages": "416--430",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cecilia Ovesdotter Alm. 2012. The role of affect in the computational modeling of natural language. Language and Linguistics Compass, 6(7):416-430.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Linguistic data resources for computational emotion sensing and modeling",
"authors": [
{
"first": "Cecilia",
"middle": [],
"last": "Ovesdotter",
"suffix": ""
},
{
"first": "Alm",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cecilia Ovesdotter Alm. n.d. Linguistic data resources for computational emotion sensing and modeling.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Emotions from text: Machine learning for textbased emotion prediction",
"authors": [
{
"first": "Cecilia",
"middle": [],
"last": "Ovesdotter Alm",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Sproat",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "579--586",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cecilia Ovesdotter Alm, Dan Roth, and Richard Sproat. 2005. Emotions from text: Machine learning for text-based emotion prediction. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 579-586, Vancouver, British Columbia, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Emotional sequencing and development in fairy tales",
"authors": [
{
"first": "Cecilia",
"middle": [],
"last": "Ovesdotter",
"suffix": ""
},
{
"first": "Alm",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Sproat",
"suffix": ""
}
],
"year": 2005,
"venue": "Affective Computing and Intelligent Interaction",
"volume": "",
"issue": "",
"pages": "668--674",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cecilia Ovesdotter Alm and Richard Sproat. 2005. Emotional sequencing and development in fairy tales. In Affective Computing and Intelligent Interaction, pages 668-674, Berlin, Heidelberg. Springer Berlin Heidelberg.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Identifying expressions of emotion in text",
"authors": [
{
"first": "Saima",
"middle": [],
"last": "Aman",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Szpakowicz",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "196--205",
"other_ids": {
"DOI": [
"10.1007/978-3-540-74628-7_27"
]
},
"num": null,
"urls": [],
"raw_text": "Saima Aman and Stan Szpakowicz. 2007. Identifying expressions of emotion in text. In Mautner P. Ma- tou\u0161ek V., editor, Text, Speech and Dialogue TSD 2007, pages 196-205. Springer.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Power to the people: The role of humans in interactive machine learning",
"authors": [
{
"first": "Saleema",
"middle": [],
"last": "Amershi",
"suffix": ""
},
{
"first": "Maya",
"middle": [],
"last": "Cakmak",
"suffix": ""
},
{
"first": "William",
"middle": [
"Bradley"
],
"last": "Knox",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Kulesza",
"suffix": ""
}
],
"year": 2014,
"venue": "AI Magazine",
"volume": "35",
"issue": "4",
"pages": "105--120",
"other_ids": {
"DOI": [
"10.1609/aimag.v35i4.2513"
]
},
"num": null,
"urls": [],
"raw_text": "Saleema Amershi, Maya Cakmak, William Bradley Knox, and Todd Kulesza. 2014. Power to the people: The role of humans in interactive machine learning. AI Magazine, 35(4):105-120.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Identifying annotator bias: A new IRT-based method for bias identification",
"authors": [
{
"first": "Jacopo",
"middle": [],
"last": "Amidei",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Piwek",
"suffix": ""
},
{
"first": "Alistair",
"middle": [],
"last": "Willis",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4787--4797",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.421"
]
},
"num": null,
"urls": [],
"raw_text": "Jacopo Amidei, Paul Piwek, and Alistair Willis. 2020. Identifying annotator bias: A new IRT-based method for bias identification. In Proceedings of the 28th International Conference on Computational Linguis- tics, pages 4787-4797, Barcelona, Spain (Online). International Committee on Computational Linguis- tics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Deep active learning for text classification",
"authors": [
{
"first": "Bang",
"middle": [],
"last": "An",
"suffix": ""
},
{
"first": "Wenjun",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Huimin",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd International Conference on Vision, Image and Signal Processing",
"volume": "2018",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3271553.3271578"
]
},
"num": null,
"urls": [],
"raw_text": "Bang An, Wenjun Wu, and Huimin Han. 2018. Deep active learning for text classification. In Proceedings of the 2nd International Conference on Vision, Image and Signal Processing, ICVISP 2018, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Pre-trained language model based active learning for sentence matching",
"authors": [
{
"first": "Guirong",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Shizhu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Zaiqing",
"middle": [],
"last": "Nie",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1495--1504",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.130"
]
},
"num": null,
"urls": [],
"raw_text": "Guirong Bai, Shizhu He, Kang Liu, Jun Zhao, and Za- iqing Nie. 2020. Pre-trained language model based active learning for sentence matching. In Proceed- ings of the 28th International Conference on Com- putational Linguistics, pages 1495-1504, Barcelona, Spain (Online). International Committee on Compu- tational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Data-driven analysis of emotion in text using latent affective folding and embedding",
"authors": [
{
"first": "Jerome",
"middle": [
"R"
],
"last": "Bellegarda",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Intelligence",
"volume": "29",
"issue": "3",
"pages": "506--526",
"other_ids": {
"DOI": [
"10.1111/j.1467-8640.2012.00457.x"
]
},
"num": null,
"urls": [],
"raw_text": "Jerome R. Bellegarda. 2013. Data-driven analysis of emotion in text using latent affective folding and embedding. Computational Intelligence, 29(3):506- 526.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Interactive machine learning for end-user innovation",
"authors": [
{
"first": "Francisco",
"middle": [],
"last": "Bernardo",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Zbyszynski",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Fiebrink",
"suffix": ""
},
{
"first": "Mick",
"middle": [],
"last": "Grierson",
"suffix": ""
}
],
"year": 2017,
"venue": "AAAI Spring Symposia",
"volume": "",
"issue": "",
"pages": "369--375",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francisco Bernardo, Michael Zbyszynski, Rebecca Fiebrink, and Mick Grierson. 2017. Interactive ma- chine learning for end-user innovation. In AAAI Spring Symposia, pages 369-375.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "API design for machine learning software: Experiences from the scikit-learn project",
"authors": [
{
"first": "Lars",
"middle": [],
"last": "Buitinck",
"suffix": ""
},
{
"first": "Gilles",
"middle": [],
"last": "Louppe",
"suffix": ""
},
{
"first": "Mathieu",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "Fabian",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Mueller",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "Vlad",
"middle": [],
"last": "Niculae",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "Jaques",
"middle": [],
"last": "Grobler",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Layton",
"suffix": ""
},
{
"first": "Jake",
"middle": [],
"last": "Vanderplas",
"suffix": ""
},
{
"first": "Arnaud",
"middle": [],
"last": "Joly",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Holt",
"suffix": ""
},
{
"first": "Ga\u00ebl",
"middle": [],
"last": "Varoquaux",
"suffix": ""
}
],
"year": 2013,
"venue": "ECML PKDD Workshop: Languages for Data Mining and Machine Learning",
"volume": "",
"issue": "",
"pages": "108--122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lars Buitinck, Gilles Louppe, Mathieu Blondel, Fabian Pedregosa, Andreas Mueller, Olivier Grisel, Vlad Niculae, Peter Prettenhofer, Alexandre Gramfort, Jaques Grobler, Robert Layton, Jake VanderPlas, Ar- naud Joly, Brian Holt, and Ga\u00ebl Varoquaux. 2013. API design for machine learning software: Experi- ences from the scikit-learn project. In ECML PKDD Workshop: Languages for Data Mining and Machine Learning, pages 108-122.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The Oxford Handbook of Affective Computing",
"authors": [
{
"first": "Rafael",
"middle": [],
"last": "Calvo",
"suffix": ""
},
{
"first": "Sidney",
"middle": [],
"last": "D'Mello",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Gratch",
"suffix": ""
},
{
"first": "Arvid",
"middle": [],
"last": "Kappas",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"https://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780199942237.001.0001/oxfordhb-9780199942237"
]
},
"num": null,
"urls": [],
"raw_text": "Rafael Calvo, Sidney D'Mello, Jonathan Gratch, and Arvid Kappas, editors. 2015. The Oxford Handbook of Affective Computing. Oxford University Press.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Emotions in text: Dimensional and categorical models",
"authors": [
{
"first": "Rafael",
"middle": [
"A"
],
"last": "Calvo",
"suffix": ""
},
{
"first": "Sunghwan",
"middle": [
"Mac"
],
"last": "Kim",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Intelligence",
"volume": "29",
"issue": "3",
"pages": "527--543",
"other_ids": {
"DOI": [
"10.1111/j.1467-8640.2012.00456.x"
]
},
"num": null,
"urls": [],
"raw_text": "Rafael A. Calvo and Sunghwan Mac Kim. 2013. Emo- tions in text: Dimensional and categorical models. Computational Intelligence, 29(3):527-543.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "LIBSVM: A library for support vector machines",
"authors": [
{
"first": "Chih-Chung",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Chih-Jen",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2011,
"venue": "ACM Transactions on Intelligent Systems and Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chih-Chung Chang and Chih-Jen Lin. 2011. LIBSVM: A library for support vector machines. ACM Transac- tions on Intelligent Systems and Technology, 2:27:1- 27:27. Software available at http://www.csie. ntu.edu.tw/~cjlin/libsvm.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Improving the interactive genetic algorithm for customercentric product design by automatically scoring the unfavorable designs",
"authors": [
{
"first": "Zahra",
"middle": [],
"last": "Sheikhi Darani",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Kaedi",
"suffix": ""
}
],
"year": 2017,
"venue": "Human-centeric Computing and Information Sciences",
"volume": "7",
"issue": "38",
"pages": "",
"other_ids": {
"DOI": [
"10.1186/s13673-017-0119-0"
]
},
"num": null,
"urls": [],
"raw_text": "Zahra Sheikhi Darani and Marjan Kaedi. 2017. Improv- ing the interactive genetic algorithm for customer- centric product design by automatically scoring the unfavorable designs. Human-centeric Computing and Information Sciences, 7(38).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "GoEmotions: A dataset of fine-grained emotions",
"authors": [
{
"first": "Dorottya",
"middle": [],
"last": "Demszky",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Movshovitz-Attias",
"suffix": ""
},
{
"first": "Jeongwoo",
"middle": [],
"last": "Ko",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Cowen",
"suffix": ""
},
{
"first": "Gaurav",
"middle": [],
"last": "Nemade",
"suffix": ""
},
{
"first": "Sujith",
"middle": [],
"last": "Ravi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4040--4054",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.372"
]
},
"num": null,
"urls": [],
"raw_text": "Dorottya Demszky, Dana Movshovitz-Attias, Jeongwoo Ko, Alan Cowen, Gaurav Nemade, and Sujith Ravi. 2020. GoEmotions: A dataset of fine-grained emo- tions. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics\", pages 4040-4054, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "From Active to Proactive Learning Methods",
"authors": [
{
"first": "Pinar",
"middle": [],
"last": "Donmez",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "262",
"issue": "",
"pages": "97--120",
"other_ids": {
"DOI": [
"10.1007/978-3-642-05177-7_5"
]
},
"num": null,
"urls": [],
"raw_text": "Pinar Donmez and Jaime Carbonell. 2010. From Active to Proactive Learning Methods, volume 262, pages 97-120. Springer Berlin Heidelberg.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Interactive machine learning for health informatics: when do we need the humanin-the-loop?",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Holzinger",
"suffix": ""
}
],
"year": 2016,
"venue": "Brain Informatics",
"volume": "3",
"issue": "2",
"pages": "119--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Holzinger. 2016. Interactive machine learning for health informatics: when do we need the human- in-the-loop? Brain Informatics, 3(2):119-131.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A model of textual affect sensing using real-world knowledge",
"authors": [
{
"first": "Hugo",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Lieberman",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Selker",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 8th International Conference on Intelligent User Interfaces, IUI '03",
"volume": "",
"issue": "",
"pages": "125--132",
"other_ids": {
"DOI": [
"10.1145/604045.604067"
]
},
"num": null,
"urls": [],
"raw_text": "Hugo Liu, Henry Lieberman, and Ted Selker. 2003. A model of textual affect sensing using real-world knowledge. In Proceedings of the 8th International Conference on Intelligent User Interfaces, IUI '03, page 125-132, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Iterative machine teaching",
"authors": [
{
"first": "Weiyang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Ahmad",
"middle": [],
"last": "Humayun",
"suffix": ""
},
{
"first": "Charlene",
"middle": [],
"last": "Tay",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Linda",
"middle": [
"B"
],
"last": "Smith",
"suffix": ""
},
{
"first": "James",
"middle": [
"M"
],
"last": "Rehg",
"suffix": ""
},
{
"first": "Le",
"middle": [],
"last": "Song",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "2149--2158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weiyang Liu, Bo Dai, Ahmad Humayun, Charlene Tay, Chen Yu, Linda B. Smith, James M. Rehg, and Le Song. 2017. Iterative machine teaching. In Proceedings of the 34th International Conference on Machine Learning -Volume 70, ICML'17, page 2149-2158. JMLR.org.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Combating user fatigue in IGAs: Partial ordering, support vector machines, and synthetic fitness",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Llor\u00e0",
"suffix": ""
},
{
"first": "Kumara",
"middle": [],
"last": "Sastry",
"suffix": ""
},
{
"first": "David",
"middle": [
"E"
],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Abhimanyu",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Lalitha",
"middle": [],
"last": "Lakshmi",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation, GECCO '05",
"volume": "",
"issue": "",
"pages": "1363--1370",
"other_ids": {
"DOI": [
"10.1145/1068009.1068228"
]
},
"num": null,
"urls": [],
"raw_text": "Xavier Llor\u00e0, Kumara Sastry, David E. Goldberg, Ab- himanyu Gupta, and Lalitha Lakshmi. 2005. Com- bating user fatigue in IGAs: Partial ordering, sup- port vector machines, and synthetic fitness. In Pro- ceedings of the 7th Annual Conference on Genetic and Evolutionary Computation, GECCO '05, page 1363-1370, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Investigating the effectiveness of representations based on pretrained transformer-based language models in active learning for labelling text datasets",
"authors": [
{
"first": "Jinghui",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Macnamee",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.13138"
]
},
"num": null,
"urls": [],
"raw_text": "Jinghui Lu and Brian MacNamee. 2020. Investigat- ing the effectiveness of representations based on pre- trained transformer-based language models in active learning for labelling text datasets. arXiv e-prints, page arXiv:2004.13138.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Computational analysis of affect and emotion in language",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Cecilia",
"middle": [
"O"
],
"last": "Alm",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad and Cecilia O. Alm. 2015. Compu- tational analysis of affect and emotion in language. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts, Lisbon, Portugal. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A literature survey of active machine learning in the context of natural language processing",
"authors": [
{
"first": "Fredrik",
"middle": [],
"last": "Olsson",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fredrik Olsson. 2009. A literature survey of active machine learning in the context of natural language processing. Technical report, Swedish Institute of Computer Science.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Affective Computing",
"authors": [
{
"first": "Rosalind",
"middle": [
"W"
],
"last": "Picard",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rosalind W. Picard. 1997. Affective Computing. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Linguistically debatable or just plain wrong?",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "507--511",
"other_ids": {
"DOI": [
"10.3115/v1/P14-2083"
]
},
"num": null,
"urls": [],
"raw_text": "Barbara Plank, Dirk Hovy, and Anders S\u00f8gaard. 2014. Linguistically debatable or just plain wrong? In Proceedings of the 52nd Annual Meeting of the As- sociation for Computational Linguistics (Volume 2: Short Papers), pages 507-511, Baltimore, Maryland. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A review of affective computing: From unimodal analysis to multimodal fusion. Information Fusion",
"authors": [
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Rajiv",
"middle": [],
"last": "Bajpai",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Hussain",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "37",
"issue": "",
"pages": "98--125",
"other_ids": {
"DOI": [
"10.1016/j.inffus.2017.02.003"
]
},
"num": null,
"urls": [],
"raw_text": "Soujanya Poria, Erik Cambria, Rajiv Bajpai, and Amir Hussain. 2017. A review of affective computing: From unimodal analysis to multimodal fusion. Infor- mation Fusion, 37:98 -125.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "A survey of active learning for text classification using deep neural networks",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Schr\u00f6der",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Niekler",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Schr\u00f6der and Andreas Niekler. 2020. A survey of active learning for text classification using deep neural networks. arXiv, 2008.07267.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Active learning literature survey",
"authors": [
{
"first": "Burr",
"middle": [],
"last": "Settles",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burr Settles. 2009. Active learning literature survey. Computer Sciences Technical Report 1648, Univer- sity of Wisconsin-Madison.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "What sounds \"right\" to me? Experiential factors in the perception of political ideology",
"authors": [
{
"first": "Qinlan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [],
"last": "Rose",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "1762--1771",
"other_ids": {
"DOI": [
"10.18653/v1/2021.eacl-main.152"
]
},
"num": null,
"urls": [],
"raw_text": "Qinlan Shen and Carolyn Rose. 2021. What sounds \"right\" to me? Experiential factors in the perception of political ideology. In Proceedings of the 16th Con- ference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1762-1771, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Towards a taxonomy of interactive continual and multimodal learning for the internet of things",
"authors": [
{
"first": "Agnes",
"middle": [],
"last": "Tegen",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Davidsson",
"suffix": ""
},
{
"first": "Jan",
"middle": [
"A"
],
"last": "Persson",
"suffix": ""
}
],
"year": 2019,
"venue": "Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers, UbiComp/ISWC '19 Adjunct",
"volume": "",
"issue": "",
"pages": "524--528",
"other_ids": {
"DOI": [
"10.1145/3341162.3345603"
]
},
"num": null,
"urls": [],
"raw_text": "Agnes Tegen, Paul Davidsson, and Jan A. Persson. 2019. Towards a taxonomy of interactive continual and mul- timodal learning for the internet of things. In Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Sym- posium on Wearable Computers, UbiComp/ISWC '19 Adjunct, page 524-528, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "2020. A taxonomy of interactive online machine learning strategies",
"authors": [
{
"first": "Agnes",
"middle": [],
"last": "Tegen",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Davidsson",
"suffix": ""
},
{
"first": "Jan",
"middle": [
"A"
],
"last": "Persson",
"suffix": ""
}
],
"year": 2020,
"venue": "Machine Learning and Knowledge Discovery in Databases -European Conference, ECML PKDD 2020",
"volume": "12458",
"issue": "",
"pages": "137--153",
"other_ids": {
"DOI": [
"10.1007/978-3-030-67661-2_9"
]
},
"num": null,
"urls": [],
"raw_text": "Agnes Tegen, Paul Davidsson, and Jan A. Persson. 2020. A taxonomy of interactive online machine learning strategies. In Machine Learning and Knowl- edge Discovery in Databases -European Conference, ECML PKDD 2020, Ghent, Belgium, September 14- 18, 2020, Proceedings, Part II, volume 12458 of Lecture Notes in Computer Science, pages 137-153. Springer.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Learning subjective language",
"authors": [
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Bruce",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Bell",
"suffix": ""
},
{
"first": "Melanie",
"middle": [],
"last": "Martin",
"suffix": ""
}
],
"year": 2004,
"venue": "Computational Linguistics",
"volume": "30",
"issue": "3",
"pages": "277--308",
"other_ids": {
"DOI": [
"10.1162/0891201041850885"
]
},
"num": null,
"urls": [],
"raw_text": "Janyce Wiebe, Theresa Wilson, Rebecca Bruce, Matthew Bell, and Melanie Martin. 2004. Learn- ing subjective language. Computational Linguistics, 30(3):277-308.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Individual comparisons by ranking methods",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Wilcoxon",
"suffix": ""
}
],
"year": 1992,
"venue": "Breakthroughs in Statistics: Methodology and Distribution",
"volume": "",
"issue": "",
"pages": "196--202",
"other_ids": {
"DOI": [
"10.1007/978-1-4612-4380-9_16"
]
},
"num": null,
"urls": [],
"raw_text": "Frank Wilcoxon. 1992. Individual comparisons by rank- ing methods. In Samuel Kotz and Norman L. John- son, editors, Breakthroughs in Statistics: Methodol- ogy and Distribution, pages 196-202. Springer New York, New York, NY.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "Remi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-demos.6"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Machine teaching: An inverse problem to machine learning and an approach toward optimal education",
"authors": [
{
"first": "Xiaojin",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "29",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaojin Zhu. 2015. Machine teaching: An inverse prob- lem to machine learning and an approach toward opti- mal education. Proceedings of the AAAI Conference on Artificial Intelligence, 29(1).",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "An overview of machine teaching",
"authors": [
{
"first": "Xiaojin",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Adish",
"middle": [],
"last": "Singla",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "Zilles",
"suffix": ""
},
{
"first": "Anna",
"middle": [
"N"
],
"last": "Rafferty",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaojin Zhu, Adish Singla, Sandra Zilles, and Anna N. Rafferty. 2018. An overview of machine teaching. CoRR, abs/1801.05927.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "Comparison of interactive Active Learning (left) with Machine Teaching (right). Training instances are labeled by the Agent-User (in AL) or the Agent-Teacher (in MT).",
"num": null
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"text": "Class imbalance for the 14-class emotion data.",
"num": null
},
"FIGREF3": {
"uris": null,
"type_str": "figure",
"text": "Text-based emotion prediction with (a) AL query strategies or (b) MT teaching criteria. The all-in-one batch option (green star) signifies resource-inefficient offline batch training.",
"num": null
},
"FIGREF5": {
"uris": null,
"type_str": "figure",
"text": "Text-based emotion prediction when using AL or MT in fine-tuning with BERT.",
"num": null
}
}
}
}