{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:35:32.377480Z"
},
"title": "Putting Humans in the Natural Language Processing Loop: A Survey",
"authors": [
{
"first": "Zijie",
"middle": [
"J"
],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Dongjin",
"middle": [],
"last": "Choi",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Shenyu",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {},
"email": "shenyuxu@gatech.edu"
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "How can we design Natural Language Processing (NLP) systems that learn from human feedback? There is a growing body of research on Human-in-the-Loop (HITL) NLP frameworks that continuously integrate human feedback to improve the model itself. HITL NLP research is nascent but multifarious: it solves various NLP problems, collects diverse feedback from different people, and applies different methods to learn from human feedback. We present a survey of HITL NLP work from both the Machine Learning (ML) and Human-Computer Interaction (HCI) communities that highlights its short yet inspiring history, and we thoroughly summarize recent frameworks with a focus on their tasks, goals, human interactions, and feedback learning methods. Finally, we discuss future directions for integrating human feedback into the NLP development loop.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "How can we design Natural Language Processing (NLP) systems that learn from human feedback? There is a growing body of research on Human-in-the-Loop (HITL) NLP frameworks that continuously integrate human feedback to improve the model itself. HITL NLP research is nascent but multifarious: it solves various NLP problems, collects diverse feedback from different people, and applies different methods to learn from human feedback. We present a survey of HITL NLP work from both the Machine Learning (ML) and Human-Computer Interaction (HCI) communities that highlights its short yet inspiring history, and we thoroughly summarize recent frameworks with a focus on their tasks, goals, human interactions, and feedback learning methods. Finally, we discuss future directions for integrating human feedback into the NLP development loop.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Traditionally, Natural Language Processing (NLP) models are trained, fine-tuned, and tested on existing datasets by machine learning experts, and then deployed to solve real-life problems for their users. Model users can often give invaluable feedback that reveals design details overlooked by model developers, and provide data instances that are not represented in the training dataset (Kreutzer et al., 2020). However, the traditional linear NLP development pipeline is not designed to take advantage of human feedback. Advancing beyond the conventional workflow, there is a growing body of research on Human-in-the-Loop (HITL) NLP frameworks, sometimes called mixed-initiative NLP, where model developers continuously integrate human feedback into different steps of the model deployment workflow (Figure 1). This continuous feedback loop cultivates a human-AI partnership that enhances model performance and builds users' trust in the NLP system.",
"cite_spans": [
{
"start": 386,
"end": 409,
"text": "(Kreutzer et al., 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 792,
"end": 801,
"text": "(Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Just like traditional NLP frameworks, HITL NLP systems have a high-dimensional design space. For example, human feedback can come from end users (Li et al., 2017) or crowd workers (Wallace et al., 2019), and humans can intervene in models during training (Stiennon et al., 2020) or deployment (Hancock et al., 2019). Good HITL NLP systems need to clearly communicate to humans what the model needs, provide intuitive interfaces to collect feedback, and effectively learn from that feedback. Therefore, HITL NLP research spans not only NLP and Machine Learning (ML) but also Human-Computer Interaction (HCI). A meta-analysis of existing HITL NLP work that bridges these research disciplines is vital to help new researchers quickly become familiar with this promising topic and recognize future research directions. To fill this critical research gap, we provide a timely literature review of recent HITL NLP studies from both the NLP and HCI communities. This is the first survey on the HITL NLP topic. We make two main contributions: (1) we summarize recent studies of HITL NLP and position each work with respect to its task, goal, human interaction, and feedback learning method (Table 1); (2) we highlight important research directions and open problems that we distilled from the survey.",
"cite_spans": [
{
"start": 153,
"end": 170,
"text": "(Li et al., 2017)",
"ref_id": null
},
{
"start": 188,
"end": 210,
"text": "(Wallace et al., 2019)",
"ref_id": "BIBREF24"
},
{
"start": 260,
"end": 283,
"text": "(Stiennon et al., 2020)",
"ref_id": "BIBREF22"
},
{
"start": 298,
"end": 320,
"text": "(Hancock et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 1193,
"end": 1203,
"text": "(Table 1);",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we categorize surveyed HITL paradigms based on their corresponding tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human-in-the-Loop NLP Tasks",
"sec_num": "2"
},
{
"text": "Text classification is a classic NLP task that categorizes text into different groups. Many HITL frameworks have been developed for this problem; most of them first train a text classifier, then recruit humans to annotate data based on the current model's behavior, and continuously retrain the classifier on the enlarged dataset. For example, Godbole et al. (2004) develop a HITL paradigm where users can interactively edit text features and label new documents. Also, Settles (2011) integrates active learning into their framework: instead of arbitrarily presenting data for users to annotate, samples are selected in a way that maximizes the expected information gain. With active learning, labelers can annotate fewer data points to achieve the same model improvement as a framework using random sampling.",
"cite_spans": [
{
"start": 485,
"end": 499,
"text": "Settles (2011)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text Classification",
"sec_num": "2.1"
},
{
"text": "Besides classifying documents, recent research shows great potential for the HITL approach in enhancing the performance of existing parsing and entity linking models. Advancing traditional Combinatory Categorial Grammar (CCG) parsers, He et al. (2016) crowdsource the parsing decisions a trained parser is uncertain about to non-expert Mechanical Turk workers by asking them simple what-questions. Also, with more strategic sampling methods to select instances to present to humans, a smaller set of feedback can quickly improve entity linking model performance (Klie et al., 2020).",
"cite_spans": [
{
"start": 231,
"end": 247,
"text": "He et al. (2016)",
"ref_id": "BIBREF2"
},
{
"start": 548,
"end": 567,
"text": "(Klie et al., 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing and Entity Linking",
"sec_num": "2.2"
},
{
"text": "In addition to using the HITL approach to enhance the learning of low-level semantic relationships, researchers apply similar frameworks to topic modeling techniques that are used to analyze large document collections (Lee et al., 2017). For example, Hu et al. (2014)'s system allows users to refine a trained model by adding, removing, or changing the weights of words within each topic. Recent work also focuses on human-centered HITL topic modeling methods. Kim et al. (2019) develop an intuitive visualization system that allows end users to up-vote or down-vote specific documents to communicate their interests to the model. Smith et al. (2018) conduct user studies with non-experts and develop a responsive and predictable user interface that supports a broad range of topic modeling refinement operations. These examples show that HITL NLP systems can benefit from HCI design techniques.",
"cite_spans": [
{
"start": 205,
"end": 223,
"text": "(Lee et al., 2017)",
"ref_id": "BIBREF13"
},
{
"start": 239,
"end": 255,
"text": "Hu et al. (2014)",
"ref_id": "BIBREF3"
},
{
"start": 455,
"end": 472,
"text": "Kim et al. (2019)",
"ref_id": "BIBREF7"
},
{
"start": 619,
"end": 638,
"text": "Smith et al. (2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Modeling",
"sec_num": "2.3"
},
{
"text": "HITL approaches can also be used in text summarization and machine translation. For instance, Stiennon et al. (2020) collect human preferences on pairs of summaries generated by two models, then train a reward model to predict those preferences. This reward model is then used to train a summary-generation policy with reinforcement learning. Kreutzer et al. (2018) collect both explicit and implicit human feedback to improve a machine translation model, using the feedback with reinforcement learning. Experiments show that these models have higher accuracy and better generalization.",
"cite_spans": [
{
"start": 78,
"end": 100,
"text": "Stiennon et al. (2020)",
"ref_id": "BIBREF22"
},
{
"start": 330,
"end": 352,
"text": "Kreutzer et al. (2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization and Machine Translation",
"sec_num": "2.4"
},
{
"text": "Recently, many HITL frameworks have been developed for dialogue and Question Answering (QA) systems, where the AI agent holds conversations with users. We can group these systems into two categories: online feedback loops and offline feedback loops. With an online feedback loop, the system continuously uses human feedback to update the model. For example, Liu et al. (2018) collect dialogue corrections from users during deployment, and then use online reinforcement learning to improve the model. With an offline feedback loop, the model is updated after a large set of human feedback has been collected. For instance, Wallace et al. (2019) invite crowd workers to generate adversarial questions that can fool their QA system, and use these questions for adversarial training. Offline feedback loops can be more robust for dialogue systems, because user feedback can be misleading, so directly updating the model is risky (Kreutzer et al., 2020).",
"cite_spans": [
{
"start": 355,
"end": 372,
"text": "Liu et al. (2018)",
"ref_id": "BIBREF15"
},
{
"start": 605,
"end": 626,
"text": "Wallace et al. (2019)",
"ref_id": "BIBREF24"
},
{
"start": 907,
"end": 930,
"text": "(Kreutzer et al., 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue and Question Answering",
"sec_num": "2.5"
},
{
"text": "Among the surveyed papers, the most common motivation for using a HITL approach in NLP tasks is to improve model performance. For example, with a relatively small set of human feedback, HITL can significantly improve model accuracy (Smith et al., 2018) as well as model robustness and generalization (Stiennon et al., 2020). Besides model performance, HITL can also improve the interpretability and usability of NLP models. For instance, Wallace et al. (2019) guide humans to generate adversarial questions that fool the question answering model; these adversarial questions also serve as probes for researchers to study the underlying model behaviors. In Smith et al. (2018)'s topic modeling work, user studies show that users gain more trust and confidence through the HITL system.",
"cite_spans": [
{
"start": 238,
"end": 258,
"text": "(Smith et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 297,
"end": 320,
"text": "(Stiennon et al., 2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Human-in-the-Loop Goals",
"sec_num": "2.6"
},
{
"text": "This section discusses the mediums through which users interact with HITL systems and the different types of feedback that these systems collect.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human-machine Interaction",
"sec_num": "3"
},
{
"text": "A Graphical User Interface (GUI) allows users to interact with systems through graphical icons and visual indicators. Some HITL NLP systems let users directly label samples in the GUI (Godbole et al., 2004). The GUI also makes feature editing possible for end users who were not involved in developing the model in the first place (Simard et al., 2014). Some work even uses the GUI for users to rate training sentences in the text summarization task (Stiennon et al., 2020) and rank generated topics in the topic modeling task (Kim et al., 2019). One obvious advantage of the GUI is that it helps visualize NLP models, enhancing their interpretability. In addition, the GUI supports Windows, Icons, Menus, Pointer (WIMP) interactions, providing users with more precise control for refining the models.",
"cite_spans": [
{
"start": 218,
"end": 240,
"text": "(Godbole et al., 2004)",
"ref_id": "BIBREF0"
},
{
"start": 343,
"end": 364,
"text": "(Simard et al., 2014)",
"ref_id": "BIBREF20"
},
{
"start": 463,
"end": 486,
"text": "(Stiennon et al., 2020)",
"ref_id": "BIBREF22"
},
{
"start": 540,
"end": 558,
"text": "(Kim et al., 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Interaction Mediums",
"sec_num": "3.1"
},
{
"text": "A Natural Language Interface is an interface through which the user interacts with the computer in natural language. As this interface usually simulates having a conversation with a computer, it is mostly used to build dialogue systems (Hancock et al., 2019). The natural language interface not only supports explicit feedback (Liu et al., 2018), such as positive or negative responses, but also allows users to give implicit feedback with natural language sentences (Li et al., 2017). Compared to the GUI, the natural language interface is more intuitive to use: it mimics human conversation and thus requires no additional tutorial. In particular, it naturally fits dialogue systems.",
"cite_spans": [
{
"start": 253,
"end": 275,
"text": "(Hancock et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 362,
"end": 380,
"text": "(Liu et al., 2018)",
"ref_id": "BIBREF15"
},
{
"start": 502,
"end": 519,
"text": "(Li et al., 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Interaction Mediums",
"sec_num": "3.1"
},
{
"text": "Binary Feedback has two categories that are usually opposites of each other, such as \"like\" and \"dislike\". It can be collected through both the GUI and the natural language interface. GUIs can collect binary user feedback when users add or remove labels (Settles, 2011) and features (Godbole et al., 2004). The natural language interface can also support binary user feedback collection with simple, short natural language responses, such as \"agree\" and \"reject\" (Liu et al., 2018).",
"cite_spans": [
{
"start": 259,
"end": 274,
"text": "(Settles, 2011)",
"ref_id": "BIBREF19"
},
{
"start": 288,
"end": 310,
"text": "(Godbole et al., 2004)",
"ref_id": "BIBREF0"
},
{
"start": 467,
"end": 485,
"text": "(Liu et al., 2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "User Feedback Types",
"sec_num": "3.2"
},
{
"text": "Scaled Feedback has ordered categories and is usually in a numerical format, such as a 5-point rating scale. It can often only be collected through the GUI, as it is difficult to express accurate scaled feedback in natural language. Such feedback is collected in the GUI when users rate their preferences for training data or model results (Kreutzer et al., 2018) and adjust features on a numerical scale (Simard et al., 2014). Similar to binary user feedback, scaled user feedback can provide explicit signals for the system to update the models (e.g., adjusting the weight of one feature from 1 to 3 on a 5-point scale). In addition, scaled ratings of user preferences can also serve as implicit guidance for improving the model.",
"cite_spans": [
{
"start": 343,
"end": 366,
"text": "(Kreutzer et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 408,
"end": 429,
"text": "(Simard et al., 2014)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "User Feedback Types",
"sec_num": "3.2"
},
{
"text": "Natural Language Feedback, compared to binary and scaled user feedback, is better at representing users' intentions but is vague and hard for the machine to interpret. It can only be collected through the natural language interface. Users provide this type of feedback by directly inputting natural language sentences into the system (Hancock et al., 2019). By analyzing the input sentences, the system infers the user's intention and updates the model accordingly.",
"cite_spans": [
{
"start": 349,
"end": 371,
"text": "(Hancock et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "User Feedback Types",
"sec_num": "3.2"
},
{
"text": "Counterfactual Example Feedback, similar to natural language user feedback, is usually collected through the natural language interface. HITL NLP systems collect and analyze user-modified counterfactual text examples and retrain the model accordingly (Kaushik et al., 2019; Lawrence and Riezler, 2018).",
"cite_spans": [
{
"start": 260,
"end": 282,
"text": "(Kaushik et al., 2019;",
"ref_id": "BIBREF6"
},
{
"start": 283,
"end": 310,
"text": "Lawrence and Riezler, 2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "User Feedback Types",
"sec_num": "3.2"
},
{
"text": "As discussed in Section 2, active learning is a commonly used technique among our surveyed systems. Active learning allows the system to interactively query a user to label new data points with the desired outputs (Godbole et al., 2004). By strategically choosing samples to maximize information gain in fewer iterations, active learning not only reduces the human effort spent on data labeling but also improves the efficiency of the interface.",
"cite_spans": [
{
"start": 225,
"end": 247,
"text": "(Godbole et al., 2004)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Intelligent Interaction",
"sec_num": "3.3"
},
{
"text": "This section summarizes how existing HITL NLP systems utilize different types of feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "How to Use User Feedback",
"sec_num": "4"
},
{
"text": "One popular approach is to treat the feedback as new ground-truth data samples. We describe two types of techniques for using the augmented dataset: Offline Update retrains the NLP model from scratch after collecting human feedback, while Online Update trains the NLP model while collecting feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "4.1"
},
{
"text": "Offline Model Update is usually performed after a certain amount of human feedback has been collected. Offline updates do not need to be immediate, so they are suitable for noisy feedback and complex models that require extra processing and training time. For example, Simard et al. (2014) and Karmakharm et al. (2019) use human feedback as new class labels and span-level annotations, and retrain their models after collecting enough new data.",
"cite_spans": [
{
"start": 262,
"end": 282,
"text": "Simard et al. (2014)",
"ref_id": "BIBREF20"
},
{
"start": 287,
"end": 311,
"text": "Karmakharm et al. (2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "4.1"
},
{
"text": "Online Model Update is applied right after user feedback is given. This is effective for dialogue systems and conversational QA systems, where recent input is crucial to the machine's reasoning (Li et al., 2017). Incremental learning techniques are often used to learn from augmented data in real time (Kumar et al., 2019); they focus on making an incremental change to the current system that effectively uses the newly arrived feedback. Interactive topic modeling systems and feature engineering systems widely use this technique. For example, Kim et al. (2019) update the topic hierarchy by incrementally extending or shrinking the topic tree. Also, some frameworks use Latent Dirichlet Allocation (LDA) to adjust sampling parameters with collected feedback over incremental iterations (Smith et al., 2018).",
"cite_spans": [
{
"start": 189,
"end": 206,
"text": "(Li et al., 2017)",
"ref_id": null
},
{
"start": 291,
"end": 311,
"text": "(Kumar et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 537,
"end": 554,
"text": "Kim et al. (2019)",
"ref_id": "BIBREF7"
},
{
"start": 785,
"end": 805,
"text": "(Smith et al., 2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "4.1"
},
{
"text": "Collected numerical human feedback is usually used directly to adjust a model's objective function. For example, Li et al. (2017) collect binary feedback as rewards for the reinforcement learning of a dialogue agent. Similarly, Kreutzer et al. (2018) use a 5-point scale rating as the reward function for reinforcement and bandit learning in machine translation. Existing work has focused more on numerical feedback than on natural language feedback. Numerical feedback is easier to incorporate into models but provides less information than natural language. For future research, incorporating more types of feedback (e.g., speech, log data) will be an interesting direction to gain more useful insights from humans. With more complex feedback types, it is critical to design both quantitative and qualitative methods to evaluate the collected feedback, as it can be noisy just like any other data.",
"cite_spans": [
{
"start": 112,
"end": 128,
"text": "Li et al. (2017)",
"ref_id": null
},
{
"start": 223,
"end": 245,
"text": "Kreutzer et al. (2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Direct Manipulation",
"sec_num": "4.2"
},
{
"text": "In this paper, we summarize recent literature on HITL NLP from both the NLP and HCI communities, and position each work with respect to its task, goal, human interaction, and feedback learning method. The field of HITL NLP is still relatively nascent, and we see many different design choices. We find that improving model performance is the most popular goal among the surveyed HITL NLP frameworks. However, researchers have found that the HITL method also enhances NLP model interpretability (Jandot et al., 2016) and usability (Lee et al., 2017). We encourage future NLP researchers to explore HITL as a means to better understand their models and improve the experience of end users. One way is to design systems that take feedback from model engineers and end users, beyond crowd workers.",
"cite_spans": [
{
"start": 471,
"end": 492,
"text": "(Jandot et al., 2016)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Directions",
"sec_num": "5"
},
{
"text": "Most HITL NLP systems are designed by NLP researchers. As human feedback is the core of HITL design, we believe that this field will greatly benefit from a deeper involvement of the HCI community. For example, with a poorly designed human-machine interface, the collected human feedback is more likely to be inconsistent, incorrect, or even misleading. Therefore, better interface design and rigorous user studies to evaluate interfaces can greatly enhance the quality of feedback collection, which in turn improves downstream task performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Directions",
"sec_num": "5"
},
{
"text": "To shed light on HITL NLP research from an HCI perspective, Wallace et al. (2019) explore the effect of adding model interpretation cues in the HITL interface on the quality of collected feedback; Schoch et al. (2020) investigate the impact of the question framing imposed on humans; similarly, Rao and Daum\u00e9 III (2018) study how to ask good questions to which humans are more likely to give helpful feedback. In particular, we recommend that future researchers (1) consider integrating interactive visualization techniques into human-machine interfaces;",
"cite_spans": [
{
"start": 59,
"end": 80,
"text": "Wallace et al. (2019)",
"ref_id": "BIBREF24"
},
{
"start": 196,
"end": 216,
"text": "Schoch et al. (2020)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Directions",
"sec_num": "5"
},
{
"text": "(2) conduct user studies to evaluate the effectiveness of their HITL systems in addition to model performance; and (3) share collected human feedback data and user study protocols with the community.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Directions",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Document Classification Through Interactive Supervision of Document and Term Labels",
"authors": [
{
"first": "Shantanu",
"middle": [],
"last": "Godbole",
"suffix": ""
},
{
"first": "Abhay",
"middle": [],
"last": "Harpale",
"suffix": ""
},
{
"first": "Sunita",
"middle": [],
"last": "Sarawagi",
"suffix": ""
},
{
"first": "Soumen",
"middle": [],
"last": "Chakrabarti",
"suffix": ""
}
],
"year": 2004,
"venue": "Knowledge Discovery in Databases: PKDD 2004",
"volume": "3202",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shantanu Godbole, Abhay Harpale, Sunita Sarawagi, and Soumen Chakrabarti. 2004. Document Classi- fication Through Interactive Supervision of Docu- ment and Term Labels. In Knowledge Discovery in Databases: PKDD 2004, volume 3202, pages 185-",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning from Dialogue after Deployment: Feed Yourself",
"authors": [
{
"first": "Braden",
"middle": [],
"last": "Hancock",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Pierre-Emmanuel",
"middle": [],
"last": "Mazar\u00e9",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1901.05415"
]
},
"num": null,
"urls": [],
"raw_text": "Braden Hancock, Antoine Bordes, Pierre-Emmanuel Mazar\u00e9, and Jason Weston. 2019. Learning from Di- alogue after Deployment: Feed Yourself, Chatbot! arXiv:1901.05415 [cs, stat].",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Human-in-the-Loop Parsing",
"authors": [
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2337--2342",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luheng He, Julian Michael, Mike Lewis, and Luke Zettlemoyer. 2016. Human-in-the-Loop Parsing. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2337-2342, Austin, Texas. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Interactive topic modeling",
"authors": [
{
"first": "Yuening",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Brianna",
"middle": [],
"last": "Satinoff",
"suffix": ""
},
{
"first": "Alison",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2014,
"venue": "Machine Learning",
"volume": "95",
"issue": "",
"pages": "423--469",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuening Hu, Jordan Boyd-Graber, Brianna Satinoff, and Alison Smith. 2014. Interactive topic modeling. Machine Learning, 95(3):423-469.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Interactive Semantic Featuring for Text Classification",
"authors": [
{
"first": "Camille",
"middle": [],
"last": "Jandot",
"suffix": ""
},
{
"first": "Patrice",
"middle": [],
"last": "Simard",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Chickering",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Jina",
"middle": [],
"last": "Suh",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.07545"
]
},
"num": null,
"urls": [],
"raw_text": "Camille Jandot, Patrice Simard, Max Chickering, David Grangier, and Jina Suh. 2016. Interac- tive Semantic Featuring for Text Classification. arXiv:1606.07545 [cs, stat].",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Journalist-in-the-Loop: Continuous Learning as a Service for Rumour Analysis",
"authors": [
{
"first": "Twin",
"middle": [],
"last": "Karmakharm",
"suffix": ""
},
{
"first": "Nikolaos",
"middle": [],
"last": "Aletras",
"suffix": ""
},
{
"first": "Kalina",
"middle": [],
"last": "Bontcheva",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "115--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Twin Karmakharm, Nikolaos Aletras, and Kalina Bontcheva. 2019. Journalist-in-the-Loop: Contin- uous Learning as a Service for Rumour Analysis. In Proceedings of the 2019 Conference on Empiri- cal Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP): Sys- tem Demonstrations, pages 115-120, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning the difference that makes a difference with counterfactually-augmented data",
"authors": [
{
"first": "Divyansh",
"middle": [],
"last": "Kaushik",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Zachary",
"middle": [
"C"
],
"last": "Lipton",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.12434"
]
},
"num": null,
"urls": [],
"raw_text": "Divyansh Kaushik, Eduard Hovy, and Zachary C Lip- ton. 2019. Learning the difference that makes a difference with counterfactually-augmented data. arXiv preprint arXiv:1909.12434.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "TopicSifter: Interactive Search Space Reduction through Targeted Topic Modeling",
"authors": [
{
"first": "Hannah",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Dongjin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Drake",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Endert",
"suffix": ""
},
{
"first": "Haesun",
"middle": [],
"last": "Park",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 IEEE Conference on Visual Analytics Science and Technology (VAST)",
"volume": "",
"issue": "",
"pages": "35--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hannah Kim, Dongjin Choi, Barry Drake, Alex En- dert, and Haesun Park. 2019. TopicSifter: Interac- tive Search Space Reduction through Targeted Topic Modeling. In 2019 IEEE Conference on Visual Ana- lytics Science and Technology (VAST), pages 35-45, Vancouver, BC, Canada. IEEE.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "From Zero to Hero: Human-In-The-Loop Entity Linking in Low Resource Domains",
"authors": [
{
"first": "Jan-Christoph",
"middle": [],
"last": "Klie",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Eckart De Castilho",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6982--6993",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan-Christoph Klie, Richard Eckart de Castilho, and Iryna Gurevych. 2020. From Zero to Hero: Human- In-The-Loop Entity Linking in Low Resource Do- mains. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 6982-6993, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Can Neural Machine Translation be Improved with User Feedback?",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Kreutzer",
"suffix": ""
},
{
"first": "Shahram",
"middle": [],
"last": "Khadivi",
"suffix": ""
},
{
"first": "Evgeny",
"middle": [],
"last": "Matusov",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Riezler",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.05958"
]
},
"num": null,
"urls": [],
"raw_text": "Julia Kreutzer, Shahram Khadivi, Evgeny Matusov, and Stefan Riezler. 2018. Can Neural Machine Translation be Improved with User Feedback? arXiv:1804.05958 [cs, stat].",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Learning from Human Feedback: Challenges for Real-World Reinforcement Learning in NLP",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Kreutzer",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Riezler",
"suffix": ""
},
{
"first": "Carolin",
"middle": [],
"last": "Lawrence",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2011.02511"
]
},
"num": null,
"urls": [],
"raw_text": "Julia Kreutzer, Stefan Riezler, and Carolin Lawrence. 2020. Learning from Human Feedback: Challenges for Real-World Reinforcement Learning in NLP. arXiv:2011.02511 [cs].",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Why Didn't You Listen to Me? Comparing User Control of Human-in-the-Loop Topic Models",
"authors": [
{
"first": "Varun",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Alison",
"middle": [],
"last": "Smith-Renner",
"suffix": ""
},
{
"first": "Leah",
"middle": [],
"last": "Findlater",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Seppi",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6323--6330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Varun Kumar, Alison Smith-Renner, Leah Findlater, Kevin Seppi, and Jordan Boyd-Graber. 2019. Why Didn't You Listen to Me? Comparing User Control of Human-in-the-Loop Topic Models. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6323-6330, Florence, Italy. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Counterfactual learning from human proofreading feedback for semantic parsing",
"authors": [
{
"first": "Carolin",
"middle": [],
"last": "Lawrence",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Riezler",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1811.12239"
]
},
"num": null,
"urls": [],
"raw_text": "Carolin Lawrence and Stefan Riezler. 2018. Coun- terfactual learning from human proofreading feed- back for semantic parsing. arXiv preprint arXiv:1811.12239.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The human touch: How non-expert users perceive, interpret, and fix topic models",
"authors": [
{
"first": "Alison",
"middle": [],
"last": "Tak Yeon Lee",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Niklas",
"middle": [],
"last": "Seppi",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Elmqvist",
"suffix": ""
},
{
"first": "Leah",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Findlater",
"suffix": ""
}
],
"year": 2017,
"venue": "International Journal of Human-Computer Studies",
"volume": "105",
"issue": "",
"pages": "28--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tak Yeon Lee, Alison Smith, Kevin Seppi, Niklas Elmqvist, Jordan Boyd-Graber, and Leah Findlater. 2017. The human touch: How non-expert users per- ceive, interpret, and fix topic models. International Journal of Human-Computer Studies, 105:28-42.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Dialogue Learning with Human Teaching and Feedback in Endto-End Trainable Task-Oriented Dialogue Systems",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Gokhan",
"middle": [],
"last": "Tur",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-Tur",
"suffix": ""
},
{
"first": "Pararth",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Larry",
"middle": [],
"last": "Heck",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.06512"
]
},
"num": null,
"urls": [],
"raw_text": "Bing Liu, Gokhan Tur, Dilek Hakkani-Tur, Pararth Shah, and Larry Heck. 2018. Dialogue Learn- ing with Human Teaching and Feedback in End- to-End Trainable Task-Oriented Dialogue Systems. arXiv:1804.06512 [cs].",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Interactive Entity Linking Using Entity-Word Representations",
"authors": [
{
"first": "Pei-",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Chi",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Ee-Peng",
"middle": [],
"last": "Lim",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "1801--1804",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pei-Chi Lo and Ee-Peng Lim. 2020. Interactive En- tity Linking Using Entity-Word Representations. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Infor- mation Retrieval, pages 1801-1804, Virtual Event China. ACM.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Learning to Ask Good Questions: Ranking Clarification Questions using Neural Expected Value of Perfect Information",
"authors": [
{
"first": "Sudha",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.04655"
]
},
"num": null,
"urls": [],
"raw_text": "Sudha Rao and Hal Daum\u00e9 III. 2018. Learning to Ask Good Questions: Ranking Clarification Questions using Neural Expected Value of Perfect Information. arXiv:1805.04655 [cs].",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "This is a Problem, Don't You Agree?\" Framing and Bias in Human Evaluation for Natural Language Generation",
"authors": [
{
"first": "Stephanie",
"middle": [],
"last": "Schoch",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephanie Schoch, Diyi Yang, and Yangfeng Ji. 2020. This is a Problem, Don't You Agree?\" Framing and Bias in Human Evaluation for Natural Language Generation.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Closing the loop: Fast, interactive semi-supervised annotation with queries on features and instances",
"authors": [
{
"first": "Burr",
"middle": [],
"last": "Settles",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1467--1478",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burr Settles. 2011. Closing the loop: Fast, interactive semi-supervised annotation with queries on features and instances. In Proceedings of the 2011 Confer- ence on Empirical Methods in Natural Language Processing, pages 1467-1478, Edinburgh, Scotland, UK. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "ICE: enabling non-experts to build models interactively for large-scale lopsided problems",
"authors": [
{
"first": "Patrice",
"middle": [
"Y"
],
"last": "Simard",
"suffix": ""
},
{
"first": "David",
"middle": [
"Maxwell"
],
"last": "Chickering",
"suffix": ""
},
{
"first": "Aparna",
"middle": [],
"last": "Lakshmiratan",
"suffix": ""
},
{
"first": "Denis",
"middle": [
"Xavier"
],
"last": "Charles",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Carlos Garcia Jurado",
"middle": [],
"last": "Suarez",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Saleema",
"middle": [],
"last": "Amershi",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Verwey",
"suffix": ""
},
{
"first": "Jina",
"middle": [],
"last": "Suh",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrice Y. Simard, David Maxwell Chickering, Aparna Lakshmiratan, Denis Xavier Charles, L\u00e9on Bot- tou, Carlos Garcia Jurado Suarez, David Grang- ier, Saleema Amershi, Johan Verwey, and Jina Suh. 2014. ICE: enabling non-experts to build models in- teractively for large-scale lopsided problems. CoRR, abs/1409.4814.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Closing the Loop: User-Centered Design and Evaluation of a Human-in-the-Loop Topic Modeling System",
"authors": [
{
"first": "Alison",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Varun",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Seppi",
"suffix": ""
},
{
"first": "Leah",
"middle": [],
"last": "Findlater",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Human Information Interaction&Retrieval -IUI 18",
"volume": "",
"issue": "",
"pages": "293--304",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alison Smith, Varun Kumar, Jordan Boyd-Graber, Kevin Seppi, and Leah Findlater. 2018. Closing the Loop: User-Centered Design and Evaluation of a Human-in-the-Loop Topic Modeling System. In Proceedings of the 2018 Conference on Human Information Interaction&Retrieval -IUI 18, pages 293-304, Tokyo, Japan. ACM Press.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Learning to summarize from human feedback",
"authors": [
{
"first": "Nisan",
"middle": [],
"last": "Stiennon",
"suffix": ""
},
{
"first": "Long",
"middle": [],
"last": "Ouyang",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"M"
],
"last": "Ziegler",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Chelsea",
"middle": [],
"last": "Voss",
"suffix": ""
},
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Christiano",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.01325"
]
},
"num": null,
"urls": [],
"raw_text": "Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M. Ziegler, Ryan Lowe, Chelsea Voss, Alec Rad- ford, Dario Amodei, and Paul Christiano. 2020. Learning to summarize from human feedback. arXiv:2009.01325 [cs].",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Interactive nlp in clinical care: Identifying incidental findings in radiology reports",
"authors": [
{
"first": "Gaurav",
"middle": [],
"last": "Trivedi",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Esmaeel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dadashzadeh",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Robert",
"suffix": ""
},
{
"first": "Wendy",
"middle": [
"W"
],
"last": "Handzel",
"suffix": ""
},
{
"first": "Shyam",
"middle": [],
"last": "Chapman",
"suffix": ""
},
{
"first": "Harry",
"middle": [],
"last": "Visweswaran",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hochheiser",
"suffix": ""
}
],
"year": 2019,
"venue": "Applied clinical informatics",
"volume": "10",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gaurav Trivedi, Esmaeel R Dadashzadeh, Robert M Handzel, Wendy W Chapman, Shyam Visweswaran, and Harry Hochheiser. 2019. Interactive nlp in clini- cal care: Identifying incidental findings in radiology reports. Applied clinical informatics, 10(4):655.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Trick Me If You Can: Human-in-the-Loop Generation of Adversarial Examples for Question Answering",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Wallace",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Rodriguez",
"suffix": ""
},
{
"first": "Shi",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Ikuya",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "387--401",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Wallace, Pedro Rodriguez, Shi Feng, Ikuya Ya- mada, and Jordan Boyd-Graber. 2019. Trick Me If You Can: Human-in-the-Loop Generation of Adver- sarial Examples for Question Answering. Transac- tions of the Association for Computational Linguis- tics, 7:387-401.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Collaboration between humans and models under a human-in-the-loop Natural Language Processing paradigm. Humans provide various types of feedback in different stages of the workflow to improve the model's performance, interpretability, and usability.",
"num": null
},
"TABREF1": {
"text": "",
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null
}
}
}
}