| Wiki ID | Label | Description |
| P6 | head of government | head of the executive power of this town, city, municipality, state, country, or other governmental body |
| P17 | country | sovereign state of this item (not to be used for human beings) |
| P19 | place of birth | most specific known (e.g. city instead of country, or hospital instead of city) birth location of a person, animal or fictional character |
| P26 | spouse | the subject has the object as their spouse (husband, wife, partner, etc.). Use "unmarried partner" (P451) for non-married companions |
| P27 | country of citizenship | the object is a country that recognizes the subject as its citizen |
| P31 | instance of | that class of which this subject is a particular example and member |
| P35 | head of state | official with the highest formal authority in a country/state |
| P39 | position held | subject currently or formerly holds the object position or public office |
| P40 | child | subject has object as child. Do not use for stepchildren |
| P47 | shares border with | countries or administrative subdivisions, of equal level, that this item borders, either by land or water. A single common point is enough. |
| P54 | member of sports team | sports teams or clubs that the subject represents or represented |
| P69 | educated at | educational institution attended by subject |
| P81 | connecting line | railway line(s) subject is directly connected to |
| P97 | noble title | titles held by the person |
| P102 | member of political party | the political party of which a person is or has been a member or otherwise affiliated |
| P106 | occupation | occupation of a person; see also "field of work" (Property:P101), "position held" (Property:P39) |
| P108 | employer | person or organization for which the subject works or worked |
| P115 | home venue | home stadium or venue of a sports team or applicable performing arts organization |
| P118 | league | league in which team or player plays or has played |
| P127 | owned by | owner of the subject |
| P131 | located in the administrative territorial entity | the item is located on the territory of the following administrative entity. |
| P137 | operator | person, profession, or organization that operates the equipment, facility, or service |
| P156 | followed by | immediately following item in a series of which the subject is a part |
| P159 | headquarters location | city, where an organization's headquarters is or has been situated. Use P276 qualifier for specific building |
| P161 | cast member | actor in the subject production |
| P166 | award received | award or recognition received by a person, organisation or creative work |
| P175 | performer | actor, musician, band or other performer associated with this role or musical work |
| P176 | manufacturer | manufacturer or producer of this product |
| P179 | part of the series | series which contains the subject |
| P194 | legislative body | legislative body governing this entity; political institution with elected representatives, such as a parliament/legislature or council |
| P197 | adjacent station | the stations next to this station, sharing the same line(s) |
| P241 | military branch | branch to which this military unit, award, office, or person belongs, e.g. Royal Navy |
| P276 | location | location of the object, structure or event. In the case of an administrative entity as containing item use P131. |
| P279 | subclass of | next higher class or type; all instances of these items are instances of those items; this item is a class (subset) of that item. |
| P361 | part of | object of which the subject is a part |
| P414 | stock exchange | exchange on which this company is traded |
| P449 | original broadcaster | network(s) or service(s) that originally broadcasted a radio or television program |
| P463 | member of | organization, club or musical group to which the subject belongs. Do not use for membership in ethnic or social groups |
| P466 | occupant | person or organization occupying property |
| P488 | chairperson | presiding member of an organization, group or body |
| P551 | residence | the place where the person is, or has been, resident |
| P641 | sport | sport that the subject participates or participated in or is associated with |
| P669 | located on street | street, road, or square, where the item is located. |
| P710 | participant | person, group of people or organization (object) that actively takes/took part in an event or process (subject). |
| P725 | voice actor | performer of a spoken role in a creative work such as animation, video game, radio drama, or dubbing over |
| P749 | parent organization | parent organization of an organization, opposite of subsidiaries (P355) |
| P793 | significant event | significant or notable events associated with the subject |
| P800 | notable work | notable scientific, artistic or literary work, or other work of significance among subject's works |
| P1037 | director / manager | person who manages any kind of group |
| P1327 | partner in business or sport | professional collaborator |
| P1346 | winner | winner of a competition or similar event, not to be used for awards |
| P1365 | replaces | person, state or item replaced. Use "structure replaces" (P1398) for structures. |
| P1376 | capital of | country, state, department, canton or other administrative division of which the municipality is the governmental seat |
| P1411 | nominated for | award nomination received by a person, organisation or creative work (inspired from "award received" (Property:P166)) |
| P1441 | present in work | this (fictional or fictionalized) entity or person appears in that work as part of the narration |
| P1535 | used by | item or concept that makes use of the subject (use sub-properties when appropriate) |
| P1923 | participating team | like 'Participant' (P710) but for teams. For an event like a cycle race or a football match you can use this property to list the teams |
| P3450 | sports season of league or competition | property that shows the competition of which the item is a season. Use P5138 for "season of club or team". |
| P3602 | candidacy in election | election where the subject is a candidate |
| P3701 | incarnation of | incarnation of another religious or supernatural being |
| P5800 | narrative role | narrative role of this character (should be used as a qualifier with P674 or restricted to a certain work using P642) |
| P6087 | coach of sports team | sports club or team for which this person is or was on-field manager or coach |

Table 9: List of relation labels in HyperRED.

| Wiki ID | Label | Description |
| P17 | country | sovereign state of this item (not to be used for human beings) |
| P25 | mother | female parent of the subject. For stepmother, use "stepparent" (P3448) |
| P31 | instance of | that class of which this subject is a particular example and member |
| P39 | position held | subject currently or formerly holds the object position or public office |
| P81 | connecting line | railway line(s) subject is directly connected to |
| P102 | member of political party | the political party of which a person is or has been a member or otherwise affiliated |
| P131 | located in the administrative territorial entity | the item is located on the territory of the following administrative entity. |
| P155 | follows | immediately prior item in a series of which the subject is a part, preferably use as qualifier of P179 |
| P175 | performer | actor, musician, band or other performer associated with this role or musical work |
| P197 | adjacent station | the stations next to this station, sharing the same line(s) |
| P249 | ticker symbol | identifier for a publicly traded share of a particular stock on a particular stock market or that of a cryptocurrency |
| P276 | location | location of the object, structure or event. In the case of an administrative entity as containing item use P131. |
| P413 | position played on team / speciality | position or specialism of a player on a team |
| P453 | character role | specific role played or filled by subject – use only as qualifier of "cast member" (P161), "voice actor" (P725) |
| P512 | academic degree | academic degree that the person holds |
| P518 | applies to part | part, aspect, or form of the item to which the claim applies |
| P527 | has part | part of this subject; inverse property of "part of" (P361). See also "has parts of the class" (P2670). |
| P577 | publication date | date or point in time when a work was first published or released |
| P580 | start time | time an event starts, an item begins to exist, or a statement becomes valid |
| P582 | end time | time an item ceases to exist or a statement stops being valid |
| P585 | point in time | time and date something took place, existed or a statement was true |
| P642 | of | qualifier stating that a statement applies within the scope of a particular item |
| P670 | street number | number in the street address. To be used as a qualifier of Property:P669 "located on street" |
| P708 | diocese | administrative division of the church to which the element belongs |
| P768 | electoral district | electoral district this person is representing, or of the office that is being contested. |
| P805 | statement is subject of | (qualifying) item that describes the relation identified in this statement |
| P812 | academic major | major someone studied at college/university |
| P1114 | quantity | number of instances of this subject |
| P1129 | national team appearances | total number of games officially played by a sportsman for national team |
| P1310 | statement disputed by | entity that disputes a given statement |
| P1346 | winner | winner of a competition or similar event, not to be used for awards |
| P1350 | number of matches played/races/starts | matches or games a player or a team played during an event. |
| P1352 | ranking | subject's numbered position within a competition or group of performers |
| P1365 | replaces | person, state or item replaced. Use "structure replaces" (P1398) for structures. |
| P1416 | affiliation | organization that a person or organization is affiliated with (not necessarily member of or employed by) |
| P1545 | series ordinal | position of an item in its parent series (most frequently a 1-based index), generally to be used as a qualifier |
| P1686 | for work | qualifier of award received (P166) to specify the work that an award was given to the creator for |
| P1706 | together with | qualifier to specify the item that this property is shared with |
| P2453 | nominee | qualifier used with «nominated for» to specify which person or organization was nominated |
| P2868 | subject has role | role/generic identity of the item ("subject"), also in the context of a statement. |
| P3831 | object has role | (qualifier) role or generic identity of the value of a statement ("object") in the context of that statement |
| P3983 | sports league level | the level of the sport league in the sport league system |
| P5051 | towards | qualifier for "adjacent station" (P197) to indicate the terminal station(s) of a transportation line or service in that direction |

Table 10: List of qualifier labels in HyperRED.
# ADDMU: Detection of Far-Boundary Adversarial Examples with Data and Model Uncertainty Estimation

Fan Yin

University of California, Los Angeles
fanyin20@cs.ucla.edu

Yao Li

University of North Carolina, Chapel Hill
yaoli@email.unc.edu

Cho-Jui Hsieh

University of California, Los Angeles
chohsieh@cs.ucla.edu

Kai-Wei Chang

University of California, Los Angeles
kwchang@cs.ucla.edu

# Abstract

Adversarial Examples Detection (AED) is a crucial defense technique against adversarial attacks and has drawn increasing attention from the Natural Language Processing (NLP) community. Despite the surge of new AED methods, our studies show that existing methods heavily rely on a shortcut to achieve good performance. In other words, current search-based adversarial attacks in NLP stop once model predictions change, and thus most adversarial examples generated by those attacks are located near model decision boundaries. To bypass this shortcut and fairly evaluate AED methods, we propose to test AED methods with Far-Boundary (FB) adversarial examples. Existing methods perform worse than random guessing in this scenario. To overcome this limitation, we propose a new technique, ADDMU, adversary detection with data and model uncertainty, which combines two types of uncertainty estimation for both regular and FB adversarial example detection. Our new method outperforms previous methods by 3.6 and 6.0 AUC points under each scenario. Finally, our analysis shows that the two types of uncertainty provided by ADDMU can be leveraged to characterize adversarial examples and identify the ones that contribute most to the model's robustness in adversarial training.

# 1 Introduction

Deep neural networks (DNNs) have achieved remarkable performance in a wide variety of NLP tasks. However, it has been shown that DNNs can be vulnerable to adversarial examples (Jia and Liang, 2017; Alzantot et al., 2018; Jin et al., 2020), i.e., perturbed examples that flip model predictions but remain imperceptible to humans, raising serious security concerns about NLP models.

To improve the robustness of NLP models, different kinds of techniques to defend against adversarial examples have been proposed (Li et al., 2021b). In this paper, we study AED, which aims to add a detection module that identifies and rejects malicious inputs based on certain characteristics. Different from adversarial training methods (Madry et al., 2018a; Jia et al., 2019), which require re-training the model with additional data or regularization, AED operates at test time and can be directly integrated with any existing model.

Despite being well explored in the vision domain (Feinman et al., 2017; Raghuram et al., 2021), AED has started to receive attention in NLP only recently. Many works conduct detection based on certain statistics (Zhou et al., 2019; Mozes et al., 2021; Yoo et al., 2022; Xie et al., 2022). Specifically, Yoo et al. (2022) propose a benchmark for AED methods and a competitive baseline based on robust density estimation. However, by studying examples in the benchmark, we find that the success of some AED methods relies heavily on a shortcut left by adversarial attacks: most adversarial examples are located near model decision boundaries, i.e., they have a small probability discrepancy between the predicted class and the second-largest class. This is because, when creating adversarial data, the search stops once the model prediction changes. We illustrate this finding in Section 2.2.

To evaluate detection methods accurately, we propose to test AED methods on both regular adversarial examples and Far-Boundary (FB) adversarial examples, which are created by continuing to search for better adversarial examples until a threshold on the probability discrepancy is met. Results show that existing AED methods perform worse than random guessing on FB adversarial examples. Yoo et al. (2022) recognize this limitation, but we find the phenomenon is more severe than reported in their work. Thus, an AED method that works for FB attacks is needed.
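
The near-boundary criterion above is simply the gap between the two largest class probabilities. As a minimal sketch (the function name and example numbers are illustrative, not values from the paper):

```python
import numpy as np

def top2_margin(probs):
    """Gap between the largest and second-largest class probabilities.

    A small margin means the example sits near the model's decision
    boundary; a far-boundary (FB) adversarial example keeps this margin
    above some chosen threshold.
    """
    p = np.sort(np.asarray(probs, dtype=float))[::-1]
    return float(p[0] - p[1])

# A regular attack stops as soon as the label flips, so the resulting
# adversarial example usually lands near the boundary:
print(top2_margin([0.52, 0.48]))      # small gap -> near-boundary
# An FB attack keeps searching until the gap exceeds a threshold:
print(top2_margin([0.95, 0.03, 0.02]))  # large gap -> far-boundary
```

A detector that implicitly thresholds this margin separates regular adversarial examples well but, by construction, fails on FB ones.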

We propose ADDMU, an uncertainty-estimation-based AED method. The key intuition is that adversarial examples lie off the manifold of the training data, and models are typically uncertain about their predictions on them. Thus, although the prediction probability is no longer a good uncertainty measurement when adversarial examples are far from the decision boundary, other statistical clues still reveal the uncertainty in predictions and can identify adversarial data. In this paper, we introduce two of them: data uncertainty and model uncertainty. Data uncertainty is defined as the uncertainty of model predictions over neighbors of the input. Model uncertainty is defined as the prediction variance on the original input when applying Monte Carlo Dropout (MCD) (Gal and Ghahramani, 2016) to the target model at inference time. Previous work has shown that models trained with dropout regularization (Srivastava et al., 2014) approximate inference in Bayesian neural networks under MCD, where model uncertainty is easy to obtain (Gal and Ghahramani, 2016; Smith and Gal, 2018). Given the two uncertainty statistics, we apply p-value normalization (Raghuram et al., 2021) and combine them with Fisher's method (Fisher, 1992) to produce a stronger test statistic for AED. To the best of our knowledge, we are the first to estimate the uncertainty of Transformer-based models (Shelmanov et al., 2021) for AED.
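
As a rough, stdlib-only sketch of the model-uncertainty statistic and the Fisher combination (all names are illustrative; ADDMU's exact p-value normalization follows Raghuram et al. (2021) and is not reproduced here):

```python
import math

def model_uncertainty(mc_probs):
    """Variance of the predicted-class probability across T stochastic
    forward passes with dropout left on (Monte Carlo Dropout)."""
    T, C = len(mc_probs), len(mc_probs[0])
    avg = [sum(run[c] for run in mc_probs) / T for c in range(C)]
    pred = max(range(C), key=avg.__getitem__)   # MC-averaged prediction
    ps = [run[pred] for run in mc_probs]
    mean = sum(ps) / T
    return sum((p - mean) ** 2 for p in ps) / T

def fisher_combine(p_values):
    """Fisher's method: X = -2 * sum(log p_i) ~ chi^2 with 2k dof under H0.
    For even degrees of freedom the chi^2 survival function is closed-form."""
    k = len(p_values)
    half = -sum(math.log(p) for p in p_values)  # X / 2
    return math.exp(-half) * sum(half ** i / math.factorial(i) for i in range(k))

# Combine p-values from the data- and model-uncertainty tests (placeholders);
# a small combined p-value flags the input as adversarial:
print(fisher_combine([0.03, 0.08]))
```

The chi-square combination rewards agreement: two individually moderate p-values yield a combined p-value smaller than either, which is why fusing the two uncertainty tests can beat each test alone.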

The advantages of our proposed AED method include: 1) it operates only on the output level of the model; 2) it requires little to no modification to adapt to different architectures; 3) it provides a unified way to combine different types of uncertainty. Experimental results on four datasets, four attacks, and two models demonstrate that our method outperforms existing methods by 3.6 and 6.0 AUC points on the regular and FB cases, respectively. We also show that the two uncertainty statistics can be used to characterize adversarial data and to select useful data for another defense technique, adversarial data augmentation (ADA).

The code for this paper is available at https://github.com/uclanlp/AdvExDetection-ADDMU

# 2 A Diagnostic Study on AED Methods

In this section, we first describe the formulation of adversarial examples and AED. Then, we show that current AED methods perform well mainly at detecting adversarial examples near the decision boundary, but are confused by FB adversarial examples.

# 2.1 Formulation

Adversarial Examples. Let $f: \mathcal{X} \to \mathcal{Y}$ be an NLP model, $x \in \mathcal{X}$ a textual input, $y \in \mathcal{Y}$ the class predicted among the candidate classes, and $\mathcal{C}_i: \mathcal{X} \times \mathcal{X} \to \{0,1\}$, $i = 1,2,\dots,n$, a set of Boolean indicator functions encoding constraints. An (untargeted) adversarial example $x^{*} \in \mathcal{X}$ satisfies:

$$
f(x^{*}) \neq f(x), \quad \mathcal{C}_{i}(x, x^{*}) = 1, \; i = 1, 2, \dots, n.
$$

Constraints typically enforce grammaticality or semantic similarity between the original and adversarial data. For example, Jin et al. (2020) conduct part-of-speech checks and use the Universal Sentence Encoder (Cer et al., 2018) to ensure semantic similarity between the two sentences.
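
In code, the definition above amounts to a label flip plus constraint checks; `toy_model` and `same_length` below are invented stand-ins for a real classifier and real constraints:

```python
def is_adversarial(model, x, x_adv, constraints):
    """Untargeted adversarial example: the prediction flips while every
    constraint C_i (grammaticality, semantic similarity, ...) holds."""
    return model(x_adv) != model(x) and all(c(x, x_adv) for c in constraints)

# Toy stand-ins: a 'classifier' keyed on one word, and a length constraint.
toy_model = lambda s: "neg" if "terrible" in s else "pos"
same_length = lambda a, b: abs(len(a.split()) - len(b.split())) <= 1

print(is_adversarial(toy_model, "a great film", "a terrible film", [same_length]))  # True
print(is_adversarial(toy_model, "a great film", "a good film", [same_length]))      # False: label unchanged
```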

Adversarial Examples Detection (AED). The task of AED is to distinguish adversarial examples from natural ones based on certain characteristics of adversarial data. We assume access to 1) the victim model $f$, trained and tested on clean datasets $D_{train}$ and $D_{test}$; 2) an evaluation set $D_{eval}$; 3) an auxiliary dataset $D_{aux}$ containing only clean data. $D_{eval}$ contains an equal number of adversarial examples $D_{eval - adv}$ and natural examples $D_{eval - nat}$. $D_{eval - nat}$ is randomly sampled from $D_{test}$. $D_{eval - adv}$ is generated by attacking a set of samples from $D_{test}$ disjoint from $D_{eval - nat}$. See Scenario 1 in Yoo et al. (2022) for details. We use a subset of $D_{train}$ as $D_{aux}$. We adopt an unsupervised setting, i.e., the AED method is not trained on any dataset that contains adversarial examples.

# 2.2 Diagnose AED Methods

We define examples near model decision boundaries as those whose output probabilities for the predicted class and the second-largest class are close. Regular iterative adversarial attacks stop once the prediction changes. Therefore, we suspect that regular attacks mostly generate adversarial examples near the boundaries, and existing AED methods could rely on this property to detect adversarial examples.

Figure 1 verifies this for the state-of-the-art unsupervised AED method in NLP (Yoo et al., 2022), denoted RDE. Similar trends are observed for another baseline. The X-axis shows two attack methods: TextFooler (Jin et al., 2020) and Pruthi (Pruthi et al., 2019). The Y-axis represents the probability difference between the predicted class and the second-largest class.

Figure 1: The probability difference between the predicted class and the second-largest class on natural examples, and on adversarial examples that the detector failed on, succeeded on, and in total. The X-axis is the attack. The Y-axis is the difference. Correctly detected adversarial examples have a relatively small probability difference.

+
+
+| Attributes | Models | Generation Results |
| Neg. World NonTox. | PPLM | Furthermore, the government will be allowed to carry out airstrikes from inside Afghanistan even after the end of Nato's withdrawal, according to senior defence officials. The UK is also planning to use air strikes in Afghanistan and Iraq from the skies over its bases. |
| GeDi | Furthermore, the Iraqi authorities plan to seize petrol from Shia vehicles and trucks as well. Iraqi Foreign Minister Shaida al-Abadi said Iraq was ready for such actions if US President Donald Trump issued a military order ordering it. |
| MU COCO | Furthermore, the new law will make it sale for any individual or entity to regulated keep give away Mumbai firearm to anyone a background check, including those who are prohibited from doing so under federal law. |
| Mix& Match | Furthermore, the uk will be reluctant to carry out airstrikes anywhere in europe until near the end of nato's mandate, according to some nato officials. the uk is likewise reluctant to drop warheads against iran and iraq from the air over british territory anywhere; |
| Prefix concatenate | Furthermore, the first and the first of his world. The world.S. The U.S. The U. The world's country and a new-year. |
| Prefix semi | Furthermore, a new survey conducted by a new survey of the Middle Eastern population in the country was revealed to be a very close match for the official record of the National Socialist Party (NTP) in the country. |
| Ours | Furthermore, the movie's main focus is getting rid of Robert Kennedy. This movie has no plot, no action and no even remotely decent characterizations. It's simply a glorified version of what happened to George Bush in 2004. |
| Neg. Sports NonTox. | PPLM | This essay discusses the role of private security forces in Libya. The military's role in this crisis can be divided into two phases: 1) The first phase involved the transfer and transfer of the control of the situation to a military body. |
| GeDi | This essay discusses last season who was demoted away from the league and how his decline in playing time impacted the team as a whole. With detailed observations, analysis, stories provided by some of these players including Orlando City fullback Ben Sweat and Toronto. |
| MU COCO | This essay discusses how webstore can Consent a more humane society and how Mold willroximately the way webp topics our own Intake and our relationship with them. enoughWhat is a body)? awa A body is the transsexual-porn Franch structureglers glucobos |
| Mix& Match | This essay from an official, who was investigating two suspected drug dealers,"failed to find any probable cause." he stated that "confusion reigned" as the two men "struggled for some time" while evans "continued throwing punches." |
| Prefix concatenate | This essay discusses the fact of the original Germanic tradition of a man's attempt to make a name on the English football team and the fact of the English football league. |
| Prefix semi | This essay discusses the fact that the NHL is not a national sport. It also provides a new perspective to the fact that the NHL is not a major league. |
| Ours | This essay discusses how the Miami Heat lost to the Atlanta Hawks in a seven-minute overtime last night, and how they should never again be able to make it with their team mates. |
| Neg. Business NonTox. | PPLM | Foundational to this is the need for a national banking system for the purpose of financing the banking system. The Federal Reserve has already taken over this task by creating and controlling the money supply in the form of the Federal Reserve bank, which is now owned and operated by the Federal. |
| GeDi | Foundational to this is the New York Int'l Fedal and Foreign Market Team. This practice includes facilitating contacts between two levels of financial institutions as necessary for a successful settlement of an equity investment transaction. |
| MU COCO | Foundational to this is a Regulatory SPONSORED of community and debtor. not have a shared history of Recession bourgeois The struggles of Buffpeople, SPONSORED the struggles of all of our individuals bunk are interrelated,_ we are all part ze the same struggle_. |
| Mix& Match | Foundational to this is the Woodward foundation is our belief that low - income housing has a devastating effect on mental health. our research indicates that depressive symptoms are much greater among the economically weaker. we believe that a depressed patient's life expectancy will decline dramatically as a result. |
| Prefix concatenate | Foundational to this is the company .n\n |
| Prefix semi | Foundational to this is the company's ability to offer a full range of services to customers. The company has been able to offer a good product and service for years. |
| Ours | Foundational to this is a $1 million cut from the company's annual budget, which means it could be difficult to cash out of its debt. |
| Neg. Sci/Tech NonTox. | PPLM | The last time I wrote a blog post about the future of Linux and how to prepare for it I wrote: I am very pleased to announce that the Linux kernel has been updated to 1.8.2! This is good news for the Linux. |
| GeDi | The last time we examined this topic we estimated that 5% of the age-appropriate weight was unnecessary due to red fatherful neglect; only 14% of adopted healthy, fit barrels were considered but maintained. Our data predict that at least 44lb. |
| Neg.
+Sci/Tech
+NonTox. | MU
+COCO | The last time I checked VMware would's not like the world's largest and fastest ship was a single- Anonlled organism that could fly, and the last time I checked, it's not like the world's largest and fastest ship the a single- |
| Mix&
| Mix&Match | The last time the lake was drained, however, to make way for a golf course at a nearby site, and there were reports on a nearby island that a large-scale groundwater cleaning program was beginning. the lake was drained, however, to make way elsewhere; |
| Prefix concatenate | The last time you are a fan, you have to be aware of the bad news about the Internet. The problem is that this isn't even remotely acceptable to the author. |
| Prefix semi | The last time I see this film I have not seen it in my life. I have watched it in a very limited number of days and I have been very disappointed. The acting is terrible and the acting is bad. |
| Ours | The last time I saw this film was in the theater. It was terribly disappointing. There is no plot or suspense whatsoever, nor any action whatsoever. The only thing that can be attributed to this movie is the lack of a sound system. |
| Pos. World NonTox. | PPLM | The connection between obesity and autism has been identified for the first time using a unique antibody screening test, according to researchers with the University of Texas MD Anderson Cancer Center. They found that a protein called CD34 has a powerful impact on autism. |
| GeDi | The connection between Greece and Russia reached new heights through cooperation on a number of initiatives States Parties undertook joint action to crack down on corruption abroad. For instance, the Russian Federation launched an all-cash inquiry aimed at identifying persons. |
| MU COCO | The connection between staking two is not a loneliness of mere coincidence. The connection snowball dividing matter of history, and history BW a history arresting its own, of which hero are all the victims analyse nogly I don't believe in coincidence Alger said the |
| Mix&Match | The connection is an illustration of the moon, from the book 'the lord of light and darkness', by william shakespeare ( photo courtesy of william shakespeare ). an illustration of the sun, from the book 'the lord of light and darkness', by william shakespeare ( photo courtesy) |
| Prefix concatenate | The connection of the United States's the world of the world's first-year of a new-run of the world in the world.S. |
| Prefix semi | The connection between the world of the American National Rifle Association and the United States is a fascinating, fascinating, and hilarious tale. It has been an honor to see the film on the National Library shelves, and I am proud to see the film. |
| Ours | The connection between John Lennon and the United States is as strong as ever. The Los Angeles Times reports that Lee Sternberg's performance of his song "Lenny Luerer" won a round of applause in the U.S. Senate. |
| Pos. Sports NonTox. | PPLM | More importantly, the first day of the 2017 NFL Draft is always exciting to watch with fans, because the league is going to get a lot of great talent on defense in the coming weeks. The biggest draft prospect to come out this year, Alabama DT Vic Beasley |
| GeDi | More importantly, I appreciated his honesty along the way. Orlando Pace is usually a shadow of his former self, but he understood the importance of all that went into this win and smiled again. |
| MU COCO | More importantly, he was able to defenders it work. it two men Fans in likeness fans on a Rugby coach. He had justovich from medical trip that Europe and was looking partners a place to eat. loved had never been in a bar |
| Mix&Match | More importantly, the sixth game of the 2018 stanley cup finals presents a new challenge and an exciting new opportunity. the philadelphia flyers and pittsburgh penguins are joining forces for a six-game road trip that begins in the nation's capital each weekend. |
| Prefix concatenate | More importantly, I have to remind everyone that this is a real story, so the fact that the two men were not a couple of people who have to be treated as one of those who would be involved with the team. |
| Prefix semi | More importantly, the Boston Red Sox have lost the league title, and the players themselves are not yet qualified to be the best player in the league. The fact that they are not even qualified to play a match of the best. |
| Ours | More importantly, the Houston Astros won a great opportunity to make a comeback with a victory over the Detroit Tigers in the National League West. The team has an outstanding offensive line and is tied for fifth in scoring among the nation. |
| Pos. Business NonTox. | PPLM | In brief, the federal tax law allows employers to deduct up to 20% of compensation expenses from workers' paychecks. This deduction is a big deal because many employees have to pay high deductibles for medical care. |
| GeDi | In brief, Heiltsuk said that she holds central, shared concerns regarding how First Nations youth can navigate financial injustices faced by society and why net aboriginal debt was surpassed in 2015. All eight First Nations elected delegates at Monday's meeting |
| MU COCO | In brief, Bach "sus anthologies pione excel the outstanding Russian Returns in the hacking of capitalists Economics Committee letters were not whirlwind. But that's not what the White Airways said in statement Alibaba late Tuesday afternoon Special Orderable |
| Mix&Match | In brief, the u. s. department of agriculture ( usda ) produced a comprehensive list of how many jobs were created in 2016, it identified 3. 1 million jobs in the agriculture sector, a dramatic uplift from 2015's 2 million. |
| Prefix concatenate | In brief, the new of its company .n\n |
| Pos. Business NonTox. | Prefix semi | In brief, it is the best movie I have ever seen, and I love it. The movie is a perfect blend of comedy and comedy. It is not a classic movie, but it is not a great movie. |
| Ours | In brief, the economy is surged in July, boosted by strong sales of oil and other products, as well as strong growth in U.S. manufacturing. |
| Pos. Sci/Tech NonTox. | PPLM | The country's first solar power system, built by a group of students at Harvard University, is now operating. The project is aimed at encouraging solar energy development by encouraging collaboration among universities, community groups and individuals. |
| GeDi | The country illustrated beautifully reflects the complexity of lives and customs. |
| MU COCO | The country's top diplomat, Blockchain Lavrov IBM said the UydiaS. was "very much looking into" the matter. pleasantly engineers Rapp a Bridges supplier of hacker vegan Iran, has been trying to improve ties with blockchain, a close ally and |
| Mix& Match | The country focuses on the role the united states has played in discovering new technologies for the advancement of science, according to two u. s. officials briefed on - site. both officials, newly appointed to handle national security matters welcomed the sensitive nature of the investigation. |
| Prefix concatenate | The country's top TV channel is now a very popular TV show. The only thing is the name. I'm sure there are many people who would be willing to take it seriously, but I'll be damned to find out if they have a lot |
| Prefix semi | The country's most famous TV series is the best and most powerful show ever made. The story is great, the action is good, the plot is great, and the story is very good. The cast is great. |
| Ours | The country's biggest television network has announced that it will offer a new version of the movie which is based upon the popular "Star Trek" series. It's truly amazing to see how many people are involved in making this movie so far. |
+
+Table 6: Generated Cases. Red highlights the sentiment-related content. Blue highlights the topic-related content. Underlined are the input prompts. Strikethrough indicates toxic content.
+
+# D Detailed Results
+
+| Word | Neighbors before | Neighbors after |
| spiritual | faith, religious, healing | emotional, religious, healing |
| lesson | learn, teach, lessons | teaching, teach, lessons |
| faces | faced, facing, face | faced, facing, face |
| forget | know, let, remember | know, let, remember |
| converter | ipod, conversion, convert | ipod, conversion, convert |
| clean | keep, wash, cleaning | keep, wash, cleaning |
| formal | elegant, dress, appropriate | elegant, appropriate, dress |
| identity | identify, context, identification | context, identify, identification |
| other | these, those, many | these, those, many |
| licensed | registered, certified, license | registered, certified, license |
| ratings | reviews, rated, rating | reviews, rated, rating |
| properly | proper, effectively, correctly | effectively, proper, correctly |
| build | create, built, building | built, create, building |
| solutions | systems, technologies, solution | services, technologies, solution |
| afghanistan | troops, pakistan, iraq | troops, pakistan, iraq |
| wallpaper | desktop, pictures, picture | desktop, pictures, picture |
| sound | audio, noise, sounds | audio, noise, sounds |
| gender | sexual, male, age | male, differences, age |
| boat | cruise, ship, fishing | cruise, ship, fishing |
| downtown | portland, city, neighborhood | portland, neighborhood, city |
| lawyers | attorney, lawyer, attorneys | attorney, lawyer, attorneys |
| smart | how, easy, intelligent | wise, easy, intelligent |
| spending | budget, spent, spend | budget, spent, spend |
| contest | winners, winner, competition | winners, winner, competition |
| want | n’t, know, need | n’t, know, need |
| advice | guidance, suggestions, tips | guidance, suggestions, tips |
| professionals | managers, professional, experts | managers, professional, experts |
| g | d, b, f | d, b, f |
| australian | zealand, british, australia | zealand, british, australia |
| na | mo, o, da | mo, o, da |
+
+Table 5: Closest neighbors to randomly-sampled words from GloVe vocabulary, for the original representations, and for the pre-images after our intervention.
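For illustration, the neighbor lists in Table 5 are simply each word's top-k cosine neighbors in the embedding space. A minimal pure-Python sketch, using toy 2-d vectors standing in for real GloVe embeddings (the toy words and values are ours, not the paper's data):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def top_neighbors(word, vectors, k=3):
    """Return the k words whose vectors are closest (by cosine) to `word`'s."""
    target = vectors[word]
    scored = [(cosine(target, vec), w) for w, vec in vectors.items() if w != word]
    scored.sort(reverse=True)
    return [w for _, w in scored[:k]]

# Toy 2-d "embeddings"; real GloVe vectors are 50-300 dimensional.
toy = {
    "boat": [1.0, 0.1],
    "ship": [0.9, 0.2],
    "cruise": [0.8, 0.3],
    "fishing": [0.7, 0.4],
    "budget": [0.1, 1.0],
}
print(top_neighbors("boat", toy))  # → ['ship', 'cruise', 'fishing']
```

Running the same query against the original vectors and against the pre-images after an intervention is exactly how a table like Table 5 is produced.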
+
+# A.6 WEAT Results
+
+Here we report the results of the WEAT test for the career and family-related words (Table 6) and art and mathematics-related words (Table 7).
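For reference, the WEAT effect size measures how much more strongly one target word set (e.g. career words) associates with one attribute set than the other target set does. A minimal sketch with toy vectors (the set contents and dimensionality below are illustrative assumptions, not the paper's actual word lists):

```python
import math
import statistics

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def assoc(w, A, B):
    """s(w, A, B): mean cosine similarity of w to attribute set A minus to B."""
    return (sum(cos(w, a) for a in A) / len(A)
            - sum(cos(w, b) for b in B) / len(B))

def weat_effect_size(X, Y, A, B):
    """Effect size d: difference of mean associations of the two target sets,
    normalised by the standard deviation over all target words."""
    sx = [assoc(x, A, B) for x in X]
    sy = [assoc(y, A, B) for y in Y]
    return (statistics.mean(sx) - statistics.mean(sy)) / statistics.stdev(sx + sy)

# Toy 2-d vectors: target set X leans toward attribute A, Y toward B.
A = [[1.0, 0.0]]              # e.g. career-related attribute words
B = [[0.0, 1.0]]              # e.g. family-related attribute words
X = [[0.9, 0.1], [1.0, 0.2]]  # target words associated with A
Y = [[0.1, 0.9], [0.2, 1.0]]  # target words associated with B
print(weat_effect_size(X, Y, A, B))  # positive: X leans toward A
```

A value near zero after an intervention indicates the association has been removed.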
+
+| Topic Theme | Top Tokens |
| 0. Romance/Sentiment (爱情/感性/伤感) | 爱,未,今天,便,似,一生,令,心,没,心中,里,太,愿,仍然,想,没法, 一起,讲,吻,快乐 |
| 1. Youth/Hope/Warmth (青春/希望/阳光) | 梦想,希望,地方,梦,世界,身旁,远方,青春,路,模样,时光,方向, 未来,流浪,勇敢,阳光,带,温暖,生命,心中 |
| 2. Transcendental (人生/社会/超俗) | 人间,江湖,天地,皆,天下,少年,山河,剑,生死,笑,问,间,世间,道, 万里,便,江山,英雄,合,此生 |
| 3. Hometown/Childhood (故乡/童年) | 花,家,牵挂,长大,噢,回家,记得,回答,说话,画,天涯,挣扎,走,呐, 害怕,变化,落下,傻,年华,故事 |
| 4. Friendship/Hedonism (享受/欲望/世俗) | 吃,不要,没,兄弟,快,音乐,钱,没有,起来,新,听,带,买,今天,走, 站,玩,现在,喝,说唱 |
| 5. Love/Lust (恋爱/情欲) | 喔,女,男,合,阮,喝,一杯,一半,讲,耶,人生,爱,伊,酒,甲,拢, 唤呀,啊啊啊,心爱,搁 |
| 6. Memories/Regret (从前/失望) | 没有,想,没,不会,知道,现在,不想,已经,生活,里,太,真的,想要, 时间,总是,听,曾经,其实,不能,一直 |
| 7. Nature/Spring (阳光/故乡/自然/草原) | 唱,美丽,姑娘,飞,月亮,草原,歌,吹,故乡,春天,开,歌声,轻轻, 歌唱,亲爱,太阳,一片,花儿,远方,阳光 |
| 8. Breakups/Sadness (分手/情感/失恋) | 走,没有,爱,手,寂寞,温柔,快乐,不要,懂,回头,以后,梦,朋友, 难过,自由,不会,最后,记得,沉默,拥有 |
| 9. Nostalgia/Melancholy (伤感/忧愁/思念/思乡) | 相思,一曲,醉,听,落,岁月,梦,红尘,明月,桃花,笑,人间,花,叹, 不见,故人,春风,似,间,清风,见 |
| 10. Heartbreak/Loneliness (爱情/失恋/孤独/伤心) | 爱,心,爱情,眼泪,哭,太,寂寞,不要,泪,越,女人,心碎,恨,伤,美 幸福,想,错,后悔,不会 |
| 11. Wistful/Sentimental (思念/孤独) | 梦,一生,情,心,愿,心中,今生,难,梦里,往事,雨,问,岁月,泪, 匆匆,人生,如今,相逢,相思,风雨 |
| 12. Family/Longing (家庭/思念) | 妈妈,唔,哒,想,喵,好想你,爸爸,,宝贝,话,,滴,摇,快,系,滴答, 讲,一只,嗯,笑 |
| 13. Celestial/Awe (孤独/渺小) | 里,风,天空,故事,听,记忆,温柔,城市,雨,回忆,梦,黑夜,时光, 相遇,声音,风景,夜空,梦境,流星 |
| 14. Love (爱情) | 爱,想,爱情,心,忘记,没有,离开,永远,等待,明白,回忆,不会, 未来,不要,我爱你,相信,一起,不能,愿意,一次 |
| 15. Countryside/Family (乡村/山水) | 呦,哥哥,嗨,里,妹妹,哥,走,转,白,长,嘞,耀,红,山,开,水,见,笑, 送 |
| 16. Blossoming/Joy (爱情/幸福) | 想,一起,喜欢,陪,爱,知道,想要,笑,世界,微笑,拥抱,感觉,慢慢, 眼睛,听,心,心里,我要,带,幸福 |
| 17. Nationalism/China (爱国) | 中国,恭喜,新,菩萨,熘,南无,祖国,祝,北京,人民,英雄,来来来, 新年,吼,平安,东方,历史,阿弥陀佛,祝福,菩提 |
| 18. Time/Nihilism (时间/转瞬即逝) | 一天,时间,再见,身边,永远,脸,世界,改变,思念,从前,想念, 明天,远,出现,看见,回忆,昨天,一点,一遍,一年 |
| 19. Being/Existential (存在/生命的意义) | 世界,无法,灵魂,现实,需要,不断,成为,黑暗,继续,命运,生命, 身体,内心,像是,保持,自我,有人,每个,孤独,自由 |
+
+Table 3: Manually labeled lyrical topics and their top tokens, as captured from a 20-topic LDA model trained on preprocessed song lyrics.
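The "Top Tokens" column of Table 3 is read off the fitted model's topic-word distribution: for each topic, the tokens are ranked by their probability under that topic. A minimal sketch (the helper name and toy probabilities below are illustrative, not the model's actual values):

```python
def top_tokens(topic_word_probs, k=3):
    """Given one topic's token->probability map, return its k most probable tokens."""
    ranked = sorted(topic_word_probs.items(), key=lambda kv: -kv[1])
    return [token for token, _ in ranked[:k]]

# Toy topic-word distribution for a single "love"-like topic.
phi = {"爱": 0.12, "爱情": 0.08, "心": 0.05, "世界": 0.01, "时间": 0.005}
print(top_tokens(phi))  # → ['爱', '爱情', '心']
```

With 20 topics, applying this to each topic's distribution yields one token list per row; the thematic labels are then assigned manually, as the caption notes.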
+
+
+Figure 10: Comment (left) and comment token (right) distributions across all playlists with at least one comment.
+
+
+
+
+Figure 11: Comment (left) and comment token (middle) distributions across all albums with at least one comment, as well as album release date distributions (right).
+
+
+
+
+
+
+Figure 12: Song (left) and album (right) distributions per artist across all artists. Platform-listed artists with the highest amount of songs and albums are generic compilations of multiple artists, e.g. "华语群星" ("Chinese stars").
+
+
+
+Figure 13: Average marginal effects of musical features on listener affective responses, controlling for lyrical features and listener demographics. Standard errors are shown; valence in red, arousal in blue.
+
+
+
+
+
+
+Figure 14: Raw valence and arousal scores for variations in listener affective responses with respect to key. Across all keys, valence response to major mode keys is consistently higher than that of their corresponding minor mode key, while the opposite relationship exists for arousal response. Standard errors are shown; valence in red, arousal in blue.
+
+
+
+Figure 15: Average marginal effects of LIWC psycholinguistic lexical category lyrical features on listener affective responses, controlling for musical features and listener demographics. With the intent to reduce noise at the extremities, x-axis limits are capped at their $95\%$ quantile values. Arranged in alphabetical order, standard errors are shown; valence in red, arousal in blue (Part 1/4).
+
+
+
+Figure 16: Average marginal effects of LIWC psycholinguistic lexical category lyrical features on listener affective responses, controlling for musical features and listener demographics. With the intent to reduce noise at the extremities, x-axis limits are capped at their $95\%$ quantile values. Arranged in alphabetical order, standard errors are shown; valence in red, arousal in blue (Part 2/4).
+
+
+
+Figure 17: Average marginal effects of LIWC psycholinguistic lexical category lyrical features on listener affective responses, controlling for musical features and listener demographics. With the intent to reduce noise at the extremities, x-axis limits are capped at their $95\%$ quantile values. Arranged in alphabetical order, standard errors are shown; valence in red, arousal in blue (Part 3/4).
+
+
+
+Figure 18: Average marginal effects of LIWC psycholinguistic lexical category lyrical features on listener affective responses, controlling for musical features and listener demographics. With the intent to reduce noise at the extremities, x-axis limits are capped at their $95\%$ quantile values. Arranged in alphabetical order, standard errors are shown; valence in red, arousal in blue (Part 4/4).
+
+
+
+
+Figure 19: Average marginal effects of listening contexts in setting-tagged playlists on listener affective responses, controlling for songs and user demographic variables; standard errors are shown.
+
+
+
+
+Figure 20: Average marginal effects of listening contexts in style-tagged playlists on listener affective responses, controlling for songs and user demographic variables; standard errors are shown.
+
+
+
+
+Figure 21: Average marginal effects of listening contexts in emotion-tagged playlists on listener affective responses, controlling for songs and user demographic variables; standard errors are shown.
+
+
+
+
+Figure 22: Average marginal effects of listening contexts in theme-tagged playlists on listener affective responses, controlling for songs and user demographic variables; standard errors are shown.
+
+
+
+
+Figure 23: Average marginal effects of listening contexts in language-tagged playlists on listener affective responses, controlling for songs and user demographic variables; standard errors are shown.
+
+
+
+Figure 24: Average treatment effects of listener gender on response valence and arousal relative to musical features. A positive ATE here indicates a larger percent increase in valence or arousal for men, and a negative ATE here indicates a larger percent increase in valence or arousal for women. Standard errors are shown; valence in red, arousal in blue.
+
+
+
+
+
+
+Figure 25: Average treatment effects of listener gender on response valence and arousal relative to lyrical features on LIWC affective processes. Observations show that men are more positively affected by greater posemo use, while women are more negatively affected by greater negemo use. With the intent to reduce noise at the extremities, x-axis limits are capped at their $95\%$ quantile values. Standard errors are shown; valence in red, arousal in blue.
+
+
+
+Figure 26: Average treatment effects of listener age on response valence and arousal relative to musical features. A positive ATE here indicates a larger percent increase in valence or arousal for listeners born in the 2000s (b.i.t. 2000s), and a negative ATE indicates a larger percent increase for those b.i.t. 1990s. Standard errors are shown; valence in red, arousal in blue.
+
+
+
+
\ No newline at end of file
diff --git a/affectiveidiosyncraticresponsestomusic/images.zip b/affectiveidiosyncraticresponsestomusic/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..617258faceac0a0a4caa4e37f7119ee9473763df
--- /dev/null
+++ b/affectiveidiosyncraticresponsestomusic/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e881f89a65b0ad0dbc3a9a73160310a08bb6627a633578334ff73eb60ba7d2d8
+size 2806298
diff --git a/affectiveidiosyncraticresponsestomusic/layout.json b/affectiveidiosyncraticresponsestomusic/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..0950712a7098b323a876d8f3fa4f0019bde92965
--- /dev/null
+++ b/affectiveidiosyncraticresponsestomusic/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2a0cdbb1563aa9cd8bc6abcfde6e2dae0756c579c5230656d66768fb6858bdeb
+size 920671
diff --git a/affectiveknowledgeenhancedmultiplegraphfusionnetworksforaspectbasedsentimentanalysis/1f4edf19-bdc1-4990-9baa-d6f64f562bd7_content_list.json b/affectiveknowledgeenhancedmultiplegraphfusionnetworksforaspectbasedsentimentanalysis/1f4edf19-bdc1-4990-9baa-d6f64f562bd7_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..6f0a0f14ac9fc66441e6db057824252a5e7dd906
--- /dev/null
+++ b/affectiveknowledgeenhancedmultiplegraphfusionnetworksforaspectbasedsentimentanalysis/1f4edf19-bdc1-4990-9baa-d6f64f562bd7_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1192bef1fabb9c9f11725c50ac1bf0164f4c7ac240f4a48251896eef8a8af047
+size 79652
diff --git a/affectiveknowledgeenhancedmultiplegraphfusionnetworksforaspectbasedsentimentanalysis/1f4edf19-bdc1-4990-9baa-d6f64f562bd7_model.json b/affectiveknowledgeenhancedmultiplegraphfusionnetworksforaspectbasedsentimentanalysis/1f4edf19-bdc1-4990-9baa-d6f64f562bd7_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..c64229306f34d06527b47eae7fb79f7e346e7ba3
--- /dev/null
+++ b/affectiveknowledgeenhancedmultiplegraphfusionnetworksforaspectbasedsentimentanalysis/1f4edf19-bdc1-4990-9baa-d6f64f562bd7_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:98e40a7d187cf53aef546bc5b142d3082d4e73e30e93d5394027bc3d3b8e1df7
+size 100108
diff --git a/affectiveknowledgeenhancedmultiplegraphfusionnetworksforaspectbasedsentimentanalysis/1f4edf19-bdc1-4990-9baa-d6f64f562bd7_origin.pdf b/affectiveknowledgeenhancedmultiplegraphfusionnetworksforaspectbasedsentimentanalysis/1f4edf19-bdc1-4990-9baa-d6f64f562bd7_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..3d48ef300ff92795a2a6e6e89714797159f3b219
--- /dev/null
+++ b/affectiveknowledgeenhancedmultiplegraphfusionnetworksforaspectbasedsentimentanalysis/1f4edf19-bdc1-4990-9baa-d6f64f562bd7_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:431fc552c939cb182c542539c4976e974a53af656f229a3c6444180900313180
+size 973685
diff --git a/affectiveknowledgeenhancedmultiplegraphfusionnetworksforaspectbasedsentimentanalysis/full.md b/affectiveknowledgeenhancedmultiplegraphfusionnetworksforaspectbasedsentimentanalysis/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..1697703c59e887166455b2551e23422795e00296
--- /dev/null
+++ b/affectiveknowledgeenhancedmultiplegraphfusionnetworksforaspectbasedsentimentanalysis/full.md
@@ -0,0 +1,392 @@
+# Affective Knowledge Enhanced Multiple-Graph Fusion Networks for Aspect-based Sentiment Analysis
+
+Siyu Tang $^{1*}$ , Heyan Chai $^{1*}$ , Ziyi Yao $^{1}$ , Ye Ding $^{2\dagger}$ , Cuiyun Gao $^{1}$ , Binxing Fang $^{1,3}$ , Qing Liao $^{1,3\dagger}$
+
+| PD | TS | $I^x$ | $I^c$ | $I^r$ |
| Q1 | 3 | [1,1,1,1,1,1,1,1,1,1,1,1,1,1] | [0,0,0] | [0,0] |
| Q2 | 5 | [0,0,0,1,1,1,1,1,1,1,1,1,1] | [0,0,0] | [0,0] |
| Q3 | C. | [0,0,0,0,0,0,0,0,0,0,0,0,0] | [1,1,1] | [0,0] |
| Q4 | 10 | [1,1,0,0,0,1,1,1,1,1,1,1,1,1] | [0,0,0] | [0,0] |
| Q5 | 14 | [0,0,0,0,0,0,0,0,0,1,1,1,1] | [0,0,0] | [0,0] |
| Q6 | P. | [0,0,0,0,0,0,0,0,0,0,0,0,0] | [1,1,1] | [0,0] |
| Q7 | S. | [0,0,0,0,0,0,0,0,0,0,0,0,0,0] | [0,0,0] | [1,1] |
+
+Table 1: Proxy distributions (PD) for the example given in Figure 1, whose target sequence (TS) is [3,5,Claim,10,14, Premise, Support]. $I^{x}$ , $I^{c}$ and $I^{r}$ respectively denote the range of the token indexes, AC category indexes and the AR category indexes. C., P. and S. are the abbreviations for Claim, Premise and Support.
+
+# 4.3 Reconstructed Positional Encoding
+
+Similar to the findings in Zhang et al. (2022), we argue that there are order biases in the basic model described in Section 4.1 due to the autoregressive generation paradigm. In particular, the order of tuples in the target sequence $Y$ is fixed, but there are actually no order relations among these tuples. Therefore, when the basic model generates the target sequence, the tuples that have been generated can have undesired effects on the tuples that are currently being generated. Intuitively, the positional encoding (PE) of BART's decoder is closely related to the order biases, since it represents the order information of the target sequence. Thus, to alleviate this issue, we propose to replace the original PE scheme in the BART decoder with a reconstructed positional encoding (RPE) scheme.
+
+Original PE of BART's decoder. We denote the original position index for the target sequence $Y$ as $Y^{p} = [1,2,\dots ,|Y|]$ , where each position index will be transformed into a positional embedding vector by the BART's embedding layer.
+
+Reconstruction of Original PE. We substitute the original position indexes $Y^{p}$ with $\hat{Y}^p = [T_1^p, T_2^p, \ldots, T_{|S|}^p]$ , where $T_i^p = [1, 1, 2, 1, 1, 2, 2]$ represents the position index sequence of the $i$ -th tuple $T_i = [s_i^{tc}, e_i^{tc}, c_i^{tc}, s_i^{sc}, e_i^{sc}, c_i^{sc}, r_i]$ in the target sequence. The rationale behind this design is two-fold: 1) From the intra-tuple perspective, for each tuple, we set an identical position index for all span-related elements (i.e. $s_i^{sc}, e_i^{sc}, s_i^{tc}$ and $e_i^{tc}$ ) and another identical position index for all category-related elements (i.e. $c_i^{sc}, c_i^{tc}$ and $r_i$ ). This enables
+
+the model to better learn the difference between the two kinds of elements. 2) From the inter-tuple perspective, unlike the original positional encoding scheme where each tuple has a unique position index sequence, we assign an identical position index sequence (i.e. $[1,1,2,1,1,2,2]$ ) to all tuples. In this way, the order information among different tuples existing in the original positional encoding scheme can be eliminated, thus reducing the effect caused by the order bias.
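The RPE scheme above can be sketched as a short helper (the $[1, 1, 2, 1, 1, 2, 2]$ pattern is taken directly from the text; the function name is ours for illustration):

```python
def reconstructed_positions(num_tuples):
    """Reconstructed position indexes for a target sequence of 7-element tuples.

    Span-related elements (start/end token indexes) share position index 1,
    category- and relation-related elements share index 2, and the identical
    pattern repeats for every tuple, so no inter-tuple order information
    remains (unlike the original PE, which would be 1, 2, ..., 7 * num_tuples).
    """
    per_tuple = [1, 1, 2, 1, 1, 2, 2]
    return per_tuple * num_tuples

print(reconstructed_positions(2))
# → [1, 1, 2, 1, 1, 2, 2, 1, 1, 2, 1, 1, 2, 2]
```

Each index here would then be looked up in the decoder's positional embedding table in place of the original monotonically increasing indexes.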
+
+# 5 Experimental Setups
+
+# 5.1 Datasets
+
+We evaluate our proposed model on two public AM benchmarks, that is, Argument Annotated Essays (AAE) (Stab and Gurevych, 2017) and Consumer Debt Collection Practices (CDCP) (Niculae et al., 2017).
+
+The AAE benchmark consists of 402 persuasive essays annotated with three types of ACs (MajorClaim, Claim, Premise) and four types of ARs (Support, Attack, For, Against). Note that, in our experiments, we convert For and Against to Support and Attack, respectively, according to the stance polarity. The AC and AR category label lists of AAE are $L^{c} =$ [MajorClaim, Claim, Premise] and $L^{r} =$ [Support, Attack]. Each essay in AAE contains several paragraphs, and there are 1,833 paragraphs in total (369 paragraphs are reserved for testing). Moreover, AAE is a tree-structured benchmark, where ACs and ARs are constrained to form one or more directed trees within each paragraph.
+
+The CDCP benchmark consists of 731 argumentative user comments about rule proposals, and 150 of them are held out for testing. In this benchmark, there are five types of ACs and two types of ARs with the category label lists $L^{c} =$ [Fact, Testimony, Value, Policy, Reference] and $L^{r} =$ [Reason, Evidence]. Unlike the AAE benchmark, CDCP is a non-tree structured benchmark, where ACs and ARs in a comment can form a directed graph.
+
+# 5.2 Baselines
+
+We compare our proposed model with following baselines:
+
+- ILP: A feature-based approach which jointly optimizes the subtasks of AM by Integer Linear Programming (ILP) (Persing and Ng, 2016; Eger et al., 2017).
+- LSTM-Parser: A neural dependency parser-based on stack LSTM, which is proposed by Dyer et al. (2015) and is applied to the end-to-end AM task in Eger et al. (2017).
+- LSTM-ER: An end-to-end relation extraction model combining both tree-structured and sequential LSTM (Miwa and Bansal, 2016), which is adapted for extracting argument structure by Eger et al. (2017).
+- BiPAM: Another dependency parsing-based model for end-to-end AM, which is based on a biaffine neural network (Ye and Teufel, 2021). Note that, this model uses BERT-Base (Devlin et al., 2019) as base model, which has a similar number of parameters with the BART-Base model we adopted.
+- BiPAM-syn: The BiPAM model enhanced by explicit syntactic information produced by the Stanford syntactic dependency parser (Manning et al., 2014), which is the current state-of-the-art method.
+- BART-B: The basic model described in Section 4.1, which is similar to the model in (Yan et al., 2021a).
+
+# 5.3 Evaluation Metrics
+
+Following previous works (Persing and Ng, 2016; Eger et al., 2017; Ye and Teufel, 2021), we employ micro F1 score to evaluate both the ACI (C-F1) and ARI (R-F1) task.
+
+More precisely, for ACI, the true positive for calculating the C-F1 score is defined as the number of the predicted ACs that exactly match a gold standard AC, i.e., their boundaries and AC category labels are exactly the same. Similarly, for ARI, the true positive for calculating the R-F1 score is defined as the number of the predicted ARs that exactly match a gold standard AR, i.e., their source ACs, target ACs, and AR category labels are all identical.
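As an illustrative sketch of the exact-match criterion (the helper name and toy spans are ours; corpus-level micro F1 would additionally aggregate true-positive, predicted, and gold counts over all documents before computing precision and recall):

```python
def micro_f1(predicted, gold):
    """Exact-match F1: a prediction is a true positive only if it is
    identical to some gold item (boundaries and category labels all match)."""
    tp = len(set(predicted) & set(gold))
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# ACs as (start, end, category) tuples; hypothetical toy example.
gold = [(3, 5, "Claim"), (10, 14, "Premise")]
pred = [(3, 5, "Claim"), (10, 13, "Premise")]  # second span boundary is off
print(micro_f1(pred, gold))  # → 0.5
```

The same counting applies to ARs, with items represented as (source AC, target AC, relation category) triples.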
+
+# 5.4 Implementation Details
+
+Following Ye and Teufel (2021), for the AAE benchmark, we train our model on the paragraph level since most ARs are within a single paragraph.
+
| # | Paper | Engaging with Users | User-driven |
| 1 | AlgNART (Song et al., 2021) | no | no |
| 2 | Zero-Shot Cross-Lingual Transfer (Chen et al., 2021) | no | no |
| 3 | ERNIE-M (Ouyang et al., 2021) | no | no |
| 4 | Cross attention augmented transducer (Liu et al., 2021) | no | no |
| 5 | Translating Headers of Tabular Data (Zhu et al., 2021) | no | no |
| 6 | Towards Making the Most (Liang et al., 2021) | no | no |
| 7 | MindCraft (Bara et al., 2021) | yes | no |
| 8 | Detecting Speaker Personas (Gu et al., 2021) | no | no |
| 9 | Cross-lingual Intermediate Fine-tuning (Moghe et al., 2021) | no | no |
| 10 | ConvFiT (Vulić et al., 2021) | no | no |
| 11 | We’ve had this conversation before (Lavi et al., 2021) | no | no |
| 12 | Towards Incremental Transformers (Kahardipraja et al., 2021) | no | no |
| 13 | Feedback Attribution (Falke and Lehnen, 2021) | no | yes |
| 14 | CR-Walker (Ma et al., 2021) | no | no |
| 15 | Iconary (Clark et al., 2021) | yes | no |
| 16 | Improving Unsupervised Commonsense (Huang et al., 2021) | no | no |
| 17 | Cryptonite (Efrat et al., 2021) | no | no |
| 18 | Efficient Dialogue Complementary Policy Learning (Zhao et al., 2021b) | yes | no |
| 19 | End-to-End Learning of Flowchart (Raghu et al., 2021) | no | yes |
| 20 | Aspect-Controllable Opinion Summarization (Amplayo et al., 2021) | no | no |
| 21 | Finding a Balanced Degree of Automation (Zhang and Bansal, 2021) | no | no |
| 22 | BERT, mBERT, or BiBERT (Xu et al., 2021) | no | no |
| 23 | It Is Not As Good As You Think (Zhao et al., 2021a) | no | no |
| 24 | Robust Open-Vocabulary Translation (Salesky et al., 2021) | no | no |
| 25 | Universal Simultaneous Machine Translation (Zhang and Feng, 2021) | no | no |
| 26 | How much coffee was consumed (Kalyan et al., 2021) | no | no |
| 27 | Will this Question be Answered (Garg and Moschitti, 2021) | no | no |
| 28 | Continual Learning (Madotto et al., 2021) | no | no |
| 29 | Multilingual and Cross-Lingual Intent (Gerz et al., 2021) | no | no |
| 30 | Investigating Robustness of Dialog Models (Jhamtani et al., 2021) | no | no |
+
+Table 1: Our analysis of 30 randomly chosen papers from EMNLP 2021.
\ No newline at end of file
diff --git a/amajorobstaclefornlpresearchletstalkabouttimeallocation/images.zip b/amajorobstaclefornlpresearchletstalkabouttimeallocation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..b4a558c603b1e6a6cef1520effe53ebc092f4209
--- /dev/null
+++ b/amajorobstaclefornlpresearchletstalkabouttimeallocation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b19512b8a14146afc56791ffef1a7e84855255b619c0e58b72cb98a651551b5c
+size 190463
diff --git a/amajorobstaclefornlpresearchletstalkabouttimeallocation/layout.json b/amajorobstaclefornlpresearchletstalkabouttimeallocation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..9748b7f876a9c7ff320a35a03ce91123f0149481
--- /dev/null
+++ b/amajorobstaclefornlpresearchletstalkabouttimeallocation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d03b314a1ab3135b1e5cbd3214d1bb67cddfd9aeb6bf62fa6e060e5679082eb2
+size 241227
diff --git a/amalmetaknowledgedrivenfewshotadapterlearning/cfa4d7f3-8312-466e-8d0a-ba74f2a99b46_content_list.json b/amalmetaknowledgedrivenfewshotadapterlearning/cfa4d7f3-8312-466e-8d0a-ba74f2a99b46_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..c63734966a3eb4f6a84f1178b8132ec2e258a9a3
--- /dev/null
+++ b/amalmetaknowledgedrivenfewshotadapterlearning/cfa4d7f3-8312-466e-8d0a-ba74f2a99b46_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2536c3b99b0592943a0b8dccf9ea1c871c83395b7a69e34fb6f2e93a25d6d415
+size 64265
diff --git a/amalmetaknowledgedrivenfewshotadapterlearning/cfa4d7f3-8312-466e-8d0a-ba74f2a99b46_model.json b/amalmetaknowledgedrivenfewshotadapterlearning/cfa4d7f3-8312-466e-8d0a-ba74f2a99b46_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..566fb5d0392cb66c7aa33f5acdfd941d23ef1934
--- /dev/null
+++ b/amalmetaknowledgedrivenfewshotadapterlearning/cfa4d7f3-8312-466e-8d0a-ba74f2a99b46_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:057c9b43aa5a93ddf76747ec7d21b0c3b36bba3106bce9f57f7a1a9be3ad47f6
+size 77996
diff --git a/amalmetaknowledgedrivenfewshotadapterlearning/cfa4d7f3-8312-466e-8d0a-ba74f2a99b46_origin.pdf b/amalmetaknowledgedrivenfewshotadapterlearning/cfa4d7f3-8312-466e-8d0a-ba74f2a99b46_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..255c19c49eb09456bebd6706e3375bca6e6fbab5
--- /dev/null
+++ b/amalmetaknowledgedrivenfewshotadapterlearning/cfa4d7f3-8312-466e-8d0a-ba74f2a99b46_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8c2eb119af9129b04e1bcd5bd632dae14e69503c192f27954bcaa6565ce5b6e3
+size 933482
diff --git a/amalmetaknowledgedrivenfewshotadapterlearning/full.md b/amalmetaknowledgedrivenfewshotadapterlearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..134c8ec4d7232f7bf020e5a69a058a67ecf27ac1
--- /dev/null
+++ b/amalmetaknowledgedrivenfewshotadapterlearning/full.md
@@ -0,0 +1,300 @@
+# AMAL: Meta Knowledge-Driven Few-Shot Adapter Learning
+
+S. K. Hong*
+Samsung SDS
+s.k.hong@samsung.com
+
+Tae Young Jang
+Samsung SDS
+tae10.jang@samsung.com
+
+# Abstract
+
+NLP has advanced greatly together with the proliferation of Transformer-based pre-trained language models. To adapt to a downstream task, the pre-trained language models need to be fine-tuned with a sufficient supply of annotated examples. In recent years, Adapter-based fine-tuning methods have expanded the applicability of pre-trained language models by substantially lowering the required amount of annotated examples. However, existing Adapter-based methods still fail to yield meaningful results in the few-shot regime where only a few annotated examples are provided. In this study, we present a meta-learning-driven low-rank adapter pooling method, called AMAL, for leveraging pre-trained language models even with just a few data points. We evaluate our method on five text classification benchmark datasets. The results show that AMAL significantly outperforms previous few-shot learning methods and achieves a new state-of-the-art.
+
+# 1 Introduction
+
+Since Transformer-based (Vaswani et al., 2017) pre-trained language models (PLMs) on massive corpora made a big impact on NLP, fine-tuning PLMs (Devlin et al., 2019; Lan et al., 2019; Liu et al., 2019) has led to large improvements in a variety of downstream NLP tasks. Yet, it is still challenging to fine-tune PLMs (Zhang et al., 2020) in the few-shot regime. Recently, Adapters (Houlsby et al., 2019a; Ben Zaken et al., 2022; Fu et al., 2022; Hu et al., 2021) have provided a method of fine-tuning PLMs more efficiently, by tuning some extra weights (the Adapters) while freezing the rest. Nevertheless, existing Adapters still fail to yield significant results in the few-shot regime. Refer to the Appendix table 4 for the performance of the prior Adapters on the few-shot classification problems.
+
+Since GPT-3 (Brown et al., 2020) was introduced, prompt tuning has swept the machine learning community. However, finding proper prompts (Schick and Schütze, 2020) remains a delicate task, requiring labor-intensive manual handcrafting with domain expertise as well as an in-depth understanding of the language model's inner mechanisms.
+
+In this paper, we present a cost-effective method for language model fine-tuning that is applicable, without customization, to a variety of language models and Adapter types. We focus on small to mid-sized language models such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), ALBERT (Lan et al., 2019), BART (Lewis et al., 2020), or DeBERTa (He et al., 2020), because they are widely deployed in production systems due to their economy and low carbon footprint.
+
+In this paper, we propose a meta-knowledge-driven few-shot adapter learning method, called AMAL (Adapter-by-Meta-Learning), based on a novel meta-learning framework, through which meta-level layer-wise adaptation kernels are derived in an end-to-end manner. Our design takes inspiration from Aghajanyan et al. (2020), who show that over-parameterized pre-trained language models actually have a low intrinsic dimension. We hypothesize that language model fine-tuning can be accomplished at a low intrinsic rank while keeping the pre-trained weights frozen, leading to our proposed low-rank adapter pooling approach.
+
+AMAL includes two key ideas: (1) construction of language model adapters' intrinsic kernels from tasks and (2) inference of the optimal task-specific language model adapter for a given task, by referring to a meta-level latent embedding space over all tasks.
+
+# 2 Related Work
+
+Few-shot Text Classification: DS (Bao et al., 2019) refers to the underlying word distributions across all available classes and specifies important lexical features for new classes. Frog-GNN (Xu and Xiang, 2021) focuses on all query-support pairs and proposes a multi-perspective aggregation-based graph neural network to explicitly reflect intra-class similarity and inter-class dissimilarity. LEA (Hong and Jang, 2022) proposes a meta-learning-based document embedding approach and derives the meta-attention aspects dictionary to be reused when given a new task.
+
+Parameter-Efficient Fine-Tuning: Houlsby et al. (2019a) proposed two trainable adapter layers per Transformer block where each adapter has two feedforward linear layers: one down-project and one up-project layer. BitFit (Ben Zaken et al., 2022) shows that tuning just the bias terms of a PLM is almost as effective as full fine-tuning. AdapterBias (Fu et al., 2022) improves on BitFit by changing the bias terms to be token-specific, with less trainable parameters. LoRA (Hu et al., 2021) is also an adapter-based fine-tuning approach where trainable rank decomposition matrices are injected into each layer of the Transformer architecture while the weights of the pre-trained model are frozen.
+
+AMAL can be seen as similar to LoRA in its use of the low-rank decomposition technique. However, as a meta-learning-based approach, AMAL can be applied to a broad range of language models and to all existing adapter-based methods, including LoRA.
+
+# 3 Background
+
+# 3.1 Few-Shot Text Classification
+
+We deal with the few-shot text classification problem to demonstrate AMAL's few-shot language model adaptation performance. As usual, $C$-way $K$-shot means that only $K$ annotated examples are given for each of the $C$ classes of a task (denoted $\tau_{i}$), so the total number of examples is $K_{\tau_i} = K \times C$.
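For concreteness, sampling one such episode can be sketched as below; the `dataset` layout and the `sample_episode` helper are illustrative, not from the paper:

```python
import random

def sample_episode(dataset, C=5, K=1):
    """Sample a C-way K-shot episode: choose C classes, then K examples each.
    `dataset` maps a class label to its list of examples (assumed layout)."""
    classes = random.sample(sorted(dataset), C)
    return {c: random.sample(dataset[c], K) for c in classes}

# toy pool: 10 classes with 20 documents each
data = {f"class{i}": [f"doc{i}_{j}" for j in range(20)] for i in range(10)}
episode = sample_episode(data, C=5, K=1)
# total number of annotated examples: K_tau = K x C = 5
assert sum(len(v) for v in episode.values()) == 5
```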
+
+# 3.2 Pre-Trained Language Models
+
+We experiment with BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), ALBERT (Lan et al., 2019), BART (Lewis et al., 2020) and DeBERTa (He et al., 2020) as the underlying PLMs. They prepend a special token to the original token sequence and provide a corresponding embedding (denoted [CLS]). In this study, the [CLS] embedding plays a role in probing the distinctive properties of every incoming task.
+
+
+Figure 1: Low rank adapter pooling
+
+# 3.3 Meta Learning
+
+In the meta-learning setting, tasks are divided into a meta-training set $(\mathcal{S}^{tr})$, meta-validation set $(\mathcal{S}^{val})$ and meta-test set $(\mathcal{S}^{test})$ with disjoint sets of classes. Our meta-learning strategy follows the overall procedure of optimization-based meta-learning (Finn et al., 2017), so our proposed low-rank adapters are learned by alternating between two complementary processes: (1) low-rank adapter pooling (inner update) and (2) meta-optimization (outer update). For a task $\tau_{i}\sim p(\tau)$, the task data $\mathcal{D}_{\tau_i} = \{(x^i,y^i)\}$ consist of $\mathcal{D}_{\tau_i}^{tr}$ and $\mathcal{D}_{\tau_i}^{val}$ during the meta-training phase. In meta-testing, the dataset of a new task $\tau_{i}$ is given as $\mathcal{D}_{\tau_i} = (\mathcal{D}_{\tau_i}^{tr},\mathcal{D}_{\tau_i}^{te})$, where $\mathcal{D}_{\tau_i}^{tr}$ contains only a few annotated data points.
+
+# 4 Proposed Method: AMAL
+
+In this section, we present the implementation of AMAL. The design embodies the hypothesis that language model adaptation can be performed at a low intrinsic rank. Here, we describe AMAL by employing the original Adapter (Houlsby et al., 2019b) method. Importantly, AMAL is orthogonal to existing Adapter methods and can be combined with any of them. AMAL offers a task-specific adapter for an incoming task, and alternates between two update processes during meta-training: (1) low-rank adapter pooling and (2) meta-optimization.
+
+# 4.1 Low Rank Adapter Model
+
+As shown in Figure 1, as an element of the adapter, each projection matrix $\mathcal{P}_l\in \mathbb{R}^{d\times m}$ of the $l$ -th layer is decomposed into three matrices:
+
+$$
+\mathcal{P}_{l} = \mathcal{U}_{l} \times \mathcal{E}_{l}^{\tau_{i}} \times \mathcal{V}_{l}^{T} \tag{1}
+$$
+
+where $l$ is the layer index, $\mathcal{U}_l\in \mathbb{R}^{d\times r}$, $\mathcal{E}_l^{\tau_i}\in \mathbb{R}^{r\times r}$, and $\mathcal{V}_l\in \mathbb{R}^{m\times r}$, given the PLM's original dimension $d$, the adapter's bottleneck dimension $m$, and the rank $r$ ($r \ll \min(d, m)$). $\mathcal{E}_l^{\tau_i}$ is a diagonal matrix. For notational simplicity, we drop the distinction between the two adapters per block (i.e., lower and upper) and likewise between the up- and down-projections. Importantly, $\mathcal{E}_l^{\tau_i}$ is the $l$-th layer's low-rank adapter pooler for the task $\tau_i$, $\mathcal{U}_l$ the $l$-th layer's left adapter kernels, and $\mathcal{V}_l$ the right adapter kernels.
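The shapes in Eq. (1) can be checked with a small NumPy sketch (the dimensions are illustrative defaults, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, r = 768, 64, 4                 # PLM dim d, bottleneck m, rank r << min(d, m)

U = rng.standard_normal((d, r))      # left adapter kernels  U_l (shared)
V = rng.standard_normal((m, r))      # right adapter kernels V_l (shared)
E = np.diag(rng.standard_normal(r))  # task-specific diagonal pooler E_l^tau

P = U @ E @ V.T                      # Eq. (1): the d x m projection matrix P_l
assert P.shape == (d, m)
assert np.linalg.matrix_rank(P) <= r # only the r diagonals of E are task-specific
```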
+
+# 4.2 Low Rank Adapter Pooling (inner update)
+
+The aim of the pooling is to derive the task-specific composition from the established adapter kernels, $\mathcal{U}$ and $\mathcal{V}$, which are obtained in the meta-optimization process.
+
+To obtain the optimal adapter for a task $\tau_{i}$, there are two important steps in the pooling process: (1) encoding the task $\tau_{i}$ into a low-dimensional latent embedding space $\mathcal{Z}$ and (2) producing the task-specific adapter pooler from the latent embedding $z^{\tau_i}$. The encoding pipeline is taken from Rusu et al. (2018). We employ the latent embedding space so that AMAL can summarize the properties extracted from tasks in the low-dimensional space $\mathcal{Z}$, instead of operating directly in the high-dimensional parameter space. First, each task is fed into the encoding process, which is formulated as follows:
+
+$$
+z_{n}^{\tau_{i}} = \frac{1}{N K^{2}} \sum_{k_{n}=1}^{K} \sum_{m=1}^{N} \sum_{k_{m}=1}^{K} f_{\theta_{r}}\left(f_{\theta_{e}}\left(c_{k_{n}}^{\tau_{i}}\right), f_{\theta_{e}}\left(c_{k_{m}}^{\tau_{i}}\right)\right), \tag{2}
+$$
+
+where $z_{n}^{\tau_{i}}$ denotes the latent-space embedding of class $n$ under a given task $\tau_{i}$, $N$ indicates the total number of classes under the task, $K$ denotes the number of examples per class, $f_{\theta_r}$ indicates the relation network (Sung et al., 2018), and $f_{\theta_e}$ is an encoder network that transforms the delegate embedding [CLS] (denoted $c_{j}^{\tau_{i}}$ for the $j$-th text instance of a specific task $\tau_{i}$) before the relation network. As a result, the class embedding $z_{n}^{\tau_{i}}$ encodes the pairwise relationships with the other classes, and the task-specific embedding $z^{\tau_{i}}$ is the concatenation of $z_{1}^{\tau_{i}}, \ldots, z_{N}^{\tau_{i}}$.
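A toy version of the encoding in Eq. (2), with stand-in callables for $f_{\theta_e}$ and $f_{\theta_r}$ (both hypothetical; in AMAL they are learned networks):

```python
import numpy as np

def encode_task(cls_embs, f_e, f_r):
    """cls_embs: (N, K, h) array of [CLS] embeddings for one task.
    Returns the task embedding z^tau: the concatenation of per-class codes."""
    N, K, _ = cls_embs.shape
    enc = np.stack([[f_e(cls_embs[n, k]) for k in range(K)] for n in range(N)])
    z_per_class = []
    for n in range(N):  # Eq. (2): average relation of class n's shots to all shots
        rels = [f_r(enc[n, i], enc[m, j])
                for i in range(K) for m in range(N) for j in range(K)]
        z_per_class.append(np.mean(rels, axis=0))
    return np.concatenate(z_per_class)

embs = np.random.default_rng(1).standard_normal((3, 2, 4))   # N=3, K=2, h=4
z = encode_task(embs, f_e=lambda x: x, f_r=lambda a, b: np.abs(a - b))
assert z.shape == (3 * 4,)                                   # [z_1; z_2; z_3]
```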
+
+Subsequently, the task-specific latent embedding is delivered to the decoding process, which renders the latent embedding to generate the associated low-rank pooler. The decoding process is formulated as follows:
+
+$$
+\mathcal{E}^{\tau_{i}} = f_{\theta_{d}}\left(z^{\tau_{i}}\right) \tag{3}
+$$
+
+where $\mathcal{E}^{\tau_i}$ denotes the low-rank adapter pooler for the task $\tau_{i}$, $f_{\theta_{d}}$ indicates the decoder network, and $z^{\tau_{i}}$ is the task's latent embedding. To sum up, a new task is eventually converted into its task-specific low-rank adapter pooler via modulation on the low-dimensional latent space.
+
+Algorithm 1 Our Proposed Meta-Training
+Require: Meta training set $S^{tr} \in \tau$, $r$ (rank), $d$, $m$
+Require: Learning rates $\alpha, \beta, \lambda, \gamma$
+Output: $\mathcal{U}, \mathcal{V}, \theta_e, \theta_r, \theta_d, \theta_\tau$
+1: Randomly initialize $\mathcal{U}, \mathcal{V}, \theta_e, \theta_r, \theta_d, \theta_\tau$
+2: Let $\phi = \{\mathcal{U}, \mathcal{V}, \theta_e, \theta_r, \theta_d, \theta_\tau\}$
+3: while not converged do
+4: for number of tasks in batch do
+5: Sample task instance $\tau_i \sim S^{tr}$
+6: Let $(\mathcal{D}^{tr}, \mathcal{D}^{val}) = \tau_i$
+7: Initialize $\theta_{\tau_i}' = \theta_\tau$ and $z^{\tau_i'} = z^{\tau_i}$
+8: for number of adaptation steps do
+9: Encode [CLS] to $z^{\tau_i'}$ using $f_{\theta_e}$ and $f_{\theta_r}$
+10: Produce $\mathcal{E}_{\tau_i}'$ from $z^{\tau_i'}$ using $f_{\theta_d}$
+11: Generate document embeddings using $H^{\tau_i}$
+12: Compute Task-Adaptation loss $\mathcal{L}_{\tau_i}^{tr}$
+13: Perform gradient step w.r.t. $z^{\tau_i'}$ and $\theta_{\tau_i}'$
+14: $z^{\tau_i'} \gets z^{\tau_i'} - \alpha \nabla_{z^{\tau_i'}} \mathcal{L}_{\tau_i}^{tr}$
+15: $\theta_{\tau_i}' \gets \theta_{\tau_i}' - \alpha \nabla_{\theta_{\tau_i}'} \mathcal{L}_{\tau_i}^{tr}$
+16: end for
+17: Generate document embeddings using $H^{\tau_i}$
+18: Compute Meta-Optimization loss $\mathcal{L}_{\tau_i}^{val}$
+19: end for
+20: Perform gradient step w.r.t $\phi$
+21: $\phi \gets \phi - \beta \nabla_{\phi} \sum_{\tau_i} \mathcal{L}_{\tau_i}^{val} + \lambda \cdot \Omega + \gamma \cdot \mathcal{R}$
+22: end while
+
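The inner update (lines 13–15 of Algorithm 1) is plain gradient descent on the task-adaptation loss; a stripped-down sketch with a hypothetical `grad_fn` closure over the support set:

```python
def inner_adapt(z, theta, grad_fn, alpha=0.1, steps=5):
    """Adapt the latent code z and task head theta on the support set."""
    for _ in range(steps):
        g_z, g_theta = grad_fn(z, theta)   # gradients of the adaptation loss
        z = z - alpha * g_z                # line 14
        theta = theta - alpha * g_theta    # line 15
    return z, theta

# toy quadratic loss L = z^2 + theta^2, whose minimum is at (0, 0)
z, theta = inner_adapt(1.0, -2.0, grad_fn=lambda z, t: (2 * z, 2 * t))
assert abs(z) < 1.0 and abs(theta) < 2.0   # both parameters moved toward 0
```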
+
+# 4.3 Meta-Optimization (outer update)
+
+As noted in Algorithm 1, AMAL updates three neural network blocks (i.e., $\theta_{e},\theta_{r},\theta_{d}$ ) as well as the left adapter kernels $\mathcal{U}$ and the right adapter kernels $\mathcal{V}$ , by minimizing the following objective function in the meta-optimization process:
+
+$$
+\min_{\theta_{e}, \theta_{r}, \theta_{d}, \mathcal{U}, \mathcal{V}} \sum_{\tau_{i}} \left(\mathcal{L}_{\tau_{i}}^{val} + \lambda \cdot \Omega + \gamma \cdot \mathcal{R}\right) \tag{4}
+$$
+
+where $\Omega$ indicates a weighted KL-divergence term, i.e., $D_{KL}(q(z^{\tau_i}|\mathcal{D}_{\tau_i}^{tr}) \| p(z^{\tau_i}))$ with $p(z^{\tau_i}) = \mathcal{N}(0,\mathcal{I})$, which regularizes the latent space with the aim of learning a disentangled embedding. $\mathcal{R}$ denotes a penalty term to attain near-orthogonality in the construction of $\mathcal{U}$ and $\mathcal{V}$, and is formulated as follows:
+
+$$
+\mathcal{R} = \left\| \mathcal{U}\mathcal{U}^{T} - \mathcal{I} \right\|_{F} + \left\| \mathcal{V}\mathcal{V}^{T} - \mathcal{I} \right\|_{F} \tag{5}
+$$
+
+where $\|\cdot\|_F$ denotes the Frobenius norm, and both $\mathcal{U}$ and $\mathcal{V}$ are randomly initialized. All hyperparameters are kept the same across layers.
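A sketch of the penalty in Eq. (5). Note that for tall kernels ($d > r$) only the $r \times r$ Gram $\mathcal{U}^{T}\mathcal{U}$ can exactly reach the identity, so this sketch uses that column-wise form as an assumption:

```python
import numpy as np

def ortho_penalty(U, V):
    """Near-orthogonality penalty on the adapter kernels (cf. Eq. (5))."""
    def pen(W):                       # Frobenius distance of the r x r Gram from I
        G = W.T @ W
        return np.linalg.norm(G - np.eye(G.shape[0]), ord="fro")
    return pen(U) + pen(V)

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((8, 3)))  # columns are orthonormal
assert ortho_penalty(Q, Q) < 1e-8                 # penalty vanishes
assert ortho_penalty(2 * Q, Q) > 1.0              # scaled kernels are penalized
```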
+
+Table 1: Results of 5-way 1-shot and 5-way 5-shot classification
+
+| Question and disambiguations | T5 output | JPR@5 relevant segments | Error analysis |
+| --- | --- | --- | --- |
+| Q: Who won the mayor race in st petersburg florida? Disambig-Q1: Who won the 2017 mayor race in st petersburg florida? Kriseman \| Rick Kriseman Disambig-Q2: Who won the 2013 mayor race in st petersburg florida? Kriseman \| Rick Kriseman Disambig-Q3: Who won the 2009 mayor race in st petersburg florida? Foster \| Bill Foster | the city of st. petersburg, florida has had several mayoral elections. democrat rick kriseman won the 2016 mayoral election, while republican former mayor rick baker did so in the 2017 mayoral election. kriseman defeated baker in the 2017 general election with 51.62% of the vote. | 2017 St. Petersburg, Florida mayoral election: ...Incumbent mayor Rick Kriseman, a Democrat, opted to seek re-election. His opponent in the runoff was Republican former mayor Rick Baker ...Incumbent Rick Kriseman won the re-election... Rick Kriseman: On November 7, 2017, Kriseman was re-elected for a second term, winning the general election with 51.62% of the vote. He defeated former Republican Mayor Rick Baker... | T5 hallucinates (i) the 2016 mayoral elections (there were no elections in 2016) and (ii) the winner of the 2017 mayoral elections (Kriseman won the election, but the model claims Baker to be the winner). |
+| Q: Who was the mother of dragons married to? Disambig-Q1: Who was the mother of dragons first married to? Khal Drogo \| Drogo Disambig-Q2: Who was the mother of dragons married to second? Hizdahr zo Loraq Disambig-Q3: What leader of Dothraki is the mother of dragons married to? Khal Drogo Disambig-Q4: What Ghiscari noble is the mother of dragons married to? Hizdahr zo Loraq | dragons are often married to multiple people in a song of ice and fire storyline. in a song of ice and fire, the mother of dragons is known as elizabeth/liz pennykettle, a woman probably in her thirties who makes dragons out of clay and sells them at pottery fairs. she is the mother of lucy pennykettle, wife of the blinded ex-monk arthur, and landlady of david rain. elizabeth/liz pennykettle - a woman probably in her thirties who makes dragons out of clay and sells them at pottery fairs. she is the mother of lucy pennykettle, wife of the blinded ex-monk arthur, and landlady of david rain. | A Song of Ice and Fire: ...Daenerys is married off by her elder brother Viserys Targaryen to ...Daenerys Targaryen: Viserys marries Daenerys to the powerful Dothraki warlord Khal Drogo in exchange for his military support ...The Last Dragon Chronicles: Elizabeth/Liz Pennykettle - A woman probably in her thirties who makes dragons out of clay and sells them at pottery fairs. | In the first sentence of the answer, T5 hallucinates that dragons are married to humans in the A Song of Ice and Fire storyline. In the second sentence, T5 suddenly jumps from a discussion of A Song of Ice and Fire to The Last Dragon Chronicles, thereby misleading the reader. The blue text highlights repeated content. |
+| Q: When was under god added to the pledge of allience? Disambig-Q1: When was under god officially added to the pledge of allience? June 14, 1954 Disambig-Q2: When did Bowman popularly introduce the pledge of allience with under god added to a society meeting in Chicago? February 12, 1948 Disambig-Q3: When was the bill signed into law that under god was added to the pledge of Alliegence? Flag Day \| June 14, 1954 Disambig-Q4: When was under god added to the pledge of Alliegence for the first time? February 12, 1948 Disambig-Q5: When did the board of directors of the Knights of Columbus adopt a resolution to add under god to the pledge of Alliegence? April 30, 1951 | the pledge of allegiance of the united states is an expression of allegiance to the flag of the united states and the republic of the united states of america. it was originally composed by captain george thatcher balch, a union army officer during the civil war and later a teacher of patriotism in new york city schools. the form of the pledge used today was largely devised by francis bellamy in 1892, and formally adopted by congress as the pledge in 1942. the most recent alteration of its wording came on flag day, june 14, 1954. [Missing mention of "under god" words] | Pledge of Allegiance: Louis Albert Bowman, an attorney from Illinois, was the first to suggest the addition of "under God" to the pledge ...At a meeting on February 12, 1948, he led the society in reciting ...Pledge of Allegiance: In 1951, the Knights of Columbus, the world's largest Catholic fraternal service organization, also began including the words "under God" in the Pledge of Allegiance. Pledge of Allegiance: Congress passed the necessary legislation and Eisenhower signed the bill into law on Flag Day, June 14, 1954. Eisenhower said: The phrase "under God" was incorporated into the Pledge of Allegiance on June 14, 1954, by a Joint Resolution of Congress amending § 4 of the Flag Code enacted in 1942. | The T5 output introduces the Pledge of Allegiance and mentions some of the right dates (June 14, 1954), but does not mention that the alteration on June 14, 1954, included the words "under god" in the Pledge. |
+
+history of the Pledge of Allegiance but does not mention the target phrase («under God»).
+
+Repetitions Finally, we observe a somewhat technical issue of repetitions in the generated answers, as shown in the second row of Table 6.
\ No newline at end of file
diff --git a/asqafactoidquestionsmeetlongformanswers/images.zip b/asqafactoidquestionsmeetlongformanswers/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..3aac14e8586ccfb109a5185a9510abf746442076
--- /dev/null
+++ b/asqafactoidquestionsmeetlongformanswers/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:adae28e90638b8225af86a9a008c5e821504607caaa6cd4342be722d349b5085
+size 836895
diff --git a/asqafactoidquestionsmeetlongformanswers/layout.json b/asqafactoidquestionsmeetlongformanswers/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..207cf110fb263125a18dc36f53b0504371ddf4b1
--- /dev/null
+++ b/asqafactoidquestionsmeetlongformanswers/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c1b30a5e95e0c409a147febca80640f38ee69520107921918660e7700db0f245
+size 439689
diff --git a/asurveyofactivelearningfornaturallanguageprocessing/a50155db-50c9-4386-a449-b4a803284856_content_list.json b/asurveyofactivelearningfornaturallanguageprocessing/a50155db-50c9-4386-a449-b4a803284856_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..a626ea7ab25dd85ae328448ecfa7309c29b97b67
--- /dev/null
+++ b/asurveyofactivelearningfornaturallanguageprocessing/a50155db-50c9-4386-a449-b4a803284856_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9cf5122203fffc8f6d6532227fff2498d2cc96f23ad0af2b508f117262e485c8
+size 164266
diff --git a/asurveyofactivelearningfornaturallanguageprocessing/a50155db-50c9-4386-a449-b4a803284856_model.json b/asurveyofactivelearningfornaturallanguageprocessing/a50155db-50c9-4386-a449-b4a803284856_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..46fc28e5fa65b27b029e025251de0d3f19a9d981
--- /dev/null
+++ b/asurveyofactivelearningfornaturallanguageprocessing/a50155db-50c9-4386-a449-b4a803284856_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8f75f2a1a676c016e4d314e31be3cc8c18e38b59556845f7b68ea6f65500501c
+size 236068
diff --git a/asurveyofactivelearningfornaturallanguageprocessing/a50155db-50c9-4386-a449-b4a803284856_origin.pdf b/asurveyofactivelearningfornaturallanguageprocessing/a50155db-50c9-4386-a449-b4a803284856_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..51e0bfe08428a3bf632bde8716e39bf7b8ed2375
--- /dev/null
+++ b/asurveyofactivelearningfornaturallanguageprocessing/a50155db-50c9-4386-a449-b4a803284856_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f5d4afea5ad7b5723e9b0332cbfc2c99eaab3f4b0b21f2c656a779c98d3eee92
+size 654172
diff --git a/asurveyofactivelearningfornaturallanguageprocessing/full.md b/asurveyofactivelearningfornaturallanguageprocessing/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..aa9a9d20ddc6504123b37c5943d76c48cd9a16b2
--- /dev/null
+++ b/asurveyofactivelearningfornaturallanguageprocessing/full.md
@@ -0,0 +1,612 @@
+# A Survey of Active Learning for Natural Language Processing
+
+Zhisong Zhang, Emma Strubell, Eduard Hovy
+
+Language Technologies Institute, Carnegie Mellon University
+
+zhisongz@cs.cmu.edu, strubell@cmu.edu, hovy@cmu.edu
+
+# Abstract
+
+In this work, we provide a literature review of active learning (AL) for its applications in natural language processing (NLP). In addition to a fine-grained categorization of query strategies, we also investigate several other important aspects of applying AL to NLP problems. These include AL for structured prediction tasks, annotation cost, model learning (especially with deep neural models), and starting and stopping AL. Finally, we conclude with a discussion of related topics and future directions.
+
+# 1 Introduction
+
+The majority of modern natural language processing (NLP) systems are based on data-driven machine learning models. The success of these models depends on the quality and quantity of the available target training data. While these models can obtain impressive performance if given enough supervision, it is usually expensive to collect large amounts of annotations, especially considering that the labeling process can be laborious and challenging for NLP tasks (§3.2). Active learning (AL), an approach that aims to achieve high accuracy with fewer training labels by allowing a model to choose the data to be annotated and used for learning, is a widely-studied approach to tackle this labeling bottleneck (Settles, 2009).
+
+Active learning has been studied for more than twenty years (Lewis and Gale, 1994; Lewis and Catlett, 1994; Cohn et al., 1994, 1996) and there have been several literature surveys on this topic (Settles, 2009; Olsson, 2009; Fu et al., 2013; Aggarwal et al., 2014; Hino, 2020; Schröder and Niekler, 2020; Ren et al., 2021; Zhan et al., 2022). Nevertheless, there is still a lack of an AL survey for NLP that includes recent advances. Settles (2009) and Olsson (2009) provide great surveys covering AL for NLP, but these surveys are now more than a decade old. In the meantime, the field of NLP has been transformed by deep learning. While
+
+
+Figure 1: Counts of AL (left) and "neural" (right) papers in the ACL Anthology over the past twenty years.
+
+other more recent surveys cover deep active learning, they are either too specific, focused only on text classification (Schröder and Niekler, 2020), or too general, covering AI applications more broadly (Ren et al., 2021; Zhan et al., 2022). Moreover, applying AL to NLP tasks requires specific considerations, e.g. handling complex output structures and trade-offs in text annotation cost ( $\S 3$ ), which have not been thoroughly discussed.
+
+In order to provide an NLP-specific AL survey, we start by searching the ACL Anthology for AL-related papers. We simply search for the keyword "active" in paper titles and then perform manual filtering. We also gradually include relevant papers missed by keyword search and papers from other venues encountered by following reference links throughout the surveying process. The distribution of AL-related papers in the ACL Anthology over the past twenty years is shown in Figure 1, which also includes rough counts of works concerning neural models by searching for the word "neural" in titles. The overall trend is interesting. There is a peak around the years of 2009 and 2010, while the counts drop and fluctuate during the mid-2010s, which corresponds to the time when neural models became prominent in NLP. We observe a renewed interest in AL research in recent years, which is
+
+Algorithm 1 A typical active learning procedure.
+Input: An unlabeled data pool $\mathcal{U}$
+Output: The final labeled dataset $\mathcal{L}$ and trained model $\mathcal{M}$
+1: $\mathcal{L}, \mathcal{U} \gets$ seed($\mathcal{U}$) ▷ Start (§5.1)
+2: $\mathcal{M} \gets$ train($\mathcal{L}, \mathcal{U}$) ▷ Model Learning (§4)
+3: while not stopCriterion() do ▷ Stop (§5.2)
+4: $\mathcal{I} \gets$ query($\mathcal{M}, \mathcal{U}$) ▷ Query (§2, §3)
+5: $\mathcal{I}' \gets$ annotate($\mathcal{I}$) ▷ Annotate (§3)
+6: $\mathcal{U} \gets \mathcal{U} - \mathcal{I}$; $\mathcal{L} \gets \mathcal{L} \cup \mathcal{I}'$
+7: $\mathcal{M} \gets$ train($\mathcal{L}, \mathcal{U}$) ▷ Model Learning (§4)
+8: return $\mathcal{L}, \mathcal{M}$
+
+primarily focused on deep active learning (Ren et al., 2021; Zhan et al., 2022).
+
+# 1.1 Overview
+
+We mainly examine the widely utilized pool-based scenario (Lewis and Gale, 1994), where a pool of unlabeled data is available and instances are drawn from the pool to be annotated. Algorithm 1 illustrates a typical AL procedure, which consists of a loop of instance selection with the current model and model training with updated annotations. The remainder of this survey is organized corresponding to the main steps in this procedure:
+
+- In §2, we discuss the core aspect of AL: Query strategies, with a fine-grained categorization over informativeness (§2.1), representativeness (§2.2) and the combination of these two (§2.3).
+- In §3, we cover the two additional important topics of querying and annotating for NLP tasks: AL for structured prediction tasks (§3.1) and the cost of annotation with AL (§3.2).
+- In §4, we discuss model and learning: the query-successor model mismatch scenario (§4.1) and AL with advanced learning techniques (§4.2).
+- In §5, we examine methods for starting (§5.1) and stopping (§5.2) AL.
+
+In §6, we conclude with related and future directions. We also include representative AL works for various NLP tasks in Appendix A and some other aspects of AL for NLP in Appendix B.
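The procedure in Algorithm 1 can be rendered as a generic driver loop; the five callbacks below are hypothetical stand-ins for the components surveyed in the sections noted in the comments:

```python
def active_learning_loop(pool, seed_fn, train_fn, query_fn, annotate_fn, stop_fn):
    labeled, pool = seed_fn(pool)               # Start (§5.1)
    model = train_fn(labeled, pool)             # Model Learning (§4)
    while not stop_fn(model, labeled):          # Stop (§5.2)
        batch = query_fn(model, pool)           # Query strategy (§2, §3)
        labeled |= annotate_fn(batch)           # Annotate (§3)
        pool -= batch
        model = train_fn(labeled, pool)
    return labeled, model

labeled, _ = active_learning_loop(
    pool=set(range(10)),
    seed_fn=lambda p: ({0, 1}, p - {0, 1}),
    train_fn=lambda l, p: len(l),               # "model" = just the label count
    query_fn=lambda m, p: set(sorted(p)[:2]),   # select two instances per round
    annotate_fn=lambda b: b,                    # oracle simply labels the batch
    stop_fn=lambda m, l: len(l) >= 6,           # fixed labeling budget
)
assert labeled == {0, 1, 2, 3, 4, 5}
```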
+
+# 2 Query Strategies
+
+# 2.1 Informativeness
+
+Informativeness-based query strategies mostly assign an informative measure to each unlabeled instance individually. The instance(s) with the highest measure will be selected.
+
+# 2.1.1 Output Uncertainty
+
+Uncertainty sampling (Lewis and Gale, 1994) is probably the simplest and the most commonly utilized query strategy. It prefers the most uncertain instances judged by the model outputs. For probabilistic models, entropy-based (Shannon, 1948), least-confidence (Culotta and McCallum, 2005) and margin-sampling (Scheffer et al., 2001; Schein and Ungar, 2007) are three typical uncertainty sampling strategies (Settles, 2009). Schröder et al. (2022) revisit some of these uncertainty-based strategies with Transformer-based models and provide empirical results for text classification. For non-probabilistic models, similar ideas can be utilized, such as selecting the instances that are close to the decision boundary in an SVM (Schohn and Cohn, 2000; Tong and Koller, 2001).
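For a matrix of predicted class probabilities, the three measures can be sketched directly in NumPy (a minimal illustration, not tied to any particular model):

```python
import numpy as np

def least_confidence(p):                # higher = more uncertain
    return 1.0 - p.max(axis=1)

def margin(p):                          # smaller margin = more uncertain
    s = np.sort(p, axis=1)
    return s[:, -1] - s[:, -2]

def entropy(p):                         # higher = more uncertain
    return -(p * np.log(p + 1e-12)).sum(axis=1)

probs = np.array([[0.9, 0.05, 0.05],    # a confident prediction
                  [0.4, 0.35, 0.25]])   # an uncertain prediction
# uncertainty sampling would pick row 1 under all three measures
assert entropy(probs)[1] > entropy(probs)[0]
assert least_confidence(probs)[1] > least_confidence(probs)[0]
assert margin(probs)[1] < margin(probs)[0]
```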
+
+Another way to measure output uncertainty is to check the divergence of a model's predictions with respect to an instance's local region. If an instance is near the decision boundary, the model's outputs may be different within its local region. In this spirit, recent works examine different ways to check instances' local divergence, such as nearest-neighbour searches (Margatina et al., 2021), adversarial perturbation (Zhang et al., 2022b) and data augmentation (Jiang et al., 2020).
+
+# 2.1.2 Disagreement
+
+Uncertainty sampling usually considers the outputs of only one model. In contrast, disagreement-based strategies utilize multiple models and select the instances that are most disagreed among them. This is also a widely-adopted algorithm, of which the famous query-by-committee (QBC; Seung et al., 1992) is an example. The disagreement can be measured by vote entropy (Engelson and Dagan, 1996), KL-divergence (McCallum and Nigam, 1998) or variation ratio (Freeman, 1965).
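As an illustration, the vote entropy of Engelson and Dagan (1996) can be computed per instance from the committee's hard predictions (a minimal sketch; some formulations additionally normalize by $\log C$):

```python
import numpy as np
from collections import Counter

def vote_entropy(committee_votes):
    """Disagreement of a committee on one instance: entropy of the vote shares.
    `committee_votes` is the list of labels predicted by the committee members."""
    C = len(committee_votes)
    counts = np.array(list(Counter(committee_votes).values()))
    v = counts / C                      # fraction of votes per label
    return -(v * np.log(v)).sum()

assert vote_entropy(["A", "A", "A"]) == 0.0          # full agreement
assert vote_entropy(["A", "B", "C"]) > vote_entropy(["A", "A", "B"])
```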
+
+To construct the model committee, one can train a group of distinct models. Moreover, taking a Bayesian perspective over the model parameters is also applicable (Houlsby et al., 2011). Especially with neural models, Gal and Ghahramani (2016) show that dropout can approximate Bayesian inference and measure model uncertainty. This deep Bayesian method has been applied to AL for computer vision (CV) tasks (Gal et al., 2017) as well as various NLP tasks (Siddhant and Lipton, 2018; Shen et al., 2018; Shelmanov et al., 2021).
+
+# 2.1.3 Gradient
+
+Gradient information can be another signal for querying, with the motivation to choose the instances that would most strongly impact the model.
+
+In this strategy, informativeness is usually measured by the norm of the gradients. Since we do not know the gold labels for unlabeled instances, the loss is usually calculated as the expectation over all labels. This leads to the strategy of expected gradient length (EGL), introduced by Settles et al. (2007) and later applied to sequence labeling (Settles and Craven, 2008) and speech recognition (Huang et al., 2016). Zhang et al. (2017) explore a variation for neural networks where only the gradients of word embeddings are considered and show its effectiveness for text classification.
+
+# 2.1.4 Performance Prediction
+
+Predicting performance can be another indicator for querying. Ideally, the selected instances should be the ones that most reduce future errors if labeled and added to the training set. This motivates the expected error reduction strategy (Roy and McCallum, 2001), which chooses instances that lead to the least expected error if added to retrain a model. This strategy can be computationally costly since retraining is needed for each candidate.
+
+Recently, methods have been proposed to learn another model to select instances that lead to the fewest errors, usually measured on a held-out development set. Reinforcement learning and imitation learning have been utilized to train such policy models (Bachman et al., 2017; Fang et al., 2017; Liu et al., 2018a,b). This learning-to-select strategy may have some constraints. First, it requires labeled data (maybe from another domain) to train the policy. To mitigate this reliance, Vu et al. (2019) use the current task model as an imperfect annotator for AL simulations. Moreover, the learning signals may be unstable for complex tasks, as Koshorek et al. (2019) show for semantic tasks.
+
+A similar and simpler idea is to select the most erroneous or ambiguous instances with regard to the current task model, which can also be done with another performance-prediction model. Yoo and Kweon (2019) directly train a smaller model to predict the instance losses for CV tasks, which has also been adopted for NLP (Cai et al., 2021; Shen et al., 2021). In a similar spirit, Wang et al. (2017) employ a neural model to judge the correctness of the model prediction for SRL and Brantley et al. (2020) learn a policy to decide whether expert querying is required for each state in sequence labeling. Inspired by data maps (Swayamdipta et al., 2020), Zhang and Plank (2021) train a model to select ambiguous instances whose average correctness over the training iterations is close to a predefined threshold. For machine translation (MT), special techniques can be utilized to seek erroneous instances, such as using a backward translator to check round-trip translations (Haffari et al., 2009; Zeng et al., 2019) or quality estimation (Logacheva and Specia, 2014a,b).
+
+# 2.2 Representativeness
+
+Only considering the informativeness of individual instances may have the drawback of sampling bias (Dasgupta, 2011; Prabhu et al., 2019) and the selection of outliers (Roy and McCallum, 2001; Karamcheti et al., 2021). Therefore, representativeness, which measures how instances correlate with each other, is another major factor to consider when designing AL query strategies.
+
+# 2.2.1 Density
+
+With the motivation of avoiding outliers, density-based strategies prefer instances that are more representative of the unlabeled set. Selecting by $n$-gram or word counts (Ambati et al., 2010a; Zhao et al., 2020b) can be regarded as a simple form of density measurement. The most common measurement is an instance's average similarity to all other instances (McCallum and Nigam, 1998; Settles and Craven, 2008). Since it can be costly to calculate similarities for all instance pairs, considering only the $k$-nearest-neighbor instances has been proposed as an alternative (Zhu et al., 2008c, 2009).
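As an illustrative sketch (not the implementation of any cited work), assuming instances are already embedded as vectors, the average-similarity density and its $k$-nearest-neighbor approximation could look like:

```python
import numpy as np

def density_scores(X, k=None):
    """Representativeness of each instance in X (an n x d embedding matrix):
    average cosine similarity to every other instance, or, if k is given,
    to only its k most similar neighbors."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-normalize rows
    sim = Xn @ Xn.T                                    # pairwise cosine similarities
    if k is None:
        np.fill_diagonal(sim, 0.0)                     # exclude self-similarity
        return sim.sum(axis=1) / (len(X) - 1)
    np.fill_diagonal(sim, -np.inf)
    topk = np.sort(sim, axis=1)[:, -k:]                # k most similar neighbors
    return topk.mean(axis=1)
```

Under this scoring, outliers receive low density and would be deprioritized; the $k$-NN variant avoids the full $O(n^2)$ average when only local neighborhoods matter.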
+
+# 2.2.2 Discriminative
+
+Another direction is to select instances that are different from already labeled instances. Again, for NLP tasks, simple feature-based metrics can be utilized for this purpose by preferring instances with more unseen $n$-grams or out-of-vocabulary words (Eck et al., 2005; Bloodgood and Callison-Burch, 2010; Erdmann et al., 2019). Generally, similarity scores can also be utilized to select the instances that are less similar to the labeled set (Kim et al., 2006; Zhang et al., 2018; Zeng et al., 2019). Another interesting idea is to train a model to discriminate between the labeled and unlabeled sets. Gissin and Shalev-Shwartz (2019) directly train a classifier for this purpose, while adversarial training can also naturally be adopted (Sinha et al., 2019; Deng et al., 2018). In domain adaptation scenarios, the same motivation leads to the usage of a domain separator to filter instances (Rai et al., 2010).
+
+# 2.2.3 Batch Diversity
+
+Ideally, only the single most useful instance would be selected in each iteration. However, it is more efficient and practical to adopt batch-mode AL (Settles, 2009), where a batch of instances is selected each time. In this case, we need to consider the dissimilarities not only between selected instances and labeled ones but also within the selected batch.
+
+To select a batch of diverse instances, there are two common approaches. 1) Iterative selection collects the batch in an iterative greedy way (Brinker, 2003; Shen et al., 2004). In each iteration, an instance is selected by comparing it with previously chosen instances to avoid redundancy. Some more advanced diversity-based criteria, like coreset (Geifman and El-Yaniv, 2017; Sener and Savarese, 2018) and determinantal point processes (Shi et al., 2021), can also be approximated in a similar way. 2) Clustering-based methods partition the unlabeled data into clusters and select instances among them (Tang et al., 2002; Xu et al., 2003; Shen et al., 2004; Nguyen and Smeulders, 2004; Zhdanov, 2019; Yu et al., 2022). Since the chosen instances come from different clusters, diversity can be achieved to some extent.
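A minimal sketch of the first, iterative approach (the scoring rule is illustrative, not a specific cited method): greedily add the instance whose informativeness, penalized by its maximum similarity to the instances already in the batch, is highest.

```python
import numpy as np

def greedy_diverse_batch(utility, sim, batch_size, beta=1.0):
    """Iteratively build a batch: at each step pick the unselected instance
    maximizing utility minus beta times its maximum similarity to the
    instances already chosen (a simple redundancy penalty).
    utility: (n,) informativeness scores; sim: (n, n) similarity matrix."""
    selected = [int(np.argmax(utility))]          # start from the single best
    while len(selected) < batch_size:
        penalty = sim[:, selected].max(axis=1)    # closeness to the current batch
        score = utility - beta * penalty
        score[selected] = -np.inf                 # never re-pick chosen instances
        selected.append(int(np.argmax(score)))
    return selected
```

Clustering-based methods achieve a similar effect by drawing the batch from different clusters instead of applying an explicit pairwise penalty.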
+
+For the calculation of similarity, in addition to comparing input features or intermediate neural representations, other methods have also been investigated, such as utilizing model-based similarity (Hazra et al., 2021), gradients (Ash et al., 2020; Kim, 2020), and masked-LM surprisal embeddings (Yuan et al., 2020).
+
+# 2.3 Hybrid
+
+Unsurprisingly, informativeness and representativeness can be combined for instance querying, leading to hybrid strategies. The simplest combination merges multiple criteria into one score, either by a weighted sum (Kim et al., 2006; Chen et al., 2011) or by multiplication (Settles and Craven, 2008; Zhu et al., 2008c).
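For concreteness, the two simple combinations can be sketched as follows (the function names and weighting scheme are illustrative; the multiplicative form with a $\beta$ exponent is in the spirit of information-density-style scoring):

```python
def combine_weighted_sum(informativeness, representativeness, alpha=0.5):
    """Weighted-sum combination of two normalized criteria."""
    return alpha * informativeness + (1 - alpha) * representativeness

def combine_product(informativeness, representativeness, beta=1.0):
    """Multiplicative combination; beta trades off how strongly the
    representativeness (density) term modulates the base utility."""
    return informativeness * representativeness ** beta
```

Both assume the two criteria are normalized to comparable ranges before combination.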
+
+There are several strategies that naturally integrate multiple criteria. Examples include (uncertainty-)weighted clustering (Zhdanov, 2019), diverse gradient selection (Ash et al., 2020; Kim, 2020), where the gradients themselves contain uncertainty information (§2.1.3), and determinantal point processes (DPP) with a quality-diversity decomposition (Shi et al., 2021).
+
+Moreover, multi-step querying, which applies multiple criteria in series, is another natural hybrid method. For example, one can consider first filtering certain highly uncertain instances and then performing clustering to select a diverse batch from them (Xu et al., 2003; Shen et al., 2004; Mirroshandel et al., 2011). An alternative strategy of selecting the most uncertain instances per cluster has also been utilized (Tang et al., 2002).
+
+Instead of statically merging into one query strategy, dynamic combination may better fit the AL learning process, since different strategies may excel at different AL phases. For example, at the start of AL, uncertainty sampling may be unreliable due to little labeled data, and representativeness-based methods could be preferable, whereas in later stages where we have enough data and target finer-grained decision boundaries, uncertainty may be a suitable strategy. DUAL (Donmez et al., 2007) is such a dynamic strategy that can switch from a density-based selector to an uncertainty-based one. Ambati et al. (2011b) further propose GraDUAL, which gradually switches strategies within a switching range. Wu et al. (2017) adopt a similar idea with a pre-defined monotonic function to control the combination weights.
+
+# 3 Query and Annotation
+
+# 3.1 AL for Structured Prediction
+
+AL has been widely studied for classification tasks, while in NLP, many tasks involve structured prediction. In these tasks, the system needs to output a structured object consisting of a group of interdependent variables (Smith, 2011), such as a label sequence or a parse tree. Special care needs to be taken when querying and annotating for these more complex tasks (Thompson et al., 1999). One main decision is whether to annotate full structures for input instances (§3.1.1), or to allow the annotation of only partial structures (§3.1.2).
+
+# 3.1.1 Full-structure AL
+
+First, if we regard the full output structure of an instance as a whole and perform query and annotation at the full-instance level, then AL for structured prediction tasks is not very different from AL for simpler classification tasks. Nevertheless, considering that the output space is usually exponentially large and infeasible to enumerate explicitly, querying may require further inspection.
+
+Some uncertainty sampling strategies, such as entropy, need to consider the full output space. Instead of infeasible explicit enumeration, dynamic-programming algorithms similar to those used in decoding and inference can be utilized, such as algorithms for tree entropy (Hwa, 2000, 2004) and sequence entropy (Mann and McCallum, 2007; Settles and Craven, 2008).
+
+Instead of considering the full output space, top-$k$ approximation is a simpler alternative that takes the $k$-best predicted structures as a proxy. This is also a frequently utilized method (Tang et al., 2002; Kim et al., 2006; Rocha and Sanchez, 2013).
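A sketch of this approximation (illustrative, not a specific cited implementation): renormalize the probabilities of the $k$-best structures and compute entropy over this truncated distribution.

```python
import math

def topk_entropy(kbest_logprobs):
    """Approximate sequence entropy using only the model's k-best outputs:
    renormalize their probabilities and compute entropy over the resulting
    truncated distribution (a proxy for the intractable full output space)."""
    probs = [math.exp(lp) for lp in kbest_logprobs]
    z = sum(probs)
    probs = [p / z for p in probs]
    return -sum(p * math.log(p) for p in probs if p > 0)
```

The approximation is tight when the model's probability mass concentrates on its top predictions, which is common for confident structured predictors.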
+
+For disagreement-based strategies, the measurement of partial disagreement may be required, since full-match can be too strict for structured objects. Fine-grained evaluation scores can be reasonable choices for this purpose, such as F1 score for sequence labeling (Ngai and Yarowsky, 2000).
+
+Since longer instances usually have larger uncertainties and might be preferred, length normalization is a commonly-used heuristic to avoid this bias (Tang et al., 2002; Hwa, 2000, 2004; Shen et al., 2018). Yet, Settles and Craven (2008) argue that longer sequences should not be discouraged and may contain more information.
+
+Instead of directly specifying the full utility of an instance, aggregation is also often utilized by gathering utilities of its sub-structures, usually along the factorization of the structured modeling. For example, the sequence uncertainty can be obtained by summing or averaging the uncertainties of all the tokens (Settles and Craven, 2008). Other aggregation methods are also applicable, such as weighted sum by word frequency (Ringger et al., 2007) or using only the most uncertain (least probable) one (Myers and Palmer, 2021; Liu et al., 2022).
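The aggregation choices above can be sketched with a hypothetical helper (using the negative log-probability of each predicted token label as the per-token uncertainty):

```python
import math

def sequence_uncertainty(token_probs, mode="mean"):
    """Aggregate per-token uncertainties (negative log-probability of the
    predicted label) into one sequence score.
      'sum'  - total uncertainty; favors long sequences
      'mean' - length-normalized; removes the length bias
      'max'  - uncertainty of the single least-probable token
    """
    unc = [-math.log(p) for p in token_probs]
    if mode == "sum":
        return sum(unc)
    if mode == "mean":
        return sum(unc) / len(unc)
    if mode == "max":
        return max(unc)
    raise ValueError(mode)
```

The `mean` mode corresponds to the length-normalization heuristic discussed above, while `sum` reflects the view that longer sequences may simply contain more information.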
+
+# 3.1.2 Partial-structure AL
+
+A structured object can be decomposed into smaller sub-structures with different training utilities. For example, in a dependency tree, functional relations are usually easier to judge while prepositional attachment links may be more informative for the learning purpose. This naturally leads to AL with partial structures, where querying and annotating can be performed at the sub-structure level.
+
+Factorizing full structures into the finest-grained sub-structures and regarding them as the annotation units could be a natural choice. Typical examples include individual tokens for sequence labeling (Marcheggiani and Artières, 2014), word boundaries for segmentation (Neubig et al., 2011; Li et al., 2012b), syntactic-unit pairs for dependency parsing (Sassano and Kurohashi, 2010) and mention pairs for coreference (Gasperin, 2009; Miller et al., 2012; Sachan et al., 2015). The querying strategy for the sub-structures can be similar to the classification cases, though inference is usually needed to calculate marginal probabilities. Moreover, if full structures are desired as annotation outputs, semi-supervised techniques such as self-training (§4.2) could be utilized to assign pseudo labels to the unannotated parts (Tomanek and Hahn, 2009b; Majidi and Crane, 2013).
+
+Often, choosing larger sub-structures is preferable, since partial annotation still requires understanding larger contexts, and frequently jumping among different contexts may require more reading time (§3.2.1). Moreover, increasing the sampling granularity may mitigate the missed-class effect, where certain classes may be overlooked (Tomanek et al., 2009). Typical examples of larger sub-structures include sub-sequences for sequence labeling (Shen et al., 2004; Chaudhary et al., 2019; Radmard et al., 2021), word-wise head edges for dependency parsing (Flannery and Mori, 2015; Li et al., 2016), neighborhood pools (Laws et al., 2012) or mention-wise anaphoric links (Li et al., 2020; Espeland et al., 2020) for coreference, and phrases for MT (Bloodgood and Callison-Burch, 2010; Miura et al., 2016; Hu and Neubig, 2021). In addition to increasing granularity, grouping queries can also make annotation easier, such as adopting a two-stage selection that chooses uncertain tokens from uncertain sentences (Mirroshandel and Nasr, 2011; Flannery and Mori, 2015) or selecting nearby instances in a row (Miller et al., 2012).
+
+For AL with partial structures, output modeling is of particular interest since the model needs to learn from partial annotations. If directly using local discriminative models, where each sub-structure is decided independently, learning with partial annotations is straightforward since the annotations are already complete from the models' perspective (Neubig et al., 2011; Flannery and Mori, 2015). For more complex models that consider interactions among output sub-structures, such as global models, special algorithms are required to learn from incomplete annotations (Scheffer et al., 2001; Wanvarie et al., 2011; Marcheggiani and Artières, 2014; Li et al., 2016). One advantage of these more complex models is the interaction between the partial labels and the remaining parts. For example, considering the output constraints for structured prediction tasks, combining the annotated parts and the constraints may reduce the output space of the other parts and thus lower their uncertainties, leading to better queries (Roth and Small, 2006; Sassano and Kurohashi, 2010; Mirroshandel and Nasr, 2011). More generally, the annotation of one label can immediately influence others with cheap re-inference, which can help batch-mode selection (Marcheggiani and Artières, 2014) and interactive correction (Culotta and McCallum, 2005).
+
+In addition to classical structured-prediction tasks, classification tasks can also be cast as structured prediction with partial labeling. Partial feedback is one such scheme, adopted to simplify annotation for classification tasks, especially when there is a large number of target labels. For example, annotators may find it much easier to answer yes/no questions (Hu et al., 2019) or rule out negative classes (Lippincott and Van Durme, 2021) than to identify the correct label.
+
+# 3.2 Annotation Cost
+
+AL mainly aims to reduce real annotation cost, and we discuss several important topics related to this goal.
+
+# 3.2.1 Cost Measurement
+
+Most AL works adopt a simple unit-cost measurement, that is, assuming that annotating each instance requires the same cost. Nevertheless, the annotation effort for different instances may vary (Settles et al., 2008). For example, longer sentences may cost more to annotate than shorter ones. Because of this, many works assign unit costs to tokens instead of sequences, which may still be inaccurate. In particular, AL tends to select difficult and ambiguous instances, which may require more annotation effort (Hachey et al., 2005; Lynn et al., 2012). It is important to measure annotation cost properly, since the measurement directly affects the evaluation of AL algorithms: comparisons of query strategies may change under different cost measurements (Haertel et al., 2008a; Bloodgood and Callison-Burch, 2010; Chen et al., 2015).
+
+Probably the best cost measurement is the actual annotation time (Baldridge and Palmer, 2009). In particular, when cost comparisons are not straightforward, such as comparing annotating data against writing rules (Ngai and Yarowsky, 2000) or partial against full annotations (§3.1; Flannery and Mori, 2015; Li et al., 2016, 2020), time-based evaluation is an ideal choice. This requires actual annotation exercises rather than simulations.
+
+Since cost measurement can also be used for querying (§3.2.2), it would be helpful to be able to predict the real cost before annotating. This can be cast as a regression problem, for which several works learn a linear cost model based on input features (Settles et al., 2008; Ringger et al., 2008; Haertel et al., 2008a; Arora et al., 2009).
+
+# 3.2.2 Cost-sensitive Querying
+
+Given the goal of reducing actual cost, querying strategies should also take it into consideration. That is, we want to select not only high-utility instances but also low-cost ones. A natural cost-sensitive querying strategy is return-on-investment (ROI; Haertel et al., 2008b; Settles et al., 2008; Donmez and Carbonell, 2008). In this strategy, instances with a higher net benefit per unit cost are preferred, which is equivalent to dividing the original querying utility by the cost measure. Tomanek and Hahn (2010) evaluate the effectiveness of ROI together with two other strategies: constraining the maximal cost budget per instance and weighted rank combination. Haertel et al. (2015) provide further analytic and empirical evaluation, showing that ROI can reduce total cost.
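A minimal sketch of ROI-style ranking (the names are illustrative; costs could come from a learned cost model, §3.2.1):

```python
def roi_rank(utilities, costs):
    """Return candidate indices sorted by return-on-investment:
    querying utility per unit of (predicted) annotation cost."""
    roi = [u / c for u, c in zip(utilities, costs)]
    return sorted(range(len(roi)), key=lambda i: roi[i], reverse=True)
```

Under this ranking, a slightly less informative but much cheaper instance can outrank the highest-utility one.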
+
+In real AL scenarios, things can be much more complex. For example, there can be multiple annotators with different expertise (Baldridge and Palmer, 2009; Huang et al., 2017; Cai et al., 2020), and the annotators may refuse to answer or make mistakes (Donmez and Carbonell, 2008). Being aware of these scenarios, Donmez and Carbonell (2008) propose proactive learning to jointly select the optimal oracle and instance. Li et al. (2017) further extend proactive learning to NER tasks.
+
+# 3.2.3 Directly Reducing Cost
+
+In addition to better query strategies, there are other ways of directly reducing annotation cost, such as computer-assisted annotation. In AL, models and annotators usually interact in an indirect way where models only query the instances to present to the annotators, while there could be closer interactions.
+
+Pre-annotation is such an idea, where not only the raw data instances but also the model's best or top-$k$ predictions are sent to the annotators to help them make decisions. If the model's predictions are reasonable, the annotators can simply select one or make a few corrections to obtain the gold annotations rather than creating them from scratch.
+
+This method has been shown effective when combined with AL (Baldridge and Osborne, 2004; Vlachos, 2006; Ringger et al., 2008; Skeppstedt, 2013; Canizares-Díaz et al., 2021). Post-editing for MT is also a typical example (Dara et al., 2014).
+
+Moreover, the models could provide help at actual annotation time. For example, Culotta and McCallum (2005) present an interactive AL system where the user's corrections propagate to the model, which generates new predictions for the user to further refine. Interactive machine translation (IMT) adopts a similar idea, where the annotator corrects the first erroneous character, based on which the model regenerates its prediction. AL has also been combined with IMT to further reduce manual effort (González-Rubio et al., 2012; Peris and Casacuberta, 2018; Gupta et al., 2021).
+
+# 3.2.4 Wait Time
+
+In AL iterations, the annotators may need to wait for the training and querying steps (Line 3 and 4 in Algorithm 1). This wait time may bring some hidden costs, thus more efficient querying and training would be preferable for faster turnarounds.
+
+To speed up querying, sub-sampling is a simple method to deal with large unlabeled pools (Roy and McCallum, 2001; Ertekin et al., 2007; Tsvigun et al., 2022). For some querying strategies, pre-calculating and caching unchanging information can also help (Ashrafi Asli et al., 2020; Citovsky et al., 2021). In addition, approximation with $k$-nearest neighbors can be utilized to calculate density (Zhu et al., 2009) or to search for instances after adversarial attacks (Ru et al., 2020).
+
+To reduce training time, a seemingly reasonable strategy is to apply incremental training across AL iterations, that is, continuing training previous models on the new instances. However, Ash and Adams (2020) show that this type of warm-start may lead to sub-optimal performance for neural models and many recent AL works usually train models from scratch (Hu et al., 2019; Ein-Dor et al., 2020). Another method is to use an efficient model for querying and a more powerful model for final training. However, this might lead to sub-optimal results, which will be discussed in §4.1.
+
+Another idea to reduce wait time is to simply allow querying with stale information. In fact, batch-mode AL (§2.2.3) is such an example, where instances in the same batch are queried with the same model. Haertel et al. (2010) propose parallel AL, which maintains separate loops of annotating, training, and scoring, and allows dynamic and parameterless instance selection at any time.
+
+# 4 Model and Learning
+
+# 4.1 Model Mismatch
+
+While it is natural to adopt the same best-performing model throughout the AL process, there are cases where the query and final (successor) models mismatch (Lewis and Catlett, 1994). Firstly, more efficient models are preferable for querying to reduce wait time (§3.2.4). Moreover, since data usually outlive models, re-using AL-collected data to train another model is often desired (Baldridge and Osborne, 2004; Tomanek et al., 2007). Several works show that model mismatch may make the gains from AL negligible or even negative (Baldridge and Osborne, 2004; Lowell et al., 2019; Shelmanov et al., 2021), which raises concerns about the utilization of AL in practice.
+
+For efficiency purposes, distillation can be utilized to improve querying efficiency while keeping reasonable AL performance. Shelmanov et al. (2021) show that using a smaller distilled version of a pre-trained model for querying does not lead to much performance drop. Tsvigun et al. (2022) combine this idea with pseudo-labeling and sub-sampling to further reduce computational cost. Similarly, Nguyen et al. (2022) keep a smaller proxy model for querying and synchronize the proxy with the main model by distillation.
+
+# 4.2 Learning
+
+AL can be combined with other advanced learning techniques to further reduce required annotations.
+
+Semi-supervised learning. Since AL usually assumes an unlabeled pool, semi-supervised learning is a natural fit. Combining the two is not a new idea: McCallum and Nigam (1998) adopt the EM algorithm to estimate the outputs of unlabeled data and utilize them for learning. This type of self-training or pseudo-labeling technique is often utilized in AL (Tomanek and Hahn, 2009b; Majidi and Crane, 2013; Yu et al., 2022). With a similar motivation, Dasgupta and Ng (2009) use an unsupervised algorithm to identify unambiguous instances to train an active learner. For the task of word alignment, which can be learned in an unsupervised manner, incorporating supervision with AL can bring further improvements in a data-efficient way (Ambati et al., 2010b,c).
+
+Transfer learning. AL can be easily combined with transfer learning, another technique to reduce required annotations. Utilizing pre-trained models is already a good example (Ein-Dor et al., 2020; Yuan et al., 2020; Tamkin et al., 2022), and continual training (Gururangan et al., 2020) can also be applied (Hua and Wang, 2022; Margatina et al., 2022). Moreover, transductive learning is commonly combined with AL by transferring learning signals from different domains (Chan and Ng, 2007; Shi et al., 2008; Rai et al., 2010; Saha et al., 2011; Wu et al., 2017; Kasai et al., 2019; Yuan et al., 2022) or languages (Qian et al., 2014; Fang and Cohn, 2017; Fang et al., 2017; Chaudhary et al., 2019, 2021; Moniz et al., 2022). In addition to the task model, the model-based query policy (§2.1.4) is also often obtained with transfer learning.
+
+Weak supervision. AL can also be combined with weakly supervised learning. Examples include learning from inputs and execution results for semantic parsing (Ni et al., 2020), labeling based on identical structure vectors for entity representations (Qian et al., 2020), learning from gazetteers and dictionaries for sequence labeling (Brantley et al., 2020) and interactively discovering labeling rules (Zhang et al., 2022a).
+
+Data augmentation. Augmentation is also applicable in AL and has been explored with iterative back-translation (Zhao et al., 2020b), mixup for sequence labeling (Zhang et al., 2020) and phrase-to-sentence augmentation for MT (Hu and Neubig, 2021). As discussed in §2.1.1, augmentation can also be helpful for instance querying (Jiang et al., 2020; Zhang et al., 2022b). Another interesting scenario involving augmentation and AL is query synthesis, which directly generates instances to be annotated instead of selecting existing unlabeled ones. Though synthesizing texts is still a hard problem generally, there have been successful applications for simple classification tasks (Schumann and Rehbein, 2019; Quteineh et al., 2020).
+
+# 5 Starting and Stopping AL
+
+# 5.1 Starting AL
+
+While there are cases where there is already enough labeled data to train a reasonable model and AL is utilized to provide further improvements (Bloodgood and Callison-Burch, 2010; Geifman and El-Yaniv, 2017), we often face the cold-start problem, where instances need to be selected without a reasonable model. In particular, how to select the seed data to start the AL process is an interesting question, which may greatly influence performance in the initial AL stages (Tomanek et al., 2009; Horbach and Palmer, 2016).
+
+Random sampling is probably the most commonly utilized strategy, which is reasonable since it preserves the original data distribution. Some representativeness-based querying strategies (§2.2) can also be utilized; for example, selecting points near the clustering centroids is a way to obtain representative and diverse seeds (Kang et al., 2004; Zhu et al., 2008c; Hu et al., 2010). Moreover, some advanced learning techniques (§4.2) can also be helpful here, such as transfer learning (Wu et al., 2017) and unsupervised methods (Vlachos, 2006; Dasgupta and Ng, 2009). In addition, language models can be useful tools, with which Dligach and Palmer (2011) select low-probability words in the context of word sense disambiguation and Yuan et al. (2020) choose cluster centers with surprisal embeddings from pre-trained contextualized LMs.
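A sketch of centroid-based seed selection (a plain k-means written out for self-containedness; real implementations would typically use a library clusterer, and the embeddings are assumed given):

```python
import numpy as np

def centroid_seeds(X, k, n_iter=20, seed=0):
    """Pick k seed instances by clustering the pool (plain k-means) and
    taking the instance closest to each cluster centroid."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]   # random init
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # squared distances
        assign = d.argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = X[assign == j].mean(axis=0)     # recompute centroid
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return sorted(set(int(d[:, j].argmin()) for j in range(k)))
```

The returned instances are both representative (near dense regions) and diverse (one per cluster), making them reasonable cold-start seeds.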
+
+# 5.2 Stopping AL
+
+When adopting AL in practice, it would be desirable to know when to stop AL, i.e., when the model performance is already near its upper limit, before exhausting the budget. For this purpose, a stopping criterion is needed, which checks whether certain metrics satisfy certain conditions. There are simple heuristics: for example, AL can be stopped when all unlabeled instances are no closer than any of the support vectors with an SVM (Schohn and Cohn, 2000; Ertekin et al., 2007), or when no new $n$-grams remain in the unlabeled set for MT (Bloodgood and Callison-Burch, 2010). Nevertheless, these are specific to the underlying models or target tasks. For the design of a general stopping criterion, there are three main aspects to consider: the metric, the dataset, and the condition.
+
+For the metric, measuring performance on a development set seems a natural option. However, the results would be unstable if this set is too small, and it would be impractical to assume a large development set. Cross-validation on the training set is also problematic since the data labeled by AL is usually biased. In this case, metrics from the query strategies can be utilized. Examples include uncertainty or confidence (Zhu and Hovy, 2007; Vlachos, 2008), disagreement (Tomanek et al., 2007; Tomanek and Hahn, 2008; Olsson and Tomanek, 2009), estimated performance (Laws and Schütze, 2008), expected error (Zhu et al., 2008a), confidence variation (Ghayoomi, 2010), as well as actual performance on the selected instances (Zhu and Hovy, 2007). Moreover, comparing the predictions between consecutive AL iterations is another reasonable option (Zhu et al., 2008b; Bloodgood and Vijay-Shanker, 2009a).
+
+The dataset on which to calculate the stopping metric requires careful selection; the results could be unstable if an improper set is adopted (Tomanek and Hahn, 2008). Many works suggest that a separate unlabeled dataset should be utilized (Tomanek and Hahn, 2008; Vlachos, 2008; Bloodgood and Vijay-Shanker, 2009a; Beatty et al., 2019; Kurlandski and Bloodgood, 2022). Since the stopping metrics usually do not rely on gold labels, this dataset could potentially be very large to provide more stable results, though wait time would be another factor to consider in this case (§3.2.4).
+
+The condition to stop AL usually compares the metrics to a pre-defined threshold. Earlier works only look at the metric at the current iteration, for example, stopping if the uncertainty or the error is less than the threshold (Zhu and Hovy, 2007). In this case, the threshold is hard to specify since it depends on the model and the task. Zhu et al. (2008b) cascade multiple stopping criteria to mitigate this reliance. A more stable option is to track the change of the metrics over several AL iterations, such as stopping when the confidence consistently drops (Vlachos, 2008), the rate of change flattens (Laws and Schütze, 2008), or the predictions stabilize across iterations (Bloodgood and Vijay-Shanker, 2009a; Bloodgood and Grothendieck, 2013).
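The change-tracking condition can be sketched as a simple window test (the threshold and window size are hypothetical; the metric could be confidence, disagreement, or prediction agreement between iterations):

```python
def should_stop(metric_history, window=3, tol=0.01):
    """Stop AL when the stopping metric has changed by less than `tol`
    over each of the last `window` iterations, i.e. its change has
    flattened out."""
    if len(metric_history) < window + 1:
        return False
    recent = metric_history[-(window + 1):]
    deltas = [abs(b - a) for a, b in zip(recent, recent[1:])]
    return max(deltas) < tol
```

Compared to a single absolute threshold, this kind of trend-based condition is less sensitive to the scale of the metric, which varies across models and tasks.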
+
+Pullar-Strecker et al. (2021) provide an empirical comparison of common stopping criteria and serve as a useful reference. Moreover, stopping AL is closely related to performance prediction and early stopping. The latter can be of particular interest to AL, since learning in early AL stages faces the low-resource problem and performing early stopping there may also require careful consideration.
+
+# 6 Related Topics and Future Directions
+
+# 6.1 Related Topics
+
+There are many related topics that could be explored together with AL. Other data-efficient learning methods such as semi-supervised and transfer learning are naturally compatible with AL (§4.2). Curriculum learning (Bengio et al., 2009), which arranges training instances in a meaningful order, may also be integrated with AL (Platanios et al., 2019; Zhao et al., 2020a; Jafarpour et al., 2021). Uncertainty (Gawlikowski et al., 2021), outlier detection (Hodge and Austin, 2004) and performance prediction (Xia et al., 2020) are related to instance querying. Crowdsourcing can be adopted to further reduce annotation cost (§B). Model efficiency (Menghani, 2021) is crucial to reduce wait time (§3.2.4). AL is a typical type of human-in-the-loop framework (Wang et al., 2021), and it will be interesting to explore more human-computer interaction techniques in AL.
+
+# 6.2 Future Directions
+
+Complex tasks. AL is mostly adopted for simple classification, while there are many more complex tasks in NLP. For example, apart from MT, generation tasks have been much less thoroughly explored with AL. Tasks with more complex inputs, such as NLI and QA, also require extra care when using AL, since even obtaining unlabeled data is non-trivial. Nevertheless, preliminary work has shown that AL can be helpful for data collection for such tasks (Mussmann et al., 2020).
+
+Beyond direct target labeling. In addition to directly annotating target labels, AL can also be utilized in other ways to help the target task, such as labeling features or rationales (Melville and Sindhwani, 2009; Druck et al., 2009; Sharma et al., 2015), annotating explanations (Liang et al., 2020), evaluation (Mohankumar and Khapra, 2022) and rule discovery (Zhang et al., 2022a).
+
+AL in practice. Most AL works simulate annotations on an existing labeled dataset. Though this method is convenient for algorithm development, it ignores several challenges of applying AL in practice. As discussed in this survey, real annotation cost (§3.2.1), efficiency and wait time (§3.2.4), data reuse (§4.1) and starting and stopping (§5) are all important practical aspects which may not emerge in simulation. Moreover, since the AL process usually cannot be repeated multiple times, how to select the query strategy and other hyper-parameters remains a great challenge. It will be critical to address these issues to bring AL into practical use (Rehbein et al., 2010; Attenberg and Provost, 2011; Settles, 2011; Lowell et al., 2019) and make it more widely utilized (Tomanek and Olsson, 2009).
+
+# Limitations
+
+There are several limitations of this work. First, we mainly focus on AL works in the context of NLP, while AL works in other fields may also present ideas that could be utilized for NLP tasks. For example, many querying strategies originally developed for CV tasks could be naturally adopted for applications in NLP (Ren et al., 2021). We encourage readers to refer to the other surveys mentioned in §1 for additional related AL works. Moreover, the descriptions in this survey are mostly brief in order to provide more comprehensive coverage within page limits. We mainly present the works in meaningful structured groups rather than plainly describing them in unstructured sequences, and we hope that this work can serve as an index through which more details can be found in the corresponding works. Finally, this is a pure survey without any experiments or empirical results. It would be helpful to perform comparative experiments over different AL strategies, which could provide more concrete guidance (Zhan et al., 2022). We leave this to future work.
+
+# References
+
+Charu C Aggarwal, Xiangnan Kong, Quanquan Gu, Jiawei Han, and S Yu Philip. 2014. Active learning: A survey. In Data Classification, pages 599-634. Chapman and Hall/CRC.
+Vamshi Ambati, Sanjika Hewavitharana, Stephan Vogel, and Jaime Carbonell. 2011a. Active learning with multiple annotations for comparable data classification task. In Proceedings of the 4th Workshop on Building and Using Comparable Corpora: Comparable Corpora and the Web, pages 69-77, Portland, Oregon. Association for Computational Linguistics.
+Vamshi Ambati, Stephan Vogel, and Jaime Carbonell. 2010a. Active learning and crowd-sourcing for machine translation. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10), Valletta, Malta. European Language Resources Association (ELRA).
+Vamshi Ambati, Stephan Vogel, and Jaime Carbonell. 2010b. Active learning-based elicitation for semi-supervised word alignment. In Proceedings of the ACL 2010 Conference Short Papers, pages 365-370, Uppsala, Sweden. Association for Computational Linguistics.
+Vamshi Ambati, Stephan Vogel, and Jaime Carbonell. 2010c. Active semi-supervised learning for improving word alignment. In Proceedings of the NAACL HLT 2010 Workshop on Active Learning for Natural Language Processing, pages 10-17, Los Angeles, California. Association for Computational Linguistics.
+Vamshi Ambati, Stephan Vogel, and Jaime Carbonell. 2011b. Multi-strategy approaches to active learning for statistical machine translation. In Proceedings of Machine Translation Summit XIII: Papers, Xiamen, China.
+Sankaranarayanan Ananthakrishnan, Rohit Prasad, David Stallard, and Prem Natarajan. 2010a. Discriminative sample selection for statistical machine translation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 626-635, Cambridge, MA. Association for Computational Linguistics.
+Sankaranarayanan Ananthakrishnan, Rohit Prasad, David Stallard, and Prem Natarajan. 2010b. A semi-supervised batch-mode active learning strategy for improved statistical machine translation. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning, pages 126-134, Uppsala, Sweden. Association for Computational Linguistics.
+Shilpa Arora, Eric Nyberg, and Carolyn P. Rosé. 2009. Estimating annotation cost for active learning in a multi-annotator environment. In Proceedings of the NAACL HLT 2009 Workshop on Active Learning for Natural Language Processing, pages 18-26, Boulder, Colorado. Association for Computational Linguistics.
+Jordan Ash and Ryan P Adams. 2020. On warm-starting neural network training. Advances in Neural Information Processing Systems, 33:3884-3894.
+Jordan T. Ash, Chicheng Zhang, Akshay Krishnamurthy, John Langford, and Alekh Agarwal. 2020. Deep batch active learning by diverse, uncertain gradient lower bounds. In International Conference on Learning Representations.
+Seyed Arad Ashrafi Asli, Behnam Sabeti, Zahra Majdabadi, Preni Golazizian, Reza Fahmi, and Omid Momenzadeh. 2020. Optimizing annotation effort using active learning strategies: A sentiment analysis case study in Persian. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 2855-2861, Marseille, France. European Language Resources Association.
+Jordi Atserias, Giuseppe Attardi, Maria Simi, and Hugo Zaragoza. 2010. Active learning for building a corpus of questions for parsing. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10), Valletta, Malta. European Language Resources Association (ELRA).
+Josh Attenberg and Seyda Ertekin. 2013. Class imbalance and active learning. Imbalanced Learning: Foundations, Algorithms, and Applications, pages 101-149.
+
+Josh Attenberg and Foster Provost. 2011. Inactive learning? difficulties employing active learning in practice. ACM SIGKDD Explorations Newsletter, 12(2):36-41.
+Philip Bachman, Alessandro Sordoni, and Adam Trischler. 2017. Learning algorithms for active learning. In International Conference on Machine Learning, pages 301-310. PMLR.
+Guirong Bai, Shizhu He, Kang Liu, Jun Zhao, and Zaiqing Nie. 2020. Pre-trained language model based active learning for sentence matching. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1495-1504, Barcelona, Spain (Online). International Committee on Computational Linguistics.
+Jason Baldridge and Miles Osborne. 2003. Active learning for HPSG parse selection. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 17-24.
+Jason Baldridge and Miles Osborne. 2004. Active learning and the total cost of annotation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 9-16, Barcelona, Spain. Association for Computational Linguistics.
+Jason Baldridge and Alexis Palmer. 2009. How well does active learning actually work? Time-based evaluation of cost-reduction strategies for language documentation. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 296-305, Singapore. Association for Computational Linguistics.
+Garrett Beatty, Ethan Kochis, and Michael Bloodgood. 2019. The use of unlabeled data versus labeled data for stopping active learning for text classification. In 2019 IEEE 13th International Conference on Semantic Computing (ICSC), pages 287-294. IEEE.
+Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41-48.
+Michael Bloodgood and Chris Callison-Burch. 2010. Bucking the trend: Large-scale cost-focused active learning for statistical machine translation. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 854-864, Uppsala, Sweden. Association for Computational Linguistics.
+Michael Bloodgood and John Grothendieck. 2013. Analysis of stopping active learning based on stabilizing predictions. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 10-19, Sofia, Bulgaria. Association for Computational Linguistics.
+Michael Bloodgood and K. Vijay-Shanker. 2009a. A method for stopping active learning based on stabilizing predictions and the need for user-adjustable stopping. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009), pages 39-47, Boulder, Colorado. Association for Computational Linguistics.
+Michael Bloodgood and K. Vijay-Shanker. 2009b. Taking into account the differences between actively and passively acquired data: The case of active learning with support vector machines for imbalanced datasets. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers, pages 137-140, Boulder, Colorado. Association for Computational Linguistics.
+Kianté Brantley, Amr Sharaf, and Hal Daumé III. 2020. Active imitation learning with noisy guidance. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2093-2105, Online. Association for Computational Linguistics.
+Klaus Brinker. 2003. Incorporating diversity in active learning with support vector machines. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pages 59-66.
+Tingting Cai, Zhiyuan Ma, Hong Zheng, and Yangming Zhou. 2021. NE-LP: Normalized entropy- and loss prediction-based sampling for active learning in Chinese word segmentation on EHRs. Neural Computing and Applications, 33(19):12535-12549.
+Tingting Cai, Yangming Zhou, and Hong Zheng. 2020. Cost-quality adaptive active learning for Chinese clinical named entity recognition. In 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pages 528-533. IEEE.
+Hian Canizares-Diaz, Alejandro Piad-Morffis, Suilan Estevez-Velarde, Yoan Gutierrez, Yudivian Almeida Cruz, Andres Montoyo, and Rafael Muñoz-Guillena. 2021. Active learning for assisted corpus construction: A case study in knowledge discovery from biomedical text. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pages 216-225, Held Online. INCOMA Ltd.
+Kai Cao, Xiang Li, Miao Fan, and Ralph Grishman. 2015. Improving event detection with active learning. In Proceedings of the International Conference on Recent Advances in Natural Language Processing, pages 72-77, Hissar, Bulgaria. INCOMA Ltd. Shoumen, BULGARIA.
+Yee Seng Chan and Hwee Tou Ng. 2007. Domain adaptation with active learning for word sense disambiguation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 49-56, Prague, Czech Republic. Association for Computational Linguistics.
+
+Aditi Chaudhary, Antonios Anastasopoulos, Zaid Sheikh, and Graham Neubig. 2021. Reducing confusion in active learning for part-of-speech tagging. Transactions of the Association for Computational Linguistics, 9:1-16.
+Aditi Chaudhary, Jiateng Xie, Zaid Sheikh, Graham Neubig, and Jaime Carbonell. 2019. A little annotation does a lot of good: A study in bootstrapping low-resource named entity recognizers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5164-5174, Hong Kong, China. Association for Computational Linguistics.
+Chenhua Chen, Alexis Palmer, and Caroline Sporleder. 2011. Enhancing active learning for semantic role labeling via compressed dependency trees. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 183-191, Chiang Mai, Thailand. Asian Federation of Natural Language Processing.
+Jinying Chen, Andrew Schein, Lyle Ungar, and Martha Palmer. 2006. An empirical study of the behavior of active learning for word sense disambiguation. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 120-127, New York City, USA. Association for Computational Linguistics.
+Yukun Chen, Thomas A Lasko, Qiaozhu Mei, Joshua C Denny, and Hua Xu. 2015. A study of active learning methods for named entity recognition in clinical text. Journal of biomedical informatics, 58:11-18.
+Gui Citovsky, Giulia DeSalvo, Claudio Gentile, Lazaros Karydas, Anand Rajagopalan, Afshin Rostamizadeh, and Sanjiv Kumar. 2021. Batch active learning at scale. Advances in Neural Information Processing Systems, 34.
+David Cohn, Les Atlas, and Richard Ladner. 1994. Improving generalization with active learning. Machine learning, 15(2):201-221.
+David A Cohn, Zoubin Ghahramani, and Michael I Jordan. 1996. Active learning with statistical models. Journal of artificial intelligence research, 4:129-145.
+Aron Culotta and Andrew McCallum. 2005. Reducing labeling effort for structured prediction tasks. In AAAI, volume 5, pages 746-751.
+Aswarth Abhilash Dara, Josef van Genabith, Qun Liu, John Judge, and Antonio Toral. 2014. Active learning for post-editing based incrementally retrained MT. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, volume 2: Short Papers, pages 185-189, Gothenburg, Sweden. Association for Computational Linguistics.
+
+Sajib Dasgupta and Vincent Ng. 2009. Mine the easy, classify the hard: A semi-supervised approach to automatic sentiment classification. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 701-709, Suntec, Singapore. Association for Computational Linguistics.
+Sanjoy Dasgupta. 2011. Two faces of active learning. Theoretical computer science, 412(19):1767-1781.
+Yue Deng, KaWai Chen, Yilin Shen, and Hongxia Jin. 2018. Adversarial active learning for sequences labeling and generation. In IJCAI, pages 4012-4018.
+Dmitriy Dligach and Martha Palmer. 2011. Good seed makes a good crop: Accelerating active learning using language modeling. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 6-10, Portland, Oregon, USA. Association for Computational Linguistics.
+Pinar Donmez and Jaime G Carbonell. 2008. Proactive learning: cost-sensitive active learning with multiple imperfect oracles. In Proceedings of the 17th ACM conference on Information and knowledge management, pages 619-628.
+Pinar Donmez, Jaime G Carbonell, and Paul N Bennett. 2007. Dual strategy active learning. In European Conference on Machine Learning, pages 116-127. Springer.
+Gregory Druck, Burr Settles, and Andrew McCallum. 2009. Active learning by labeling features. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 81-90, Singapore. Association for Computational Linguistics.
+Long Duong, Hadi Afshar, Dominique Estival, Glen Pink, Philip Cohen, and Mark Johnson. 2018. Active learning for deep semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 43-48, Melbourne, Australia. Association for Computational Linguistics.
+Matthias Eck, Stephan Vogel, and Alex Waibel. 2005. Low cost portability for statistical machine translation based on n-gram frequency and TF-IDF. In Proceedings of the Second International Workshop on Spoken Language Translation, Pittsburgh, Pennsylvania, USA.
+Liat Ein-Dor, Alon Halfon, Ariel Gera, Eyal Shnarch, Lena Dankin, Leshem Choshen, Marina Danilevsky, Ranit Aharonov, Yoav Katz, and Noam Slonim. 2020. Active Learning for BERT: An Empirical Study. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7949-7962, Online. Association for Computational Linguistics.
+
+Sean P. Engelson and Ido Dagan. 1996. Minimizing manual annotation cost in supervised training from corpora. In 34th Annual Meeting of the Association for Computational Linguistics, pages 319-326, Santa Cruz, California, USA. Association for Computational Linguistics.
+Alexander Erdmann, David Joseph Wrisley, Benjamin Allen, Christopher Brown, Sophie Cohen-Bodenès, Micha Elsner, Yukun Feng, Brian Joseph, Beatrice Joyeux-Prunel, and Marie-Catherine de Marneffe. 2019. Practical, efficient, and customizable active learning for named entity recognition in the digital humanities. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2223-2234, Minneapolis, Minnesota. Association for Computational Linguistics.
+Seyda Ertekin, Jian Huang, Leon Bottou, and Lee Giles. 2007. Learning on the border: active learning in imbalanced data classification. In Proceedings of the sixteenth ACM conference on Conference on information and knowledge management, pages 127-136.
+Nuno Escudeiro and Alípio Jorge. 2010. D-confidence: An active learning strategy which efficiently identifies small classes. In Proceedings of the NAACL HLT 2010 Workshop on Active Learning for Natural Language Processing, pages 18-26, Los Angeles, California. Association for Computational Linguistics.
+Vebjørn Espeland, Beatrice Alex, and Benjamin Bach. 2020. Enhanced labelling in active learning for coreference resolution. In Proceedings of the Third Workshop on Computational Models of Reference, Anaphora and Coreference, pages 111-121, Barcelona, Spain (online). Association for Computational Linguistics.
+Meng Fang and Trevor Cohn. 2017. Model transfer for tagging low-resource languages using a bilingual dictionary. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 587-593, Vancouver, Canada. Association for Computational Linguistics.
+Meng Fang, Yuan Li, and Trevor Cohn. 2017. Learning how to active learn: A deep reinforcement learning approach. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 595-605, Copenhagen, Denmark. Association for Computational Linguistics.
+Meng Fang, Jie Yin, and Dacheng Tao. 2014. Active learning for crowdsourcing using knowledge transfer. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 28.
+Daniel Flannery and Shinsuke Mori. 2015. Combining active learning and partial annotation for domain adaptation of a Japanese dependency parser. In Proceedings of the 14th International Conference on Parsing Technologies, pages 11-19, Bilbao, Spain. Association for Computational Linguistics.
+Linton C Freeman. 1965. Elementary applied statistics: for students in behavioral science. New York: Wiley.
+Lisheng Fu and Ralph Grishman. 2013. An efficient active learning framework for new relation types. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 692-698, Nagoya, Japan. Asian Federation of Natural Language Processing.
+Yifan Fu, Xingquan Zhu, and Bin Li. 2013. A survey on instance selection for active learning. Knowledge and information systems, 35(2):249-283.
+Atsushi Fujii, Kentaro Inui, Takenobu Tokunaga, and Hozumi Tanaka. 1998. Selective sampling for example-based word sense disambiguation. Computational Linguistics, 24(4):573-597.
+Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, pages 1050-1059. PMLR.
+Yarin Gal, Riashat Islam, and Zoubin Ghahramani. 2017. Deep Bayesian active learning with image data. In International Conference on Machine Learning, pages 1183-1192. PMLR.
+Caroline Gasperin. 2009. Active learning for anaphora resolution. In Proceedings of the NAACL HLT 2009 Workshop on Active Learning for Natural Language Processing, pages 1-8, Boulder, Colorado. Association for Computational Linguistics.
+Jakob Gawlikowski, Cedrique Rovile Njieutcheu Tassi, Mohsin Ali, Jongseok Lee, Matthias Humt, Jianxiang Feng, Anna Kruspe, Rudolph Triebel, Peter Jung, Ribana Roscher, et al. 2021. A survey of uncertainty in deep neural networks. arXiv preprint arXiv:2107.03342.
+Yonatan Geifman and Ran El-Yaniv. 2017. Deep active learning over the long tail. arXiv preprint arXiv:1711.00941.
+Masood Ghayoomi. 2010. Using variance as a stopping criterion for active learning of frame assignment. In Proceedings of the NAACL HLT 2010 Workshop on Active Learning for Natural Language Processing, pages 1-9, Los Angeles, California. Association for Computational Linguistics.
+Daniel Gissin and Shai Shalev-Shwartz. 2019. Discriminative active learning. arXiv preprint arXiv:1907.06347.
+Jesús González-Rubio, Daniel Ortiz-Martínez, and Francisco Casacuberta. 2012. Active learning for interactive machine translation. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 245-254, Avignon, France. Association for Computational Linguistics.
+
+Daniel Grießhaber, Johannes Maucher, and Ngoc Thang Vu. 2020. Fine-tuning BERT for low-resource natural language understanding via active learning. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1158-1171, Barcelona, Spain (Online). International Committee on Computational Linguistics.
+Kamal Gupta, Dhanvanth Boppana, Rejwanul Haque, Asif Ekbal, and Pushpak Bhattacharyya. 2021. Investigating active learning in interactive neural machine translation. In Proceedings of Machine Translation Summit XVIII: Research Track, pages 10-22, Virtual. Association for Machine Translation in the Americas.
+Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342-8360, Online. Association for Computational Linguistics.
+Ben Hachey, Beatrice Alex, and Markus Becker. 2005. Investigating the effects of selective sampling on the annotation task. In Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005), pages 144-151, Ann Arbor, Michigan. Association for Computational Linguistics.
+Hossein Hadian and Hossein Sameti. 2014. Active learning in noisy conditions for spoken language understanding. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1081-1090, Dublin, Ireland. Dublin City University and Association for Computational Linguistics.
+Robbie Haertel, Paul Felt, Eric K. Ringger, and Kevin Seppi. 2010. Parallel active learning: Eliminating wait time with minimal staleness. In Proceedings of the NAACL HLT 2010 Workshop on Active Learning for Natural Language Processing, pages 33-41, Los Angeles, California. Association for Computational Linguistics.
+Robbie Haertel, Eric Ringger, Kevin Seppi, James Carroll, and Peter McClanahan. 2008a. Assessing the costs of sampling methods in active learning for annotation. In Proceedings of ACL-08: HLT, Short Papers, pages 65-68, Columbus, Ohio. Association for Computational Linguistics.
+Robbie Haertel, Eric Ringger, Kevin Seppi, and Paul Felt. 2015. An analytic and empirical evaluation of return-on-investment-based active learning. In Proceedings of The 9th Linguistic Annotation Workshop, pages 11-20, Denver, Colorado, USA. Association for Computational Linguistics.
+Robbie A Haertel, Kevin D Seppi, Eric K Ringger, and James L Carroll. 2008b. Return on investment for active learning. In Proceedings of the NIPS workshop on cost-sensitive learning, volume 72.
+
+Gholamreza Haffari, Maxim Roy, and Anoop Sarkar. 2009. Active learning for statistical phrase-based machine translation. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 415-423, Boulder, Colorado. Association for Computational Linguistics.
+Gholamreza Haffari and Anoop Sarkar. 2009. Active learning for multilingual statistical machine translation. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 181-189, Suntec, Singapore. Association for Computational Linguistics.
+Rishi Hazra, Parag Dutta, Shubham Gupta, Mohammed Abdul Qaathir, and Ambedkar Dukkipati. 2021. Active² learning: Actively reducing redundancies in active learning methods for sequence tagging and machine translation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1982-1995, Online. Association for Computational Linguistics.
+Rui He, Shan He, and Ke Tang. 2021. Multi-domain active learning: A comparative study. arXiv preprint arXiv:2106.13516.
+Hideitsu Hino. 2020. Active learning: Problem settings and recent developments. arXiv preprint arXiv:2012.04225.
+Victoria Hodge and Jim Austin. 2004. A survey of outlier detection methodologies. Artificial intelligence review, 22(2):85-126.
+Andrea Horbach and Alexis Palmer. 2016. Investigating active learning for short-answer scoring. In Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications, pages 301-311, San Diego, CA. Association for Computational Linguistics.
+Neil Houlsby, Ferenc Huszár, Zoubin Ghahramani, and Máté Lengyel. 2011. Bayesian active learning for classification and preference learning. arXiv preprint arXiv:1112.5745.
+Junjie Hu and Graham Neubig. 2021. Phrase-level active learning for neural machine translation. In Proceedings of the Sixth Conference on Machine Translation, pages 1087-1099, Online. Association for Computational Linguistics.
+Peiyun Hu, Zack Lipton, Anima Anandkumar, and Deva Ramanan. 2019. Active learning with partial feedback. In International Conference on Learning Representations.
+Rong Hu, Brian Mac Namee, and Sarah Jane Delany. 2010. Off to a good start: Using clustering to select the initial training set in active learning. In Twenty-Third International FLAIRS Conference.
+
+Xinyu Hua and Lu Wang. 2022. Efficient argument structure extraction with transfer learning and active learning. In Findings of the Association for Computational Linguistics: ACL 2022, pages 423-437, Dublin, Ireland. Association for Computational Linguistics.
+Jiaji Huang, Rewon Child, Vinay Rao, Hairong Liu, Sanjeev Satheesh, and Adam Coates. 2016. Active learning for speech recognition: the power of gradients. arXiv preprint arXiv:1612.03226.
+Sheng-Jun Huang, Jia-Lve Chen, Xin Mu, and Zhi-Hua Zhou. 2017. Cost-effective active learning from diverse labelers. In IJCAI, pages 1879-1885.
+Rebecca Hwa. 2000. Sample selection for statistical grammar induction. In 2000 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, pages 45-52, Hong Kong, China. Association for Computational Linguistics.
+Rebecca Hwa. 2004. Sample selection for statistical parsing. Computational Linguistics, 30(3):253-276.
+Fariz Ikhwantri, Samuel Louvan, Kemal Kurniawan, Bagas Abisena, Valdi Rachman, Alfan Farizki Wicaksono, and Rahmad Mahendra. 2018. Multi-task active learning for neural semantic role labeling on low resource conversational corpus. In Proceedings of the Workshop on Deep Learning Approaches for Low-Resource NLP, pages 43-50, Melbourne. Association for Computational Linguistics.
+Makoto Imamura, Yasuhiro Takayama, Nobuhiro Kaji, Masashi Toyoda, and Masaru Kitsuregawa. 2009. A combination of active learning and semi-supervised learning starting with positive and unlabeled examples for word sense disambiguation: An empirical study on Japanese web search query. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 61-64, Suntec, Singapore. Association for Computational Linguistics.
+Borna Jafarpour, Dawn Sepehr, and Nick Pogrebnyakov. 2021. Active curriculum learning. In Proceedings of the First Workshop on Interactive Learning for Natural Language Processing, pages 40-45, Online. Association for Computational Linguistics.
+Zhuoren Jiang, Zhe Gao, Yu Duan, Yangyang Kang, Changlong Sun, Qiong Zhang, and Xiaozhong Liu. 2020. Camouflaged Chinese spam content detection with semi-supervised generative active learning. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3080-3085, Online. Association for Computational Linguistics.
+Jaeho Kang, Kwang Ryel Ryu, and Hyuk-Chul Kwon. 2004. Using cluster-based sampling to select initial training set for active learning in text classification. In Pacific-Asia conference on knowledge discovery and data mining, pages 384-388. Springer.
+
+Siddharth Karamcheti, Ranjay Krishna, Li Fei-Fei, and Christopher Manning. 2021. Mind your outliers! investigating the negative impact of outliers on active learning for visual question answering. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7265-7281, Online. Association for Computational Linguistics.
+Jungo Kasai, Kun Qian, Sairam Gurajada, Yunyao Li, and Lucian Popa. 2019. Low-resource deep entity resolution with transfer and active learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5851-5861, Florence, Italy. Association for Computational Linguistics.
+Seokhwan Kim, Yu Song, Kyungduk Kim, Jeong-Won Cha, and Gary Geunbae Lee. 2006. MMR-based active machine learning for bio named entity recognition. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pages 69-72, New York City, USA. Association for Computational Linguistics.
+Yekyung Kim. 2020. Deep active learning for sequence labeling based on diversity and uncertainty in gradient. In Proceedings of the 2nd Workshop on Life-long Learning for Spoken Language Systems, pages 1-8, Suzhou, China. Association for Computational Linguistics.
+Omri Koshorek, Gabriel Stanovsky, Yichu Zhou, Vivek Srikumar, and Jonathan Berant. 2019. On the limits of learning to actively learn semantic representations. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 452-462, Hong Kong, China. Association for Computational Linguistics.
+Luke Kurlandski and Michael Bloodgood. 2022. Impact of stop sets on stopping active learning for text classification. arXiv preprint arXiv:2201.05460.
+Florian Laws, Florian Heimerl, and Hinrich Schütze. 2012. Active learning for coreference resolution. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 508-512, Montreal, Canada. Association for Computational Linguistics.
+Florian Laws, Christian Scheible, and Hinrich Schütze. 2011. Active learning with Amazon Mechanical Turk. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1546-1556, Edinburgh, Scotland, UK. Association for Computational Linguistics.
+Florian Laws and Hinrich Schütze. 2008. Stopping criteria for active learning of named entity recognition. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 465-472, Manchester, UK. Coling 2008 Organizing Committee.
+
+Meisin Lee, Lay-Ki Soon, Eu Gene Siew, and Ly Fie Sugianto. 2022. CrudeOilNews: An annotated crude oil news corpus for event extraction. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 465-479, Marseille, France. European Language Resources Association.
+David D Lewis and Jason Catlett. 1994. Heterogeneous uncertainty sampling for supervised learning. In Machine learning proceedings 1994, pages 148-156. Elsevier.
+David D Lewis and William A Gale. 1994. A sequential algorithm for training text classifiers. In SIGIR'94, pages 3-12. Springer.
+Belinda Z. Li, Gabriel Stanovsky, and Luke Zettlemoyer. 2020. Active learning for coreference resolution using discrete annotation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8320-8331, Online. Association for Computational Linguistics.
+Maolin Li, Nhung Nguyen, and Sophia Ananiadou. 2017. Proactive learning for named entity recognition. In BioNLP 2017, pages 117-125, Vancouver, Canada. Association for Computational Linguistics.
+Shoushan Li, Shengfeng Ju, Guodong Zhou, and Xiaojun Li. 2012a. Active learning for imbalanced sentiment classification. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 139-148, Jeju Island, Korea. Association for Computational Linguistics.
+Shoushan Li, Guodong Zhou, and Chu-Ren Huang. 2012b. Active learning for Chinese word segmentation. In Proceedings of COLING 2012: Posters, pages 683-692, Mumbai, India. The COLING 2012 Organizing Committee.
+Zhenghua Li, Min Zhang, Yue Zhang, Zhanyi Liu, Wenliang Chen, Hua Wu, and Haifeng Wang. 2016. Active learning for dependency parsing with partial annotation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 344-354, Berlin, Germany. Association for Computational Linguistics.
+Weixin Liang, James Zou, and Zhou Yu. 2020. ALICE: Active learning with contrastive natural language explanations. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4380-4391, Online. Association for Computational Linguistics.
+Bill Yuchen Lin, Dong-Ho Lee, Frank F. Xu, Ouyu Lan, and Xiang Ren. 2019. AlpacaTag: An active learning-based crowd annotation framework for sequence tagging. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 58-63, Florence, Italy. Association for Computational Linguistics.
+
+Thomas Lippincott and Ben Van Durme. 2021. Active learning and negative evidence for language identification. In Proceedings of the Second Workshop on Data Science with Human in the Loop: Language Advances, pages 47-51, Online. Association for Computational Linguistics.
+Bing Liu, Harrison Scells, Guido Zuccon, Wen Hua, and Genghong Zhao. 2021. ActiveEA: Active learning for neural entity alignment. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3364-3374, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Ming Liu, Wray Buntine, and Gholamreza Haffari. 2018a. Learning how to actively learn: A deep imitation learning approach. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1874-1883, Melbourne, Australia. Association for Computational Linguistics.
+Ming Liu, Wray Buntine, and Gholamreza Haffari. 2018b. Learning to actively learn neural machine translation. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 334-344, Brussels, Belgium. Association for Computational Linguistics.
+Mingyi Liu, Zhiying Tu, Tong Zhang, Tonghua Su, Xiaofei Xu, and Zhongjie Wang. 2022. LTP: A new active learning strategy for CRF-based named entity recognition. Neural Processing Letters, pages 1-22.
+Varvara Logacheva and Lucia Specia. 2014a. Confidence-based active learning methods for machine translation. In Proceedings of the EACL 2014 Workshop on Humans and Computer-assisted Translation, pages 78-83, Gothenburg, Sweden. Association for Computational Linguistics.
+Varvara Logacheva and Lucia Specia. 2014b. A quality-based active sample selection strategy for statistical machine translation. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 2690-2695, Reykjavik, Iceland. European Language Resources Association (ELRA).
+Shayne Longpre, Julia Reisler, Edward Greg Huang, Yi Lu, Andrew Frank, Nikhil Ramesh, and Chris DuBois. 2022. Active learning over multiple domains in natural language tasks. arXiv preprint arXiv:2202.00254.
+David Lowell, Zachary C. Lipton, and Byron C. Wallace. 2019. Practical obstacles to deploying active learning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 21-30, Hong Kong, China. Association for Computational Linguistics.
+Teresa Lynn, Jennifer Foster, Mark Dras, and Elaine Ui Dhonnchadha. 2012. Active learning and the Irish treebank. In Proceedings of the Australasian Language Technology Association Workshop 2012, pages 23-32, Dunedin, New Zealand.
+François Mairesse, Milica Gašić, Filip Jurčíček, Simon Keizer, Blaise Thomson, Kai Yu, and Steve Young. 2010. Phrase-based statistical language generation using graphical models and active learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1552-1561, Uppsala, Sweden. Association for Computational Linguistics.
+Saeed Majidi and Gregory Crane. 2013. Active learning for dependency parsing by a committee of parsers. In Proceedings of the 13th International Conference on Parsing Technologies (IWPT 2013), pages 98-105, Nara, Japan. Association for Computational Linguistics.
+Cyrielle Mallart, Michel Le Nouy, Guillaume Gravier, and Pascale Sébillot. 2021. Active learning for interactive relation extraction in a French newspaper's articles. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pages 886-894, Held Online. INCOMA Ltd.
+Gideon Mann and Andrew McCallum. 2007. Efficient computation of entropy gradient for semi-supervised conditional random fields. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers, pages 109-112, Rochester, New York. Association for Computational Linguistics.
+Diego Marcheggiani and Thierry Artières. 2014. An experimental comparison of active learning strategies for partially labeled sequences. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 898-906, Doha, Qatar. Association for Computational Linguistics.
+Katerina Margatina, Loic Barrault, and Nikolaos Aletras. 2022. On the importance of effectively adapting pretrained language models for active learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 825-836, Dublin, Ireland. Association for Computational Linguistics.
+Katerina Margatina, Giorgos Vernikos, Loic Barrault, and Nikolaos Aletras. 2021. Active learning by acquiring contrastive examples. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 650-663, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Héctor Martínez Alonso, Barbara Plank, Anders Johannsen, and Anders Søgaard. 2015. Active learning for sense annotation. In Proceedings of the 20th Nordic Conference of Computational Linguistics (NODALIDA 2015), pages 245-249, Vilnius, Lithuania. Linköping University Electronic Press, Sweden.
+Andrew McCallum and Kamal Nigam. 1998. Employing EM and pool-based active learning for text classification. In Proceedings of the Fifteenth International Conference on Machine Learning, pages 350-358.
+Prem Melville and Vikas Sindhwani. 2009. Active dual supervision: Reducing the cost of annotating examples and features. In Proceedings of the NAACL HLT 2009 Workshop on Active Learning for Natural Language Processing, pages 49-57, Boulder, Colorado. Association for Computational Linguistics.
+Vânia Mendonça, Ricardo Rei, Luisa Coheur, and Alberto Sardinha. 2022. Onception: Active learning with expert advice for real world machine translation. arXiv preprint arXiv:2203.04507.
+Gaurav Menghani. 2021. Efficient deep learning: A survey on making deep learning models smaller, faster, and better. arXiv preprint arXiv:2106.08962.
+Timothy Miller, Dmitriy Dligach, and Guergana Savova. 2012. Active learning for coreference resolution. In BioNLP: Proceedings of the 2012 Workshop on Biomedical Natural Language Processing, pages 73-81, Montreal, Canada. Association for Computational Linguistics.
+Seyed Abolghasem Mirroshandel, Gholamreza Ghassem-Sani, and Alexis Nasr. 2011. Active learning strategies for support vector machines, application to temporal relation classification. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 56-64, Chiang Mai, Thailand. Asian Federation of Natural Language Processing.
+Seyed Abolghasem Mirroshandel and Alexis Nasr. 2011. Active learning for dependency parsing using partially annotated sentences. In Proceedings of the 12th International Conference on Parsing Technologies, pages 140-149, Dublin, Ireland. Association for Computational Linguistics.
+Akiva Miura, Graham Neubig, Michael Paul, and Satoshi Nakamura. 2016. Selecting syntactic, nonredundant segments in active learning for machine translation. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 20-29, San Diego, California. Association for Computational Linguistics.
+Akash Kumar Mohankumar and Mitesh Khapra. 2022. Active evaluation: Efficient NLG evaluation with few pairwise comparisons. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8761-8781, Dublin, Ireland. Association for Computational Linguistics.
+Joel Moniz, Barun Patra, and Matthew Gormley. 2022. On efficiently acquiring annotations for multilingual models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 69-85, Dublin, Ireland. Association for Computational Linguistics.
+Ali Mottaghi, Prathusha K Sarma, Xavier Amatriain, Serena Yeung, and Anitha Kannan. 2020. Medical symptom recognition from patient text: An active learning approach for long-tailed multilabel distributions. arXiv preprint arXiv:2011.06874.
+Stephen Mussmann, Robin Jia, and Percy Liang. 2020. On the importance of adaptive data collection for extremely imbalanced pairwise tasks. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3400-3413, Online. Association for Computational Linguistics.
+Skatje Myers and Martha Palmer. 2021. Tuning deep active learning for semantic role labeling. In Proceedings of the 14th International Conference on Computational Semantics (IWCS), pages 212-221, Groningen, The Netherlands (online). Association for Computational Linguistics.
+Graham Neubig, Yosuke Nakata, and Shinsuke Mori. 2011. Pointwise prediction for robust, adaptable Japanese morphological analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 529-533, Portland, Oregon, USA. Association for Computational Linguistics.
+Grace Ngai and David Yarowsky. 2000. Rule writing or annotation: Cost-efficient resource usage for base noun phrase chunking. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pages 117-125, Hong Kong. Association for Computational Linguistics.
+Hieu T Nguyen and Arnold Smeulders. 2004. Active learning using pre-clustering. In Proceedings of the twenty-first international conference on Machine learning, page 79.
+Minh Van Nguyen, Nghia Ngo, Bonan Min, and Thien Nguyen. 2022. FAMIL: A fast active learning framework for multilingual information extraction. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: System Demonstrations, pages 131-139, Hybrid: Seattle, Washington + Online. Association for Computational Linguistics.
+Ansong Ni, Pengcheng Yin, and Graham Neubig. 2020. Merging weak and active supervision for semantic parsing. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8536-8543.
+Fredrik Olsson. 2009. A literature survey of active machine learning in the context of natural language processing. Technical report, Swedish Institute of Computer Science.
+Fredrik Olsson and Katrin Tomanek. 2009. An intrinsic stopping criterion for committee-based active learning. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009), pages 138-146, Boulder, Colorado. Association for Computational Linguistics.
+Álvaro Peris and Francisco Casacuberta. 2018. Active learning for interactive neural machine translation of data streams. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 151-160, Brussels, Belgium. Association for Computational Linguistics.
+Stanislav Peshterliev, John Kearney, Abhyuday Jagannatha, Imre Kiss, and Spyros Matsoukas. 2019. Active learning for new domains in natural language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Industry Papers), pages 90-96, Minneapolis, Minnesota. Association for Computational Linguistics.
+Emmanouil Antonios Platanios, Otilia Stretcu, Graham Neubig, Barnabas Poczos, and Tom Mitchell. 2019. Competence-based curriculum learning for neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1162-1172, Minneapolis, Minnesota. Association for Computational Linguistics.
+Ameya Prabhu, Charles Dognin, and Maneesh Singh. 2019. Sampling bias in deep active classification: An empirical study. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4058-4068, Hong Kong, China. Association for Computational Linguistics.
+Zac Pullar-Strecker, Katharina Dost, Eibe Frank, and Jörg Wicker. 2021. Hitting the target: Stopping active learning at the cost-based optimum. arXiv preprint arXiv:2110.03802.
+Kun Qian, Poornima Chozhiyath Raman, Yunyao Li, and Lucian Popa. 2020. Learning structured representations of entity names using Active Learning and weak supervision. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6376-6383, Online. Association for Computational Linguistics.
+Longhua Qian, Haotian Hui, Ya'nan Hu, Guodong Zhou, and Qiaoming Zhu. 2014. Bilingual active learning for relation classification via pseudo parallel corpora. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 582-592, Baltimore, Maryland. Association for Computational Linguistics.
+Husam Quteineh, Spyridon Samothrakis, and Richard Sutcliffe. 2020. Textual data augmentation for efficient active learning on tiny datasets. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7400-7410, Online. Association for Computational Linguistics.
+Puria Radmard, Yassir Fathullah, and Aldo Lipani. 2021. Subsequence based deep active learning for named entity recognition. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4310-4321, Online. Association for Computational Linguistics.
+Piyush Rai, Avishek Saha, Hal Daumé, and Suresh Venkatasubramanian. 2010. Domain adaptation meets active learning. In Proceedings of the NAACL HLT 2010 Workshop on Active Learning for Natural Language Processing, pages 27-32, Los Angeles, California. Association for Computational Linguistics.
+Ines Rehbein and Josef Ruppenhofer. 2011. Evaluating the impact of coder errors on active learning. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 43-51, Portland, Oregon, USA. Association for Computational Linguistics.
+Ines Rehbein, Josef Ruppenhofer, and Alexis Palmer. 2010. Bringing active learning to life. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 949-957, Beijing, China. Coling 2010 Organizing Committee.
+Roi Reichart and Ari Rappoport. 2009. Sample selection for statistical parsers: Cognitively driven algorithms and evaluation measures. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009), pages 3-11, Boulder, Colorado. Association for Computational Linguistics.
+Roi Reichart, Katrin Tomanek, Udo Hahn, and Ari Rappoport. 2008. Multi-task active learning for linguistic annotations. In Proceedings of ACL-08: HLT, pages 861-869, Columbus, Ohio. Association for Computational Linguistics.
+Pengzhen Ren, Yun Xiao, Xiaojun Chang, Po-Yao Huang, Zhihui Li, Brij B Gupta, Xiaojiang Chen, and Xin Wang. 2021. A survey of deep active learning. ACM Computing Surveys (CSUR), 54(9):1-40.
+Eric Ringger, Marc Carmen, Robbie Haertel, Kevin Seppi, Deryle Lonsdale, Peter McClanahan, James Carroll, and Noel Ellison. 2008. Assessing the costs of machine-assisted corpus annotation through a user study. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08), Marrakech, Morocco. European Language Resources Association (ELRA).
+Eric Ringger, Peter McClanahan, Robbie Haertel, George Busby, Marc Carmen, James Carroll, Kevin Seppi, and Deryle Lonsdale. 2007. Active learning for part-of-speech tagging: Accelerating corpus annotation. In Proceedings of the Linguistic Annotation Workshop, pages 101-108, Prague, Czech Republic. Association for Computational Linguistics.
+Martha-Alicia Rocha and Joan-Andreu Sanchez. 2013. Towards the supervised machine translation: Real word alignments and translations in a multi-task active learning process. In Proceedings of Machine Translation Summit XIV: Posters, Nice, France.
+Dan Roth and Kevin Small. 2006. Margin-based active learning for structured output spaces. In European Conference on Machine Learning, pages 413-424. Springer.
+Dan Roth and Kevin Small. 2008. Active learning for pipeline models. In AAAI, pages 683-688.
+Guy Rotman and Roi Reichart. 2022. Multi-task active learning for pre-trained transformer-based models. arXiv preprint arXiv:2208.05379.
+Nicholas Roy and Andrew McCallum. 2001. Toward optimal active learning through sampling estimation of error reduction. In Proceedings of the Eighteenth International Conference on Machine Learning, pages 441-448.
+Dongyu Ru, Jiangtao Feng, Lin Qiu, Hao Zhou, Mingxuan Wang, Weinan Zhang, Yong Yu, and Lei Li. 2020. Active sentence learning by adversarial uncertainty sampling in discrete space. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4908-4917, Online. Association for Computational Linguistics.
+Mrinmaya Sachan, Eduard Hovy, and Eric P Xing. 2015. An active learning approach to coreference resolution. In Twenty-Fourth International Joint Conference on Artificial Intelligence.
+Avishek Saha, Piyush Rai, Hal Daumé, Suresh Venkatasubramanian, and Scott L DuVall. 2011. Active supervised domain adaptation. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 97-112. Springer.
+Manabu Sassano. 2002. An empirical study of active learning with support vector machines for Japanese word segmentation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 505-512, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
+Manabu Sassano and Sadao Kurohashi. 2010. Using smaller constituents rather than sentences in active learning for Japanese dependency parsing. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 356-365, Uppsala, Sweden. Association for Computational Linguistics.
+Tobias Scheffer, Christian Decomain, and Stefan Wrobel. 2001. Active hidden Markov models for information extraction. In International Symposium on Intelligent Data Analysis, pages 309-318. Springer.
+Andrew I Schein and Lyle H Ungar. 2007. Active learning for logistic regression: an evaluation. Machine Learning, 68(3):235-265.
+Greg Schohn and David Cohn. 2000. Less is more: Active learning with support vector machines. In Proceedings of the Seventeenth International Conference on Machine Learning, pages 839-846.
+Christopher Schröder and Andreas Niekler. 2020. A survey of active learning for text classification using deep neural networks. arXiv preprint arXiv:2008.07267.
+Christopher Schröder, Andreas Niekler, and Martin Potthast. 2022. Revisiting uncertainty-based query strategies for active learning with transformers. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2194-2203, Dublin, Ireland. Association for Computational Linguistics.
+Raphael Schumann and Ines Rehbein. 2019. Active learning via membership query synthesis for semi-supervised sentence classification. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 472-481, Hong Kong, China. Association for Computational Linguistics.
+Priyanka Sen and Emine Yilmaz. 2020. Uncertainty and traffic-aware active learning for semantic parsing. In Proceedings of the First Workshop on Interactive and Executable Semantic Parsing, pages 12-17, Online. Association for Computational Linguistics.
+Ozan Sener and Silvio Savarese. 2018. Active learning for convolutional neural networks: A core-set approach. In International Conference on Learning Representations.
+Seungmin Seo, Donghyun Kim, Youbin Ahn, and Kyong-Ho Lee. 2022. Active learning on pre-trained language model with task-independent triplet loss. In Proceedings of the AAAI Conference on Artificial Intelligence.
+Burr Settles. 2009. Active learning literature survey. Computer Sciences Technical Report 1648, University of Wisconsin-Madison.
+Burr Settles. 2011. From theories to queries: Active learning in practice. In Active learning and experimental design workshop in conjunction with AISTATS 2010, pages 1-18. JMLR Workshop and Conference Proceedings.
+Burr Settles and Mark Craven. 2008. An analysis of active learning strategies for sequence labeling tasks. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 1070-1079, Honolulu, Hawaii. Association for Computational Linguistics.
+Burr Settles, Mark Craven, and Lewis Friedland. 2008. Active learning with real annotation costs. In Proceedings of the NIPS workshop on cost-sensitive learning, volume 1.
+Burr Settles, Mark Craven, and Soumya Ray. 2007. Multiple-instance active learning. Advances in neural information processing systems, 20.
+H Sebastian Seung, Manfred Opper, and Haim Sompolinsky. 1992. Query by committee. In Proceedings of the fifth annual workshop on Computational learning theory, pages 287-294.
+Claude Elwood Shannon. 1948. A mathematical theory of communication. The Bell system technical journal, 27(3):379-423.
+Manali Sharma, Di Zhuang, and Mustafa Bilgic. 2015. Active learning with rationales for text classification. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 441-451, Denver, Colorado. Association for Computational Linguistics.
+Artem Shelmanov, Dmitri Puzyrev, Lyubov Kupriyanova, Denis Belyakov, Daniil Larionov, Nikita Khromov, Olga Kozlova, Ekaterina Artemova, Dmitry V. Dylov, and Alexander Panchenko. 2021. Active learning for sequence tagging with deep pre-trained models and Bayesian uncertainty estimates. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1698-1712, Online. Association for Computational Linguistics.
+Dan Shen, Jie Zhang, Jian Su, Guodong Zhou, and Chew-Lim Tan. 2004. Multi-criteria-based active learning for named entity recognition. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), pages 589-596, Barcelona, Spain.
+Shirong Shen, Zhen Li, and Guilin Qi. 2021. Active learning for event extraction with memory-based loss prediction model. arXiv preprint arXiv:2112.03073.
+Yanyao Shen, Hyokun Yun, Zachary C. Lipton, Yakov Kronrod, and Animashree Anandkumar. 2018. Deep active learning for named entity recognition. In International Conference on Learning Representations.
+Tianze Shi, Adrian Benton, Igor Malioutov, and Ozan Irsoy. 2021. Diversity-aware batch active learning for dependency parsing. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2616-2626, Online. Association for Computational Linguistics.
+Xiaoxiao Shi, Wei Fan, and Jiangtao Ren. 2008. Actively transfer domain knowledge. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 342-357. Springer.
+Aditya Siddhant and Zachary C. Lipton. 2018. Deep Bayesian active learning for natural language processing: Results of a large-scale empirical study. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2904-2909, Brussels, Belgium. Association for Computational Linguistics.
+Samarth Sinha, Sayna Ebrahimi, and Trevor Darrell. 2019. Variational adversarial active learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5972-5981.
+Maria Skeppstedt. 2013. Annotating named entities in clinical text by combining pre-annotation and active learning. In 51st Annual Meeting of the Association for Computational Linguistics Proceedings of the Student Research Workshop, pages 74-80, Sofia, Bulgaria. Association for Computational Linguistics.
+Noah A Smith. 2011. Linguistic structure prediction. Synthesis lectures on human language technologies, 4(2):1-274.
+Rion Snow, Brendan O'Connor, Daniel Jurafsky, and Andrew Ng. 2008. Cheap and fast - but is it good? Evaluating non-expert annotations for natural language tasks. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 254-263, Honolulu, Hawaii. Association for Computational Linguistics.
+Swabha Swayamdipta, Roy Schwartz, Nicholas Lourie, Yizhong Wang, Hannaneh Hajishirzi, Noah A. Smith, and Yejin Choi. 2020. Dataset cartography: Mapping and diagnosing datasets with training dynamics. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9275-9293, Online. Association for Computational Linguistics.
+Alex Tamkin, Dat Nguyen, Salil Deshpande, Jesse Mu, and Noah Goodman. 2022. Active learning helps pretrained models learn the intended task. arXiv preprint arXiv:2204.08491.
+Min Tang, Xiaoqiang Luo, and Salim Roukos. 2002. Active learning for statistical natural language parsing. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 120-127, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
+Cynthia A Thompson, Mary Elaine Califf, and Raymond J Mooney. 1999. Active learning for natural language parsing and information extraction. In Proceedings of the Sixteenth International Conference on Machine Learning, pages 406-414.
+Katrin Tomanek and Udo Hahn. 2008. Approximating learning curves for active-learning-driven annotation. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08), Marrakech, Morocco. European Language Resources Association (ELRA).
+Katrin Tomanek and Udo Hahn. 2009a. Reducing class imbalance during active learning for named entity annotation. In Proceedings of the fifth international conference on Knowledge capture, pages 105-112.
+Katrin Tomanek and Udo Hahn. 2009b. Semi-supervised active learning for sequence labeling. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1039-1047, Suntec, Singapore. Association for Computational Linguistics.
+Katrin Tomanek and Udo Hahn. 2010. A comparison of models for cost-sensitive active learning. In Coling 2010: Posters, pages 1247-1255, Beijing, China. Coling 2010 Organizing Committee.
+Katrin Tomanek, Florian Laws, Udo Hahn, and Hinrich Schütze. 2009. On proper unit selection in active learning: Co-selection effects for named entity recognition. In Proceedings of the NAACL HLT 2009 Workshop on Active Learning for Natural Language Processing, pages 9-17, Boulder, Colorado. Association for Computational Linguistics.
+Katrin Tomanek and Fredrik Olsson. 2009. A web survey on the use of active learning to support annotation of text data. In Proceedings of the NAACL HLT 2009 Workshop on Active Learning for Natural Language Processing, pages 45-48, Boulder, Colorado. Association for Computational Linguistics.
+Katrin Tomanek, Joachim Wermter, and Udo Hahn. 2007. An approach to text corpus construction which cuts annotation costs and maintains reusability of annotated data. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 486-495, Prague, Czech Republic. Association for Computational Linguistics.
+Simon Tong and Daphne Koller. 2001. Support vector machine active learning with applications to text classification. Journal of machine learning research, 2(Nov):45-66.
+Akim Tsvigun, Artem Shelmanov, Gleb Kuzmin, Leonid Sanochkin, Daniil Larionov, Gleb Gusev, Manvel Avetisian, and Leonid Zhukov. 2022. Towards computationally feasible deep active learning. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1198-1218, Seattle, United States. Association for Computational Linguistics.
+Andreas Vlachos. 2006. Active annotation. In Proceedings of the Workshop on Adaptive Text Extraction and Mining (ATEM 2006).
+Andreas Vlachos. 2008. A stopping criterion for active learning. Computer Speech & Language, 22(3):295-312.
+Thuy-Trang Vu, Ming Liu, Dinh Phung, and Gholamreza Haffari. 2019. Learning how to active learn by dreaming. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4091-4101, Florence, Italy. Association for Computational Linguistics.
+Chenguang Wang, Laura Chiticariu, and Yunyao Li. 2017. Active learning for black-box semantic role labeling with neural factors. In IJCAI.
+Zijie J. Wang, Dongjin Choi, Shenyu Xu, and Diyi Yang. 2021. Putting humans in the natural language processing loop: A survey. In Proceedings of the First Workshop on Bridging Human-Computer Interaction and Natural Language Processing, pages 47-52, Online. Association for Computational Linguistics.
+Dittaya Wanvarie, Hiroya Takamura, and Manabu Okumura. 2011. Active learning with subsequence sampling strategy for sequence labeling tasks. Information and Media Technologies, 6(3):680-700.
+Fangzhao Wu, Yongfeng Huang, and Jun Yan. 2017. Active sentiment domain adaptation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1701-1711, Vancouver, Canada. Association for Computational Linguistics.
+Mengzhou Xia, Antonios Anastasopoulos, Ruochen Xu, Yiming Yang, and Graham Neubig. 2020. Predicting performance for natural language processing tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8625-8646, Online. Association for Computational Linguistics.
+Min Xiao and Yuhong Guo. 2013. Online active learning for cost sensitive domain adaptation. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 1-9, Sofia, Bulgaria. Association for Computational Linguistics.
+Zhao Xu, Kai Yu, Volker Tresp, Xiaowei Xu, and Jizhi Wang. 2003. Representative sampling for text classification using support vector machines. In European conference on information retrieval, pages 393-407. Springer.
+Yan Yan, Romer Rosales, Glenn Fung, and Jennifer G Dy. 2011. Active learning from crowds. In Proceedings of the 28th International Conference on International Conference on Machine Learning, pages 1161-1168.
+Donggeun Yoo and In So Kweon. 2019. Learning loss for active learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 93-102.
+Yue Yu, Lingkai Kong, Jieyu Zhang, Rongzhi Zhang, and Chao Zhang. 2022. AcTune: Uncertainty-based active self-training for active fine-tuning of pretrained language models. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1422-1436, Seattle, United States. Association for Computational Linguistics.
+Michelle Yuan, Hsuan-Tien Lin, and Jordan Boyd-Graber. 2020. Cold-start active learning through self-supervised language modeling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7935-7948, Online. Association for Computational Linguistics.
+Michelle Yuan, Patrick Xia, Chandler May, Benjamin Van Durme, and Jordan Boyd-Graber. 2022. Adapting coreference resolution models through active learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7533-7549, Dublin, Ireland. Association for Computational Linguistics.
+Xiangkai Zeng, Sarthak Garg, Rajen Chatterjee, Udhyakumar Nallasamy, and Matthias Paulik. 2019. Empirical evaluation of active learning techniques for neural MT. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), pages 84-93, Hong Kong, China. Association for Computational Linguistics.
+Xueying Zhan, Qingzhong Wang, Kuan-hao Huang, Haoyi Xiong, Dejing Dou, and Antoni B Chan. 2022. A comparative survey of deep active learning. arXiv preprint arXiv:2203.13450.
+Mike Zhang and Barbara Plank. 2021. Cartography active learning. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 395-406, Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Pei Zhang, Xueying Xu, and Deyi Xiong. 2018. Active learning for neural machine translation. In 2018 International Conference on Asian Language Processing (IALP), pages 153-158. IEEE.
+Rongzhi Zhang, Yue Yu, Pranav Shetty, Le Song, and Chao Zhang. 2022a. Prompt-based rule discovery and boosting for interactive weakly-supervised learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 745-758, Dublin, Ireland. Association for Computational Linguistics.
+Rongzhi Zhang, Yue Yu, and Chao Zhang. 2020. SeqMix: Augmenting active sequence labeling via sequence mixup. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8566-8579, Online. Association for Computational Linguistics.
+Shujian Zhang, Chengyue Gong, Xingchao Liu, Pengcheng He, Weizhu Chen, and Mingyuan Zhou. 2022b. ALLSH: Active learning guided by local sensitivity and hardness. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1328-1342, Seattle, United States. Association for Computational Linguistics.
+Ye Zhang, Matthew Lease, and Byron Wallace. 2017. Active discriminative text representation learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31.
+Mingjun Zhao, Haijiang Wu, Di Niu, and Xiaoli Wang. 2020a. Reinforced curriculum learning on pretrained neural machine translation models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34(05), pages 9652-9659.
+Shanheng Zhao and Hwee Tou Ng. 2014. Domain adaptation with active learning for coreference resolution. In Proceedings of the 5th International Workshop on Health Text Mining and Information Analysis (Louhi), pages 21-29, Gothenburg, Sweden. Association for Computational Linguistics.
+Yuekai Zhao, Haoran Zhang, Shuchang Zhou, and Zhihua Zhang. 2020b. Active learning approaches to enhancing neural machine translation. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1796-1806, Online. Association for Computational Linguistics.
+Yunpeng Zhao, Mattia Prosperi, Tianchen Lyu, Yi Guo, Le Zhou, and Jiang Bian. 2020c. Integrating crowdsourcing and active learning for classification of work-life events from tweets. In International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, pages 333-344. Springer.
+Fedor Zhdanov. 2019. Diverse mini-batch active learning. arXiv preprint arXiv:1901.05954.
+Zhong Zhou and Alex Waibel. 2021. Active learning for massively parallel translation of constrained text into low resource languages. In Proceedings of the 4th Workshop on Technologies for MT of Low Resource Languages (LoResMT2021), pages 32-43, Virtual. Association for Machine Translation in the Americas.
+Hua Zhu, Wu Ye, Sihan Luo, and Xidong Zhang. 2020. A multitask active learning framework for natural language understanding. In Proceedings of the 28th International Conference on Computational Linguistics, pages 4900-4914, Barcelona, Spain (Online). International Committee on Computational Linguistics.
+Jingbo Zhu and Eduard Hovy. 2007. Active learning for word sense disambiguation with methods for addressing the class imbalance problem. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 783-790, Prague, Czech Republic. Association for Computational Linguistics.
+
+Jingbo Zhu, Huizhen Wang, and Eduard Hovy. 2008a. Learning a stopping criterion for active learning for word sense disambiguation and text classification. In Proceedings of the Third International Joint Conference on Natural Language Processing: Volume-I.
+Jingbo Zhu, Huizhen Wang, and Eduard Hovy. 2008b. Multi-criteria-based strategy to stop active learning for data annotation. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 1129-1136, Manchester, UK. Coling 2008 Organizing Committee.
+Jingbo Zhu, Huizhen Wang, Benjamin K Tsou, and Matthew Ma. 2009. Active learning with sampling by uncertainty and density for data annotations. IEEE Transactions on Audio, Speech, and Language Processing, 18(6):1323-1331.
+Jingbo Zhu, Huizhen Wang, Tianshun Yao, and Benjamin K Tsou. 2008c. Active learning with sampling by uncertainty and density for word sense disambiguation and text classification. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 1137-1144, Manchester, UK. Coling 2008 Organizing Committee.
+
+# A Tasks
+
+In this section, we list representative works for different NLP tasks. According to their output structures, the tasks are categorized into four groups: classification, sequence labeling, complex structured prediction, and generation.
+
+Classification denotes tasks whose output consists of a single variable. Text classification, which assigns a target label to an input text sequence, is a typical example. Pairwise classification and word-level classification are also common in NLP.
+
+- Text classification: Please refer to the paper table mentioned in §C for related works; they are too numerous to list here.
+- Pairwise classification: (Grießhaber et al., 2020; Bai et al., 2020; Mussmann et al., 2020)
+- Word sense disambiguation (WSD): (Fujii et al., 1998; Chen et al., 2006; Chan and Ng, 2007; Zhu and Hovy, 2007; Zhu et al., 2008c; Imamura et al., 2009; Martínez Alonso et al., 2015)
+
+Sequence labeling is probably the most commonly seen structured prediction task in NLP. It aims to predict a sequence of labels, among which there may be interactions and constraints.
+
+- Part-of-speech (POS): (Engelson and Dagan, 1996; Ringger et al., 2007; Haertel et al., 2008a; Marcheggiani and Artières, 2014; Fang and Cohn, 2017; Brantley et al., 2020; Chaudhary et al., 2021)
+- (Named) entity recognition (NER/ER): (Shen et al., 2004; Culotta and McCallum, 2005; Kim et al., 2006; Settles and Craven, 2008; Tomanek and Hahn, 2009b; Marcheggiani and Artières, 2014; Chen et al., 2015; Li et al., 2017; Shen et al., 2018; Siddhant and Lipton, 2018; Erdmann et al., 2019; Chaudhary et al., 2019; Brantley et al., 2020; Hazra et al., 2021; Shelmanov et al., 2021; Radmard et al., 2021)
+- Segmentation: (Ngai and Yarowsky, 2000; Sassano, 2002; Neubig et al., 2011; Li et al., 2012b; Marcheggiani and Artières, 2014; Cai et al., 2021)
+- Natural language understanding (NLU): (Hadian and Sameti, 2014; Deng et al., 2018; Peshterliev et al., 2019; Zhu et al., 2020)
+
+Complex structured prediction in this work denotes structured prediction tasks that are more complex than sequence labeling and have explicit connections (alignments) between inputs and outputs. They usually aim to extract relational structures among input elements.
+
+- Parsing: (Hwa, 2000; Tang et al., 2002; Baldridge and Osborne, 2003, 2004; Hwa, 2004; Reichart and Rappoport, 2009; Sassano and Kurohashi, 2010; Atserias et al., 2010; Mirroshandel and Nasr, 2011; Majidi and Crane, 2013; Flannery and Mori, 2015; Li et al., 2016; Shi et al., 2021)
+- Semantic role labeling (SRL): (Roth and Small, 2006; Wang et al., 2017; Ikhwantri et al., 2018; Siddhant and Lipton, 2018; Koshorek et al., 2019; Myers and Palmer, 2021)
+- Coreference: (Gasperin, 2009; Miller et al., 2012; Laws et al., 2012; Zhao and Ng, 2014; Sachan et al., 2015; Li et al., 2020; Espeland et al., 2020; Yuan et al., 2022)
+- Relation-related: (Roth and Small, 2008; Bloodgood and Vijay-Shanker, 2009b; Mirroshandel et al., 2011; Fu and Grishman, 2013; Canizares-Diaz et al., 2021; Mallart et al., 2021; Seo et al., 2022; Zhang et al., 2022a)
+- Event-related: (Cao et al., 2015; Shen et al., 2021; Lee et al., 2022)
+- Word alignment: (Ambati et al., 2010b,c; Rocha and Sanchez, 2013)
+- Entity alignment/resolution: (Kasai et al., 2019; Liu et al., 2021)
+
+Generation refers to tasks that aim to generate a sequence of tokens. We differentiate them from plain structured prediction tasks since there are usually no explicit alignments between input and output sub-parts in the supervision; such alignments are instead modeled implicitly, especially in recent sequence-to-sequence neural models. MT is a typical generation task, where we further separate traditional statistical machine translation (SMT) from recent neural machine translation (NMT). We also include semantic parsing here, since recent works usually cast it as a sequence-to-sequence generation task.
+
+- SMT: (Eck et al., 2005; Haffari et al., 2009; Haffari and Sarkar, 2009; Ananthakrishnan et al., 2010b; Bloodgood and Callison-Burch, 2010; Ambati et al., 2010a; Ananthakrishnan et al., 2010a; González-Rubio et al., 2012; Rocha and Sanchez, 2013; Logacheva and Specia, 2014a,b; Miura et al., 2016)
+
+- NMT: (Peris and Casacuberta, 2018; Liu et al., 2018b; Zhang et al., 2018; Zeng et al., 2019; Zhao et al., 2020b; Hu and Neubig, 2021; Gupta et al., 2021; Zhou and Waibel, 2021; Hazra et al., 2021; Mendonca et al., 2022)
+- Semantic parsing: (Duong et al., 2018; Ni et al., 2020; Sen and Yilmaz, 2020)
+- Others: (Mairesse et al., 2010; Deng et al., 2018)
+
+# B Other Aspects
+
+We describe some other aspects that are frequently seen when applying AL to NLP.
+
+Crowdsourcing and Noise. Crowdsourcing is another way to reduce annotation costs by including non-expert annotations (Snow et al., 2008). Naturally, AL and crowdsourcing may be combined in the hope of further reducing cost (Ambati et al., 2010a; Laws et al., 2011; Yan et al., 2011; Fang et al., 2014; Zhao et al., 2020c). One specific factor to consider in this case is the noise in the crowdsourced data, since noisy data may negatively impact the effectiveness of AL (Rehbein and Ruppenhofer, 2011). Cost-sensitive querying strategies (§3.2.2) can be utilized to select both annotators and instances by estimating labelers' reliability (Yan et al., 2011; Fang et al., 2014). Requiring multiple annotations per instance and then consolidating them is also applicable (Laws et al., 2011). Lin et al. (2019) provide a framework that enables automatic crowd consolidation for AL on sequence labeling tasks.
+
+Multiple Targets. In many cases, we may want to consider multiple targets rather than only one, for example, annotating instances in multiple domains (Xiao and Guo, 2013; He et al., 2021; Longpre et al., 2022) or multiple languages (Haffari and Sarkar, 2009; Qian et al., 2014; Moniz et al., 2022). Moreover, there may be multiple target tasks, where multi-task learning (MTL) can interact with AL (Reichart et al., 2008; Ambati et al., 2011a; Rocha and Sanchez, 2013; Ikhwantri et al., 2018; Zhu et al., 2020; Rotman and Reichart, 2022). In these scenarios with multiple targets, strategies that consider all the targets are naturally preferable. Reichart et al. (2008) show that a query strategy that considers all target tasks obtains the overall best performance for MTL. Moniz et al. (2022) suggest that joint learning across multiple languages using a single model outperforms other strategies, such as equally dividing budgets or allocating only to a high-resource language and then performing transfer.
+
+Data Imbalance. Imbalance is a frequent phenomenon in NLP, and AL can have interesting interactions with it. On the one hand, as in plain learning scenarios, AL should take data imbalance into consideration, with modifications to the model (Bloodgood and Vijay-Shanker, 2009b), the learning algorithm (Zhu and Hovy, 2007), and the query strategies (Tomanek et al., 2009; Escudeiro and Jorge, 2010; Li et al., 2012a). On the other hand, AL can be utilized to address the data imbalance problem and build better datasets (Ertekin et al., 2007; Tomanek and Hahn, 2009a; Attenberg and Ertekin, 2013; Mottaghi et al., 2020; Mussmann et al., 2020).
+
+# C Surveying Process
+
+In this section, we provide more details of our surveying process:
+
+- For the ACL Anthology, we search for papers with the keyword "active" in titles (by grepping the "Full Anthology BibTeX file"). Some related papers may be missed by this simple keyword search, but as we read along the filtered list, we gradually include the notable missing ones.
+- We also include papers outside the ACL Anthology. First, we look for papers by searching with the key phrase "active learning" on arXiv (in the field of cs.CL, excluding those already appearing in the ACL Anthology). Moreover, we also collect related works in other venues, such as AI/ML conferences and journals. For these venues, we do not (and cannot) perform extensive searches due to the high volume (and because many papers are unrelated to our focus on NLP). We mainly collect related papers in these adjacent venues by following the references from papers already surveyed.
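
The title-keyword filtering step can be sketched in a few lines of Python; the BibTeX snippet below is a toy stand-in we invented for illustration, not the actual Anthology export.

```python
import re

# Toy stand-in for the full Anthology BibTeX export (entries invented).
bibtex = """
@inproceedings{toy-a, title = {Active Learning for BERT: An Empirical Study}}
@inproceedings{toy-b, title = {Attention Is All You Need}}
@inproceedings{toy-c, title = {Cold-start Active Learning through Language Modeling}}
"""

# Keep entries whose title field contains "active" (case-insensitive),
# mirroring the grep-on-titles step described above.
pattern = re.compile(r"title\s*=\s*\{[^}]*\bactive\b[^}]*\}", re.IGNORECASE)
hits = [line for line in bibtex.splitlines() if pattern.search(line)]
print(len(hits))  # number of matching entries
```

On the real file, the same pattern would be applied entry by entry before manually reading through the filtered list.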
+
+We also create a table for the related papers (with detailed categorizations), which can be found at this link: https://github.com/zzsfornlp/zmsp/blob/main/msp2/docs/al4nlp/readme.md.
\ No newline at end of file
diff --git a/asurveyofactivelearningfornaturallanguageprocessing/images.zip b/asurveyofactivelearningfornaturallanguageprocessing/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..c0db231c23288014d71bea575530e9d68b543de5
--- /dev/null
+++ b/asurveyofactivelearningfornaturallanguageprocessing/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0404782d687acd89716f486b28de3452cf9eea30be165a0e6eac4a541ac9d757
+size 24267
diff --git a/asurveyofactivelearningfornaturallanguageprocessing/layout.json b/asurveyofactivelearningfornaturallanguageprocessing/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..7c7430e12ca723d7d16ee36d3aa69644cd5042a1
--- /dev/null
+++ b/asurveyofactivelearningfornaturallanguageprocessing/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0b343ea61e6029793c94c7438789b5642cbbf33a240cefdbf45a755083c29ea3
+size 767956
diff --git a/asurveyofcomputationalframinganalysisapproaches/e811bf69-c941-4418-a1e5-5cc43c598afa_content_list.json b/asurveyofcomputationalframinganalysisapproaches/e811bf69-c941-4418-a1e5-5cc43c598afa_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..67cf86114a4528f6609d01a14f6535f9efdc4820
--- /dev/null
+++ b/asurveyofcomputationalframinganalysisapproaches/e811bf69-c941-4418-a1e5-5cc43c598afa_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0b6b82476719625f400d0257c1736b813dbc79d9ed10da003b393e74e6323f4e
+size 90646
diff --git a/asurveyofcomputationalframinganalysisapproaches/e811bf69-c941-4418-a1e5-5cc43c598afa_model.json b/asurveyofcomputationalframinganalysisapproaches/e811bf69-c941-4418-a1e5-5cc43c598afa_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..fcbd2f99f8e371c904639a1e78d71041da1d0d61
--- /dev/null
+++ b/asurveyofcomputationalframinganalysisapproaches/e811bf69-c941-4418-a1e5-5cc43c598afa_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:67de09a33cc6b97402e89d5ff6a3793f3005bb831581836696c885f66bd3541a
+size 111456
diff --git a/asurveyofcomputationalframinganalysisapproaches/e811bf69-c941-4418-a1e5-5cc43c598afa_origin.pdf b/asurveyofcomputationalframinganalysisapproaches/e811bf69-c941-4418-a1e5-5cc43c598afa_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a4534a8eb5ae579a4aa2eae612f2556b6a1a9d17
--- /dev/null
+++ b/asurveyofcomputationalframinganalysisapproaches/e811bf69-c941-4418-a1e5-5cc43c598afa_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e11eaec1472439d8aaf51a3d7223fc6850687e2f698d032070258bc1e84de21c
+size 453352
diff --git a/asurveyofcomputationalframinganalysisapproaches/full.md b/asurveyofcomputationalframinganalysisapproaches/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..647b6f0041925ea00ce1937dfa9cc9a95c1bffee
--- /dev/null
+++ b/asurveyofcomputationalframinganalysisapproaches/full.md
@@ -0,0 +1,341 @@
+# A Survey of Computational Framing Analysis Approaches
+
+Mohammad Ali
+
+College of Information Studies
+University of Maryland, College Park
+mali24@umd.edu
+
+Naeemul Hassan
+
+Philip Merrill College of Journalism
+College of Information Studies
+University of Maryland, College Park
+nhassan@umd.edu
+
+# Abstract
+
+Framing analysis has predominantly been qualitative and quantitative, examining small datasets with manual coding. Easy access to digital data in the last two decades has prompted scholars in both the computational and social sciences to utilize various computational methods to explore frames in large-scale datasets. The growing scholarship, however, lacks a comprehensive understanding of, and resources for, computational framing analysis methods. To address this gap, this article surveys existing computational framing analysis approaches and puts them together. The survey is expected to help scholars and journalists gain a deeper understanding of how frames are explored computationally, better equip them to analyze frames in large-scale datasets, and, finally, support work on advancing these methodological approaches.
+
+# 1 Introduction
+
+Vaccine hesitancy has long been recognized as a problem despite research evidence favoring vaccines' effectiveness (Sallam, 2021). Understanding how vaccination is framed by news media might offer a path to addressing vaccine hesitancy, because a frame determines "how [people] evaluate [a problem] and choose to act upon it" (Entman, 1993, p. 54). Similarly, exploring many other problems (e.g., gun violence) warrants analyzing frames, especially in the large-scale datasets of this era.
+
+Traditionally, researchers explore frames using qualitative and quantitative methods that require manual labor and can handle only small amounts of data (D'angelo, 2018; Reese et al., 2001). The production of, and easy access to, large volumes of digital data in the last two decades have prompted scholars to explore frames in such big data computationally (Card et al., 2015; Liu et al., 2019; Walter and Ophir, 2019; van Atteveldt and Peng, 2018).
+
+Prior studies proposed various computational methods (e.g., topic modeling and neural network).
+
+As the scholarship grows, however, it lacks a comprehensive understanding of, and resources for, computational framing analysis methods (Nicholls and Culpepper, 2021; Sanfilippo et al., 2008). Researchers may be confused by the multiple approaches to this analysis, which raises two questions: how many computational framing analysis methods exist, and which one should they apply?
+
+To address the problem and help researchers with such questions, we survey existing computational framing analysis approaches and put the methods and relevant resources together. As such, the survey is guided by the following three research questions:
+
+RQ1. What computational methods do researchers use to explore frames in large-scale datasets?
+
+RQ2. How do researchers conceptualize a frame in computational framing analysis studies?
+
+RQ3. How do researchers use computational methods in exploring frames?
+
+The primary contributions of this article are: a) it provides a comprehensive understanding and resources of existing computational framing analysis methods and puts them together for interested scholars to gain deeper knowledge and start building on that, and b) it adds new thoughts to the ongoing discussion on advancing the computational methods of framing analysis.
+
+# 2 What is Frame or Framing?
+
+This section provides a conceptual understanding of framing. A classic example of framing concerns a debate over whether to permit the Ku Klux Klan to hold a public rally. One news story, with the headline "Ku Klux Klan Tests OSU's Commitment to Free Speech," reported the rally as a free speech issue, while another, with the headline "Possible Ku Klux Klan Rally Raises Safety Concerns," reported it as a disruption of public order. As reflected in the headlines, the two stories used different frames. People who read the free speech news story expressed higher tolerance toward the KKK's
+
+
+Figure 1: Framing devices deployed in the headlines of two news reports published by The New York Times and The Guardian on the 2022 Buffalo mass shooting.
+
+rally compared to those who read the public order news story (Nelson et al., 1997, p. 581). Figure 1 shows similar frames deployed in two news headlines on the 2022 Buffalo mass shooting.
+
+Scholars have not agreed upon any unified definition of framing (Hertog and McLeod, 2001; Van Dijk, 2016). However, a prominent definition, widely used in both traditional and computational framing studies, was provided by Entman (1993). He says:
+
+To frame is to select some aspects of a perceived reality and make them more salient in a communicating text, in such a way as to promote a particular problem definition, causal interpretation, moral evaluation, and/or treatment recommendation for the item described. (p. 52)
+
+As per this definition, a frame is largely determined by its outcome effects, such as four functions: a) defining problems, b) diagnosing causes, c) making judgments, and d) suggesting remedies. The functions depend on how some selected aspects of "perceived" reality are made salient. In 2003, he defined it a bit differently, "Framing entails selecting and highlighting some facets of events or issues, and making connections among them so as to promote a particular interpretation, evaluation, and/or solution" (Entman, 2003, p. 417). This definition seems to have made a few shifts, such as from "causal interpretation" to "interpretation," from "moral evaluation" to "evaluation," and from "treatment recommendation" to "solution." The salient aspects are also interconnected.
+
+While approaching frames as cultural phenomena, Hertog and McLeod (2001) identified a frame as a cultural "[structure] of meaning that includes a set of core concepts and ideas," including "conflicts, metaphors, myths, and narratives" (p. 160). A frame has also been explained as "a central organizing idea... for making sense of relevant events, suggesting what is at issue" (Gamson and Modigliani, 1989, p. 3). Reese et al. (2001) defined a frame from the sociological perspective and focused on six aspects (italicized in the original): "Frames are organizing principles that are socially shared and persistent over time, that work symbolically to meaningfully structure the social world" (p. 11). In a recent definition, D'angelo (2018) defined news framing as "how journalists, their sources, and audiences work within conditions that shape the messages they construct as well as the ways they understand and interpret these messages" (p. xxiv).
+
+To describe how a frame highlights some selected facets of an issue or event, Fairhurst (2005) used the analogy that "choosing language to frame people's actions and events is like moving a telescope into position" (p. 125). The selected aspects are then coherently organized to make an argument, which finally promotes a particular interpretation, evaluation, and solution. This organization of selected aspects can even be subtle, as framing also "refers to subtle alterations in the statement or presentation of judgment and choice problems" (Iyengar, 1994, p. 11). Another crucial aspect of framing is "to choose one particular meaning (or set of meanings) over another" (Fairhurst and Sarr, 1996, p. 3), which is also supported by Entman (1993), who says a frame "operates by selecting and highlighting some features of reality while omitting others" (p. 53).
+
+Contexts in Framing. A frame is considered context-sensitive. It is shaped in four locations: i) communicator, ii) texts, iii) receiver, and iv) culture (Entman, 1993). The culture is the stock of commonly invoked frames and explained as (a part of) contexts. A news report's content is fully comprehensible when its contextual information is at the disposal of readers. They interpret a frame and its meaning following contextual information (Baden and D'Angelo, 2018; Tewksbury and Riles, 2018).
+
+Framing Devices. Framing devices can be defined as tools that are used to make a piece of information more salient, which is, in other words, "making a piece of information more noticeable,
+
+meaningful, or memorable to audiences" (Entman, 1993, p. 53). While conceptualizing a frame, we accumulated framing devices (see Table 1). To make the list concise and convenient, we combined similar devices and put them into four groups: a) content, b) action, c) context, and d) communicator. The devices or tools can be used to provide either higher or lower salience to selected aspects of reality. In some cases, multiple devices can be applied together as a new device. For example, jargon, metaphors, and contrast can together be used to develop a "story" (Fairhurst and Sarr, 1996).
+
+
+Figure 2: Summary of the Paper Selection Method
+
+# 3 Method
+
+We utilized three ways to identify and select relevant articles for a comprehensive understanding of computational framing analysis methods. First, we searched Scopus, Elsevier's abstract and citation database, using relevant keywords: ("computational framing analysis" OR "computational frames analysis" OR ("frame analysis" OR "framing analysis") AND "computational"). This returned 95 articles in English. We manually read their abstracts, and the method sections where needed, and sorted out 13 articles relating to computational framing analysis. The other 82 articles were excluded as irrelevant; they concerned "frames" in other fields, such as building structures (e.g., 2D plane frames) and mechanical engineering. Second, we searched Google Scholar using the same keywords and included articles up to the third page, as no relevant article was found on the third page. This gave us ten relevant articles. Six articles were common to both the Scopus and Google Scholar searches, resulting in 17 unique articles from both sources. Third, while reading through the 17 selected articles, we tracked down 20 more relevant articles cited in some of them. These 20 articles did not appear in the Scopus and Google Scholar searches, probably because of the different keywords and phrases used in their titles and abstracts.
+
+In total, we selected 37 articles for this survey (see Figure 2). The articles span journals and conferences in both computational and social science disciplines. Reading through the articles and their supplemental materials (e.g., the coding schemas guiding annotation), if any, we inductively scrutinized various aspects, including a) framing conceptualization, b) functions of computational framing analysis approaches, and c) results and their interpretation. We report available datasets, code, and other relevant resources, if any.
+
+# 4 Analysis
+
+This section presents an analysis of the selected articles in two broad parts. The first part answers RQ1, and the second part answers RQ2 and RQ3. Table 2 summarizes the articles, identified approaches, codebook, corpora, domains, and resources.
+
+Codebook, Corpora, & Approaches (RQ1). Analysis of the articles identified at least nine approaches, and three major coding schemas and annotated corpora, for computational framing analysis. The approaches fall into the categories of supervised, unsupervised, and mixed methods. A supervised method usually needs an annotated subset of the data: the model is first trained on a labeled dataset (training data) and then applied to a new, similar dataset (test data) to classify or predict each instance (Kotsiantis et al., 2007). In contrast, an unsupervised method does not need any pre-annotated data; instead, it explores all of the unlabeled data.
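
As a minimal illustration of the two regimes, the sketch below trains a supervised classifier on a tiny hand-labeled corpus and, in contrast, clusters the same documents without labels. All texts, labels, and the scikit-learn setup are our own invention, not from any surveyed study.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Invented toy corpus: four short "documents" spanning two frames.
texts = [
    "tax costs burden the economy and the budget",   # frame 0 (economic)
    "rising costs hurt the economy",                 # frame 0 (economic)
    "rights and freedom of speech and liberty",      # frame 1 (rights)
    "freedom of speech rights protected by court",   # frame 1 (rights)
]
labels = [0, 0, 1, 1]  # manual annotations, required by the supervised regime

vec = TfidfVectorizer().fit(texts)
X = vec.transform(texts)

# Supervised: train on labeled data, then predict a label for unseen text.
clf = LogisticRegression().fit(X, labels)
pred = clf.predict(vec.transform(["budget costs and the economy"]))[0]

# Unsupervised: no labels needed; the model groups unlabeled documents itself.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(pred, list(clusters))
```

The supervised branch generalizes the human annotations to new documents, while the unsupervised branch only proposes groupings that a researcher must still interpret.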
+
+Conceptualization & Functions (RQ2 & RQ3). As a way of answering RQ2 and RQ3, we explore how researchers conceptualize frames and utilize computational methods in analyzing frames in each approach, codebook, and corpora.
+
+# 4.1 Codebook & Corpora
+
+# 4.1.1 Policy Frames Codebook
+
+Boydstun et al. (2013) and Boydstun et al. (2014) proposed a codebook named "policy frames codebook" (PFC). The PFC consists of 14 categories of "frame dimensions" and an "other" category. The dimensions include "economic frames," "capacity and resources frames," "morality frames," etc. For example, a news report is labeled as an economic frame if it focuses on "the costs, benefits, or monetary/financial implications of the issue (to an individual, family, community, or to the economy as a whole)" (Boydstun et al., 2014, p. 6).
+
+They developed the codebook through brainstorming and iteratively applying it to random texts. With the codebook, they deployed 3,033 coders to manually code three sets of articles on immigration, tobacco, and same-sex marriage. Using the labeled documents, they finally developed logistic regression binary text classifiers (i.e., frame present or absent) (Boydstun et al., 2013, 2014).
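
This classifier setup can be sketched as one binary (present/absent) logistic regression model per frame dimension. The two dimensions, the training texts, and the labels below are invented mini-examples, not the PFC's actual data or models.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented mini training set; each document carries binary present/absent
# labels for two frame dimensions (labels are purely illustrative).
docs = [
    "the policy will raise costs for families and the economy",
    "the bill raises moral questions about right and wrong",
    "monetary implications of the law worry small businesses",
    "faith groups call the proposal ethically troubling",
]
frame_labels = {
    "economic": [1, 0, 1, 0],
    "morality": [0, 1, 0, 1],
}

vec = CountVectorizer().fit(docs)
X = vec.transform(docs)

# One binary (present/absent) classifier per frame dimension.
classifiers = {name: LogisticRegression().fit(X, y)
               for name, y in frame_labels.items()}

new_doc = vec.transform(["tax costs and monetary implications for the economy"])
predicted = {name: int(clf.predict(new_doc)[0])
             for name, clf in classifiers.items()}
print(predicted)
```

Because each dimension gets its own classifier, a document can be marked as carrying several frame dimensions at once, matching the codebook's non-exclusive labeling.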
+
+# 4.1.2 Media Frames Corpus
+
+Using PFC, Card et al. (2015) offered a manually-annotated corpus of news reports named "media frames corpus" (MFC). The news reports were collected from three domains: immigration, smoking, and same-sex marriage. The MFC was applied in other studies (e.g., Field et al., 2018). Card et al. (2015) annotated the three datasets based on PFC's 15 framing dimensions (Boydstun et al., 2013). The authors, however, did not apply the annotations to any new datasets. In 2016, they added four more categories—pro, neutral, anti, and irrelevant.
+
+Conceptualization in PFC & MFC. Boydstun et al. (2013, 2014) conceptualized framing by resorting to the widely used framing definition of Entman (1993). Overall, they put "language" at the center of identifying and analyzing frames. PFC's development is motivated by three framing concepts: a) frame selection varies based on various situations, b) frames evolve over time, and c) frames spread across issues, geographic locations, and institutions or organizations. Card et al. (2015) also used Entman (1993)'s definition in conceptualizing frames. They focused on some framing elements that work coherently as a framing package.
+
+Review. The authors conceptualized frames with existing framing definitions. However, the framing aspects they mentioned (e.g., Entman, 1993) were not utilized in developing the 15 "framing dimensions." Considering the development process and the broad definition of each frame, the 15 dimensions seem to fit better with "topics," not frames. As per framing theory, the categorization of these dimensions looks arbitrary and too broad to capture a frame's nuances. For example, a text is identified as an "economic frame" if it focuses on anything in the whole economy. Consider the Ku Klux Klan example mentioned above: under the PFC's 15 dimensions, both KKK news reports would probably be identified as a "law and order, crime and justice frame." This does not answer the "how" question at all. The dimensions, however, can be considered topics. The MFC corpus inherits the same limitations, as it was developed using the PFC codebook.
+
+# 4.1.3 Gun Violence Frame Corpus (GVFC)
+
+This article identified another annotated corpus, the "Gun Violence Frame Corpus" (GVFC), which was applied in the neural network-based models discussed later. For this dataset, the authors manually annotated 1,300 news headlines collected from 21 U.S. news media outlets. Using nine pre-defined codes drawn from the literature, multiple coders annotated the headlines. Finally, they used a BERT model to build a frame prediction classifier, whose overall accuracy is 84.23%.
+
Conceptualization. Liu et al. (2019) used Entman (1993)'s prominent definition to conceptualize framing. They highlighted various ways of constructing frames, such as word choice and labeling by journalists "to promote a certain side" (p. 504). The authors also distinguished generic versus issue-specific frames. In terms of manual codes, they applied a deductive approach: first defining a set of frames and then manually labeling news articles into those pre-defined frames.

Review. The article briefly conceptualized a frame and included aspects of a widely used framing definition (e.g., Entman, 1993). However, not all the framing codes in GVFC were defined in line with how framing was conceptualized. For example, a code was assigned to the category of politics "... as long as [a] news headline mentions a politician's name," which does not seem aligned with the nuances of their conceptualization.

# 4.2 Computational Approaches

# 4.2.1 Topic Modeling

Various prior studies utilized topic modeling (TM) to explore frames (e.g., DiMaggio et al., 2013).

Method. The TM algorithm discovers latent themes in a large collection of documents (Blei, 2012). A topic is a probability distribution over a fixed vocabulary (p. 78). The algorithm produces a pre-specified number $(k)$ of word lists, where words are ranked by their probability of belonging to each list. Each list of words is considered a topic, and each topic has a different probability distribution. The latent Dirichlet allocation (LDA) topic model assigns each document to one or more topics. As a mixed-membership model, it may assign a document to multiple topics, reflecting that a document can contain elements of multiple topics. DiMaggio et al. (2013) used LDA topic modeling to explore frames. They viewed each topic as a frame, saying that a topic "includes terms that call attention to particular ways" (p. 593).

Conceptualization. DiMaggio et al. (2013) conceptualized a frame as "a set of discursive cues (e.g., words, images, and narrative) that suggests a particular interpretation of a person, event, organization, practice, condition, or situation" (p. 593). They cited Gamson et al. (1992)'s definition that a frame is "a central organizing principle that holds together and gives coherence and meaning to a diverse array of symbols." They considered each topic a frame.

Review. Here, the conceptualization of a frame looks consistent with the overall framing idea. However, the topic model's output (i.e., lists of words) and its interpretation do not seem aligned with framing aspects. A topic model's word list carries no connections among the words, owing to the model's bag-of-words assumption. The interpretation of each word list in DiMaggio et al. (2013) also treats it as a theme or issue, not a frame. For example, they reported the results using words like "highlight," "emphasize," and "concerned with" (e.g., this topic highlights legislative actions). Framing nuances like problem definition and causal interpretation could not be extracted here.

# 4.2.2 Structural Topic Modeling (STM)

Method. The STM model was also used to explore frames (e.g., Roberts et al., 2014). Compared to LDA topic modeling (Blei, 2012), STM allows including metadata or covariates in the model. With metadata (e.g., political ideology and time) added to the dataset and model, STM allows researchers to interpret how the topics are associated with those metadata. For example, in terms of political ideology, such as conservatives and liberals, researchers might identify one topic as more aligned with conservatives and another with liberals. Metadata can also be used to predict the topics' prevalence (Gilardi et al., 2021; Nicholls and Culpepper, 2021).

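STM itself is implemented in the R package `stm`; the toy Python sketch below only illustrates the covariate idea, regressing a simulated topic's prevalence on a binary ideology covariate. All data here are invented for illustration.

```python
# Toy illustration of STM's covariate idea: topic prevalence varies with metadata.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_docs = 200
ideology = rng.integers(0, 2, n_docs)        # 0 = liberal, 1 = conservative (toy)

# Simulated doc-topic proportions: topic 0 is more prevalent for conservatives.
topic0 = np.clip(0.3 + 0.3 * ideology + rng.normal(0, 0.05, n_docs), 0, 1)

# Regress topic prevalence on the covariate, as STM does (in a richer model).
model = LinearRegression().fit(ideology.reshape(-1, 1), topic0)
print(f"estimated shift in topic-0 prevalence: {model.coef_[0]:.2f}")
```

The fitted coefficient recovers the simulated prevalence shift, which is the kind of covariate effect STM reports for real corpora.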
In their study exploring topics in a corpus of newspaper texts, Gilardi et al. (2021) used several covariates, including time. Their results show how the topics are distributed over time across various U.S. states. Since the authors followed DiMaggio et al. (2013)'s argument of considering a topic as a frame, their interpretation of the results also focuses on themes or topics instead of frames.

Conceptualization. Gilardi et al. (2021) conceptualized a frame with Gamson et al. (1992)'s definition that a frame can be understood as a "storyline or unfolding narrative about an issue" (p. 385). In terms of exploring frames by STM, Gilardi et al. (2021) relied on DiMaggio et al. (2013)'s argument that topics identified through TM can be viewed as frames.

Review. Like the topic modeling approach, the STM analysis (Gilardi et al., 2021) is also constrained by considering a topic as a frame, so it carries similar limitations for framing analysis. Compared to topic modeling, STM offers additional insights into the topics or themes through the analysis of covariates. Both methods are based on the bag-of-words idea and thus lack the semantic contextualization needed for exploring frames.

# 4.2.3 Hierarchical Topic Modeling

Method. Studies also used hierarchical topic modeling (HTM) to explore frames. Nguyen (2015) and Nguyen et al. (2015) introduced an HTM model named "Supervised Hierarchical Latent Dirichlet Allocation" (SHLDA) that aims to analyze frames in a large dataset. In SHLDA, each document in the corpus is associated with a continuous response score (e.g., conservative vs. liberal ideology). The model produces a hierarchy of topics, where the first-level nodes are considered agendas and the second-level nodes frames. Documents' scores help explain how the topics are framed with respect to people's positions. Its document generative process combines hierarchical LDA and the hierarchical Dirichlet process (HDP). The authors applied it to three datasets and conducted qualitative and quantitative analyses to validate the model's agendas and frames.

Conceptualization. Nguyen (2015) also used the framing definition of Entman (1993) in conceptualizing a frame. However, unlike Gilardi et al. (2021), Nguyen (2015) considered a topic an agenda (e.g., what topics are talked about) and a sub-topic a second-level agenda or a frame (e.g., how these topics are talked about).

Review. As elaborated above, SHLDA is one step ahead of topic modeling. However, a crucial incongruity remains between how the authors conceptualized a frame (e.g., sub-topics) and how they interpreted the results. Though there is no unified framing definition, the idea of considering a sub-topic as a frame does not align with traditional framing conceptualizations (Entman, 1993; McCombs et al., 1997; Ghanem, 1997). Like many prior framing studies, the SHLDA output might also be considered simply topics and their relevant attributes, not frames. Moreover, Nguyen (2015)'s qualitative analysis to validate the output as frames was not systematically executed, and the presentation of its results does not illustrate any framing aspects (Entman, 1993).

# 4.2.4 Cluster Analysis

Method. The $k$-means clustering algorithm is another unsupervised approach used to explore frames. Burscher et al. (2016) conducted two $k$-means clustering analyses on a dataset: one including all words and another including only selected words (i.e., nouns, adjectives, and adverbs). After creating document vectors with TF-IDF in both groups, they ran $k$-means clustering to find clusters. As in any centroid-based clustering approach, a certain number of clusters $(k)$ is specified in advance, and each cluster is represented by its center. They selected the number of clusters $(k)$ using the "elbow method." Each document is assigned to the cluster whose center is closest to it (Burscher et al., 2016). Unlike topic modeling, $k$-means clustering is a single-membership approach in which each document belongs to exactly one cluster.

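The pipeline described above (TF-IDF document vectors followed by $k$-means) can be sketched with scikit-learn. The documents and the choice of $k$ are toy assumptions; in practice $k$ would be selected with the elbow method by plotting inertia over candidate values.

```python
# Minimal sketch of TF-IDF vectorization followed by k-means clustering.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "nuclear power plant safety review ordered",
    "nuclear reactor safety inspection finds flaws",
    "energy prices rise after subsidy cuts",
    "household energy prices climb again",
]

X = TfidfVectorizer().fit_transform(docs)    # one TF-IDF vector per document

# Single membership: each document is assigned to exactly one cluster.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)
```

With clearly separated vocabularies, the two "nuclear safety" documents fall into one cluster and the two "energy prices" documents into the other, illustrating the single-membership property noted above.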
Conceptualization. Burscher et al. (2016) conceptualized a frame in terms of "word frequencies" and described words as highly reliable and less biased features for producing frames. They "used word frequencies as features [of a frame] in [their] cluster analyses" (p. 533). They utilized the traditional framing definition only partially (e.g., the presence or absence of certain keywords and stock phrases) (Entman, 1993).

Review. Burscher et al. (2016) conceptualized and interpreted frames in terms of word frequencies and co-occurrences, whereas the framing devices listed in Table 1 suggest that words are only one of many devices used to construct a frame. Such a conceptualization does not help explore frames, despite their acknowledgment that "based on plain word features, a cluster analysis cannot reveal complex semantic and logical relationships like causality" (Burscher et al., 2016, p. 541). As a single-membership approach, this method also conflicts with one of the core framing ideas, namely that a framing device may belong to multiple frames. The results were presented with wording like "refers to"; for example, "cluster B5 refers to nuclear power ... in Iran" (p. 439). The results indicate these clusters as topics or issues: they do not indicate "how" the "nuclear" issue was discussed and evaluated as a problem. Both the conceptualization and the output seem to illustrate certain topics, not frames.

# 4.2.5 Neural Network Model

Method. Some studies utilized the neural network approach to build frame-identifying classifiers and analyzed frames in various text documents (e.g., news reports and tweets). Mainly, two annotated datasets, namely MFC and GVFC, were used in building these models.

MFC was utilized in a number of such studies, including probabilistic soft logic (PSL) (Johnson et al., 2017), an LSTM neural network (Naderi and Hirst, 2017), a recursive neural network (Ji and Smith, 2017), and transformer-based language models such as BERT and RoBERTa (Khanehzar et al., 2019; Cabot et al., 2020; Mendelsohn et al., 2021). Some studies used MFC's annotated news reports partially, and some used the full corpus.

After manually annotating the GVFC dataset, Liu et al. (2019) used it to build a classifier with BERT. The dataset was later applied in other studies (e.g., Akyurek et al., 2020; Tourni et al., 2021; Bhatia et al., 2021).

Conceptualization. As mentioned above, Liu et al. (2019) used traditional framing definitions (e.g., Entman, 1993) while conceptualizing a frame. The studies applying MFC in building neural network-based classifiers also conceptualize framing by drawing on prior work in both social and computational science.

Review. In terms of the approach, both groups of studies applied state-of-the-art pre-trained models based on transfer learning, which looks promising for advancing computational framing analysis. However, the quality of the annotated training datasets appears not up to the mark, which is reflected in the limited interpretation of results in those studies. As reviewed above, the MFC dataset seems more about categorizing a text into broad topics (e.g., "economic frames"), not frames. The subsequent studies applying the MFC dataset also did not adequately justify MFC's 15 dimensions as frames. Their results mainly focused on the accuracy of models built on the MFC training dataset, not on whether the results provide framing nuances.

Compared to MFC, GVFC's annotations look more coherent but still fall short of capturing framing nuances, as mentioned in sub-section 4.1.3. For example, based on GVFC's "politics" code, Liu et al. (2019) interpreted their result by saying, "it appears that news media of all types have largely politicized the gun violence issue right after each major mass shooting" (p. 511). Here, the politicization result and its interpretation do not align with how the code is defined. The results might indicate that the texts "discussed" "a politician" or politics, which is a simple topic or issue, not a major framing element like a problem definition and its coherent argument.

# 4.2.6 Parsing Semantic Relations

Another line of computational framing analysis explores semantic relations, going beyond the bag-of-words model.

Method. Sturdza et al. (2018) operationalized Entman (1993)'s four framing elements as semantic relations in texts. This approach proposed a rule-based system that uses existing computational software, such as TurboParser, together with implicature rules. Using the parser, the author proposed identifying syntactic structures in texts and then applying a set of rules to transform each syntactic structure into a semantic network. The network determines the semantic role of each word (e.g., actors, events) through a set of sentiment-analysis implicature rules based on a sentiment lexicon.

On the other hand, Ziems and Yang (2021) computationally parsed various attributes (e.g., race) of police shooting victims in news reports and explored how differently the victims are portrayed in news media. They called this "entity-centric framing." A recent study by Yu (2022) looked at iterative adverbs (e.g., "again") in political discourse, considering that such adverbs evoke different attitudinal subtexts. After extracting sentences with the relevant adverbs, the author grouped the sentences through $k$-means clustering and identified the most representative keywords in each cluster with a keyword mining tool.

Conceptualization. In conceptualizing a frame, Sturdza et al. (2018) relied on the four framing elements of Entman (1993, p. 52). However, the two other studies lack adequate conceptualization of framing. For instance, Ziems and Yang (2021) mainly explored "entity-centric" frames but did not ground the notion in existing literature.

Review. Compared to the topic modeling method, this approach looks innovative in terms of understanding semantic relations between words and phrases. However, the idea does not seem adequately exploited for understanding the nuances of frames. For example, Sturdza et al. (2018) did not apply the operationalization to a real dataset. Ziems and Yang (2021) reported frequencies and correlations, while Yu (2022)'s results ended with clusters and keywords, instead of exploring the coherent argument and the relations among various framing devices. Even so, by its design, the semantic relations approach holds potential for advancing the computational methods of framing analysis.

# 4.2.7 Frequency-based Model

Method. This model proposed using QDA Miner and its affiliated WordStat program to extract words and phrases and examine their repetition across the corpus (Kang and Yang, 2022). Sanderink (2020) proposed slight changes to this model: researchers first determine certain frames (e.g., energy security) by reviewing prior scholarship and then prepare a codebook using QDA Miner. The codebook comprises words, phrases, and rules that capture the various elements of each pre-determined frame. Finally, WordStat is used to calculate the frequency of the words and phrases relating to each frame.

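The counting step of this workflow can be illustrated with a small sketch. QDA Miner and WordStat are GUI tools, so the frame names, codebook phrases, and texts below are invented stand-ins that only mimic the frequency calculation.

```python
# Toy codebook-based frequency counting (stand-in for the WordStat step).
import re
from collections import Counter

codebook = {                     # invented frames and phrases for illustration
    "energy security": ["supply", "shortage", "import dependence"],
    "climate change": ["emissions", "warming", "carbon"],
}

corpus = [
    "Gas supply shortage raises import dependence concerns.",
    "New policy targets carbon emissions to slow warming.",
    "Another report on supply and emissions.",
]

counts = Counter()
for doc in corpus:
    text = doc.lower()
    for frame, phrases in codebook.items():
        counts[frame] += sum(len(re.findall(re.escape(p), text)) for p in phrases)

print(counts)                    # phrase frequency per pre-determined frame
```

As the review below notes, such counts capture how often codebook phrases recur, not the coherent meaning of a frame.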
Conceptualization. Scholars in this approach defined a frame in terms of word recurrence in a document. They also highlighted the ways of editing, interpreting, organizing, and presenting information that cause particular news content to be framed. They compared a frame with a theme.

Review. The frame was not appropriately conceptualized here, as per existing framing definitions (e.g., Entman, 1993). Considering only the frequency of words does not capture the coherent meanings of frames.

# 4.2.8 FrameAxis

Method. The FrameAxis model explores "microframes," each operationalized as a pair of antonyms, such as legal versus illegal or fast versus slow. The antonyms are obtained from WordNet. The authors then compute the bias of each microframe (the average contribution of all words in a document to the microframe) and the intensity of each microframe (how strongly it is presented in documents). The microframes are analyzed alongside the agent-object-action patterns identified in the corpus by a semantic role labeling (SRL) model.

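The bias and intensity computations can be sketched with toy numbers. The two-dimensional "embeddings" below are invented; FrameAxis itself uses pretrained word embeddings and WordNet antonym pairs, and defines intensity relative to a background corpus rather than the simplified version here.

```python
# Toy FrameAxis-style sketch: microframe = antonym-pair axis in embedding space.
import numpy as np

emb = {                          # invented 2-d "embeddings" for illustration
    "legal": np.array([1.0, 0.2]),
    "illegal": np.array([-1.0, 0.1]),
    "arrest": np.array([-0.8, 0.3]),
    "court": np.array([0.6, 0.5]),
    "permit": np.array([0.9, 0.1]),
}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

axis = emb["legal"] - emb["illegal"]            # legal-vs-illegal microframe axis
doc = ["arrest", "court", "permit"]             # words of a toy document
contrib = np.array([cos(emb[w], axis) for w in doc])

bias = contrib.mean()                            # leaning along the axis
intensity = ((contrib - bias) ** 2).mean()       # simplified second moment
print(f"bias={bias:.2f}, intensity={intensity:.2f}")
```

A positive bias means the document's words lean toward the "legal" pole on average, while the intensity indicates how strongly the axis is engaged at all.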
Conceptualization. A frame in this approach was conceptualized using features of existing definitions. For example, the authors highlighted presenting selected aspects of an issue and making them more salient, which aims to promote certain values, interpretations, or solutions.

Review. Though the framing conceptualization is derived from prominent framing definitions, the core construct of FrameAxis is the pair of antonyms, which again limits the coherent argument, problem definition, and other framing elements.

# 4.2.9 Analysis of Topic Model Networks

Walter and Ophir (2019) proposed this mixed-method approach, "Analysis of Topic Model Networks" (ANTMN), which combines topic modeling and semantic network analysis. It was applied in other studies (e.g., Ophir et al., 2021).

Method. ANTMN includes three steps. First, the authors apply LDA topic modeling (Blei, 2012) to the dataset. They label each topic by qualitatively examining three types of information: a) the words with the highest loading on each topic, b) prevalent and exclusive words in each topic, and c) the full documents most representative of each topic. Second, ANTMN creates a semantic network in which the topics serve as nodes and the topics' similarity relationships serve as edges. The relationship is calculated from the topics' co-occurrence in the documents. The output is a fully connected, undirected, and weighted network. Finally, a community detection algorithm clusters the topics into communities in the network based on the topics' prevalence in similar documents (Walter and Ophir, 2019).

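The second and third ANTMN steps can be sketched as follows, with a toy document-topic matrix standing in for real LDA output. The cosine-similarity edge weights and the greedy modularity algorithm are illustrative choices, not necessarily those of Walter and Ophir (2019).

```python
# Toy ANTMN sketch: topic co-occurrence network + community detection.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Rows = documents, columns = topics (invented proportions).
doc_topic = np.array([
    [0.8, 0.2, 0.0, 0.0],
    [0.7, 0.3, 0.0, 0.0],
    [0.0, 0.0, 0.6, 0.4],
    [0.0, 0.1, 0.5, 0.4],
])

k = doc_topic.shape[1]
G = nx.Graph()
G.add_nodes_from(range(k))
for i in range(k):
    for j in range(i + 1, k):
        a, b = doc_topic[:, i], doc_topic[:, j]
        # Edge weight: how strongly two topics co-occur across documents.
        w = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
        if w > 0:
            G.add_edge(i, j, weight=w)

# Each detected community of topics is what ANTMN would call a frame.
communities = greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in communities])
```

Here topics 0 and 1 co-occur in the first two documents and topics 2 and 3 in the last two, so community detection recovers two clusters of topics.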
Conceptualization. As the authors noted, ANTMN can analyze emphasis frames (e.g., highlighting one side), not equivalency frames (e.g., gain vs. loss wording). They conceptualize a frame as "a communit[y] in a network of topics" (p. 248), based on linguistic patterns. Borrowing van Atteveldt and Peng (2018)'s idea of arranging various framing devices around an overarching idea (e.g., a cluster of relevant framing devices), they consider each topic in topic modeling a framing device. A cluster of topics is what ANTMN names a frame. They embraced the pattern of a frame that "repeatedly invokes the same objects and traits, using identical or synonymous words and symbols in a series of similar communications that are concentrated in time" (Entman et al., 2009, p. 177).

Review. A few things seem to restrict ANTMN as a framing analysis model. As per the framing conceptualization, the topics (i.e., framing devices) under each network community need to be coherently connected with each other to render a coherent framing argument. The authors did not explain how the devices are coherently interconnected, and this gap is reflected in the interpretation of the results. For instance, they reported a framing result by saying that "the largest community on the right consisted of topics about the cultural and economic consequences.... Articles dominated by these topics portrayed the impact of diseases on the economy at large...." (Walter and Ophir, 2019, p. 259). Here, the authors mentioned the topics' names and what these topics portray, using words like "consisted of" and "portrayed." The results did not provide a coherent argument about the problem or how one aspect is interconnected with another. Though the output demonstrated some topics, the authors' claim that the communities are frames is not supported by adequate evidence.

Despite the authors' claim that this method is unsupervised, manual human labor is still needed in at least two places: a) examining words and documents to label topics and b) interpreting the findings. However, no systematic method was provided for executing this manual analysis.

# 5 Discussion and Conclusion

In this article, we surveyed 37 empirical studies and reported on nine computational approaches and three coding schemas with annotated corpora, examining how they conceptualize frames and utilize various computational methods to explore frames in large-scale datasets. Overall, this article puts the existing methods and relevant resources together in one place. In the absence of a comprehensive overview of computational framing analysis methods and resources, this article's insights will benefit framing scholars, especially newcomers, by letting them gain deeper knowledge from a single article and build on it in further exploring frames in big data.

Algorithmic Functions. As demonstrated above, most algorithms used in computational framing analysis were not originally built for this purpose. For example, LDA topic modeling was built to find broad themes in a large corpus (Blei, 2012). The works of Liu et al. (2019) and Walter and Ophir (2019), however, seem innovative in their efforts to build new or modified methods that explore comparatively more framing nuances (Nicholls and Culpepper, 2021). As state-of-the-art models, neural networks appear promising, but appropriate training datasets need to be developed and used for them.

Conceptualization of Frames. Though the computational methods mostly conceptualized a frame with prominent definitions (e.g., the definition of Entman, 1993), some of the methods embraced framing aspects only partially. Some studies ended up operationalizing a frame in a way that is not supported by the core framing aspects. For instance, Boydstun et al. (2013, 2014) included framing's main aspects in developing the PFC but defined the 15 dimensions as "topics" in the name of frames. Nguyen (2015) simply equated a frame with second-level agendas or sub-topics without adequate conceptual support. Though Liu et al. (2019) and Walter and Ophir (2019) provided relatively stronger conceptualizations, their results suggest that Liu et al. (2019)'s coding schema and Walter and Ophir (2019)'s network communities still fall short of providing coherent problem definitions and causal interpretations.

Interpretation of Results. Even when studies conceptualized frames in a relatively comprehensive way, their presentation and interpretation of results rarely went beyond describing relevant topics and themes rather than frames, as the results fail to illustrate a coherent problem definition, causal evaluation, or potential recommendations. The example discussed under ANTMN above demonstrates this. Similar gaps in framing conceptualization and in the presentation and interpretation of results remain in other approaches as well (e.g., topic modeling and cluster analysis).

Use of Framing Devices. The bag-of-words approach automatically excludes from analysis many of the potential framing devices listed in Table 1. The approaches examined in this article mostly utilize only one framing device (i.e., words). Considering that framing analysis is a comprehensive approach involving multiple theoretical and practical aspects (D'angelo, 2018; Golan, 2021), even qualitative framing analysis through manual labor is challenging work. From that perspective, computational approaches are at a nascent stage in addressing this social science problem. The scholarship therefore needs better computational methods and tools that can approximate frames as closely as possible. For example, future computational approaches might retrieve the problem definition and causal interpretation by including more framing devices (see Table 1), going beyond the analysis of "words."

Overall, this survey article contributes to the literature on computational framing analysis in several ways. As the first survey of its kind, it puts existing computational framing analysis methods and resources together in one place, which can benefit future scholars at least as a source for gaining more comprehensive knowledge of computational framing analysis approaches. With this knowledge, they can start further exploring frames in big data and advancing computational framing analysis methods. This article also contributes to the ongoing discussion and scholarly efforts toward further improving the computational tools for framing analysis.

Open Questions. The analysis and discussion offer at least three open questions to be addressed in future studies: a) How can a computational approach capture all relevant semantic relations, going beyond just words, for a better exploration of frames? b) How can the semantic relations in one text document be connected with or informed by those of other documents for a broader understanding of frames across multiple documents? c) Given the role of many framing devices beyond words in constructing frames (see Table 1), how can we develop a computational model that captures salience deployed through other framing devices, including sentences, omissions, metaphors, size and placement of texts, culture, emotion, sources, catchphrases, exemplars, visual content, etc.?

A crucial part of framing analysis is to capture "how" a text is presented. Entman (1993)'s definition speaks of "perceived reality," which also aligns with people's cognitive processes. In texts, the "perceived reality" is usually dissected into what is discussed and how it is framed. Though the "what" part is generally apparent, the main challenge is to analyze the "how." In NLP, it appears difficult to automatically distinguish between the "what" and the "how," so the framing analysis task in NLP is even more complicated than it is for human analysts.

Limitations. Selecting articles for this survey was challenging, as the words "frame" and "framing" are used in other disciplines (e.g., engineering). This prompted us to exploit multiple avenues (e.g., Google Scholar and Scopus) to collect relevant articles as comprehensively as possible. Articles not matching the keyword searches might have been left out, so the list might be missing some articles due to the search constraints. We also excluded non-English articles.

Regarding the analysis, we mainly focused on methodological design and quality in terms of capturing and examining frames and framing devices. We did not focus on or report the accuracy of the models' performance. For example, we emphasized the quality of the training dataset (e.g., MFC) for exploring frames instead of the models' accuracies. As this survey was conducted from a qualitative perspective, our results are constrained in terms of quantitative insights (e.g., the frequency or percentage of prior studies applying particular methods).

# References

Lene Aarøe. 2011. Investigating frame strength: The case of episodic and thematic frames. Political Communication, 28(2):207-226.
Afra Feyza Akyurek, Lei Guo, Randa Elanwar, Prakash Ishwar, Margrit Betke, and Derry Tanti Wijaya. 2020. Multi-label and multilingual news framing analysis. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.
Christian Baden and Paul D'Angelo. 2018. Reconstructing frames from intertextual news discourse. Doing News Framing Analysis II: Empirical and Theoretical Perspectives, pages 43-66.
Monika Bednarek and Georgia Carr. 2021. Computer-assisted digital text analysis for journalism and communications research: Introducing corpus linguistic techniques that do not require programming. Media International Australia, 181(1):131-151.
Vibhu Bhatia, Vidya Prasad Akavoor, Sejin Paik, Lei Guo, Mona Jalal, Alyssa Smith, David Assefa Tofu, Edward Edberg Halim, Yimeng Sun, Margrit Betke, et al. 2021. OpenFraming: Open-sourced tool for computational framing analysis of multilingual data. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 242-250.
David M Blei. 2012. Probabilistic topic models. Communications of the ACM, 55(4):77-84.

Porismita Borah. 2008. Examining media content: A case study of newspaper coverage of dowry in India, 1999-2006. Asian Journal of Communication, 18(4):379-395.
Amber E Boydstun, Dallas Card, Justin Gross, Paul Resnick, and Noah A Smith. 2014. Tracking the development of media frames within and across policy issues. Technical report, University of California, Davis.
Amber E Boydstun, Justin H Gross, Philip Resnik, and Noah A Smith. 2013. Identifying media frames and frame dynamics within and across policy issues. In New Directions in Analyzing Text as Data Workshop, London.
Bjorn Burscher, Rens Vliegenthart, and Claes H de Vreese. 2016. Frames beyond words: Applying cluster and sentiment analysis to news coverage of the nuclear power issue. Social Science Computer Review, 34(5):530-545.
Pere-Lluis Huguet Cabot, Verna Dankers, David Abadi, Agneta Fischer, and Ekaterina Shutova. 2020. The pragmatics behind politics: Modelling metaphor, framing and emotion in political discourse. ACL Anthology.
Dallas Card, Amber Boydstun, Justin H Gross, Philip Resnik, and Noah A Smith. 2015. The media frames corpus: Annotations of frames across issues. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 438-444.
Paul DiMaggio, Manish Nag, and David Blei. 2013. Exploiting affinities between topic modeling and the sociological perspective on culture: Application to newspaper coverage of U.S. government arts funding. Poetics, 41(6):570-606.
Paul D'Angelo. 2018. Doing News Framing Analysis II: Empirical and Theoretical Perspectives.
Robert M Entman. 1993. Framing: Toward clarification of a fractured paradigm. McQuail's Reader in Mass Communication Theory, pages 390-397.
Robert M Entman. 2003. Cascading activation: Contesting the White House's frame after 9/11. Political Communication, 20(4):415-432.
Robert M Entman, Jörg Matthes, and Lynn Pellicano. 2009. Nature, sources, and effects of news framing. In The Handbook of Journalism Studies, pages 195-210. Routledge.
Gail Fairhurst and Robert Sarr. 1996. The Art of Framing. San Francisco: Jossey-Bass.
Gail T Fairhurst. 2005. Reframing the art of framing: Problems and prospects for leadership. Leadership, 1(2):165-185.

+Anjalie Field, Doron Kliger, Shuly Wintner, Jennifer Pan, Dan Jurafsky, and Yulia Tsvetkov. 2018. Framing and agenda-setting in russian news: a computational analysis of intricate political strategies. arXiv preprint arXiv:1808.09386.
+William A Gamson, David Croteau, William Hoynes, and Theodore Sasson. 1992. Media images and the social construction of reality. Annual review of sociology, 18(1):373-393.
+William A Gamson and Andre Modigliani. 1989. Media discourse and public opinion on nuclear power: A constructionist approach. American journal of sociology, 95(1):1-37.
+Salma Ghanem. 1997. Filling in the tapestry: The second level of agenda setting in me mcombs, dl shaw & dh weaver (eds.), communication and democracy (pp. 3-15).
+Fabrizio Gilardi, Charles R Shipan, and Bruno Wuest. 2021. Policy diffusion: The issue-definition stage. American Journal of Political Science, 65(1):21-35.
+Guy Golan. 2021. What is news framing? an informal conversation among framing scholars. https://www.youtube.com/watch?v=mArApGS-p1I&t=57s.
+Lei Guo, Chao Su, Sejin Paik, Vibhu Bhatia, Vidya Prasad Akavoor, Ge Gao, Margrit Betke, and Derry Wijaya. 2022. Proposing an open-sourced tool for computational framing analysis of multilingual data. Digital Journalism, pages 1-22.
+James K Hertog and Douglas M McLeod. 2001. A multiperspectival approach to framing analysis: A field guide. In Framing public life, pages 157-178. Routledge.
+Shanto Iyengar. 1994. Is anyone responsible?: How television frames political issues. University of Chicago Press.
+Yangfeng Ji and Noah Smith. 2017. Neural discourse structure for text categorization. arXiv preprint arXiv:1702.01829.
+Elise Jing and Yong-Yeol Ahn. 2021. Characterizing partisan political narrative frameworks about COVID-19 on twitter. *EPJ data science*, 10(1):53.
+Kristen Johnson, Di Jin, and Dan Goldwasser. 2017. Leveraging behavioral and social information for weakly supervised collective classification of political discourse on twitter. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 741-752.
+Yowei Kang and Kenneth CC Yang. 2022. Communicating racism and xenophobia in the era of Donald Trump: A computational framing analysis of the US-Mexico cross-border wall discourses: Special issue on Donald Trump era and communicating race in America. Howard Journal of Communications, pages 1-20.
+Shima Khanehzar, Andrew Turpin, and Gosia Mikolajczak. 2019. Modeling political framing across policy issues and contexts. In Proceedings of the 17th Annual Workshop of the Australasian Language Technology Association, pages 61-66.
+Sotiris B Kotsiantis, Ioannis Zaharakis, P Pintelas, et al. 2007. Supervised machine learning: A review of classification techniques. Emerging artificial intelligence applications in computer engineering, 160(1):3-24.
+Haewoon Kwak, Jisun An, and Yong-Yeol Ahn. 2020. A systematic media frame analysis of 1.5 million New York Times articles from 2000 to 2017. In 12th ACM Conference on Web Science, pages 305-314.
+Haewoon Kwak, Jisun An, Elise Jing, and Yong-Yeol Ahn. 2021. Frameaxis: characterizing microframe bias and intensity with word embedding. PeerJ Computer Science, 7:e644.
+Pengxiang Li, Hichang Cho, Yuren Qin, and Anfan Chen. 2021. #MeToo as a connective movement: Examining the frames adopted in the anti-sexual harassment movement in China. Social Science Computer Review, 39(5):1030-1049.
+Siyi Liu, Lei Guo, Kate Mays, Margrit Betke, and Derry Tanti Wijaya. 2019. Detecting frames in news headlines and its application to analyzing news framing trends surrounding US gun violence. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL).
+Maxwell McCombs, Juan Pablo Llamas, Esteban Lopez-Escobar, and Federico Rey. 1997. Candidate images in Spanish elections: Second-level agenda-setting effects. Journalism & Mass Communication Quarterly, 74(4):703-717.
+Julia Mendelsohn, Ceren Budak, and David Jurgens. 2021. Modeling framing in immigration discourse on social media. arXiv preprint arXiv:2104.06443.
+Nona Naderi and Graeme Hirst. 2017. Classifying frames at the sentence level in news articles. In Proceedings of the International Conference Recent Advances in Natural Language Processing (RANLP 2017).
+Thomas E Nelson, Rosalee A Clawson, and Zoe M Oxley. 1997. Media framing of a civil liberties conflict and its effect on tolerance. American Political Science Review, 91(3):567-583.
+Viet-An Nguyen. 2015. Guided probabilistic topic models for agenda-setting and framing. Ph.D. thesis, University of Maryland, College Park.
+Viet-An Nguyen, Jordan Boyd-Graber, Philip Resnik, and Kristina Miler. 2015. Tea party in the house: A hierarchical ideal point topic model and its application to republican legislators in the 112th congress. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1438-1448.
+Tom Nicholls and Pepper D Culpepper. 2021. Computational identification of media frames: Strengths, weaknesses, and opportunities. Political Communication, 38(1-2):159-181.
+Yotam Ophir, Dror Walter, Daniel Arnon, Ayse Lokmanoglu, Michele Tizzoni, Joëlle Carota, Lorenzo D'Antiga, and Emanuele Nicastro. 2021. The framing of COVID-19 in Italian media and its relationship with community mobility: A mixed-method approach. Journal of health communication, 26(3):161-173.
+Stephen D Reese, Oscar H Gandy Jr, and August E Grant. 2001. Framing public life: Perspectives on media and our understanding of the social world. Routledge.
+Margaret E Roberts, Brandon M Stewart, Dustin Tingley, Christopher Lucas, Jetson Leder-Luis, Shana Kushner Gadarian, Bethany Albertson, and David G Rand. 2014. Structural topic models for open-ended survey responses. American journal of political science, 58(4):1064-1082.
+Malik Sallam. 2021. COVID-19 vaccine hesitancy worldwide: A concise systematic review of vaccine acceptance rates. Vaccines, 9(2):160.
+Lisa Sanderink. 2020. Shattered frames in global energy governance: Exploring fragmented interpretations among renewable energy institutions. Energy research & social science, 61:101355.
+Antonio Sanfilippo, Lyndsey Franklin, Stephen Tratz, Gary Danielson, Nicholas Mileson, Roderick Riensche, and Liam McGrath. 2008. Automating frame analysis. In Social computing, behavioral modeling, and prediction, pages 239-248. Springer.
+Mihai D Sturdza et al. 2018. Automated framing analysis: A rule based system for news media text. Journal of Media Research-Revista de Studii Media, 11(32):94-110.
+Geoffrey Supran and Naomi Oreskes. 2021. Rhetoric and frame analysis of ExxonMobil's climate change communications. One Earth, 4(5):696-719.
+J Swenson. 1990. News coverage of the abortion issue: Framing changes in the 1980s. Paper presented to the Committee on the Status of Women, Association for Education in Journalism and Mass Communication.
+James W Tankard Jr. 2001. The empirical approach to the study of media framing. In Framing public life, pages 111-121. Routledge.
+David Tewksbury and Julius Matthew Riles. 2018. Framing in an interactive news environment. Doing news framing analysis II. Empirical and theoretical perspectives, pages 137-162.
+Isidora Tourni, Lei Guo, Taufiq Husada Daryanto, Fabian Zhafransyah, Edward Edberg Halim, Mona Jalal, Boqi Chen, Sha Lai, Hengchang Hu, Margrit Betke, et al. 2021. Detecting frames in news headlines and lead images in US gun violence coverage. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4037-4050.
+Wouter van Atteveldt and Tai-Quan Peng. 2018. When communication meets computation: Opportunities, challenges, and pitfalls in computational communication science. Communication Methods and Measures, 12(2-3):81-92.
+TA Van Dijk. 2016. Analyzing frame analysis: A critical review of framing studies in social movement research. Technical report, Working paper version 4.0, 2 December. https://www.academia.edu/40286423....
+Dror Walter and Yotam Ophir. 2019. News frame analysis: An inductive mixed-method computational approach. Communication Methods and Measures, 13(4):248-266.
+Dror Walter and Yotam Ophir. 2021. Strategy framing in news coverage and electoral success: An analysis of topic model networks approach. Political Communication, 38(6):707-730.
+Kenneth CC Yang and Yowei Kang. 2020. Framing national security concerns in mobile telecommunication infrastructure debates: A text mining study of Huawei. In Huawei goes global, pages 319-339. Springer.
+Tuukka Ylä-Anttila, Veikko Eranti, and Anna Kukkonen. 2021. Topic modeling for frame analysis: A study of media debates on climate change in India and USA. Global Media and Communication, page 17427665211023984.
+Qi Yu. 2022. "again, dozens of refugees drowned": A computational study of political framing evoked by presuppositions. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop, pages 31-43.
+Caleb Ziems and Diyi Yang. 2021. To protect and to serve? Analyzing entity-centric framing of police violence. arXiv preprint arXiv:2109.05325.
+
+# A Appendix
+
+