{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:02:50.186457Z"
},
"title": "Gender-Aware Reinflection using Linguistically Enhanced Neural Models",
"authors": [
{
"first": "Bashar",
"middle": [],
"last": "Alhafni",
"suffix": "",
"affiliation": {
"laboratory": "Computational Approaches to Modeling Language Lab New York University Abu Dhabi",
"institution": "Carnegie Mellon University",
"location": {
"country": "Qatar"
}
},
"email": "alhafni@nyu.edu"
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": "",
"affiliation": {
"laboratory": "Computational Approaches to Modeling Language Lab New York University Abu Dhabi",
"institution": "Carnegie Mellon University",
"location": {
"country": "Qatar"
}
},
"email": "nizar.habash@nyu.edu"
},
{
"first": "Houda",
"middle": [],
"last": "Bouamor",
"suffix": "",
"affiliation": {
"laboratory": "Computational Approaches to Modeling Language Lab New York University Abu Dhabi",
"institution": "Carnegie Mellon University",
"location": {
"country": "Qatar"
}
},
"email": "hbouamor@qatar.cmu.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we present an approach for sentence-level gender reinflection using linguistically enhanced sequence-to-sequence models. Our system takes an Arabic sentence and a given target gender as input and generates a gender-reinflected sentence based on the target gender. We formulate the problem as a user-aware grammatical error correction task and build an encoderdecoder architecture to jointly model reinflection for both masculine and feminine grammatical genders. We also show that adding linguistic features to our model leads to better reinflection results. The results on a blind test set using our best system show improvements over previous work, with a 3.6% absolute increase in M 2 F 0.5. Bias Statement Most NLP systems are unaware of their users' preferred grammatical gender. Such systems typically generate a single output for a specific input without considering any user information. Beyond being simply incorrect in many cases, such output patterns create representational harm by propagating social biases and inequalities of the world we live in. While such biases can be traced back to the NLP systems' training data, balancing and cleaning the training data will not guarantee the correctness of a single output that is arrived at without accounting for user preferences. Our view is that NLP systems should utilize grammatical gender preference information to provide the correct user-aware output, particularly for gender-marking morphologically rich languages. When the grammatical gender preference information is unavailable to the systems, all gender-specific outputs should be generated and properly marked. We acknowledge that by limiting the choice of gender expression to the grammatical gender choices in Arabic, we exclude other alternatives such as non-binary gender or no-gender expressions. 
We are not aware of any sociolinguistics published research that discusses such alternatives for Arabic, although there are growing grassroots efforts, e.g., the Ebdal Project. 1",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we present an approach for sentence-level gender reinflection using linguistically enhanced sequence-to-sequence models. Our system takes an Arabic sentence and a given target gender as input and generates a gender-reinflected sentence based on the target gender. We formulate the problem as a user-aware grammatical error correction task and build an encoderdecoder architecture to jointly model reinflection for both masculine and feminine grammatical genders. We also show that adding linguistic features to our model leads to better reinflection results. The results on a blind test set using our best system show improvements over previous work, with a 3.6% absolute increase in M 2 F 0.5. Bias Statement Most NLP systems are unaware of their users' preferred grammatical gender. Such systems typically generate a single output for a specific input without considering any user information. Beyond being simply incorrect in many cases, such output patterns create representational harm by propagating social biases and inequalities of the world we live in. While such biases can be traced back to the NLP systems' training data, balancing and cleaning the training data will not guarantee the correctness of a single output that is arrived at without accounting for user preferences. Our view is that NLP systems should utilize grammatical gender preference information to provide the correct user-aware output, particularly for gender-marking morphologically rich languages. When the grammatical gender preference information is unavailable to the systems, all gender-specific outputs should be generated and properly marked. We acknowledge that by limiting the choice of gender expression to the grammatical gender choices in Arabic, we exclude other alternatives such as non-binary gender or no-gender expressions. 
We are not aware of any sociolinguistics published research that discusses such alternatives for Arabic, although there are growing grassroots efforts, e.g., the Ebdal Project. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The recent advances in machine learning have propelled the field of Natural Language Processing (NLP) forward at a great pace and raised expectation about the quality of results and especially their impact in a social context, including not only race (Merullo et al., 2019) and politics , but also gender identities (Font and Costa-juss\u00e0, 2019; Dinan et al., 2019; Dinan et al., 2020) . Human-generated data, reflective of the gender discrimination and sexist stereotypes perpetrated through language and speaker's lexical choices, is considered the primary source of these biases (Maass and Arcuri, 1996; Menegatti and Rubini, 2017) . However, Habash et al. (2019) pointed out that NLP gender biases do not just exist in human-generated training data, and models built from it; but also stem from gender-blind (i.e., gender-unaware) systems designed to generate a single text output without considering any target gender information. Such systems propagate the biases of the models they use. One example is the I-ama-doctor/I-am-a-nurse problem in machine translation (MT) systems targeting many morphologically In contrast, gender-aware systems should be designed to produce outputs that are as gender-specific as the input information they have access to. Gender information may be contextualized (e.g., the input 'she is a doctor'), or linguistically provided (e.g., the gender feature provided in the user profile in social media). But, there may be contexts where the gender information is unavailable to the system (e.g., 'the student is a nurse'). In such cases, generating both gender-specific forms is more appropriate.",
"cite_spans": [
{
"start": 251,
"end": 273,
"text": "(Merullo et al., 2019)",
"ref_id": "BIBREF35"
},
{
"start": 316,
"end": 344,
"text": "(Font and Costa-juss\u00e0, 2019;",
"ref_id": "BIBREF16"
},
{
"start": 345,
"end": 364,
"text": "Dinan et al., 2019;",
"ref_id": "BIBREF12"
},
{
"start": 365,
"end": 384,
"text": "Dinan et al., 2020)",
"ref_id": "BIBREF13"
},
{
"start": 581,
"end": 605,
"text": "(Maass and Arcuri, 1996;",
"ref_id": "BIBREF31"
},
{
"start": 606,
"end": 633,
"text": "Menegatti and Rubini, 2017)",
"ref_id": "BIBREF34"
},
{
"start": 645,
"end": 665,
"text": "Habash et al. (2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present an approach for sentence-level gender reinflection using linguistically enhanced sequence-to-sequence models. Our system takes an Arabic sentence and a given target gender as input and generates a gender-reinflected sentence based on the provided target gender. Table 1 shows some input and output examples. Our work is closely related to the one by Habash et al. (2019) , as we use the same corpus that is made available and focus on first-person-singular constructions in Arabic. However, the main contributions of this work are the following: (1) we introduce an approach that jointly models the reinflection for both masculine and feminine grammatical genders, unlike Habash et al. (2019)'s segregated systems; (2) we show that adding linguistic features to our encoder-decoder model leads to better reinflection results. Our code, data, and trained models are publicly available. 3 This paper is organized as follows. In Section 2, we discuss some related work. In Section 3, we present some Arabic linguistic facts related to grammatical gender. Section 4 introduces our model for joint gender reinflection and describes the encoder-decoder architecture. Then, we present the experimental setup in Section 5 and discuss the results in Section 6. An error analysis is given in Section 7. We conclude and present future work in Section 8.",
"cite_spans": [
{
"start": 376,
"end": 396,
"text": "Habash et al. (2019)",
"ref_id": "BIBREF21"
},
{
"start": 911,
"end": 912,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 288,
"end": 295,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Many NLP systems have the ability to embed and amplify societal (gender, racial, religious, etc.) biases across a variety of core tasks such as coreference resolution (Rudinger et al., 2018; Zhao et al., 2018a) , machine translation (Rabinovich et al., 2017; Vanmassenhove et al., 2018; Font and Costa-juss\u00e0, 2019; Moryossef et al., 2019; Stanovsky et al., 2019; Stafanovi\u010ds et al., 2020; Gonen and Webster, 2020) , named entity recognition (Mehrabi et al., 2019) , dialogue systems (Dinan et al., 2019) , and language modeling (Lu et al., 2018; Bordia and Bowman, 2019) .",
"cite_spans": [
{
"start": 167,
"end": 190,
"text": "(Rudinger et al., 2018;",
"ref_id": "BIBREF40"
},
{
"start": 191,
"end": 210,
"text": "Zhao et al., 2018a)",
"ref_id": "BIBREF49"
},
{
"start": 233,
"end": 258,
"text": "(Rabinovich et al., 2017;",
"ref_id": "BIBREF39"
},
{
"start": 259,
"end": 286,
"text": "Vanmassenhove et al., 2018;",
"ref_id": "BIBREF44"
},
{
"start": 287,
"end": 314,
"text": "Font and Costa-juss\u00e0, 2019;",
"ref_id": "BIBREF16"
},
{
"start": 315,
"end": 338,
"text": "Moryossef et al., 2019;",
"ref_id": "BIBREF36"
},
{
"start": 339,
"end": 362,
"text": "Stanovsky et al., 2019;",
"ref_id": "BIBREF42"
},
{
"start": 363,
"end": 388,
"text": "Stafanovi\u010ds et al., 2020;",
"ref_id": null
},
{
"start": 389,
"end": 413,
"text": "Gonen and Webster, 2020)",
"ref_id": "BIBREF18"
},
{
"start": 441,
"end": 463,
"text": "(Mehrabi et al., 2019)",
"ref_id": "BIBREF33"
},
{
"start": 483,
"end": 503,
"text": "(Dinan et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 528,
"end": 545,
"text": "(Lu et al., 2018;",
"ref_id": "BIBREF29"
},
{
"start": 546,
"end": 570,
"text": "Bordia and Bowman, 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "For the case of gender bias, various research efforts have shown that this could be caused by either human-generated training datasets (Font and Costa-juss\u00e0, 2019; Habash et al., 2019) , pre-trained word embeddings (Bolukbasi et al., 2016; Zhao et al., 2017; Caliskan et al., 2017; Manzini et al., 2019) , or language models (Kurita et al., 2019; Zhao et al., 2019) . To mitigate this problem, several researchers proposed approaches in which they focus mainly on debiasing word embeddings (Bolukbasi et al., 2016; Zhao et al., 2018b; Gonen and Goldberg, 2019) or using counterfactual data augmentation techniques (Lu et al., 2018; Zhao et al., 2018a; Zmigrod et al., 2019; Hall Maudslay et al., 2019) .",
"cite_spans": [
{
"start": 135,
"end": 163,
"text": "(Font and Costa-juss\u00e0, 2019;",
"ref_id": "BIBREF16"
},
{
"start": 164,
"end": 184,
"text": "Habash et al., 2019)",
"ref_id": "BIBREF21"
},
{
"start": 215,
"end": 239,
"text": "(Bolukbasi et al., 2016;",
"ref_id": "BIBREF5"
},
{
"start": 240,
"end": 258,
"text": "Zhao et al., 2017;",
"ref_id": "BIBREF48"
},
{
"start": 259,
"end": 281,
"text": "Caliskan et al., 2017;",
"ref_id": "BIBREF8"
},
{
"start": 282,
"end": 303,
"text": "Manzini et al., 2019)",
"ref_id": "BIBREF32"
},
{
"start": 325,
"end": 346,
"text": "(Kurita et al., 2019;",
"ref_id": "BIBREF27"
},
{
"start": 347,
"end": 365,
"text": "Zhao et al., 2019)",
"ref_id": "BIBREF51"
},
{
"start": 490,
"end": 514,
"text": "(Bolukbasi et al., 2016;",
"ref_id": "BIBREF5"
},
{
"start": 515,
"end": 534,
"text": "Zhao et al., 2018b;",
"ref_id": "BIBREF50"
},
{
"start": 535,
"end": 560,
"text": "Gonen and Goldberg, 2019)",
"ref_id": "BIBREF17"
},
{
"start": 614,
"end": 631,
"text": "(Lu et al., 2018;",
"ref_id": "BIBREF29"
},
{
"start": 632,
"end": 651,
"text": "Zhao et al., 2018a;",
"ref_id": "BIBREF49"
},
{
"start": 652,
"end": 673,
"text": "Zmigrod et al., 2019;",
"ref_id": "BIBREF53"
},
{
"start": 674,
"end": 701,
"text": "Hall Maudslay et al., 2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Most of the solutions were mainly proposed to reduce gender bias in English and may not work as well when it comes to morphologically rich languages. Nevertheless, there have been recent studies that explored the gender bias problem in languages other than English. Zhao et al. (2020) studied gender bias which is exhibited by multilingual embeddings in four languages (English, German, French, and Spanish) and demonstrated that such bias can impact cross-lingual transfer learning tasks. Zmigrod et al. (2019) used a counterfactual data augmentation approach and developed a generative model to convert between masculine and feminine sentences in four languages (French, Hebrew, Italian, and Spanish).",
"cite_spans": [
{
"start": 266,
"end": 284,
"text": "Zhao et al. (2020)",
"ref_id": "BIBREF52"
},
{
"start": 369,
"end": 407,
"text": "(English, German, French, and Spanish)",
"ref_id": null
},
{
"start": 490,
"end": 511,
"text": "Zmigrod et al. (2019)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "For Arabic, Habash et al. (2019) introduced a two-step approach to gender-identify and reinflect firstperson-singular constructions. The identification was done through a feature-based classifier, whereas they used a character-level sequence-to-sequence model for the reinflection. They also compared their two-step approach to a single-step joint identification and reinflection model, which under-performed in the case of the Arabic source (not the machine translation source) task. All of their systems modeled grammatical masculine and feminine genders separately. In this paper, we compare to their results using the publicly available Arabic parallel gender corpus they built -a parallel corpus of first-person-singular Arabic sentences that are gender-annotated and reinflected. However, our work is different from theirs in that we jointly learn reinflection for both masculine and feminine genders together. We also model identification implicitly with reinflection in a single architecture. Furthermore, we formulate the problem as a user-aware grammatical error correction task (UGEC). As such, we use as our primary metric the MaxMatch (M 2 ) scorer (Dahlmeier and Ng, 2012) , which is far more meaningful than the BLEU (Papineni et al., 2002) metric used by Habash et al. (2019) for this task.",
"cite_spans": [
{
"start": 12,
"end": 32,
"text": "Habash et al. (2019)",
"ref_id": "BIBREF21"
},
{
"start": 1162,
"end": 1186,
"text": "(Dahlmeier and Ng, 2012)",
"ref_id": "BIBREF11"
},
{
"start": 1232,
"end": 1255,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF37"
},
{
"start": 1271,
"end": 1291,
"text": "Habash et al. (2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Modern Standard Arabic (MSA) NLP systems and more specifically those using deep learning, face several challenges when it comes to gender expression including morphological richness, orthographic ambiguity and noise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arabic Linguistic Background",
"sec_num": "3"
},
{
"text": "Morphological Richness and Complexity Arabic has a rich morphological system that inflects for gender, number, person, case, state, aspect, mood and voice, in addition to numerous attachable clitics (prepositions, particles, pronouns) (Habash, 2010) . This results in a large number of forms for any particular word, with different morpho-syntactic restrictions. For instance, the adjective mhm\u0169 'important [masculine singular indefinite nominative]', has a related form mhmA\u00e3 that only differs in being accusative in case. In addition to its richness, Arabic morphology has a lot of idiosyncratic inflectional affixes that are not consistent in indicating specific genders or numbers (Alkuhlani and Habash, 2011) . For instance, the Ta-Marbuta suffix , often called the 'feminine singular ending', appears with many words where it does not indicate a feminine-singular feature, and cannot be attached to all masculine singular words to turn them feminine. So, in contrast to the good example of mhm 'important [feminine singular]', we find words like",
"cite_spans": [
{
"start": 235,
"end": 249,
"text": "(Habash, 2010)",
"ref_id": "BIBREF22"
},
{
"start": 685,
"end": 713,
"text": "(Alkuhlani and Habash, 2011)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Arabic Linguistic Background",
"sec_num": "3"
},
{
"text": "xlyf 'Caliph [masculine singular]', and sHr 'wizards [masculine plural]'. Furthermore, adding the Ta-Marbuta to some masculine nouns produces nonsensical forms such as *rjl 'man-ess (female man)' from rjl 'man'. Similarly, removing the Ta-Marbuta is no guarantee that we map from feminine to masculine in every context. For example, the noun mhm 'mission/assignment' is only feminine and has no meaningful masculine form, as opposed to the adjective mhm 'important [feminine singular]' discussed above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arabic Linguistic Background",
"sec_num": "3"
},
{
"text": "These facts pose major challenges to deep learning models attempting to learn from limited supervised or even large unsupervised data. In this work, we make use of morphological analyzers that indicate all the possible gender information of the words in terms of their functional (grammatical) and form-based (affixational) values (Alkuhlani and Habash, 2011) .",
"cite_spans": [
{
"start": 331,
"end": 359,
"text": "(Alkuhlani and Habash, 2011)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Arabic Linguistic Background",
"sec_num": "3"
},
{
"text": "Orthographic Ambiguity and Noise Arabic uses diacritics to specify short vowels and consonantal doubling. These diacritics are optional and generally unwritten, leaving readers to decipher words using contextual and templatic morphology clues. For example, the verb knt can be diacritized as kuntu 'I was', kunta 'You [masculine] were', or kunti 'You [feminine] were'. This is a challenge for identifying the words that need to change for a first-person target gender. In addition to the issue of orthographic ambiguity, unedited MSA text is reported to be quite noisy with spelling errors reaching \u223c23% of all words (Zaghouani et al., 2014) . The most important errors involve Alif-Hamza (Glottal Stop) spelling ( A,\u0100,\u01cd, \u00c2), Ya spelling ( y, \u00fd), and the feminine suffix Ta-Marbuta ( h, ). In Arabic NLP, Alif/Ya normalization is almost standard preprocessing (Habash, 2010) . Generally, the high degree of ambiguity and noise result in a high degree of morphological confusability and model sparsity. For instance, a common spelling error of writing the Ta-Marbuta ( ) as Ha ( h) results in interpreting the ( h) as a possessive pronoun clitic attached to a masculine noun:",
"cite_spans": [
{
"start": 617,
"end": 641,
"text": "(Zaghouani et al., 2014)",
"ref_id": "BIBREF47"
},
{
"start": 860,
"end": 874,
"text": "(Habash, 2010)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Arabic Linguistic Background",
"sec_num": "3"
},
{
"text": "kAtbh 'his writer [masculine]', vs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arabic Linguistic Background",
"sec_num": "3"
},
{
"text": "kAtb 'writer [feminine]'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arabic Linguistic Background",
"sec_num": "3"
},
{
"text": "Normalizing the text may solve some issues related to noise and ambiguity. In this paper, we follow Habash et al. 2019's decision to evaluate within an orthographically normalized space for Alif, Ya, and Ta-Marbuta, since the OpenSubtitles 2018 corpus (Lison and Tiedemann, 2016) they use to build the Arabic parallel gender corpus has many of such spelling confusions.",
"cite_spans": [
{
"start": 252,
"end": 279,
"text": "(Lison and Tiedemann, 2016)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Arabic Linguistic Background",
"sec_num": "3"
},
{
"text": "In this section, we discuss the motivation behind our model architecture as well as the integration of the linguistic features. We also describe the training settings and the model's hyperparameters for reproducibility.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Gender Reinflection Model",
"sec_num": "4"
},
{
"text": "Sequence-to-sequence models have achieved significant results in grammatical error correction (GEC) (Chollampatt and Ng, 2018; Junczys-Dowmunt et al., 2018; Grundkiewicz et al., 2019) and morphological reinflection tasks (Faruqui et al., 2016; Kann and Sch\u00fctze, 2016; Aharoni and Goldberg, 2017) . Many of these problems are modeled on the word-level, however, such models usually require large amounts of training data to achieve good results. Character-level sequence-to-sequence models can be superior in mitigating the lack of training data and in dealing with subtle morphological reinflection. Further, pre-trained distributed word representations have also shown to be helpful if integrated properly within character-level sequence-to-sequence models (Watson et al., 2018) . We formulate the gender reinflection problem as a user-aware grammatical error correction (UGEC) task at the character-level. We also explore leveraging linguistic knowledge on the word-level as well as pre-trained word embeddings to enhance the performance of the model.",
"cite_spans": [
{
"start": 100,
"end": 126,
"text": "(Chollampatt and Ng, 2018;",
"ref_id": "BIBREF10"
},
{
"start": 127,
"end": 156,
"text": "Junczys-Dowmunt et al., 2018;",
"ref_id": "BIBREF24"
},
{
"start": 157,
"end": 183,
"text": "Grundkiewicz et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 221,
"end": 243,
"text": "(Faruqui et al., 2016;",
"ref_id": "BIBREF15"
},
{
"start": 244,
"end": 267,
"text": "Kann and Sch\u00fctze, 2016;",
"ref_id": "BIBREF25"
},
{
"start": 268,
"end": 295,
"text": "Aharoni and Goldberg, 2017)",
"ref_id": "BIBREF0"
},
{
"start": 758,
"end": 779,
"text": "(Watson et al., 2018)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "4.1"
},
{
"text": "Given an input sequence x 1:n \u2208 V x containing k words w 1:k \u2208 V w , a gender-reinflected output sequence y 1:m \u2208 V y , and a target gender g \u2208 {F, M }, the goal is to model an auto-regressive distribution which is defined over the target vocabulary: 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "4.2"
},
{
"text": "P Vy (y 1:m |x 1:n , g) = m t=1 P (y t |y 1:t\u22121 , x 1:n , g; \u03b8);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "4.2"
},
{
"text": "where \u03b8 represents the model's parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "4.2"
},
{
"text": "We implement this model using a character-level encoder-decoder neural network with an attention mechanism. Figure 1 : The encoder-decoder architecture for gender reinflection. The input and predicted characters are shown both in Arabic and in the HSB scheme. <s> and </s> indicate the start-of-sequence and endof-sequence tokens respectively. + refers to the attention mechanism and the filled dot (\u2022) indicates a concatenation operation.",
"cite_spans": [],
"ref_spans": [
{
"start": 108,
"end": 116,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "4.2"
},
{
"text": "Encoder First, each character in the input sequence x i is mapped to an embedding e x i \u2208 R E . The character embeddings are parameters of the model which are learned during training. We then feed these embeddings to a two-layer bidirectional GRU (Cho et al., 2014) to obtain a sequence of hidden states h (e) 1:n . Each hidden state h (e) i \u2208 R 2H is the concatenation of the forward and backward GRU outputs when we feed it e x i .",
"cite_spans": [
{
"start": 247,
"end": 265,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "4.2"
},
{
"text": "Decoder For the decoder, we use a two-layer GRU with additive attention (Bahdanau et al., 2015; Luong et al., 2015) over the last layer encoder hidden states h (e) 1:n . The initial hidden states of the decoder h t , we learn a context vector c t \u2208 R 2H that is used to summarize the source attentional context when we predict target symbol y t ; we initialize c 0 = 0. At each time step, we feed two inputs to the decoder: the context vector c t\u22121 \u2208 R 2H and the embedding of the predicted decoder output symbol e\u0177 t\u22121 \u2208 R E from the previous time step. However, it is important to note that we use scheduled sampling (teacher forcing) ) with a constant sampling probability during training.",
"cite_spans": [
{
"start": 72,
"end": 95,
"text": "(Bahdanau et al., 2015;",
"ref_id": "BIBREF2"
},
{
"start": 96,
"end": 115,
"text": "Luong et al., 2015)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "4.2"
},
{
"text": "The two inputs are then concatenated to create a single vector v_t = [e_{\u0177_{t-1}}; c_{t-1}] \u2208 R^{E+2H}, which is then fed to the GRU to obtain a decoder hidden state h^{(d)}_t \u2208 R^H.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "4.2"
},
{
"text": "The target gender g is mapped to an embedding e g \u2208 R J which is learned during training and concatenated together with the decoder hidden state h t ; c t ; e\u0177 t\u22121 ; e g ] \u2208 R H+2H+E+J . We finally project z t to a vector of size |V y | followed by a softmax layer to model the distribution over the target vocabulary P Vy (\u0177 t ) = softmax (W b z t + b b ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "4.2"
},
{
"text": "Linguistic Features and Word Embeddings We explore adding word-level morphological features as well as pre-trained distributed word representations to the character embeddings. We use the CALIMA Star Arabic morphological analyzer (Taji et al., 2018) to obtain word-level functional gender features (Alkuhlani and Habash, 2011) . 5 We represent the morphological features for word w j as a four-dimension one-hot vector \u00b5 w j \u2208 R 4 . Each element of this one-hot vector represents whether the word w j is masculine or feminine as well as if the analysis was obtained with or without spelling backoff. We use FastText (Bojanowski et al., 2017) to learn distributed word representations and we denote the FastText word embedding for word w j as \u03c1 w j \u2208 R F .",
"cite_spans": [
{
"start": 230,
"end": 249,
"text": "(Taji et al., 2018)",
"ref_id": "BIBREF43"
},
{
"start": 298,
"end": 326,
"text": "(Alkuhlani and Habash, 2011)",
"ref_id": "BIBREF1"
},
{
"start": 329,
"end": 330,
"text": "5",
"ref_id": null
},
{
"start": 616,
"end": 641,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "4.2"
},
{
"text": "Similarly to Watson et al. (2018) , we added the word-level features to the character embeddings only on the encoder side. Each character embedding e x i is then enriched with \u03c1 w j and \u00b5 w j to create a single vector [e x i ; \u00b5 w j ; \u03c1 w j ] \u2208 R E+4+F which we feed to the encoder, where w j is the word containing character x i .",
"cite_spans": [
{
"start": 13,
"end": 33,
"text": "Watson et al. (2018)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "4.2"
},
{
"text": "Inference At inference time, we use greedy decoding to find the most likely sequence: 6 \u0177_{1:m} = argmax_{y \u2208 V_y} P(y | x_{1:n}, g) = argmax_{y \u2208 V_y} \u220f_{t=1}^{m} P(y_t | y_{1:t-1}, x_{1:n}, g)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "4.2"
},
{
"text": "The architecture of our gender reinflection linguistically enhanced sequence-to-sequence model is shown in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 107,
"end": 115,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "4.2"
},
{
"text": "For all the experiments described in this paper, we use a batch size of 32, a character embedding size of E = 128, a gender embedding size of J = 10, a hidden size of H = 256, a scheduled sampling probability of 0.3, a dropout probability of 0.2, and gradient clipping with a maximum norm of 1. The FastText embeddings have a dimension of F = 100 and were trained for 10 epochs using the OpenSubtitles 2018 corpus in a skip-gram manner with context windows of 2 and 3 respectively. We train the model for 50 epochs by minimizing the average cross-entropy loss defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Settings",
"sec_num": "4.3"
},
{
"text": "L(y 1:m ,\u0177 1:m ; \u03b8) = 1 m m t=1 L(y t ,\u0177 t ; \u03b8); L(y t ,\u0177 t ; \u03b8) = \u2212 log P Vy (\u0177 t )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Settings",
"sec_num": "4.3"
},
{
"text": "We use the Adam optimizer (Kingma and Ba, 2014) with an initial learning rate of 0.0005, decaying by a factor of 0.5 if the loss on the development set does not decrease after 2 epochs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Settings",
"sec_num": "4.3"
},
{
"text": "In this section, we discuss the data we use to train and evaluate our models. We also discuss the evaluation metrics and the various systems we implemented including the baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "5"
},
{
"text": "For our experiments, we use the publicly available Arabic parallel gender corpus (Habash et al., 2019) , containing 12,238 parallel gender-annotated sentences: F (feminine), M (masculine) or B (genderambiguous). The corpus is divided into three parallel balanced corpora: (1) Corpus input containing F, M and B sentences, (2) Corpus M containing M and B sentences only, and (3) Corpus F containing F and B sentences only. 7 Table 1 shows examples of what Corpus input (Input), Corpus M (Target Masculine), and Corpus F (Target Feminine) would look like.",
"cite_spans": [
{
"start": 81,
"end": 102,
"text": "(Habash et al., 2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 424,
"end": 431,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "We build our target corpus by concatenating Corpus M and Corpus F , while our source corpus is a duplication of Corpus input . Since our goal is to build a single user-aware joint gender reinflection model for both grammatical genders, we introduce the notion of target gender g having two possible values: F or M. All of the target sentences from Corpus M will have an M target gender, whereas all of the target sentences from Corpus F will have an F target gender. We follow the same data split as Habash et al. (2019) . After merging the corpora we ended up with 17,132 sentence pairs for training (TRAIN), 2,448 for development (DEV), and 4,896 for testing (TEST). All of our systems are trained to take a source sentence and a target gender as input to produce a gender-reinflected target sentence as described in section 4.2.",
"cite_spans": [
{
"start": 500,
"end": 520,
"text": "Habash et al. (2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
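The joint-corpus construction described above can be sketched as follows. This is a schematic, assuming each corpus is a list of sentences aligned by index; the function and variable names are illustrative, not from the released code.

```python
def build_joint_data(corpus_input, corpus_m, corpus_f):
    """Duplicate the input corpus: pair one copy with target gender M and
    Corpus M targets, the other with target gender F and Corpus F targets."""
    examples = []
    for src, tgt in zip(corpus_input, corpus_m):
        examples.append((src, "M", tgt))
    for src, tgt in zip(corpus_input, corpus_f):
        examples.append((src, "F", tgt))
    return examples

# Toy example (HSB transliteration): one input sentence yields two
# training pairs, one per target gender.
pairs = build_joint_data(["AnA fnAn"], ["AnA fnAn"], ["AnA fnAnp"])
```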
{
"text": "Gender Reinflection We follow Habash et al. (2019) and use BLEU as an evaluation metric (Papineni et al., 2002) , however, we believe that BLEU is not a suitable metric for our task due to the high similarity between the input and output sentences. We use SacreBLEU (Post, 2018) to compute the BLEU scores. Additionally, we use the MaxMatch (M 2 ) scorer (Dahlmeier and Ng, 2012) to compute the word-level edits between the input and reinflected output. We report the precision, recall, and F 0.5 scores calculated against the gold edits, which were also created by the M 2 scorer. We are aware that there are other tools to consider for word-level edit calculation such as ERRANT (Bryant et al., 2017 ), but we did not use them as they require additional dependencies to work for Arabic.",
"cite_spans": [
{
"start": 30,
"end": 50,
"text": "Habash et al. (2019)",
"ref_id": "BIBREF21"
},
{
"start": 88,
"end": 111,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF37"
},
{
"start": 266,
"end": 278,
"text": "(Post, 2018)",
"ref_id": "BIBREF38"
},
{
"start": 355,
"end": 379,
"text": "(Dahlmeier and Ng, 2012)",
"ref_id": "BIBREF11"
},
{
"start": 681,
"end": 701,
"text": "(Bryant et al., 2017",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "5.2"
},
{
"text": "Input Gender Identification Our sequence-to-sequence model does not explicitly identify the gender of the input sentence; however, we consider any attempted change (or lack thereof) to the input as a signal for the implicit gender identification: if our model reinflects the source sentence, then we consider the gender of this sentence to be the opposite of the given target gender. But if the model does not reinflect the source sentence, then we consider the gender of this sentence to be the same as the target gender. We report the average F 1 score for M and F gender identification over the source sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "5.2"
},
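The implicit identification rule described above can be sketched as follows (our reading of the rule; names are illustrative):

```python
def predict_source_gender(source, output, target_gender):
    """Infer the source gender from the reinflection signal: a changed
    output implies the source had the opposite gender; an unchanged
    output implies it already matched the target gender."""
    opposite = {"M": "F", "F": "M"}
    return opposite[target_gender] if output != source else target_gender

# The model changed the sentence toward F, so the source is taken as M.
gender = predict_source_gender("AnA fnAn", "AnA fnAnp", "F")
```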
{
"text": "We report the results for gender identification and reinflection in a normalized space for Alif, Ya, and Ta-Marbuta as discussed in section 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "5.2"
},
{
"text": "In addition to comparing with the results from Habash et al. (2019), we include two baselines. The first one is a DO NOTHING baseline which simply passes the input to the output as is. This baseline is intended to show how similar the inputs and the outputs are. The second is a baseline in which we define a bigram maximum likelihood estimation (MLE) model: given an input sequence of words x w 1:n \u2208 V xw , a target sequence of words y w 1:n \u2208 V yw , and a target gender g \u2208 {F, M }, the MLE model is built as follows: 8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5.3"
},
{
"text": "P (y w i |x w i , x w i\u22121 , g) = count(y w i , x w i , x w i\u22121 , g) count(x w i , x w i\u22121 , g)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5.3"
},
{
"text": "At inference time, we pick the target word\u0177 w i which maximizes the probability defined above. If\u0177 w i was not observed in the training data along with x w i and x w i\u22121 , we back-off to a lower-order distribution (unigram) P (\u0177 w i |x w i , g). In the worst case scenario, where\u0177 w i was not observed in the training data along with x w i , we pass x w i to the output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5.3"
},
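The bigram MLE baseline with unigram back-off can be sketched as follows. This is a minimal sketch of the probabilities defined above, assuming source and target sentences are word-aligned lists; the data structures and names are our own, not the released implementation.

```python
from collections import Counter

def train_mle(examples):
    """Count (target word, source word, previous source word, gender) bigram
    contexts and (target word, source word, gender) unigram back-off contexts."""
    bigram, unigram = Counter(), Counter()
    for src_words, tgt_words, g in examples:
        prev = "<s>"
        for x, y in zip(src_words, tgt_words):
            bigram[(y, x, prev, g)] += 1
            unigram[(y, x, g)] += 1
            prev = x
    return bigram, unigram

def reinflect_word(bigram, unigram, x, prev, g, vocab):
    """Pick argmax_y P(y | x, prev, g); back off to P(y | x, g); else copy x.

    For a fixed context the denominator is constant, so the argmax over
    counts equals the argmax over probabilities.
    """
    cands = [(bigram[(y, x, prev, g)], y) for y in vocab if bigram[(y, x, prev, g)] > 0]
    if not cands:
        cands = [(unigram[(y, x, g)], y) for y in vocab if unigram[(y, x, g)] > 0]
    return max(cands)[1] if cands else x

# Toy example in transliteration: reinflect 'fnAn' (artist [masc])
# when the target gender is F.
examples = [(["AnA", "fnAn"], ["AnA", "fnAnp"], "F")]
bigram, unigram = train_mle(examples)
vocab = {"AnA", "fnAn", "fnAnp"}
out = reinflect_word(bigram, unigram, "fnAn", "AnA", "F", vocab)
```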
{
"text": "The MLE baseline is suitable for our case because the input and output sentences are perfectly aligned on the word-level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5.3"
},
{
"text": "We explore four variants of the model described in section 4.2. In the first, we provide the encoder with the character embeddings without any morphological features or FastText embeddings and we refer to it as JOINT. The second variant is where we add the morphological features to the character embeddings but without the FastText embeddings and we refer to it as JOINT+MORPH. For the third variant, we explore adding both the morphological features and the FastText embeddings to the character embeddings, we refer to it as JOINT+MORPH+FT. To build the fourth one, we selected the best variant and trained it in a similar fashion to Habash et al. (2019) . We trained two systems disjointly; one using Corpus M and the other using Corpus F and reported the average performance of both systems. We refer to this last variant as DISJOINT+MORPH.",
"cite_spans": [
{
"start": 636,
"end": 656,
"text": "Habash et al. (2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Systems",
"sec_num": "5.4"
},
{
"text": "The results of our evaluation on the DEV set are presented in Table 2 . The best performing system is JOINT+MORPH. It improves over the previous SOTA on this task, Habash et al. (2019) , in every compared metric, including a 4.4% absolute increase in M 2 F 0.5 . The biggest contribution to the performance increase is from recall (10.3% absolute). In fact, all of the neural models we introduced in this paper improve over the Habash et al. (2019) results in terms of recall (at varying degrees); however, only JOINT+MORPH improves in terms of recall and precision. The MLE results are surprisingly competitive in terms of precision, scoring higher than some of the weaker neural models; while being the worst (barring DO NOTHING) across all other metrics.",
"cite_spans": [
{
"start": 164,
"end": 184,
"text": "Habash et al. (2019)",
"ref_id": "BIBREF21"
},
{
"start": 428,
"end": 448,
"text": "Habash et al. (2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 62,
"end": 69,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "The two aspects of our best system (being joint and using morphological features) are important to its performance. When we compare JOINT+MORPH to its JOINT counterpart, we observe an 5.6% absolute increase in the M 2 F 0.5 score and a corresponding 0.6% increase in identification F 1 score. This confirms that morphological features are helpful for both gender identification and reinflection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "An ablation experiment comparing the best system JOINT+MORPH to the disjoint variant of it (DIS-JOINT+MORPH) demonstrates the large added value of using a joint model: an 11.2% absolute increase in M 2 F 0.5 score, 0.45 BLEU points , and 0.8% absolute improvement in identification F 1 score. The use of word embeddings was not helpful to our best system. One possible explanation is that the use of semantically oriented embeddings may not be optimal for fine-targeted rewriting tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "The results on the TEST set using the baselines and the best system from the DEV experiments are given in Table 3 . These results show consistent conclusions with the DEV results. Our best system improves over the previous SOTA in every compared metric, including a 3.6% absolute increase in terms of M 2 F 0.5 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 106,
"end": 113,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "We conducted a manual error analysis examining all of the errors in the output of our best system on the DEV set. In total, there were 106 sentences with errors (or 4.3% out of 2,448). In those erroneous sentences, there were 128 words with problems. Table 4 presents the detailed scores, which we discuss next.",
"cite_spans": [],
"ref_spans": [
{
"start": 251,
"end": 258,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "7"
},
{
"text": "Around two thirds of the word errors were false negatives, i.e., where a change should have happened but did not (Table 4 No Change) . In a quarter of the No Change cases, a clear copular construction context for first person gendered expression is seen. For example, the word fnAn 'artist [masc]' in",
"cite_spans": [],
"ref_spans": [
{
"start": 113,
"end": 132,
"text": "(Table 4 No Change)",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "7"
},
{
"text": "\u00c2nA fnAn yA sydy 'I'm an artist, sir' is not correctly reinflected to its F target form fnAn 'artist [fem]'. The No Change errors with target gender F are 50% higher than the target gender M; this suggests that the system is more adept at identifying feminine source text than the other way around. This is plausible given that the Arabic feminine form is the marked variety.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "7"
},
{
"text": "Returning to the rest of the errors, an additional quarter of them involved a false positive (Table 4 Wrong Change). Three types of incorrect changes are noteworthy. First is imperfectly reinflecting the masculine form by failing to indicate case (Table 4 Case \u00c2m\u03b8l which is a homograph with the word 'examples'. The third type of change errors involves random generation of odd repetitive character sequences (Table 4 Odd characters), a side effect of using character sequence-to-sequence models. One example in our data is the generation of the nonsensical form qqq from the word qlq 'worried [masc]' instead of qlq 'worried [fem]'. Finally, about 1/12 th of all counted errors are miscounts due to Gold annotation fails, where our system actually generated the correct output (Table 4 Gold Error) . Considering the detailed scores for the whole DEV set and for M target and F target cases, we note the following. As expected, the F target setting has more errors than the M target setting. No Change errors and Gold errors are more common for the F target setting. The Case form errors are only seen in the M target setting. Errors with uninflectable words are almost equally present. These errors suggest that more work needs to be done on identifying when a reinflection should take place. Furthermore, to address the errors of uninflectable forms and case-marked forms, we may have to incorporate more linguistic knowledge or more powerful language models.",
"cite_spans": [],
"ref_spans": [
{
"start": 247,
"end": 260,
"text": "(Table 4 Case",
"ref_id": "TABREF6"
},
{
"start": 410,
"end": 418,
"text": "(Table 4",
"ref_id": "TABREF6"
},
{
"start": 779,
"end": 799,
"text": "(Table 4 Gold Error)",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "7"
},
{
"text": "In this paper, we proposed a solution to single-output NLP systems that allows users to specify their grammatical gender preference in Arabic. Our intention is to enable users to reduce the harm that may be produced by NLP systems propagation of biased representations. Our joint approach for sentence-level gender reinflection uses linguistically enhanced sequence-to-sequence models and frames the problem as a user-aware grammatical error correction task. Our system takes an Arabic sentence and a given target gender as input and generates a gender-reinflected sentence based on the provided target gender. We showed that linguistic knowledge helps in learning gender identification implicitly which improves reinflection results. In future work, we would like to explore different architectures such as Transformerbased models (Vaswani et al., 2017) . Furthermore, we are interested in exploring the added value of combining syntactic and morphological features. We would also like to apply our approach to different languages and dialectal varieties. Lastly, we plan to extend the Arabic parallel gender corpus beyond first-person-singular constructions and adapt our models accordingly.",
"cite_spans": [
{
"start": 832,
"end": 854,
"text": "(Vaswani et al., 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "https://www.facebook.com/EbdalProject/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Arabic transliteration is in the HSB scheme(Habash et al., 2007). 3 https://github.com/CAMeL-Lab/gender-reinflection",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "F stands for Feminine and M stands for Masculine.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We experimented with both form-based and functional gender features, and found the functional features to be superior in performance; so we only report on them in this paper.6 It important to note that we also explored beam search for decoding, however, greedy decoding yield better results.7 In this work, we consider the B cases to be masculine in CorpusM and feminine in CorpusF.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We experimented with different n-gram sizes for the MLE model, the bigram yielded the best results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was carried out on the High Performance Computing resources at New York University Abu Dhabi (NYUAD). We would like to thank the Computational Approaches to Modeling Language Lab at NYUAD for their help and invaluable suggestions throughout this project. We also would like to thank Professor Dima Ayoub for helpful conversations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Morphological inflection generation with hard monotonic attention",
"authors": [
{
"first": "Roee",
"middle": [],
"last": "Aharoni",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2004--2015",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roee Aharoni and Yoav Goldberg. 2017. Morphological inflection generation with hard monotonic attention. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2004-2015, Vancouver, Canada, July.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A corpus for modeling morpho-syntactic agreement in Arabic: Gender, number and rationality",
"authors": [
{
"first": "Sarah",
"middle": [],
"last": "Alkuhlani",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "357--362",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarah Alkuhlani and Nizar Habash. 2011. A corpus for modeling morpho-syntactic agreement in Arabic: Gen- der, number and rationality. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 357-362, Portland, Oregon, USA, June.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Scheduled sampling for sequence prediction with recurrent neural networks",
"authors": [
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence predic- tion with recurrent neural networks. CoRR, abs/1506.03099.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Man is to computer programmer as woman is to homemaker?",
"authors": [
{
"first": "Tolga",
"middle": [],
"last": "Bolukbasi",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Zou",
"suffix": ""
},
{
"first": "Venkatesh",
"middle": [],
"last": "Saligrama",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Kalai",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Identifying and reducing gender bias in word-level language models",
"authors": [
{
"first": "Shikha",
"middle": [],
"last": "Bordia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop",
"volume": "",
"issue": "",
"pages": "7--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shikha Bordia and Samuel R. Bowman. 2019. Identifying and reducing gender bias in word-level language mod- els. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 7-15, Minneapolis, Minnesota, June.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatic annotation and evaluation of error types for grammatical error correction",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Bryant",
"suffix": ""
},
{
"first": "Mariano",
"middle": [],
"last": "Felice",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "793--805",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Bryant, Mariano Felice, and Ted Briscoe. 2017. Automatic annotation and evaluation of error types for grammatical error correction. In Proceedings of the 55th Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), pages 793-805, Vancouver, Canada, July.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Semantics derived automatically from language corpora contain human-like biases",
"authors": [
{
"first": "Aylin",
"middle": [],
"last": "Caliskan",
"suffix": ""
},
{
"first": "Joanna",
"middle": [
"J"
],
"last": "Bryson",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2017,
"venue": "Science",
"volume": "356",
"issue": "6334",
"pages": "183--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1724--1734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724-1734, Doha, Qatar, October.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A multilayer convolutional encoder-decoder neural network for grammatical error correction",
"authors": [
{
"first": "Shamil",
"middle": [],
"last": "Chollampatt",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shamil Chollampatt and Hwee Tou Ng. 2018. A multilayer convolutional encoder-decoder neural network for grammatical error correction. In Proceedings of the AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Better evaluation for grammatical error correction",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Dahlmeier",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "568--572",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Dahlmeier and Hwee Tou Ng. 2012. Better evaluation for grammatical error correction. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 568-572, Montr\u00e9al, Canada, June.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Queens are powerful too: Mitigating gender bias in dialogue generation",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Urbanek",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Dinan, Angela Fan, Adina Williams, Jack Urbanek, Douwe Kiela, and Jason Weston. 2019. Queens are powerful too: Mitigating gender bias in dialogue generation. ArXiv, abs/1911.03842.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Multi-dimensional gender bias classification",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Ledell",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.00614"
]
},
"num": null,
"urls": [],
"raw_text": "Emily Dinan, Angela Fan, Ledell Wu, Jason Weston, Douwe Kiela, and Adina Williams. 2020. Multi-dimensional gender bias classification. arXiv preprint arXiv:2005.00614.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "plain sight: Media bias through the lens of factual reporting",
"authors": [
{
"first": "Lisa",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Marshall",
"middle": [],
"last": "White",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Ruisi",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Prafulla",
"middle": [],
"last": "Kumar Choubey",
"suffix": ""
},
{
"first": "Ruihong",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.02670"
]
},
"num": null,
"urls": [],
"raw_text": "Lisa Fan, Marshall White, Eva Sharma, Ruisi Su, Prafulla Kumar Choubey, Ruihong Huang, and Lu Wang. 2019. In plain sight: Media bias through the lens of factual reporting. arXiv preprint arXiv:1909.02670.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Morphological inflection generation using character sequence to sequence learning",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "634--643",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui, Yulia Tsvetkov, Graham Neubig, and Chris Dyer. 2016. Morphological inflection generation using character sequence to sequence learning. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 634-643, San Diego, California, June.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Equalizing gender biases in neural machine translation with word embeddings techniques",
"authors": [
{
"first": "Joel",
"middle": [
"Escud\u00e9"
],
"last": "Font",
"suffix": ""
},
{
"first": "Marta",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joel Escud\u00e9 Font and Marta R. Costa-juss\u00e0. 2019. Equalizing gender biases in neural machine translation with word embeddings techniques.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them",
"authors": [
{
"first": "Hila",
"middle": [],
"last": "Gonen",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Automatically identifying gender issues in machine translation using perturbations",
"authors": [
{
"first": "Hila",
"middle": [],
"last": "Gonen",
"suffix": ""
},
{
"first": "Kellie",
"middle": [],
"last": "Webster",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hila Gonen and Kellie Webster. 2020. Automatically identifying gender issues in machine translation using perturbations.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Neural grammatical error correction systems with unsupervised pre-training on synthetic data",
"authors": [
{
"first": "Roman",
"middle": [],
"last": "Grundkiewicz",
"suffix": ""
},
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "252--263",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roman Grundkiewicz, Marcin Junczys-Dowmunt, and Kenneth Heafield. 2019. Neural grammatical error correc- tion systems with unsupervised pre-training on synthetic data. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 252-263, Florence, Italy, August.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "On Arabic Transliteration",
"authors": [
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Abdelhadi",
"middle": [],
"last": "Soudi",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Buckwalter",
"suffix": ""
}
],
"year": 2007,
"venue": "Arabic Computational Morphology: Knowledge-based and Empirical Methods",
"volume": "",
"issue": "",
"pages": "15--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nizar Habash, Abdelhadi Soudi, and Tim Buckwalter. 2007. On Arabic Transliteration. In A. van den Bosch and A. Soudi, editors, Arabic Computational Morphology: Knowledge-based and Empirical Methods, pages 15-22. Springer, Netherlands.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Automatic gender identification and reinflection in Arabic",
"authors": [
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Houda",
"middle": [],
"last": "Bouamor",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Chung",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the First Workshop on Gender Bias in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "155--165",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nizar Habash, Houda Bouamor, and Christine Chung. 2019. Automatic gender identification and reinflection in Arabic. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 155-165, Florence, Italy, August.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Introduction to Arabic natural language processing",
"authors": [
{
"first": "Nizar",
"middle": [
"Y"
],
"last": "Habash",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nizar Y Habash. 2010. Introduction to Arabic natural language processing, volume 3. Morgan & Claypool Publishers.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "It's all in the name: Mitigating gender bias with name-based counterfactual data substitution",
"authors": [
{
"first": "Rowan",
"middle": [
"Hall"
],
"last": "Maudslay",
"suffix": ""
},
{
"first": "Hila",
"middle": [],
"last": "Gonen",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Teufel",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5267--5275",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rowan Hall Maudslay, Hila Gonen, Ryan Cotterell, and Simone Teufel. 2019. It's all in the name: Mitigating gender bias with name-based counterfactual data substitution. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5267-5275, Hong Kong, China, November.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Approaching neural grammatical error correction as a low-resource machine translation task",
"authors": [
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Grundkiewicz",
"suffix": ""
},
{
"first": "Shubha",
"middle": [],
"last": "Guha",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "595--606",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcin Junczys-Dowmunt, Roman Grundkiewicz, Shubha Guha, and Kenneth Heafield. 2018. Approaching neural grammatical error correction as a low-resource machine translation task. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 595-606, New Orleans, Louisiana, June.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Single-model encoder-decoder with explicit morphological representation for reinflection",
"authors": [
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "555--560",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katharina Kann and Hinrich Sch\u00fctze. 2016. Single-model encoder-decoder with explicit morphological representation for reinflection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 555-560, Berlin, Germany, August.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Measuring bias in contextualized word representations",
"authors": [
{
"first": "Keita",
"middle": [],
"last": "Kurita",
"suffix": ""
},
{
"first": "Nidhi",
"middle": [],
"last": "Vyas",
"suffix": ""
},
{
"first": "Ayush",
"middle": [],
"last": "Pareek",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the First Workshop on Gender Bias in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "166--172",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring bias in contextualized word representations. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 166-172, Florence, Italy, August.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles",
"authors": [
{
"first": "Pierre",
"middle": [],
"last": "Lison",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "923--929",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pierre Lison and J\u00f6rg Tiedemann. 2016. OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 923-929, Portoro\u017e, Slovenia, May. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Gender bias in neural natural language processing",
"authors": [
{
"first": "Kaiji",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Mardziel",
"suffix": ""
},
{
"first": "Fangjing",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Preetam",
"middle": [],
"last": "Amancharla",
"suffix": ""
},
{
"first": "Anupam",
"middle": [],
"last": "Datta",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiji Lu, Piotr Mardziel, Fangjing Wu, Preetam Amancharla, and Anupam Datta. 2018. Gender bias in neural natural language processing.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Effective approaches to attention-based neural machine translation",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1412--1421",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Hieu Pham, and Christopher Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1412-1421, Lisbon, Portugal.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Language and stereotyping. Stereotypes and stereotyping",
"authors": [
{
"first": "Anne",
"middle": [],
"last": "Maass",
"suffix": ""
},
{
"first": "Luciano",
"middle": [],
"last": "Arcuri",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "193--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anne Maass and Luciano Arcuri. 1996. Language and stereotyping. Stereotypes and stereotyping, pages 193-226.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Manzini",
"suffix": ""
},
{
"first": "Yao",
"middle": [
"Chong"
],
"last": "Lim",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Manzini, Yao Chong Lim, Yulia Tsvetkov, and Alan W Black. 2019. Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Man is to person as woman is to location: Measuring gender bias in named entity recognition",
"authors": [
{
"first": "Ninareh",
"middle": [],
"last": "Mehrabi",
"suffix": ""
},
{
"first": "Thamme",
"middle": [],
"last": "Gowda",
"suffix": ""
},
{
"first": "Fred",
"middle": [],
"last": "Morstatter",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Aram",
"middle": [],
"last": "Galstyan",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ninareh Mehrabi, Thamme Gowda, Fred Morstatter, Nanyun Peng, and Aram Galstyan. 2019. Man is to person as woman is to location: Measuring gender bias in named entity recognition.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Gender bias and sexism in language",
"authors": [
{
"first": "Michela",
"middle": [],
"last": "Menegatti",
"suffix": ""
},
{
"first": "Monica",
"middle": [],
"last": "Rubini",
"suffix": ""
}
],
"year": 2017,
"venue": "Oxford Research Encyclopedia of Communication",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michela Menegatti and Monica Rubini. 2017. Gender bias and sexism in language. In Oxford Research Encyclopedia of Communication. Oxford University Press.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Investigating sports commentator bias within a large corpus of American football broadcasts",
"authors": [
{
"first": "Jack",
"middle": [],
"last": "Merullo",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Yeh",
"suffix": ""
},
{
"first": "Abram",
"middle": [],
"last": "Handler",
"suffix": ""
},
{
"first": "Alvin",
"middle": [],
"last": "Grissom",
"suffix": "II"
},
{
"first": "Brendan",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.03343"
]
},
"num": null,
"urls": [],
"raw_text": "Jack Merullo, Luke Yeh, Abram Handler, Alvin Grissom II, Brendan O'Connor, and Mohit Iyyer. 2019. Investigating sports commentator bias within a large corpus of American football broadcasts. arXiv preprint arXiv:1909.03343.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Filling gender & number gaps in neural machine translation with black-box context injection",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Moryossef",
"suffix": ""
},
{
"first": "Roee",
"middle": [],
"last": "Aharoni",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the First Workshop on Gender Bias in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "49--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amit Moryossef, Roee Aharoni, and Yoav Goldberg. 2019. Filling gender & number gaps in neural machine translation with black-box context injection. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 49-54, Florence, Italy, August.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "BLEU: a Method for Automatic Evaluation of Machine Translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Conference of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of the Conference of the Association for Computational Linguistics (ACL), pages 311-318, Philadelphia, Pennsylvania, USA.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "A call for clarity in reporting BLEU scores",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "186--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Brussels, Belgium, October.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Personalized machine translation: Preserving original author traits",
"authors": [
{
"first": "Ella",
"middle": [],
"last": "Rabinovich",
"suffix": ""
},
{
"first": "Raj",
"middle": [
"Nath"
],
"last": "Patel",
"suffix": ""
},
{
"first": "Shachar",
"middle": [],
"last": "Mirkin",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Shuly",
"middle": [],
"last": "Wintner",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter",
"volume": "1",
"issue": "",
"pages": "1074--1084",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ella Rabinovich, Raj Nath Patel, Shachar Mirkin, Lucia Specia, and Shuly Wintner. 2017. Personalized machine translation: Preserving original author traits. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1074-1084, Valencia, Spain, April.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Gender bias in coreference resolution",
"authors": [
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Naradowsky",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Leonard",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "8--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 8-14, New Orleans, Louisiana, June.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Mitigating gender bias in machine translation with target gender annotations",
"authors": [
{
"first": "Art\u016brs",
"middle": [],
"last": "Stafanovi\u010ds",
"suffix": ""
},
{
"first": "Toms",
"middle": [],
"last": "Bergmanis",
"suffix": ""
},
{
"first": "M\u0101rcis",
"middle": [],
"last": "Pinnis",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Art\u016brs Stafanovi\u010ds, Toms Bergmanis, and M\u0101rcis Pinnis. 2020. Mitigating gender bias in machine translation with target gender annotations.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Evaluating gender bias in machine translation",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Stanovsky",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1679--1684",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriel Stanovsky, Noah A. Smith, and Luke Zettlemoyer. 2019. Evaluating gender bias in machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1679-1684, Florence, Italy, July.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "An Arabic Morphological Analyzer and Generator with Copious Features",
"authors": [
{
"first": "Dima",
"middle": [],
"last": "Taji",
"suffix": ""
},
{
"first": "Salam",
"middle": [],
"last": "Khalifa",
"suffix": ""
},
{
"first": "Ossama",
"middle": [],
"last": "Obeid",
"suffix": ""
},
{
"first": "Fadhl",
"middle": [],
"last": "Eryani",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Fifteenth Workshop on Computational Research in Phonetics, Phonology, and Morphology (SIGMORPHON)",
"volume": "",
"issue": "",
"pages": "140--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dima Taji, Salam Khalifa, Ossama Obeid, Fadhl Eryani, and Nizar Habash. 2018. An Arabic Morphological Analyzer and Generator with Copious Features. In Proceedings of the Fifteenth Workshop on Computational Research in Phonetics, Phonology, and Morphology (SIGMORPHON), pages 140-150.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Getting gender right in neural machine translation",
"authors": [
{
"first": "Eva",
"middle": [],
"last": "Vanmassenhove",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Hardmeier",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3003--3008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eva Vanmassenhove, Christian Hardmeier, and Andy Way. 2018. Getting gender right in neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3003-3008, Brussels, Belgium, October-November.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Utilizing character and word embeddings for text normalization with sequence-to-sequence models",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Watson",
"suffix": ""
},
{
"first": "Nasser",
"middle": [],
"last": "Zalmout",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "837--843",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Watson, Nasser Zalmout, and Nizar Habash. 2018. Utilizing character and word embeddings for text normalization with sequence-to-sequence models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 837-843, Brussels, Belgium, October-November.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Large Scale Arabic Error Annotation: Guidelines and Framework",
"authors": [
{
"first": "Wajdi",
"middle": [],
"last": "Zaghouani",
"suffix": ""
},
{
"first": "Behrang",
"middle": [],
"last": "Mohit",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Ossama",
"middle": [],
"last": "Obeid",
"suffix": ""
},
{
"first": "Nadi",
"middle": [],
"last": "Tomeh",
"suffix": ""
},
{
"first": "Alla",
"middle": [],
"last": "Rozovskaya",
"suffix": ""
},
{
"first": "Noura",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Alkuhlani",
"suffix": ""
},
{
"first": "Kemal",
"middle": [],
"last": "Oflazer",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Language Resources and Evaluation Conference (LREC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wajdi Zaghouani, Behrang Mohit, Nizar Habash, Ossama Obeid, Nadi Tomeh, Alla Rozovskaya, Noura Farra, Sarah Alkuhlani, and Kemal Oflazer. 2014. Large Scale Arabic Error Annotation: Guidelines and Framework. In Proceedings of the Language Resources and Evaluation Conference (LREC), Reykjavik, Iceland.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Men also like shopping: Reducing gender bias amplification using corpus-level constraints",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tianlu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Gender bias in coreference resolution: Evaluation and debiasing methods",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tianlu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "15--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018a. Gender bias in coreference resolution: Evaluation and debiasing methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15-20, New Orleans, Louisiana, June.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Learning gender-neutral word embeddings",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yichao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zeyu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4847--4853",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai-Wei Chang. 2018b. Learning gender-neutral word embeddings. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4847-4853, Brussels, Belgium, October-November.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Gender bias in contextualized word embeddings",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tianlu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "629--634",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender bias in contextualized word embeddings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 629-634, Minneapolis, Minnesota, June.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Gender bias in multilingual embeddings and cross-lingual transfer",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Subhabrata",
"middle": [],
"last": "Mukherjee",
"suffix": ""
},
{
"first": "Saghar",
"middle": [],
"last": "Hosseini",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [
"Hassan"
],
"last": "Awadallah",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Subhabrata Mukherjee, Saghar Hosseini, Kai-Wei Chang, and Ahmed Hassan Awadallah. 2020. Gender bias in multilingual embeddings and cross-lingual transfer.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Counterfactual data augmentation for mitigating gender stereotypes in languages with rich morphology",
"authors": [
{
"first": "Ran",
"middle": [],
"last": "Zmigrod",
"suffix": ""
},
{
"first": "Sabrina",
"middle": [
"J"
],
"last": "Mielke",
"suffix": ""
},
{
"first": "Hanna",
"middle": [],
"last": "Wallach",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1651--1661",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ran Zmigrod, Sabrina J. Mielke, Hanna Wallach, and Ryan Cotterell. 2019. Counterfactual data augmentation for mitigating gender stereotypes in languages with rich morphology. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1651-1661, Florence, Italy, July.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "H are learned by passing the encoder hidden states at the last time step h^{(e)}_n of the corresponding layers through a fully-connected tanh layer, h^{(d)}_0 = tanh(W_a h^{(e)}_n + b_a). Given the last layer encoder hidden states h^{(e)}_{1:n} and the last layer decoder hidden state at the t-th time step h^{(d)}",
"type_str": "figure",
"num": null
},
"FIGREF1": {
"uris": null,
"text": "h^{(d)}_t, the context vector c_t, and the embedding of the predicted symbol from the previous time step e_{\u0177_{t\u22121}} to create vector z_t = [h^{(d)}",
"type_str": "figure",
"num": null
},
"TABREF0": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>Input</td><td>Gender</td><td>Target Masculine</td><td>Target Feminine</td></tr><tr><td>\u00c2ryd HlwlA sry\u03c2</td><td>B</td><td>\u00c2ryd HlwlA sry\u03c2</td><td>\u00c2ryd HlwlA sry\u03c2</td></tr><tr><td>I want quick solutions</td><td/><td>I want quick solutions</td><td>I want quick solutions</td></tr><tr><td>l\u00c2nny Amr\u00c2 \u0161qrA'</td><td>F</td><td>l\u00c2nny rjl \u00c2\u0161qr</td><td>l\u00c2nny Amr\u00c2 \u0161qrA'</td></tr></table>",
"text": "Because I am a blonde woman Because I am a blonde man Because I am a blonde woman \u00c2nA s\u03c2yd blqA\u0177km M \u00c2nA s\u03c2yd blqA\u0177km \u00c2nA s\u03c2yd blqA\u0177km I am happy [masc.] to meet you I am happy [masc.] to meet you I am happy [fem.] to meet you"
},
"TABREF1": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Examples covering all possible combinations of input and output grammatical genders. Changed output words are underlined in the transliterations. rich languages. While English uses gender-neutral terms that hide the ambiguity of the first-person gender reference, morphologically rich languages need to use grammatically different gender-specific terms for these two expressions. In Arabic, as in other languages with grammatical gender, genderunaware single-output MT from English often results in \u00c2nA Tbyb 2 'I am a [male] doctor'/ \u00c2nA mmrD 'I am a [female] nurse', which is inappropriate for female doctors and male nurses, respectively."
},
"TABREF4": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Results of baseline systems and the best system on the TEST set."
},
"TABREF6": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Summary of the errors found in the Dev set organized by target gender (M or F) and in combination (M+F)."
},
"TABREF7": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "form), e.g., generating knt m\u0161\u03b3l instead of knt m\u0161\u03b3lA 'I was busy [masc]'. It should be noted that such cases are commonly used and are 'accepted' since most modern dialects of Arabic lost the productive generation of case. Second is reinflecting words that are not inflectable for gender (Table 4 Uninflectable word). One example is adding the feminine nominal suffix to the first person imperfective verb \u00c2m\u03b8l in \u01cd nny \u00c2m\u03b8l j\u0161\u03c2 Al\u0161rkAt 'I represent corporate greed'. This results in creating a nonsensical verbal form"
}
}
}
}