{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:11:44.154247Z" }, "title": "You Reap What You Sow: On the Challenges of Bias Evaluation Under Multilingual Settings", "authors": [ { "first": "Zeerak", "middle": [], "last": "Talat", "suffix": "", "affiliation": { "laboratory": "", "institution": "Simon Fraser University", "location": {} }, "email": "" }, { "first": "Aur\u00e9lie", "middle": [], "last": "N\u00e9v\u00e9ol", "suffix": "", "affiliation": { "laboratory": "", "institution": "CNRS", "location": { "region": "LISN" } }, "email": "" }, { "first": "Stella", "middle": [], "last": "Biderman", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Miruna", "middle": [], "last": "Clinciu", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Manan", "middle": [], "last": "Dey", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Shayne", "middle": [], "last": "Longpre", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Alexandra", "middle": [ "Sasha" ], "last": "Luccioni", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Maraim", "middle": [], "last": "Masoud", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Margaret", "middle": [], "last": "Mitchell", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Dragomir", "middle": [], "last": "Radev", "suffix": "", "affiliation": { "laboratory": "", "institution": "Yale University", "location": { "addrLine": "13 Walmart Labs", "country": "India" } }, "email": "" }, { "first": "Shanya", "middle": [], "last": "Sharma", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Arjun", "middle": [], "last": "Subramonian", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of California", "location": { "settlement": "Los Angeles" } }, "email": "" }, { "first": "Jaesung", "middle": [], "last": "Tae", "suffix": "", "affiliation": { "laboratory": "", "institution": "Yale University", 
"location": { "addrLine": "13 Walmart Labs", "country": "India" } }, "email": "" }, { "first": "Samson", "middle": [], "last": "Tan", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University of Singapore", "location": {} }, "email": "" }, { "first": "Deepak", "middle": [], "last": "Tunuguntla", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Oskar", "middle": [], "last": "Van Der Wal", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Amsterdam", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Evaluating bias, fairness, and social impact in monolingual language models is a difficult task. This challenge is further compounded when language modeling occurs in a multilingual context. Considering the implication of evaluation biases for large multilingual language models, we situate the discussion of bias evaluation within a wider context of social scientific research with computational work. We highlight three dimensions of developing multilingual bias evaluation frameworks: (1) increasing transparency through documentation, (2) expanding targets of bias beyond gender, and (3) addressing cultural differences that exist between languages. We further discuss the power dynamics and consequences of training large language models and recommend that researchers remain cognizant of the ramifications of developing such technologies.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Evaluating bias, fairness, and social impact in monolingual language models is a difficult task. This challenge is further compounded when language modeling occurs in a multilingual context. Considering the implication of evaluation biases for large multilingual language models, we situate the discussion of bias evaluation within a wider context of social scientific research with computational work. 
We highlight three dimensions of developing multilingual bias evaluation frameworks: (1) increasing transparency through documentation, (2) expanding targets of bias beyond gender, and (3) addressing cultural differences that exist between languages. We further discuss the power dynamics and consequences of training large language models and recommend that researchers remain cognizant of the ramifications of developing such technologies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Machine learning (ML) systems, especially large language models (LLMs), are prone to (re)produce harmful outcomes and social biases (Bender et al., 2021; Raji et al., 2021; Blodgett et al., 2020; Aguera y Arcas et al., 2018) . Despite recent advances in LLMs (Bender and Koller, 2020) , they have been shown to disproportionately produce harmful content when addressing certain topics (Gehman et al., 2020) and demographics (Sheng et al., 2019; Liang et al., 2021; Dev et al., 2021a )-in part due to the training data used (Dunn, 2020; Gao et al., 2020; Bender et al., 2021) , and the design of modeling processes (Talat et al., 2021; Hovy and Prabhumoye, 2021) . In response, previous work has explored ways in which such social biases can be measured and counteracted (Nangia et al., 2020; Gehman et al., 2020; Czarnowska et al., 2021) . Typically, these issues have been addressed either by conceptualizing the underlying systemic discrimination as \"bias\" or by developing evaluation datasets that shed light on how LLMs produce harmful social outcomes. However, in the former case, as Blodgett et al. (2020) point out, these conceptualizations often lack clear descriptions, e.g., the type of systemic discrimination and the affected demographics. This results in a highly under-specified \"bias\", which can undermine the validity of the technical approaches that are developed. 
Similarly, the problem of ill-defined \"bias\" is further compounded by the specifics of many benchmarks. Often, benchmarks exhibit discrepancies between understandings of the unobservable theoretical constructs against which \"bias\" is being measured and their operationalization (Jacobs and Wallach, 2021; Friedler et al., 2021) . Furthermore, many prior benchmark datasets were developed with specific modeling architectures in mind (Nangia et al., 2020) . They are limited to English and are culturally Anglo-centric. 1 In this position paper, we present an overview of the current state-of-the-art concerning challenges and measures taken to address bias in language models. Specifically, we document the challenges of evaluating language models, with a focus on the generation of harmful text. By engaging with the relevant social scientific literature on these challenges, we propose (1) a more transparent evaluation of bias via scoping and documentation, (2) focusing on the diversity of stereotypes for increased inclusivity, (3) careful curation of culturally aware datasets, and (4) creation of general bias measures that are independent of model architecture but capture the context of the task.", "cite_spans": [ { "start": 132, "end": 153, "text": "(Bender et al., 2021;", "ref_id": "BIBREF11" }, { "start": 154, "end": 172, "text": "Raji et al., 2021;", "ref_id": null }, { "start": 173, "end": 195, "text": "Blodgett et al., 2020;", "ref_id": "BIBREF19" }, { "start": 196, "end": 224, "text": "Aguera y Arcas et al., 2018)", "ref_id": "BIBREF2" }, { "start": 259, "end": 284, "text": "(Bender and Koller, 2020)", "ref_id": "BIBREF12" }, { "start": 380, "end": 401, "text": "(Gehman et al., 2020;", "ref_id": "BIBREF61" }, { "start": 419, "end": 439, "text": "(Sheng et al., 2019;", "ref_id": "BIBREF113" }, { "start": 440, "end": 459, "text": "Liang et al., 2021;", "ref_id": "BIBREF87" }, { "start": 460, "end": 477, "text": "Dev et al., 2021a", "ref_id": "BIBREF44" }, { "start": 518, "end": 
530, "text": "(Dunn, 2020;", "ref_id": "BIBREF51" }, { "start": 531, "end": 548, "text": "Gao et al., 2020;", "ref_id": "BIBREF57" }, { "start": 549, "end": 569, "text": "Bender et al., 2021)", "ref_id": "BIBREF11" }, { "start": 609, "end": 629, "text": "(Talat et al., 2021;", "ref_id": null }, { "start": 630, "end": 656, "text": "Hovy and Prabhumoye, 2021)", "ref_id": "BIBREF68" }, { "start": 765, "end": 786, "text": "(Nangia et al., 2020;", "ref_id": "BIBREF99" }, { "start": 787, "end": 807, "text": "Gehman et al., 2020;", "ref_id": "BIBREF61" }, { "start": 808, "end": 832, "text": "Czarnowska et al., 2021)", "ref_id": "BIBREF39" }, { "start": 1660, "end": 1686, "text": "(Jacobs and Wallach, 2021;", "ref_id": "BIBREF72" }, { "start": 1687, "end": 1709, "text": "Friedler et al., 2021)", "ref_id": "BIBREF56" }, { "start": 1815, "end": 1836, "text": "(Nangia et al., 2020)", "ref_id": "BIBREF99" }, { "start": 1901, "end": 1902, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We recognize that many of the challenges that we have encountered and described here are large open problems that will require joint work to address. Our goal is to analyze these challenges and provide scaffolding for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Issues of socially discriminatory (human and technological) systems have long been the subject of study for scholars across disciplines, e.g. 
in Science and Technology Studies (Haraway, 1988) , discard studies (Lepawsky, 2019) , social anthropology (Douglas, 1978) , philosophy of democracy (Fraser, 1990) , gender and LGBTQIA+ studies (Spade, 2015; Rajunov and Duane, 2019; Keyes et al., 2021; D'Ignazio and Klein, 2020) , media studies (Gitelman, 2013) , archival studies (Agostinho et al., 2019) , sociolinguistics (Labov, 1986; Cheshire, 2007) , and critical race theory (Noble, 2018; Benjamin, 2019) . 2 Scholars argue that technical systems are embedded in social contexts (Lepawsky, 2019; Haraway, 1988) and must therefore be evaluated as socio-technical systems interacting with complex social hierarchies (Winner, 1980; Benjamin, 2019; Costanza-Chock, 2018; Friedler et al., 2021) . When technological systems prioritize majorities, there is a risk they oppress minorities at the personal, communal, and institutional levels (Costanza-Chock, 2018). Haraway (1988) argues that researchers default to a \"view from nowhere\", without reflecting on the context or use of their research. This default view often represents the interests of dominant majorities, disregarding knowledges from marginalized communities. Considering machine learning systems, Chun (2021) argues that the development of such technological systems relies on faulty assumptions (e.g., that past data collections can adequately and fairly predict future human behavior), which can lead to embedded social biases. 
Situating ourselves in the wider academic literature of social discrimination and marginalization compels us to recognize that our technical systems must be considered in the social context in which they exist.", "cite_spans": [ { "start": 176, "end": 191, "text": "(Haraway, 1988)", "ref_id": "BIBREF67" }, { "start": 210, "end": 226, "text": "(Lepawsky, 2019)", "ref_id": "BIBREF85" }, { "start": 249, "end": 264, "text": "(Douglas, 1978)", "ref_id": "BIBREF50" }, { "start": 291, "end": 305, "text": "(Fraser, 1990)", "ref_id": "BIBREF55" }, { "start": 336, "end": 349, "text": "(Spade, 2015;", "ref_id": "BIBREF115" }, { "start": 350, "end": 374, "text": "Rajunov and Duane, 2019;", "ref_id": null }, { "start": 375, "end": 394, "text": "Keyes et al., 2021;", "ref_id": "BIBREF79" }, { "start": 395, "end": 421, "text": "D'Ignazio and Klein, 2020)", "ref_id": "BIBREF47" }, { "start": 438, "end": 454, "text": "(Gitelman, 2013)", "ref_id": "BIBREF62" }, { "start": 474, "end": 498, "text": "(Agostinho et al., 2019)", "ref_id": "BIBREF1" }, { "start": 518, "end": 531, "text": "(Labov, 1986;", "ref_id": "BIBREF82" }, { "start": 532, "end": 547, "text": "Cheshire, 2007)", "ref_id": "BIBREF35" }, { "start": 575, "end": 588, "text": "(Noble, 2018;", "ref_id": "BIBREF101" }, { "start": 589, "end": 604, "text": "Benjamin, 2019)", "ref_id": "BIBREF14" }, { "start": 607, "end": 608, "text": "2", "ref_id": null }, { "start": 679, "end": 695, "text": "(Lepawsky, 2019;", "ref_id": "BIBREF85" }, { "start": 696, "end": 710, "text": "Haraway, 1988)", "ref_id": "BIBREF67" }, { "start": 822, "end": 836, "text": "(Winner, 1980;", "ref_id": "BIBREF131" }, { "start": 837, "end": 852, "text": "Benjamin, 2019;", "ref_id": "BIBREF14" }, { "start": 853, "end": 874, "text": "Costanza-Chock, 2018;", "ref_id": "BIBREF37" }, { "start": 875, "end": 897, "text": "Friedler et al., 2021)", "ref_id": "BIBREF56" }, { "start": 1066, "end": 1080, "text": "Haraway (1988)", "ref_id": "BIBREF67" } ], "ref_spans": 
[], "eq_spans": [], "section": "Social Discrimination", "sec_num": "2.1" }, { "text": "On the topic of socially discriminatory systems within machine learning, Buolamwini and Gebru (2018) and Raji and Buolamwini (2019) show that there are significant disparities along gendered and racialized lines in commercially available facial recognition and analysis systems. Similar issues of discriminatory social biases in natural language processing (NLP) systems have resulted in emerging research dedicated to the identification, quantification (e.g. Rudinger et al., 2018; De-Arteaga et al., 2019; Czarnowska et al., 2021) , and mitigation of bias (Bolukbasi et al., 2016; Sun et al., 2019; Garimella et al., 2021) in NLP systems.", "cite_spans": [ { "start": 73, "end": 100, "text": "Buolamwini and Gebru (2018)", "ref_id": "BIBREF26" }, { "start": 105, "end": 131, "text": "Raji and Buolamwini (2019)", "ref_id": "BIBREF104" }, { "start": 460, "end": 482, "text": "Rudinger et al., 2018;", "ref_id": "BIBREF106" }, { "start": 483, "end": 507, "text": "De-Arteaga et al., 2019;", "ref_id": "BIBREF41" }, { "start": 508, "end": 532, "text": "Czarnowska et al., 2021)", "ref_id": "BIBREF39" }, { "start": 558, "end": 582, "text": "(Bolukbasi et al., 2016;", "ref_id": "BIBREF23" }, { "start": 583, "end": 600, "text": "Sun et al., 2019;", "ref_id": "BIBREF119" }, { "start": 601, "end": 624, "text": "Garimella et al., 2021)", "ref_id": "BIBREF59" } ], "ref_spans": [], "eq_spans": [], "section": "Machine-learned Systems in Social Context", "sec_num": "2.2" }, { "text": "However, these methods tend to obscure rather than remove social biases (Gonen and Goldberg, 2019) , and are particularly brittle when applied to complex, contextual language representations (Dev et al., 2020) .", "cite_spans": [ { "start": 72, "end": 98, "text": "(Gonen and Goldberg, 2019)", "ref_id": "BIBREF63" }, { "start": 191, "end": 209, "text": "(Dev et al., 2020)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": 
[], "section": "Social Discrimination", "sec_num": "2.1" }, { "text": "Further, operationalization of under-specified \"bias\" has varied widely across studies, and in some cases has been internally inconsistent with the stated goals of those studies (Blodgett et al., 2020; Jacobs and Wallach, 2021) . The recent surge of LLMs is no exception to such concerns. Hovy and Prabhumoye (2021) ; Talat et al. (2021) , and Cao and Daum\u00e9 III (2020) argue that socially discriminatory biases can be encoded in several stages of the LLM development process (Biderman and Scheirer, 2020) , including data sampling, annotation, selection of input representations or model, research design, and how the models are situated with regard to the language communities that they are applied to. Language generation models, despite their inference-time flexibility, are particularly susceptible to reproducing hegemonic social biases and generating offensive language, even when not explicitly prompted to do so (Sheng et al., 2021; Wallace et al., 2019; Bender et al., 2021) . In efforts to address the expression of such social biases, a number of bias evaluation benchmarks have been proposed (Dev et al., 2021b; Zhao et al., 2018; Cao and Daum\u00e9 III, 2020) . However, common evaluation benchmarks are fraught with pitfalls in their conceptualization of bias, stereotypes, and harms, including meaningless or poorly formed stereotype constructions, non-intersectional examples, contexts that do not reflect downstream use, and reliance on specific model architectures (Jin et al., 2021) . Furthermore, bias evaluation benchmarks often make strong assumptions about the validity, reliability, and existence of observable properties, e.g. pronouns, as signals for unobservable theoretical constructs such as gender (Jacobs and Wallach, 2021). This is particularly problematic when building benchmarks for biases against communities that resist categorization based on observable characteristics (e.g. 
LGBTQIA+ and racialized people) and leads to reliance on existing stereotypes (Tomasev et al., 2021; Dev et al., 2021a) .", "cite_spans": [ { "start": 163, "end": 186, "text": "(Blodgett et al., 2020;", "ref_id": "BIBREF19" }, { "start": 187, "end": 212, "text": "Jacobs and Wallach, 2021)", "ref_id": "BIBREF72" }, { "start": 274, "end": 300, "text": "Hovy and Prabhumoye (2021)", "ref_id": "BIBREF68" }, { "start": 303, "end": 322, "text": "Talat et al. (2021)", "ref_id": null }, { "start": 460, "end": 489, "text": "(Biderman and Scheirer, 2020)", "ref_id": "BIBREF16" }, { "start": 906, "end": 926, "text": "(Sheng et al., 2021;", "ref_id": "BIBREF112" }, { "start": 927, "end": 948, "text": "Wallace et al., 2019;", "ref_id": "BIBREF126" }, { "start": 949, "end": 969, "text": "Bender et al., 2021)", "ref_id": "BIBREF11" }, { "start": 1090, "end": 1109, "text": "(Dev et al., 2021b;", "ref_id": "BIBREF45" }, { "start": 1110, "end": 1128, "text": "Zhao et al., 2018;", "ref_id": "BIBREF132" }, { "start": 1129, "end": 1153, "text": "Cao and Daum\u00e9 III, 2020)", "ref_id": "BIBREF29" }, { "start": 1463, "end": 1480, "text": "Jin et al., 2021)", "ref_id": "BIBREF74" }, { "start": 1971, "end": 1993, "text": "(Tomasev et al., 2021;", "ref_id": "BIBREF124" }, { "start": 1994, "end": 2012, "text": "Dev et al., 2021a)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Machine-learned Systems in Social Context", "sec_num": "2.2" }, { "text": "This rapid development of NLP resources and tools has further yielded a non-inclusive environment, skewed heavily towards English and Anglo-centric biases (Joshi et al., 2020) . Sambasivan et al. (2021) and Chan et al. 
(2021) contend that there remains a significant gap between the communities governing and governed by AI, and advocate for a redistribution of powers and responsibilities in developing responsible AI.", "cite_spans": [ { "start": 156, "end": 176, "text": "(Joshi et al., 2020)", "ref_id": "BIBREF75" }, { "start": 179, "end": 203, "text": "Sambasivan et al. (2021)", "ref_id": "BIBREF107" }, { "start": 208, "end": 226, "text": "Chan et al. (2021)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Machine-learned Systems in Social Context", "sec_num": "2.2" }, { "text": "Considering gender bias, Stanczak and Augenstein (2021) show that existing methods (1) largely avoid ethical considerations or evaluations of gender bias, (2) focus primarily on binary gender treatment, in mostly Anglo-centric settings, and (3) employ limited or flawed evaluation methodologies. Such issues are in part exacerbated by the general paucity of documentation for datasets (Bender and Friedman, 2018) and machine learning models (Mitchell et al., 2019) . One way to mitigate these biases is to create diverse teams with varied backgrounds and life experiences to ensure the expression of diverse perspectives (Monteiro and Castillo, 2019; Nekoto et al., 2020) . However, as critiqued by Talat et al. (2021) ; West et al. (2019) , incorporating diversity alone may be inadequate. Biases in language representations and task models can not only reflect, but also amplify bias present in the datasets (Barocas and Selbst, 2016). 
These biases have been investigated, and attempts have been made to create interpretable representations and to provide post-hoc explanations of model predictions.", "cite_spans": [ { "start": 25, "end": 55, "text": "Stanczak and Augenstein (2021)", "ref_id": null }, { "start": 384, "end": 410, "text": "Bender and Friedman, 2018)", "ref_id": "BIBREF10" }, { "start": 439, "end": 462, "text": "(Mitchell et al., 2019)", "ref_id": "BIBREF96" }, { "start": 624, "end": 653, "text": "(Monteiro and Castillo, 2019;", "ref_id": "BIBREF97" }, { "start": 654, "end": 674, "text": "Nekoto et al., 2020)", "ref_id": null }, { "start": 702, "end": 721, "text": "Talat et al. (2021)", "ref_id": null }, { "start": 724, "end": 742, "text": "West et al. (2019)", "ref_id": "BIBREF130" }, { "start": 918, "end": 944, "text": "(Barocas and Selbst, 2016;", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Machine-learned Systems in Social Context", "sec_num": "2.2" }, { "text": "Given the grave consequences that inherent or conceptualized biases in ML systems can inflict, responsible AI has received a growing amount of research attention (Amershi et al., 2020) . Responsible AI refers to the creation of ethical principles for AI and the development of AI systems based on these principles (Dignum, 2017; Schiff, 2020) . Colloquially, responsible AI encompasses distinct machine learning fields such as fairness, explainability, privacy, and interpretability. Concretely, how can responsible AI principles best contribute to the development of equitable systems?", "cite_spans": [ { "start": 162, "end": 184, "text": "(Amershi et al., 2020)", "ref_id": "BIBREF5" }, { "start": 314, "end": 328, "text": "(Dignum, 2017;", "ref_id": "BIBREF48" }, { "start": 329, "end": 342, "text": "Schiff, 2020)", "ref_id": "BIBREF110" } ], "ref_spans": [], "eq_spans": [], "section": "Bias, Fairness, and Explainability", "sec_num": "2.3" }, { "text": "Examining this question, Friedler et al. 
(2021) propose that building just ML systems requires an a priori definition of fairness. However, contemporary decision-making systems build on a so-called what-you-see-is-what-you-get (WYSIWYG) approach that implicitly mixes multiple fairness definitions or world views, leading to a system built on conflicts between the underlying value systems. To tackle this issue, ML engineers should explicitly state the underlying systemic values, as systems will inevitably embody certain assumptions (Birhane et al., 2021) . This implies that biases are inherent to these decision-making systems and should be clearly articulated (Bender et al., 2021) by explaining the whys and whats (explainability).", "cite_spans": [ { "start": 538, "end": 560, "text": "(Birhane et al., 2021)", "ref_id": "BIBREF17" }, { "start": 669, "end": 690, "text": "(Bender et al., 2021)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Bias, Fairness, and Explainability", "sec_num": "2.3" }, { "text": "However, a more promising course of action for researchers would be to prioritize fairness in the entire life cycle of a language model. The tendency to consider and mitigate undesirable biases in models only after training has completed leaves harmful residues that affect the communities we seek to protect (Dev et al., 2021a) . 
Hence, a fruitful approach could be to reduce systemic unfairness by grounding the discussion in clear definitions of fairness based on input from the communities that could be harmed by the system (Liao and Muller, 2019), explaining the inherent biases, and, if possible, minimizing bias issues by employing the measures discussed in both the previous and the following sections.", "cite_spans": [ { "start": 304, "end": 323, "text": "(Dev et al., 2021a)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Bias, Fairness, and Explainability", "sec_num": "2.3" }, { "text": "Evaluating the social impacts and harmful biases LLMs exhibit is an important development step. However, despite the increased interest in developing bias benchmarks, the field still faces various challenges in evaluating LLMs with off-the-shelf benchmarks. In this section, we provide examples of existing bias measures currently used in NLP. We then discuss the challenges that originate from these: (1) they rely on vague definitions of bias, (2) are restricted to particular model architectures, (3) have limited relevance for different cultural contexts, and (4) are difficult to validate and interpret.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Challenges of Bias", "sec_num": "3" }, { "text": "Recently, researchers and practitioners have begun to pay more attention to bias measures in NLP systems (Blodgett et al., 2020; Dev et al., 2021b) . One line of work has focused on identifying bias in word embeddings: The Word Embedding Association Test (WEAT, Caliskan et al., 2017) measures bias by comparing the relative distances of two sets of target words (e.g. 
occupation words: nurse, doctor) with respect to two sets of attribute words (e.g., gender attributes: male, female) and has inspired other similar approaches (Kurita et al., 2019; May et al., 2019; Dev et al., 2020) .", "cite_spans": [ { "start": 105, "end": 128, "text": "(Blodgett et al., 2020;", "ref_id": "BIBREF19" }, { "start": 129, "end": 147, "text": "Dev et al., 2021b)", "ref_id": "BIBREF45" }, { "start": 262, "end": 284, "text": "Caliskan et al., 2017)", "ref_id": "BIBREF27" }, { "start": 527, "end": 548, "text": "(Kurita et al., 2019;", "ref_id": "BIBREF81" }, { "start": 549, "end": 566, "text": "May et al., 2019;", "ref_id": "BIBREF93" }, { "start": 567, "end": 584, "text": "Dev et al., 2020)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Examples of Bias Measure Studies", "sec_num": "3.1" }, { "text": "Although word embeddings may help identify biases in the context of LLMs, it is often difficult to access the learned contextual language representations of the model (Abid et al., 2021; Dev et al., 2020) . Furthermore, such methods are developed to address static word embeddings rather than the dynamic contextual word embeddings LLMs rely on (Subramonian, 2021) .", "cite_spans": [ { "start": 167, "end": 186, "text": "(Abid et al., 2021;", "ref_id": "BIBREF0" }, { "start": 187, "end": 204, "text": "Dev et al., 2020)", "ref_id": "BIBREF43" }, { "start": 345, "end": 364, "text": "(Subramonian, 2021)", "ref_id": "BIBREF118" } ], "ref_spans": [], "eq_spans": [], "section": "Examples of Bias Measure Studies", "sec_num": "3.1" }, { "text": "Another research direction is the use of causal inference for measuring biases in LLMs, for example, to analyze whether the text generated by an LLM changes considerably when only the protected attributes or categories in the input are changed (Huang et al., 2020; Madaan et al., 2021; Cheng et al., 2021) . In line with this idea, Huang et al. 
(2020) used a sentiment classifier to quantify and reduce the sentiment bias present in LLMs. Similarly, the CrowS-Pairs benchmark (Nangia et al., 2020) leverages the paradigm of minimal pairs to contrast sentences expressing stereotypes against social categories with the same sentences addressing different social categories. CrowS-Pairs is designed so that language models can be probed for disparate behavior between the sentence pairs, with the hypothesis that a systematic preference for the stereotyping sentence indicates the presence of bias in the language model. Other examples of bias measurement benchmarks include StereoSet (Nadeem et al., 2020) , WinoMT (Stanovsky et al., 2019) , BBQ (Parrish et al., 2021) , BOLD (Dhamala et al., 2021) , and the Toxic Comment Classification competition (Jigsaw, 2017).", "cite_spans": [ { "start": 237, "end": 257, "text": "(Huang et al., 2020;", "ref_id": "BIBREF69" }, { "start": 258, "end": 278, "text": "Madaan et al., 2021;", "ref_id": "BIBREF90" }, { "start": 279, "end": 298, "text": "Cheng et al., 2021)", "ref_id": "BIBREF34" }, { "start": 470, "end": 491, "text": "(Nangia et al., 2020)", "ref_id": "BIBREF99" }, { "start": 1002, "end": 1023, "text": "(Nadeem et al., 2020)", "ref_id": "BIBREF98" }, { "start": 1033, "end": 1057, "text": "(Stanovsky et al., 2019)", "ref_id": "BIBREF117" }, { "start": 1064, "end": 1086, "text": "(Parrish et al., 2021)", "ref_id": null }, { "start": 1094, "end": 1116, "text": "(Dhamala et al., 2021)", "ref_id": "BIBREF46" } ], "ref_spans": [], "eq_spans": [], "section": "Examples of Bias Measure Studies", "sec_num": "3.1" }, { "text": "The term \"bias\" is overloaded in the ML and NLP communities, as it is used in both the lay sense (a prejudice towards or against some entity) and the statistical sense (a systematic deviation of an estimate from the true value) (Campolo et al., 2018) . 
Moreover, researchers often refer to vague definitions of bias and gloss over the details, which results in methods that lack specificity (Blodgett et al., 2020). When discussing methods to address bias, it is critical to be precise about the bias being addressed.", "cite_spans": [ { "start": 209, "end": 231, "text": "(Campolo et al., 2018)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Defining Bias", "sec_num": "3.2" }, { "text": "Bias can, for instance, be made more specific by being defined along socially relevant dimensions. Nangia et al. (2020) consider the protected categories from the US Equal Employment Opportunity Commission, and Queer in AI uses a similar list (gender identity and expression, sexual orientation, disability, neurodivergence, skill set, physical appearance, body size, race, caste, age, nationality, citizenship status, colonial experience, religion), yet other characteristics may be relevant elsewhere in the world (e.g. illness, migrant status, and social status). 3 However, protected classes are only one dimension along which to define bias; researchers should also be mindful of political biases and biases resulting from the focus on prestigious, highly resourced language varieties, in addition to the intersections of multiple dimensions (Kearns et al., 2018; Buolamwini and Gebru, 2018; Crenshaw, 1991) .", "cite_spans": [ { "start": 99, "end": 119, "text": "Nangia et al. (2020)", "ref_id": "BIBREF99" }, { "start": 842, "end": 863, "text": "(Kearns et al., 2018;", "ref_id": "BIBREF77" }, { "start": 864, "end": 891, "text": "Buolamwini and Gebru, 2018;", "ref_id": "BIBREF26" }, { "start": 892, "end": 907, "text": "Crenshaw, 1991)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Defining Bias", "sec_num": "3.2" }, { "text": "With respect to any of the aforementioned dimensions, a \"bias\" is a preferential disposition towards or against an entity. 
Colloquially, it is perceived negatively and considered to be unfair treatment. As pointed out by Barocas et al. (2017) , biases in language models can manifest in the form of quality-of-service and representation disparities. A quality-of-service bias describes subpar performance of a language model when it is used by a particular group. For example, LLM-driven machine translation systems provide significantly better support for \"prestigious\", high-resource languages, and consequently deny quality performance to individuals who do not speak these languages (Nekoto et al., 2020) . Furthermore, in fundamental NLP tasks such as coreference resolution, LLMs can fail for people who use neopronouns, and often capture meaningless representations for language associated with trans and non-binary individuals (Cao and Daum\u00e9 III, 2020; Dev et al., 2021a). Additionally, Blodgett et al. (2018) show that parsing systems trained primarily on White Mainstream American English exhibit disparate performance on African American English, and Tan et al. (2020) show that English question answering and machine translation systems often fail on the morphological variation present in non-prestige and Learner Englishes.", "cite_spans": [ { "start": 221, "end": 242, "text": "Barocas et al. (2017)", "ref_id": "BIBREF7" }, { "start": 682, "end": 703, "text": "(Nekoto et al., 2020)", "ref_id": null }, { "start": 1007, "end": 1013, "text": "(2018)", "ref_id": null }, { "start": 1157, "end": 1174, "text": "Tan et al. (2020)", "ref_id": "BIBREF121" } ], "ref_spans": [], "eq_spans": [], "section": "Defining Bias", "sec_num": "3.2" }, { "text": "Representation biases consist of stereotypes and under-representation (or over-representation) of data or model outputs. Stereotyping is a cognitive process that manifests from often negative cultural norms about a characteristic; stereotyping permeates what people do, say, or write. 
A long line of work has shown that language models capture social stereotypes, for example, with respect to binary gender and occupations (Zhao et al., 2018; de Vassimon Manela et al., 2021). With regard to (under)representation, in MIMIC-III, a clinical notes dataset, only 1.9% of patients identify as Asian, in comparison to 71.5% who identify as white (Chen et al., 2020). (Footnote 3: Queer in AI (http://queerinai.org/) is a grassroots D&I organization that seeks to empower queer and trans researchers in AI and advance research at the intersections of AI and queerness. Their list of categories can be found here: http://queerinai.org/code-of-conduct.) Furthermore, blocklists in the Colossal Clean Crawled Corpus (C4) dataset disproportionately filter words related to queerness and language that is not White-aligned English (Dodge et al., 2021). Notably, quality-of-service and representation biases are not mutually exclusive; for instance, the brittle representations learned by an LLM for language associated with trans and non-binary individuals largely stem from the severe under-representation of this language in training data (Dev et al., 2021a; Barocas and Selbst, 2016).", "cite_spans": [ { "start": 423, "end": 442, "text": "(Zhao et al., 2018;", "ref_id": "BIBREF132" }, { "start": 443, "end": 475, "text": "de Vassimon Manela et al., 2021)", "ref_id": "BIBREF42" }, { "start": 642, "end": 655, "text": "(Chen et al.,", "ref_id": null }, { "start": 656, "end": 657, "text": "3", "ref_id": null }, { "start": 1109, "end": 1129, "text": "(Dodge et al., 2021)", "ref_id": "BIBREF49" }, { "start": 1411, "end": 1430, "text": "(Dev et al., 2021a;", "ref_id": "BIBREF44" }, { "start": 1431, "end": 1456, "text": "Barocas and Selbst, 2016)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Defining Bias", "sec_num": "3.2" }, { "text": "The breakdown of biases into quality-of-service and representation disparities is only one of many possible lenses. 
It is also critical to explicitly consider biases stemming from disparities in resources, broadly defined in terms of data availability, time to invest in dataset curation, access to compute resources, financial resources, and more (Bender et al., 2021).", "cite_spans": [ { "start": 350, "end": 371, "text": "(Bender et al., 2021)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Defining Bias", "sec_num": "3.2" }, { "text": "Current benchmarks often measure bias in specific downstream tasks (e.g. Machine Translation (Stanovsky et al., 2019), Question Answering (Parrish et al., 2021), or Text Generation (Dhamala et al., 2021)), while others focus on bias in LLMs more generally (e.g. Kurita et al., 2019; Nadeem et al., 2020; Nangia et al., 2020). This has the advantage of being more widely applicable, as many NLP systems are based on LLMs, and it avoids the need for creating and validating a new benchmark for each possible downstream task. Yet, when the benchmarks heavily rely on the model architecture rather than the task specification, quantitative comparison between different models based on these benchmarks is no longer possible. 
In such cases, it also becomes more difficult to assess the validity of the bias measure in how it relates to other benchmarks (criterion validity) and the more abstract notion of fairness (construct validity).", "cite_spans": [ { "start": 93, "end": 117, "text": "(Stanovsky et al., 2019)", "ref_id": "BIBREF117" }, { "start": 139, "end": 161, "text": "(Parrish et al., 2021)", "ref_id": null }, { "start": 183, "end": 205, "text": "(Dhamala et al., 2021)", "ref_id": "BIBREF46" }, { "start": 264, "end": 284, "text": "Kurita et al., 2019;", "ref_id": "BIBREF81" }, { "start": 285, "end": 305, "text": "Nadeem et al., 2020;", "ref_id": "BIBREF98" }, { "start": 306, "end": 326, "text": "Nangia et al., 2020)", "ref_id": "BIBREF99" } ], "ref_spans": [], "eq_spans": [], "section": "Overreliance on Model Architectures", "sec_num": "3.3" }, { "text": "Some researchers circumvent this problem by adapting the original bias metric, but care should be taken when doing so. For instance, bias metrics originally developed for masked language models have been adapted by using perplexity (e.g. Nadeem et al., 2020) or prompting (e.g. Gao et al., 2021; Sanh et al., 2021) instead. While these adaptations could still yield important insights, they also open new questions. Are the underlying assumptions of the bias measure still valid? Can bias metrics be compared across different (future) types of models? Do the results of the initial validation of the benchmark still hold? And how does the kind of training data impact an evaluation that assumes a different training domain (e.g., legal texts vs. 
social media)?", "cite_spans": [ { "start": 238, "end": 258, "text": "Nadeem et al., 2020)", "ref_id": "BIBREF98" }, { "start": 278, "end": 295, "text": "Gao et al., 2021;", "ref_id": "BIBREF58" }, { "start": 296, "end": 314, "text": "Sanh et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Overreliance on Model Architectures", "sec_num": "3.3" }, { "text": "While bias is ideally defined independently of the particular model architecture (not least because implementations change over time), we should not fall into a generalization trap either. As argued before, bias is inherent to systems and context-sensitive, and we should not strive for a panacea bias measure. Instead, the goal should be to develop methods that are task-specific yet independent of a given architecture, to the degree that this is possible. Researchers should keep this tension between task- and architecture-specific measures in mind when designing methods for measuring biases in LLMs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overreliance on Model Architectures", "sec_num": "3.3" }, { "text": "Despite the need for evaluating LLMs for a wide range of languages, bias benchmarks that cover non-English languages are rare (Zhou et al., 2019; Joshi et al., 2020). As a solution, simply translating existing English benchmarks is not ideal: manual translation is a labor-intensive and highly skilled task, while automated translations are prone to errors and could potentially introduce new algorithmic sources of bias. Moreover, translated benchmarks may only test for Anglo-centric biases, which do not necessarily hold in many non-Western cultural contexts. For instance, many gender bias evaluations focus on Western professions, which are grammatically gendered in some languages (Zhou et al., 2019) or may not cover other prevalent occupations outside the U.S. (Escud\u00e9 Font and Costa-juss\u00e0, 2019). 
WinoMT (Stanovsky et al., 2019) is one of the few benchmarks that covers multiple languages, but it comes with its own downsides. The sentences are generated from templates that capture a limited range of actual language use; the samples are translated from English examples, which may not reflect how stereotypes would occur in other languages; and the scope is limited to machine translation systems, and therefore WinoMT may not be suitable for multilingual models that are not trained on this specific task. The tightly coupled nature of bias and cultural context should be emphasized when designing a multilingual bias benchmark.", "cite_spans": [ { "start": 126, "end": 145, "text": "(Zhou et al., 2019;", "ref_id": "BIBREF133" }, { "start": 146, "end": 165, "text": "Joshi et al., 2020)", "ref_id": "BIBREF75" }, { "start": 688, "end": 706, "text": "Zhou et al., 2019)", "ref_id": "BIBREF133" }, { "start": 813, "end": 837, "text": "(Stanovsky et al., 2019)", "ref_id": "BIBREF117" } ], "ref_spans": [], "eq_spans": [], "section": "Bias Measures are Anglo-centric", "sec_num": "3.4" }, { "text": "Towards making NLP systems more just, we must understand the flaws of common bias measures and develop better guidelines to address biases. According to Jacobs and Wallach (2021) and Blodgett et al. (2021), bias measures are measurement models which link observable properties, e.g., quality-of-service and representational biases, with unobservable theoretical constructs such as social discrimination, power dynamics, and systemic oppression. Consequently, bias measures are deeply political. Notably, a vast majority of bias measures themselves rely on other measurement models, such as the presence of gendered pronouns, to infer theoretical protected categories, e.g., gender. 
Moreover, bias measures may inflict further epistemic violence on the marginalized by creating a veneer of fairness, in spite of ongoing marginalization (Gonen and Goldberg, 2019; Talat et al., 2021; Jacobs and Wallach, 2021). To ensure the reliability, validity, and correct interpretation of bias measures, it is critical to examine all components in a bias measurement method.", "cite_spans": [ { "start": 153, "end": 178, "text": "Jacobs and Wallach (2021)", "ref_id": "BIBREF72" }, { "start": 835, "end": 861, "text": "(Gonen and Goldberg, 2019;", "ref_id": "BIBREF63" }, { "start": 862, "end": 881, "text": "Talat et al., 2021;", "ref_id": null }, { "start": 882, "end": 907, "text": "Jacobs and Wallach, 2021)", "ref_id": "BIBREF72" } ], "ref_spans": [], "eq_spans": [], "section": "Validity of Bias Measures", "sec_num": "3.5" }, { "text": "Upstream measurement models that infer protected categories can be unreliable or even non-existent. For instance, pronouns and gendered names are usually employed as proxies for binary gender, which is problematic (Dev et al., 2021a). Furthermore, characteristics like sexuality and disability are usually unobservable, which can lead to a reliance on hegemonic stereotypes and unnatural language in bias evaluation benchmarks (Tomasev et al., 2021; Hutchinson et al., 2020).", "cite_spans": [ { "start": 215, "end": 234, "text": "(Dev et al., 2021a)", "ref_id": "BIBREF44" }, { "start": 429, "end": 451, "text": "(Tomasev et al., 2021;", "ref_id": "BIBREF124" }, { "start": 452, "end": 476, "text": "Hutchinson et al., 2020)", "ref_id": "BIBREF70" } ], "ref_spans": [], "eq_spans": [], "section": "Validity of Bias Measures", "sec_num": "3.5" }, { "text": "With regard to validity, Blodgett et al. (2021) review how bias measures often rely on operationalizations of stereotypes that are invalid for reasons such as misalignment and conflation. 
Additionally, the mathematical formalization of most bias measures is based on notions of parity-based fairness and does not reflect other conceptualizations of fairness such as distributive justice (Jacobs and Wallach, 2021). Another source of invalidity of bias measures lies in the purported generality of associated benchmarks. Raji et al. (2021) argue that the \"instantiation [of benchmarks] in particular data, metrics and practice\" undermines the validity of claims to their \"general applicability.\" Moreover, measurement models for protected categories fallaciously assume that the identities being indirectly observed can be discretized. Hence, Dev et al. (2021b) advocate for documenting the limitations of bias measures and related data in terms of their validity. In this process, it is critical to describe the relationship between the context of the data, model usage, and bias measure at stake.", "cite_spans": [ { "start": 385, "end": 411, "text": "(Jacobs and Wallach, 2021)", "ref_id": "BIBREF72" }, { "start": 852, "end": 870, "text": "Dev et al. (2021b)", "ref_id": "BIBREF45" } ], "ref_spans": [], "eq_spans": [], "section": "Validity of Bias Measures", "sec_num": "3.5" }, { "text": "Throughout the paper, we have primarily discussed bias in language models as a mechanical phenomenon. However, it is important to situate these discussions within the context and power dynamics of the way that NLP is practiced, both in research and in application (Miceli et al., 2022). In this section, we discuss sociopolitical influences on AI ethics and bias research in NLP. We argue that contemporary developments of LLMs have been an exercise in financial, institutional, ecological, linguistic, and cultural privilege. 
They are the consequence of a political will to create totalizing technologies. The evaluation of bias, fairness, and social impact should be viewed as a countervailing power mechanism, although in some cases such evaluations instead serve to obscure these dynamics.", "cite_spans": [ { "start": 264, "end": 285, "text": "(Miceli et al., 2022)", "ref_id": "BIBREF95" } ], "ref_spans": [], "eq_spans": [], "section": "The Elephant in the Room: Power, Privilege, and Point of View", "sec_num": "4" }, { "text": "The current dominant paradigm in natural language processing is driven by the creation of ever-larger pretrained transformer models (Brown et al., 2020). As the size of LLMs increases, so do the requirements for hardware, energy, and time. For example, GPT-NeoX 20B (Black et al., 2022) was trained for 1830 hours on 96 A100 GPUs, consuming 43.92 MWh of electricity and emitting 23 metric tons of CO2. Based on the current price listing of the cloud provider the model was trained on, training such a model would cost between 250,000 and 500,000 USD. 4 While this is not on the scale of the largest research programs, it is a significant amount of money and beyond the funding of many institutions, or beyond their political will to spend. While the development of such models can contribute towards improving the ability of people with fewer resources to pursue cutting-edge downstream research, such pursuits have significant costs and barriers to entry for upstream research. 
This creates a stratification of research, wherein money is a barrier to entry for some forms of research but not for others.", "cite_spans": [ { "start": 132, "end": 152, "text": "(Brown et al., 2020)", "ref_id": null }, { "start": 267, "end": 287, "text": "(Black et al., 2022)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Large Language Models are Expensive", "sec_num": "4.1" }, { "text": "Although there are thousands of spoken languages in the world, the overwhelming majority of LLMs are monolingual and encode white respectability politics (Thylstrup and Talat, 2020; Kerrison et al., 2018) onto minoritized variants of English (Gehman et al., 2020). In this way, the cost of developing LLMs extends from externalizing computational and infrastructural costs to externalizing languages and language variants (Lau, 2021). Specifically, the vast majority of LLMs are trained to operate on an unspecified variant of \"English\" (Bender, 2019), and in some cases Chinese (see Table 1 for a detailed overview of the top 25 LLMs). The dominance of English, and to a lesser degree Chinese, reifies cultural hegemonies and precipitates technological imperialism. Even when researchers seek to include other languages, these purportedly multilingual models often underserve certain languages and communities (Kerrison et al., 2018; Virtanen et al., 2019; Kreutzer et al., 2022; Gururangan et al., 2022). We also note that few of these models have been assessed for bias or fairness (see Table 1). This state of affairs rests on two premises. First, LLMs should only be used for the languages that they have been developed for, with the cultural stereotypes that they have been trained on; this either limits LLMs to a small set of cultural contexts, or casts the cultural contexts for which they are trained onto ones that they are not developed for. 
Second, should a multilingual LLM be trained, its primary data sources will still be in English, whereas the remaining languages will only be incidental to it. Such cultural imperialism is evident from the fact that only 2 of the 14 organizations involved in developing LLMs have teams in multiple countries (see Table 1). Further, all multinational LLM efforts, except for one, draw their membership from the USA, UK, Germany, and Australia. GPT-NeoX 20B (Black et al., 2022) is an exception, as it also includes authors from India. A commonly used resource for developing LLMs, CommonCrawl, relies on data that primarily stems from the US (Dodge et al., 2021) and is written in privileged dialects of English (Dunn, 2020). This prioritization is reflected by 16 teams being physically located in the U.S. Consequently, the current state of LLM development is a totalizing endeavor (Talat et al., 2021), which engages in externalization across a number of axes, as is apparent from the infrastructural and development practices and the efforts to evaluate and mitigate social harms that arise from such technologies. (Footnote 4, cont.: the upper end of the cost estimate reflects the sticker price of the systems.)", "cite_spans": [ { "start": 154, "end": 181, "text": "(Thylstrup and Talat, 2020;", "ref_id": "BIBREF123" }, { "start": 182, "end": 204, "text": "Kerrison et al., 2018)", "ref_id": "BIBREF78" }, { "start": 234, "end": 263, "text": "English (Gehman et al., 2020)", "ref_id": null }, { "start": 428, "end": 439, "text": "(Lau, 2021)", "ref_id": "BIBREF83" }, { "start": 544, "end": 558, "text": "(Bender, 2019)", "ref_id": "BIBREF9" }, { "start": 919, "end": 942, "text": "(Kerrison et al., 2018;", "ref_id": "BIBREF78" }, { "start": 943, "end": 965, "text": "Virtanen et al., 2019;", "ref_id": "BIBREF125" }, { "start": 966, "end": 988, "text": "Kreutzer et al., 2022;", "ref_id": null }, { "start": 989, "end": 1013, "text": "Gururangan et al., 2022)", "ref_id": "BIBREF66" }, { "start": 1913, "end": 1933, "text": "(Black et al., 2022)", "ref_id": "BIBREF18" }, { "start": 2098, "end": 2118, "text": "(Dodge et al., 2021)", "ref_id": "BIBREF49" }, { "start": 2168, "end": 2180, "text": "(Dunn, 2020)", "ref_id": "BIBREF51" }, { "start": 2341, "end": 2355, "text": "(Talat et al.,", "ref_id": null } ], "ref_spans": [ { "start": 592, "end": 599, "text": "Table 1", "ref_id": null }, { "start": 1099, "end": 1106, "text": "table 1", "ref_id": null } ], "eq_spans": [], "section": "Language is Multicultural, Language Models are Not", "sec_num": "4.2" }, { "text": "Actors to Control NLP Research", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Large Language Models Allow Powerful", "sec_num": "4.3" }, { "text": "Due to the costs involved with training large language models and the small number of actors who have decided to train them, the overwhelming majority of research studying their properties is not carried out by people who train LLMs. When the actors that do possess the models choose to not publicly release them, model trainers are afforded control over the research that can be conducted with and on these models. Famously, OpenAI's initial announcement of GPT-3 asserted that access to the model would be heavily restricted while the company continued to research ethical interventions in its model. 
OpenAI is not alone in this; the idea that it is inherently dangerous to release models to the public has been put forth by several other actors in this space (Weidinger et al., 2021a; Askell et al., 2021).", "cite_spans": [ { "start": 764, "end": 789, "text": "(Weidinger et al., 2021a;", "ref_id": null }, { "start": 790, "end": 810, "text": "Askell et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Large Language Models Allow Powerful", "sec_num": "4.3" }, { "text": "It is essential to recognize that decisions regarding access and the kind of research that can be conducted on large language models (or any ML models, for that matter) are inherently political ones (Leahy and Biderman, 2021). Regardless of the truth of the aforementioned claims, they are highly contentious political claims and should be treated as such rather than passively accepted.", "cite_spans": [ { "start": 204, "end": 230, "text": "(Leahy and Biderman, 2021)", "ref_id": "BIBREF84" } ], "ref_spans": [], "eq_spans": [], "section": "Large Language Models Allow Powerful", "sec_num": "4.3" }, { "text": "Direct access to LLMs is important for performing independent research on their datasets, functions, and societal impact (Kandpal et al., 2022; Carlini et al., 2022). While language models produced by the academic research community are widely available for critical examination, commercial systems are often only available through APIs provided by the developers (see Table 1 for an overview of access for the 25 largest pretrained language models). 
Such restrictions on access to the models, and to the resources they are developed from, pose a significant barrier to a) the principles of open science and b) research on how the datasets and language models themselves embed and amplify social biases.", "cite_spans": [ { "start": 117, "end": 139, "text": "(Kandpal et al., 2022;", "ref_id": "BIBREF76" }, { "start": 140, "end": 161, "text": "Carlini et al., 2022)", "ref_id": "BIBREF30" } ], "ref_spans": [ { "start": 361, "end": 373, "text": "(see table 1", "ref_id": null } ], "eq_spans": [], "section": "Large Language Models Allow Powerful", "sec_num": "4.3" }, { "text": "Researchers have developed various strategies to address bias in large language models. As discussed in earlier sections, however, these strategies are insufficient to tackle multiple dimensions of bias. Below, we enumerate a few ways in which bias can be addressed by the research community to effectively engage with our aforementioned concerns: (1) moving towards a more transparent way of evaluating bias, (2) focusing on the diversity of stereotypes and increasing inclusivity, and (3) considering the impact of linguistic and cultural differences on the identification and mitigation of bias in designing culturally comparable datasets. We would like to highlight that these suggestions are not exhaustive. They will, however, guide the work in this area.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Addressing Bias", "sec_num": "5" }, { "text": "Stereotypes and biases admit a broad range of definitions and vary in conceptualization across geographical and cultural contexts. To ensure that the nuances are well communicated and that practitioners understand the applicability of the evaluation approach, we suggest documenting a thorough analysis of the scope. Below, we provide a starting point based on Mitchell et al. (2019); Dev et al. (2021b); Blodgett et al. (2020).", "cite_spans": [ { "start": 351, "end": 373, "text": "Mitchell et al. (2019)", "ref_id": "BIBREF96" }, { "start": 378, "end": 397, "text": "Dev et al. (2021b);", "ref_id": "BIBREF45" }, { "start": 398, "end": 420, "text": "Blodgett et al. (2020)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Transparency Through Documentation", "sec_num": "5.1" }, { "text": "Defining the scope of the approach Blodgett et al. (2020) found that works around bias \"often fail to explain what kinds of system behaviors are harmful, in what ways, to whom, and why.\" It thus becomes imperative to question which underrepresented groups would benefit from a given evaluation benchmark. We therefore urge researchers and practitioners to clearly specify the demographic a particular method is relevant for. Moreover, given how social hierarchies intertwine tightly with language and may present themselves through its peculiarities, we also encourage researchers to specify the limitations and scope of their approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transparency Through Documentation", "sec_num": "5.1" }, { "text": "As an example, we consider the gender bias evaluation in English (Zhao et al., 2018; Stanovsky et al., 2019; Levy et al., 2021; Sharma et al., 2021), where the bias might present itself through strong associations involving grammatical constructs such as pronouns. The same does not hold true for genderless languages, despite the existence of the bias (Zmigrod et al., 2019). Thus, evaluation benchmarks and approaches do not always transfer well to other languages. Additionally, while such benchmarks use associations between gender and professions for their evaluation, this method covers only one aspect of the social hierarchy, and does not address gender bias in language in its entirety. By being binary in nature and tightly coupled to Anglo-centric contexts (see \u00a73), such benchmarks are limited in their scope and relevance. 
While most recent works do include ethical considerations, the limitations and scope are only vaguely specified. We advocate for such limitations to be explicitly highlighted, so that the community has a clearer picture of the steps that need to be taken towards greater inclusivity.", "cite_spans": [ { "start": 65, "end": 84, "text": "(Zhao et al., 2018;", "ref_id": "BIBREF132" }, { "start": 85, "end": 108, "text": "Stanovsky et al., 2019;", "ref_id": "BIBREF117" }, { "start": 109, "end": 127, "text": "Levy et al., 2021;", "ref_id": "BIBREF86" }, { "start": 128, "end": 148, "text": "Sharma et al., 2021)", "ref_id": "BIBREF111" }, { "start": 349, "end": 371, "text": "(Zmigrod et al., 2019)", "ref_id": "BIBREF134" } ], "ref_spans": [], "eq_spans": [], "section": "Transparency Through Documentation", "sec_num": "5.1" }, { "text": "Documenting the demographics Previous work has highlighted the importance of engaging with individuals on the receiving end of the bias (Bender et al., 2021). It thus becomes important to understand the demographics of those involved in the creation of the benchmarks. As previously shown (Al Kuwatly et al., 2020), there exists a relation between annotators' identities and the toxicity/bias in a dataset. On this basis, we urge researchers to collect and document the demographic information and annotator attitude scores. 
Building on this, we encourage the collection and reporting of this information about the researchers involved.", "cite_spans": [ { "start": 136, "end": 157, "text": "(Bender et al., 2021)", "ref_id": "BIBREF11" }, { "start": 290, "end": 315, "text": "(Al Kuwatly et al., 2020)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Transparency Through Documentation", "sec_num": "5.1" }, { "text": "The majority of previous work on bias has focused particularly on gender bias (Zhao et al., 2018; Stanovsky et al., 2019; Levy et al., 2021; Sharma et al., 2021), and the very few works (Nadeem et al., 2020; Nangia et al., 2020) that take other dimensions of bias into account have their own shortcomings, as discussed in Section 3. It thus becomes important to diversify the range of biases and stereotypes investigated by research and covered by a given evaluation technique. In extending coverage to more dimensions, context is an important aspect of bias. The contextual aspects of bias as represented in language, culture, and history play a significant role in forming and assessing the bias itself. 
Hence, as a practice, we encourage researchers to consider these three aspects when constructing bias measures and datasets.", "cite_spans": [ { "start": 78, "end": 97, "text": "(Zhao et al., 2018;", "ref_id": "BIBREF132" }, { "start": 98, "end": 121, "text": "Stanovsky et al., 2019;", "ref_id": "BIBREF117" }, { "start": 122, "end": 140, "text": "Levy et al., 2021;", "ref_id": "BIBREF86" }, { "start": 141, "end": 161, "text": "Sharma et al., 2021)", "ref_id": "BIBREF111" }, { "start": 185, "end": 206, "text": "(Nadeem et al., 2020;", "ref_id": "BIBREF98" }, { "start": 207, "end": 227, "text": "Nangia et al., 2020)", "ref_id": "BIBREF99" } ], "ref_spans": [], "eq_spans": [], "section": "Diversity Beyond Gender Bias", "sec_num": "5.2" }, { "text": "In discussing bias, it is important to note that discrimination does not occur in a vacuum. An act of discrimination against a person may be directed towards several intersecting identities. Considering bias using a single-axis framework makes it impossible to engage with and evaluate the harms extended to the social groups that lie at the intersection of multiple identities (Crenshaw, 1991). In an Indian context, for example, even those who identify as belonging to the \"same\" caste (Malik et al., 2021) can have varied lived experiences based on class, gender, and other identities. More precisely, it is impossible to disentangle which specific identity a discriminatory act is directed against. Previous works have highlighted the importance of studying intersectional bias (Bender et al., 2021; Buolamwini and Gebru, 2018; Field et al., 2021; Guo et al., 2019; Crenshaw, 1991), but little research has been conducted on addressing such biases (Magee et al., 2021; Guo and Caliskan, 2021). 
We thus encourage researchers to develop measures and benchmarks which are grounded in an intersectional understanding of bias and adequately address the lived experiences of various social groups, towards increased inclusivity and fairness.", "cite_spans": [ { "start": 377, "end": 393, "text": "(Crenshaw, 1991)", "ref_id": "BIBREF38" }, { "start": 488, "end": 508, "text": "(Malik et al., 2021)", "ref_id": "BIBREF92" }, { "start": 784, "end": 805, "text": "(Bender et al., 2021;", "ref_id": "BIBREF11" }, { "start": 806, "end": 833, "text": "Buolamwini and Gebru, 2018;", "ref_id": "BIBREF26" }, { "start": 834, "end": 853, "text": "Field et al., 2021;", "ref_id": "BIBREF53" }, { "start": 854, "end": 871, "text": "Guo et al., 2019;", "ref_id": "BIBREF64" }, { "start": 872, "end": 887, "text": "Crenshaw, 1991)", "ref_id": "BIBREF38" }, { "start": 957, "end": 977, "text": "(Magee et al., 2021;", "ref_id": "BIBREF91" }, { "start": 978, "end": 1001, "text": "Guo and Caliskan, 2021)", "ref_id": "BIBREF65" } ], "ref_spans": [], "eq_spans": [], "section": "Diversity Beyond Gender Bias", "sec_num": "5.2" }, { "text": "Not only can the dimensions and context influence our definitions and approaches to bias, but the categories (values) assigned to each dimension (e.g., age) can also limit our understanding of bias and our ability to address it. For instance, the majority of gender-bias evaluation datasets solely deal with binary gender, i.e., male and female, with just a handful covering non-binary genders with only minimal representation (Dev et al., 2021a; Cao and Daum\u00e9 III, 2020). As a result, category inclusiveness is critical in the development of a high-quality bias evaluation dataset. 
A set of categories that can act as a starting point is provided by Queer in AI in Section 3.2.", "cite_spans": [ { "start": 410, "end": 429, "text": "(Dev et al., 2021a;", "ref_id": "BIBREF44" }, { "start": 430, "end": 454, "text": "Cao and Daum\u00e9 III, 2020)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Diversity Beyond Gender Bias", "sec_num": "5.2" }, { "text": "Stereotype and bias formation is influenced by culture. As a result, what might be a stereotype in a given culture may not be relevant in another. For instance, the characterization that parental leave is for mothers is considered stereotypical in the United States, but not in Sweden, where parental leave is split between both parents (Gao et al., 2020). Table 1: The 25 largest pretrained dense language models, ranging from 6 billion parameters to 530 billion. Models are overwhelmingly trained by teams located in the US and on English text. Less than half of the language models were evaluated for bias by their creators.", "cite_spans": [ { "start": 343, "end": 361, "text": "(Gao et al., 2020;", "ref_id": "BIBREF57" } ], "ref_spans": [], "eq_spans": [], "section": "Acknowledging Differences", "sec_num": "5.3" }, { "text": "Previous sections have criticized the Anglo-centricity of NLP bias research and its influence on languages other than English. In particular, the lack of culturally aware datasets limits the degree to which future NLP algorithms can be evaluated for biases. More crucially, these unspecified languages and cultures are on the receiving end of unmanaged effects. As a result, researchers are encouraged to develop bias datasets and benchmarks for non-Anglo-centric cultures and languages (Bender et al., 2021). Involving experts in related areas, especially participants with lived experiences of language-related harms, might aid decisions at all parts of this process, e.g. 
deciding what groups and content to include in research or dataset design (Liao and Muller, 2019; Dev et al., 2021a; McMillan-Major et al., 2022). Overall, having culturally diverse and comparable datasets for a diverse set of languages (ideally covering all languages) is critical for evaluating multilingual models. Moreover, the applicability of bias measures across various languages suggests the necessity of cross-linguistic metrics or measurements that can be extended to different languages or cultures (Zhou et al., 2019; Escud\u00e9 Font and Costa-juss\u00e0, 2019; Malik et al., 2021).", "cite_spans": [ { "start": 757, "end": 780, "text": "(Liao and Muller, 2019;", "ref_id": "BIBREF88" }, { "start": 781, "end": 799, "text": "Dev et al., 2021a;", "ref_id": "BIBREF44" }, { "start": 800, "end": 828, "text": "McMillan-Major et al., 2022)", "ref_id": null }, { "start": 1196, "end": 1215, "text": "(Zhou et al., 2019;", "ref_id": "BIBREF133" }, { "start": 1216, "end": 1250, "text": "Escud\u00e9 Font and Costa-juss\u00e0, 2019;", "ref_id": "BIBREF52" }, { "start": 1251, "end": 1270, "text": "Malik et al., 2021)", "ref_id": "BIBREF92" } ], "ref_spans": [], "eq_spans": [], "section": "Acknowledging Differences", "sec_num": "5.3" }, { "text": "Recent improvements in the ability of LLMs to mimic human text have led to a surge in research that seeks to identify and address the harms arising from their training and deployment. However, consideration of the social harms that arise has been limited to narrow, Anglo-centric, contradictory, and often underspecified definitions of fairness and bias. Furthermore, the development of contemporary methods has conflated task-specific and architecture-specific designations. 
Compounded with the structural inequalities around resources, language, and identity, this has yielded an overreliance on prestige forms of English for developing LLMs and for interrogating and addressing the social biases that they harbor. Situating these methods within such Englishes has had the consequence of over-emphasizing Western-centric social categories. Moreover, datasets for evaluating social biases in LLMs have traditionally failed to denote and specify the context within which biases are situated. Such concerns have raised questions about the validity of the developed measures, particularly for multilingual LLMs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "To address such challenges, we propose that developing methods for multilingual LLMs requires researchers to provide thorough documentation of their approaches, including documenting the scope, demographics of speakers, and potential annotators. Additionally, we recommend that researchers situate their bias evaluation methods within the specific context of the languages that the model operates on. In doing so, bias evaluation methods can be made to specifically address biases under the conditions and contexts in which they occur in each of the model's languages. Furthermore, we recommend that researchers examine diversity issues beyond gender bias, with a particular focus on intersectional issues (Guo and Caliskan, 2021).", "cite_spans": [ { "start": 707, "end": 731, "text": "(Guo and Caliskan, 2021)", "ref_id": "BIBREF65" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Finally, we recommend that researchers be cognizant of the social and environmental harms that developing LLMs entails. For instance, developing ever-larger language models that achieve marginal improvements for English may bring a smaller benefit than developing an LLM for other languages. 
Thus, when considering the development of a new language model, we implore researchers to consider ways in which harms can be limited, or in which the benefits can come to outweigh the costs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Grounding Bias, Fairness and Social Impact across Disciplines. Considering biases in socio-technical systems as a purely technical construct is an insufficient consideration of the problem (Blodgett et al., 2020). In this section, we situate LLMs, and their applications, within the wider interdisciplinary literature on social harms and discrimination.1 For example, the BigScience biomedical working group has estimated that 82% of evaluation datasets in the biomedical and clinical field are for corpora in English (Datta et al., 2021).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Many recent works on socially biased technological systems are interdisciplinary, e.g., 'Race After Technology: The New Jim Code' (Benjamin, 2019) spans critical race theory, science and technology, Black feminism, and media studies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The lower end of this range reflects the common practice of giving discounts of up to 50% for large purchases, while", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Micah Rajunov and Scott Duane. 2019. Nonbinary: Memoirs of Gender and Identity. Columbia University Press.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Contributors are listed alphabetically, except for Zeerak Talat and Aur\u00e9lie N\u00e9v\u00e9ol, who managed paper writing and chaired the working group, respectively. 
All authors contributed to the conceptualization and writing of the document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C Determination of Author Order", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Persistent Anti-Muslim Bias in Large Language Models", "authors": [ { "first": "Abubakar", "middle": [], "last": "Abid", "suffix": "" }, { "first": "Maheen", "middle": [], "last": "Farooqi", "suffix": "" }, { "first": "James", "middle": [], "last": "Zou", "suffix": "" } ], "year": 2021, "venue": "AIES 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1145/3461702.3462624" ] }, "num": null, "urls": [], "raw_text": "Abubakar Abid, Maheen Farooqi, and James Zou. 2021. Persistent Anti-Muslim Bias in Large Language Models. In AIES 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Uncertain Archives: Approaching the Unknowns, Errors, and Vulnerabilities of Big Data through Cultural Theories of the Archive", "authors": [ { "first": "Daniela", "middle": [], "last": "Agostinho", "suffix": "" }, { "first": "Catherine", "middle": [ "D" ], "last": "Ignazio", "suffix": "" }, { "first": "Annie", "middle": [], "last": "Ring", "suffix": "" } ], "year": 2019, "venue": "Surveillance & Society", "volume": "17", "issue": "3/4", "pages": "422--441", "other_ids": { "DOI": [ "10.24908/ss.v17i3/4.12330" ] }, "num": null, "urls": [], "raw_text": "Daniela Agostinho, Catherine D'Ignazio, Annie Ring, Nanna Bonde Thylstrup, and Kristin Veel. 2019. Uncertain Archives: Approaching the Unknowns, Errors, and Vulnerabilities of Big Data through Cultural Theories of the Archive. 
Surveillance & Society, 17(3/4):422-441.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Do algorithms reveal sexual orientation or just expose our stereotypes?", "authors": [ { "first": "Alexander", "middle": [], "last": "Blaise Aguera Y Arcas", "suffix": "" }, { "first": "Margaret", "middle": [], "last": "Todorov", "suffix": "" }, { "first": "", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Blaise Aguera y Arcas, Alexander Todorov, and Mar- garet Mitchell. 2018. Do algorithms reveal sexual orientation or just expose our stereotypes?", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Identifying and measuring annotator bias based on annotators' demographic characteristics", "authors": [ { "first": "Maximilian", "middle": [], "last": "Hala Al Kuwatly", "suffix": "" }, { "first": "Georg", "middle": [], "last": "Wich", "suffix": "" }, { "first": "", "middle": [], "last": "Groh", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fourth Workshop on Online Abuse and Harms", "volume": "", "issue": "", "pages": "184--190", "other_ids": { "DOI": [ "10.18653/v1/2020.alw-1.21" ] }, "num": null, "urls": [], "raw_text": "Hala Al Kuwatly, Maximilian Wich, and Georg Groh. 2020. Identifying and measuring annotator bias based on annotators' demographic characteristics. In Pro- ceedings of the Fourth Workshop on Online Abuse and Harms, pages 184-190, Online. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "How can we ensure visibility and diversity in research contributions? 
How the Contributor Role Taxonomy (CRediT) is helping the shift from authorship to contributorship", "authors": [ { "first": "Liz", "middle": [], "last": "Allen", "suffix": "" }, { "first": "Alison O'", "middle": [], "last": "Connell", "suffix": "" }, { "first": "Veronique", "middle": [], "last": "Kiermer", "suffix": "" } ], "year": 2019, "venue": "Learned Publishing", "volume": "32", "issue": "1", "pages": "71--74", "other_ids": { "DOI": [ "10.1002/leap.1210" ] }, "num": null, "urls": [], "raw_text": "Liz Allen, Alison O'Connell, and Veronique Kiermer. 2019. How can we ensure visibility and diversity in research contributions? How the Contributor Role Taxonomy (CRediT) is helping the shift from au- thorship to contributorship. Learned Publishing, 32(1):71-74.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Research Supporting Responsible AI", "authors": [ { "first": "Saleema", "middle": [], "last": "Amershi", "suffix": "" }, { "first": "Ece", "middle": [], "last": "Kamar", "suffix": "" }, { "first": "Kristin", "middle": [], "last": "Lauter", "suffix": "" }, { "first": "Jenn", "middle": [ "Wortman" ], "last": "Vaughan", "suffix": "" }, { "first": "Hanna", "middle": [], "last": "Wallach", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saleema Amershi, Ece Kamar, Kristin Lauter, Jenn Wortman Vaughan, and Hanna Wallach. 2020. Re- search Supporting Responsible AI.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The problem with bias: from allocative to representational harms in machine learning. 
special interest group for computing", "authors": [ { "first": "Solon", "middle": [], "last": "Barocas", "suffix": "" }, { "first": "Kate", "middle": [], "last": "Crawford", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Shapiro", "suffix": "" }, { "first": "Hanna", "middle": [], "last": "Wallach", "suffix": "" } ], "year": 2017, "venue": "Information and Society (SIGCIS)", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Solon Barocas, Kate Crawford, Aaron Shapiro, and Hanna Wallach. 2017. The problem with bias: from allocative to representational harms in machine learn- ing. special interest group for computing. Informa- tion and Society (SIGCIS), 2.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Big Data's Disparate Impact", "authors": [ { "first": "Solon", "middle": [], "last": "Barocas", "suffix": "" }, { "first": "Andrew", "middle": [ "D" ], "last": "Selbst", "suffix": "" } ], "year": 2016, "venue": "California Law Review", "volume": "104", "issue": "3", "pages": "", "other_ids": { "DOI": [ "10.2139/ssrn.2477899" ] }, "num": null, "urls": [], "raw_text": "Solon Barocas and Andrew D. Selbst. 2016. Big Data's Disparate Impact. California Law Review, 104(3).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The #BenderRule: On Naming the Languages We Study and Why It Matters", "authors": [ { "first": "Emily", "middle": [], "last": "Bender", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emily Bender. 2019. 
The #BenderRule: On Naming the Languages We Study and Why It Matters.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science", "authors": [ { "first": "Emily", "middle": [ "M" ], "last": "Bender", "suffix": "" }, { "first": "Batya", "middle": [], "last": "Friedman", "suffix": "" } ], "year": 2018, "venue": "Transactions of the Association for Computational Linguistics", "volume": "6", "issue": "", "pages": "587--604", "other_ids": { "DOI": [ "10.1162/tacl_a_00041" ] }, "num": null, "urls": [], "raw_text": "Emily M. Bender and Batya Friedman. 2018. Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science. Transactions of the Association for Computational Linguistics, 6:587-604.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "On the dangers of stochastic parrots: Can language models be too big", "authors": [ { "first": "Emily", "middle": [ "M" ], "last": "Bender", "suffix": "" }, { "first": "Timnit", "middle": [], "last": "Gebru", "suffix": "" }, { "first": "Angelina", "middle": [], "last": "Mcmillan-Major", "suffix": "" }, { "first": "Shmargaret", "middle": [], "last": "Shmitchell", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21", "volume": "", "issue": "", "pages": "610--623", "other_ids": { "DOI": [ "10.1145/3442188.3445922" ] }, "num": null, "urls": [], "raw_text": "Emily M. Bender, Timnit Gebru, Angelina McMillan- Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? . In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Trans- parency, FAccT '21, page 610-623, New York, NY, USA. 
Association for Computing Machinery.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data", "authors": [ { "first": "Emily", "middle": [ "M" ], "last": "Bender", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Koller", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.463" ] }, "num": null, "urls": [], "raw_text": "Emily M. Bender and Alexander Koller. 2020. Climbing towards NLU: On Meaning, Form, and Understand- ing in the Age of Data. In Proceedings of the 58th", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "5185--5198", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 5185-5198, Online. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Race after technology: abolitionist tools for the new Jim code", "authors": [ { "first": "Ruha", "middle": [], "last": "Benjamin", "suffix": "" } ], "year": 2019, "venue": "Polity", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ruha Benjamin. 2019. Race after technology: aboli- tionist tools for the new Jim code. 
Polity, Medford, MA.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Datasheet for the Pile", "authors": [ { "first": "Stella", "middle": [], "last": "Biderman", "suffix": "" }, { "first": "Kieran", "middle": [], "last": "Bicheno", "suffix": "" }, { "first": "Leo", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2022, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2201.07311" ] }, "num": null, "urls": [], "raw_text": "Stella Biderman, Kieran Bicheno, and Leo Gao. 2022. Datasheet for the Pile. arXiv:2201.07311 [cs].", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Pitfalls in machine learning research: Reexamining the development cycle. In \"I Can't Believe It's Not Better!", "authors": [ { "first": "Stella", "middle": [], "last": "Biderman", "suffix": "" }, { "first": "Walter", "middle": [], "last": "Scheirer", "suffix": "" } ], "year": 2020, "venue": "NeurIPS", "volume": "2020", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stella Biderman and Walter Scheirer. 2020. Pitfalls in machine learning research: Reexamining the devel- opment cycle. 
In \"I Can't Believe It's Not Better!\" NeurIPS 2020 workshop.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "The Values Encoded in Machine Learning Research", "authors": [ { "first": "Abeba", "middle": [], "last": "Birhane", "suffix": "" }, { "first": "Pratyusha", "middle": [], "last": "Kalluri", "suffix": "" }, { "first": "Dallas", "middle": [], "last": "Card", "suffix": "" }, { "first": "William", "middle": [], "last": "Agnew", "suffix": "" }, { "first": "Ravit", "middle": [], "last": "Dotan", "suffix": "" }, { "first": "Michelle", "middle": [], "last": "Bao", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2106.15590[cs].ArXiv:2106.15590" ] }, "num": null, "urls": [], "raw_text": "Abeba Birhane, Pratyusha Kalluri, Dallas Card, William Agnew, Ravit Dotan, and Michelle Bao. 2021. The Values Encoded in Machine Learning Research. arXiv:2106.15590 [cs]. ArXiv: 2106.15590.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "GPT-NeoX-20B: An Open-Source Autoregressive Language Model", "authors": [ { "first": "Sid", "middle": [], "last": "Black", "suffix": "" }, { "first": "Stella", "middle": [], "last": "Biderman", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Hallahan", "suffix": "" }, { "first": "Quentin", "middle": [], "last": "Anthony", "suffix": "" }, { "first": "Leo", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Laurence", "middle": [], "last": "Golding", "suffix": "" }, { "first": "Horace", "middle": [], "last": "He", "suffix": "" } ], "year": 2022, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sid Black, Stella Biderman, Eric Hallahan, Quentin An- thony, Leo Gao, Laurence Golding, Horace He, Con- nor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, and Samuel Weinbach. 2022. 
GPT-NeoX-20B: An Open- Source Autoregressive Language Model.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Language (Technology) is Power: A Critical Survey of \"Bias\" in NLP", "authors": [ { "first": "", "middle": [], "last": "Su Lin", "suffix": "" }, { "first": "Solon", "middle": [], "last": "Blodgett", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Barocas", "suffix": "" }, { "first": "Iii", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Hanna", "middle": [], "last": "Wallach", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5454--5476", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.485" ] }, "num": null, "urls": [], "raw_text": "Su Lin Blodgett, Solon Barocas, Hal Daum\u00e9 III, and Hanna Wallach. 2020. Language (Technology) is Power: A Critical Survey of \"Bias\" in NLP. In Pro- ceedings of the 58th Annual Meeting of the Associa- tion for Computational Linguistics, pages 5454-5476, Online. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Stereotyping Norwegian Salmon: An Inventory of Pitfalls in Fairness Benchmark Datasets", "authors": [ { "first": "", "middle": [], "last": "Su Lin", "suffix": "" }, { "first": "Gilsinia", "middle": [], "last": "Blodgett", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Lopez", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Olteanu", "suffix": "" }, { "first": "Hanna", "middle": [], "last": "Sim", "suffix": "" }, { "first": "", "middle": [], "last": "Wallach", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/2021.acl-long.81" ] }, "num": null, "urls": [], "raw_text": "Su Lin Blodgett, Gilsinia Lopez, Alexandra Olteanu, Robert Sim, and Hanna Wallach. 2021. 
Stereotyping Norwegian Salmon: An Inventory of Pitfalls in Fair- ness Benchmark Datasets. In Proceedings of the 59th", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", "authors": [], "year": null, "venue": "", "volume": "1", "issue": "", "pages": "1004--1015", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Confer- ence on Natural Language Processing (Volume 1: Long Papers), pages 1004-1015, Online. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Twitter Universal Dependency parsing for African-American and mainstream American English", "authors": [ { "first": "", "middle": [], "last": "Su Lin", "suffix": "" }, { "first": "Johnny", "middle": [], "last": "Blodgett", "suffix": "" }, { "first": "Brendan O'", "middle": [], "last": "Wei", "suffix": "" }, { "first": "", "middle": [], "last": "Connor", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1415--1425", "other_ids": { "DOI": [ "10.18653/v1/P18-1131" ] }, "num": null, "urls": [], "raw_text": "Su Lin Blodgett, Johnny Wei, and Brendan O'Connor. 2018. Twitter Universal Dependency parsing for African-American and mainstream American English. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1415-1425, Melbourne, Aus- tralia. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Man is to computer programmer as woman is to homemaker? 
debiasing word embeddings", "authors": [ { "first": "Tolga", "middle": [], "last": "Bolukbasi", "suffix": "" }, { "first": "Kai-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Y", "middle": [], "last": "James", "suffix": "" }, { "first": "Venkatesh", "middle": [], "last": "Zou", "suffix": "" }, { "first": "Adam", "middle": [ "T" ], "last": "Saligrama", "suffix": "" }, { "first": "", "middle": [], "last": "Kalai", "suffix": "" } ], "year": 2016, "venue": "Advances in Neural Information Processing Systems", "volume": "29", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to home- maker? debiasing word embeddings. In Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Identifying and Reducing Gender Bias in Word-Level Language Models", "authors": [ { "first": "Shikha", "middle": [], "last": "Bordia", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North", "volume": "", "issue": "", "pages": "7--15", "other_ids": { "DOI": [ "10.18653/v1/N19-3002" ] }, "num": null, "urls": [], "raw_text": "Shikha Bordia and Samuel R. Bowman. 2019. Iden- tifying and Reducing Gender Bias in Word-Level Language Models. In Proceedings of the 2019 Con- ference of the North, pages 7-15, Minneapolis, Min- nesota. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Ilya Sutskever, and Dario Amodei. 2020. 
Language Models are Few-Shot Learners", "authors": [ { "first": "Tom", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Mann", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Ryder", "suffix": "" }, { "first": "Melanie", "middle": [], "last": "Subbiah", "suffix": "" }, { "first": "Jared", "middle": [ "D" ], "last": "Kaplan", "suffix": "" }, { "first": "Prafulla", "middle": [], "last": "Dhariwal", "suffix": "" }, { "first": "Arvind", "middle": [], "last": "Neelakantan", "suffix": "" }, { "first": "Pranav", "middle": [], "last": "Shyam", "suffix": "" }, { "first": "Girish", "middle": [], "last": "Sastry", "suffix": "" }, { "first": "Amanda", "middle": [], "last": "Askell", "suffix": "" }, { "first": "Sandhini", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Ariel", "middle": [], "last": "Herbert-Voss", "suffix": "" }, { "first": "Gretchen", "middle": [], "last": "Krueger", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Henighan", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Ramesh", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Ziegler", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Clemens", "middle": [], "last": "Winter", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Hesse", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Sigler", "suffix": "" }, { "first": "Mateusz", "middle": [], "last": "Litwin", "suffix": "" } ], "year": null, "venue": "", "volume": "33", "issue": "", "pages": "1877--1901", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. 
Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Ma- teusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Rad- ford, Ilya Sutskever, and Dario Amodei. 2020. Lan- guage Models are Few-Shot Learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification", "authors": [ { "first": "Joy", "middle": [], "last": "Buolamwini", "suffix": "" }, { "first": "Timnit", "middle": [], "last": "Gebru", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 1st Conference on Fairness, Accountability and Transparency", "volume": "81", "issue": "", "pages": "77--91", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joy Buolamwini and Timnit Gebru. 2018. Gender Shades: Intersectional Accuracy Disparities in Com- mercial Gender Classification. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency, volume 81 of Proceedings of Machine Learning Research, pages 77-91, New York, NY, USA. PMLR.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Semantics derived automatically from language corpora contain human-like biases", "authors": [ { "first": "Aylin", "middle": [], "last": "Caliskan", "suffix": "" }, { "first": "Joanna", "middle": [ "J" ], "last": "Bryson", "suffix": "" }, { "first": "Arvind", "middle": [], "last": "Narayanan", "suffix": "" } ], "year": 2017, "venue": "Science", "volume": "", "issue": "6334", "pages": "", "other_ids": { "DOI": [ "10.1126/science.aal4230" ] }, "num": null, "urls": [], "raw_text": "Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. 
Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334).", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "AI now 2017 report", "authors": [ { "first": "Alex", "middle": [], "last": "Campolo", "suffix": "" }, { "first": "Madelyn", "middle": [], "last": "Sanfilippo", "suffix": "" }, { "first": "Meredith", "middle": [], "last": "Whittaker", "suffix": "" }, { "first": "Kate", "middle": [], "last": "Crawford", "suffix": "" } ], "year": 2018, "venue": "AI now 2017 symposium and workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Campolo, Madelyn Sanfilippo, Meredith Whit- taker, and Kate Crawford. 2018. AI now 2017 report. In AI now 2017 symposium and workshop. AI Now Institute at New York University. Edition: AI Now 2017 Symposium and Workshop.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Toward Gender-Inclusive Coreference Resolution", "authors": [ { "first": "Yang", "middle": [], "last": "", "suffix": "" }, { "first": "Trista", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4568--4595", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.418" ] }, "num": null, "urls": [], "raw_text": "Yang Trista Cao and Hal Daum\u00e9 III. 2020. Toward Gender-Inclusive Coreference Resolution. In Pro- ceedings of the 58th Annual Meeting of the Associa- tion for Computational Linguistics, pages 4568-4595, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Quantifying memorization across neural language models", "authors": [ { "first": "Nicholas", "middle": [], "last": "Carlini", "suffix": "" }, { "first": "Daphne", "middle": [], "last": "Ippolito", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Jagielski", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Florian", "middle": [], "last": "Tramer", "suffix": "" }, { "first": "Chiyuan", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2022, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2202.07646" ] }, "num": null, "urls": [], "raw_text": "Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan Zhang. 2022. Quantifying memorization across neural lan- guage models. arXiv preprint arXiv:2202.07646.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "The Limits of Global Inclusion in AI Development", "authors": [ { "first": "Alan", "middle": [], "last": "Chan", "suffix": "" }, { "first": "Chinasa", "middle": [ "T" ], "last": "Okolo", "suffix": "" }, { "first": "Zachary", "middle": [], "last": "Terner", "suffix": "" }, { "first": "Angelina", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2102.01265[cs].ArXiv:2102.01265" ] }, "num": null, "urls": [], "raw_text": "Alan Chan, Chinasa T. Okolo, Zachary Terner, and An- gelina Wang. 2021. The Limits of Global Inclusion in AI Development. arXiv:2102.01265 [cs]. ArXiv: 2102.01265.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Safwan Hossain, and Frank Rudzicz. 2020. 
Exploring text specific and blackbox fairness algorithms in multimodal clinical NLP", "authors": [ { "first": "John", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Berlot-Attwell", "suffix": "" }, { "first": "Xindi", "middle": [], "last": "Wang", "suffix": "" } ], "year": null, "venue": "Proceedings of the 3rd Clinical Natural Language Processing Workshop", "volume": "", "issue": "", "pages": "301--312", "other_ids": { "DOI": [ "10.18653/v1/2020.clinicalnlp-1.33" ] }, "num": null, "urls": [], "raw_text": "John Chen, Ian Berlot-Attwell, Xindi Wang, Safwan Hossain, and Frank Rudzicz. 2020. Exploring text specific and blackbox fairness algorithms in multi- modal clinical NLP. In Proceedings of the 3rd Clini- cal Natural Language Processing Workshop, pages 301-312, Online. Association for Computational Lin- guistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Gender Bias and Under-Representation in Natural Language Processing Across Human Languages", "authors": [ { "first": "Yan", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Mahoney", "suffix": "" }, { "first": "Isabella", "middle": [], "last": "Grasso", "suffix": "" }, { "first": "Esma", "middle": [], "last": "Wali", "suffix": "" }, { "first": "Abigail", "middle": [], "last": "Matthews", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Middleton", "suffix": "" }, { "first": "Mariama", "middle": [], "last": "Njie", "suffix": "" }, { "first": "Jeanna", "middle": [], "last": "Matthews", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 AAAI/ACM Conference on AI", "volume": "", "issue": "", "pages": "24--34", "other_ids": { "DOI": [ "10.1145/3461702.3462530" ] }, "num": null, "urls": [], "raw_text": "Yan Chen, Christopher Mahoney, Isabella Grasso, Esma Wali, Abigail Matthews, Thomas Middleton, Mariama Njie, and Jeanna Matthews. 2021. 
Gender Bias and Under-Representation in Natural Language Processing Across Human Languages. In Proceed- ings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pages 24-34, Virtual Event USA. ACM.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Causal Learning for Socially Responsible AI", "authors": [ { "first": "Lu", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Ahmadreza", "middle": [], "last": "Mosallanezhad", "suffix": "" }, { "first": "Paras", "middle": [], "last": "Sheth", "suffix": "" }, { "first": "Huan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.24963/ijcai.2021/598" ] }, "num": null, "urls": [], "raw_text": "Lu Cheng, Ahmadreza Mosallanezhad, Paras Sheth, and Huan Liu. 2021. Causal Learning for Socially Re- sponsible AI. In Proceedings of the Thirtieth Interna- tional Joint Conference on Artificial Intelligence.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "An untitled review of \"style and sociolinguistic variation", "authors": [ { "first": "Jenny", "middle": [], "last": "Cheshire", "suffix": "" } ], "year": 2007, "venue": "Language", "volume": "83", "issue": "2", "pages": "432--435", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jenny Cheshire. 2007. An untitled review of \"style and sociolinguistic variation\". Language, 83(2):432-435.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Discriminating data: correlation, neighborhoods, and the new politics of recognition", "authors": [ { "first": "Wendy", "middle": [], "last": "Hui", "suffix": "" }, { "first": "Kyong", "middle": [], "last": "Chun", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wendy Hui Kyong Chun. 2021. 
Discriminating data: correlation, neighborhoods, and the new politics of recognition. The MIT Press, Cambridge, Mas- sachusetts.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Design Justice, A.I., and Escape from the Matrix of Domination", "authors": [ { "first": "Sasha", "middle": [], "last": "Costanza-Chock", "suffix": "" } ], "year": 2018, "venue": "Journal of Design and Science", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.21428/96c8d426" ] }, "num": null, "urls": [], "raw_text": "Sasha Costanza-Chock. 2018. Design Justice, A.I., and Escape from the Matrix of Domination. Journal of Design and Science.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Mapping the margins: Intersectionality, identity politics, and violence against women of color", "authors": [ { "first": "Kimberle", "middle": [], "last": "Crenshaw", "suffix": "" } ], "year": 1991, "venue": "Stanford Law Review", "volume": "43", "issue": "6", "pages": "1241--1299", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kimberle Crenshaw. 1991. Mapping the margins: In- tersectionality, identity politics, and violence against women of color. Stanford Law Review, 43(6):1241- 1299.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Quantifying Social Biases in NLP: A Generalization and Empirical Comparison of Extrinsic Fairness Metrics", "authors": [ { "first": "Paula", "middle": [], "last": "Czarnowska", "suffix": "" }, { "first": "Yogarshi", "middle": [], "last": "Vyas", "suffix": "" }, { "first": "Kashif", "middle": [], "last": "Shah", "suffix": "" } ], "year": 2021, "venue": "Transactions of the Association for Computational Linguistics", "volume": "9", "issue": "", "pages": "1249--1267", "other_ids": { "DOI": [ "10.1162/tacl_a_00425" ] }, "num": null, "urls": [], "raw_text": "Paula Czarnowska, Yogarshi Vyas, and Kashif Shah. 2021. Quantifying Social Biases in NLP: A Gen- eralization and Empirical Comparison of Extrinsic Fairness Metrics. 
Transactions of the Association for Computational Linguistics, 9:1249-1267.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Challenges in language modelling for biomedicine", "authors": [ { "first": "Debajyoti", "middle": [], "last": "Datta", "suffix": "" }, { "first": "Jason", "middle": [ "A" ], "last": "Fries", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Mckenna", "suffix": "" }, { "first": "Aur\u00e9lie", "middle": [], "last": "N\u00e9v\u00e9ol", "suffix": "" }, { "first": "Vassilina", "middle": [], "last": "Nikoulina", "suffix": "" }, { "first": "Maya", "middle": [], "last": "Varma", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Debajyoti Datta, Jason A. Fries, Michael McKenna, Aur\u00e9lie N\u00e9v\u00e9ol, Vassilina Nikoulina, and Maya Varma. 2021. Challenges in language modelling for biomedicine.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting", "authors": [ { "first": "Maria", "middle": [], "last": "De-Arteaga", "suffix": "" }, { "first": "Alexey", "middle": [], "last": "Romanov", "suffix": "" }, { "first": "Hanna", "middle": [], "last": "Wallach", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Chayes", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Borgs", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Chouldechova", "suffix": "" }, { "first": "Sahin", "middle": [], "last": "Geyik", "suffix": "" }, { "first": "Krishnaram", "middle": [], "last": "Kenthapadi", "suffix": "" }, { "first": "Adam Tauman", "middle": [], "last": "Kalai", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Conference on Fairness, Accountability, and Transparency", "volume": "", "issue": "", "pages": "120--128", "other_ids": { "DOI": [ "10.1145/3287560.3287572" ] }, "num": null, "urls": [], "raw_text": "Maria De-Arteaga, Alexey Romanov, Hanna Wal- lach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, and Adam Tauman 
Kalai. 2019. Bias in Bios: A Case Study of Semantic Representation Bias in a High- Stakes Setting. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pages 120-128, Atlanta GA USA. ACM.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Stereotype and Skew: Quantifying Gender Bias in Pre-trained and Fine-tuned Language Models", "authors": [ { "first": "Daniel", "middle": [], "last": "de Vassimon Manela", "suffix": "" }, { "first": "David", "middle": [], "last": "Errington", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Fisher", "suffix": "" }, { "first": "Boris", "middle": [], "last": "van Breugel", "suffix": "" }, { "first": "Pasquale", "middle": [], "last": "Minervini", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume", "volume": "", "issue": "", "pages": "2232--2242", "other_ids": { "DOI": [ "10.18653/v1/2021.eacl-main.190" ] }, "num": null, "urls": [], "raw_text": "Daniel de Vassimon Manela, David Errington, Thomas Fisher, Boris van Breugel, and Pasquale Minervini. 2021. Stereotype and Skew: Quantifying Gender Bias in Pre-trained and Fine-tuned Language Models. In Proceedings of the 16th Conference of the Euro- pean Chapter of the Association for Computational Linguistics: Main Volume, pages 2232-2242, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "On measuring and mitigating biased inferences of word embeddings", "authors": [ { "first": "Sunipa", "middle": [], "last": "Dev", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jeff", "middle": [ "M" ], "last": "Phillips", "suffix": "" }, { "first": "Vivek", "middle": [], "last": "", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "34", "issue": "", "pages": "7659--7666", "other_ids": { "DOI": [ "10.1609/aaai.v34i05.6267" ] }, "num": null, "urls": [], "raw_text": "Sunipa Dev, Tao Li, Jeff M. Phillips, and Vivek Sriku- mar. 2020. On measuring and mitigating biased infer- ences of word embeddings. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):7659- 7666.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Harms of gender exclusivity and challenges in non-binary representation in language technologies", "authors": [ { "first": "Sunipa", "middle": [], "last": "Dev", "suffix": "" }, { "first": "Masoud", "middle": [], "last": "Monajatipoor", "suffix": "" }, { "first": "Anaelia", "middle": [], "last": "Ovalle", "suffix": "" }, { "first": "Arjun", "middle": [], "last": "Subramonian", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Phillips", "suffix": "" }, { "first": "Kai-Wei", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1968--1994", "other_ids": { "DOI": [ "10.18653/v1/2021.emnlp-main.150" ] }, "num": null, "urls": [], "raw_text": "Sunipa Dev, Masoud Monajatipoor, Anaelia Ovalle, Ar- jun Subramonian, Jeff Phillips, and Kai-Wei Chang. 2021a. Harms of gender exclusivity and challenges in non-binary representation in language technologies. 
In Proceedings of the 2021 Conference on Empiri- cal Methods in Natural Language Processing, pages 1968-1994, Online and Punta Cana, Dominican Re- public. Association for Computational Linguistics.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "What do bias measures measure?", "authors": [ { "first": "Sunipa", "middle": [], "last": "Dev", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Sheng", "suffix": "" }, { "first": "Jieyu", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Jiao", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Hou", "suffix": "" }, { "first": "Mattie", "middle": [], "last": "Sanseverino", "suffix": "" }, { "first": "Jiin", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Nanyun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Kai-Wei", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sunipa Dev, Emily Sheng, Jieyu Zhao, Jiao Sun, Yu Hou, Mattie Sanseverino, Jiin Kim, Nanyun Peng, and Kai-Wei Chang. 2021b. 
What do bias measures measure?", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation", "authors": [ { "first": "Jwala", "middle": [], "last": "Dhamala", "suffix": "" }, { "first": "Tony", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Varun", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Satyapriya", "middle": [], "last": "Krishna", "suffix": "" }, { "first": "Yada", "middle": [], "last": "Pruksachatkun", "suffix": "" }, { "first": "Kai-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Rahul", "middle": [], "last": "Gupta", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency", "volume": "", "issue": "", "pages": "862--872", "other_ids": { "DOI": [ "10.1145/3442188.3445924" ] }, "num": null, "urls": [], "raw_text": "Jwala Dhamala, Tony Sun, Varun Kumar, Satyapriya Krishna, Yada Pruksachatkun, Kai-Wei Chang, and Rahul Gupta. 2021. BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Gener- ation. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 862-872, Virtual Event Canada. ACM.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Data feminism", "authors": [ { "first": "Catherine", "middle": [], "last": "D'Ignazio", "suffix": "" }, { "first": "Lauren", "middle": [ "F" ], "last": "Klein", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Catherine D'Ignazio and Lauren F. Klein. 2020. Data feminism. Strong ideas series. 
The MIT Press, Cam- bridge, Massachusetts.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Responsible artificial intelligence: Designing ai for human values", "authors": [ { "first": "Virginia", "middle": [], "last": "Dignum", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Virginia Dignum. 2017. Responsible artificial intelli- gence: Designing ai for human values. ICT Discover- ies.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Documenting large webtext corpora: A case study on the colossal clean crawled corpus", "authors": [ { "first": "Jesse", "middle": [], "last": "Dodge", "suffix": "" }, { "first": "Maarten", "middle": [], "last": "Sap", "suffix": "" }, { "first": "Ana", "middle": [], "last": "Marasovi\u0107", "suffix": "" }, { "first": "William", "middle": [], "last": "Agnew", "suffix": "" }, { "first": "Gabriel", "middle": [], "last": "Ilharco", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Groeneveld", "suffix": "" }, { "first": "Margaret", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1286--1305", "other_ids": { "DOI": [ "10.18653/v1/2021.emnlp-main.98" ] }, "num": null, "urls": [], "raw_text": "Jesse Dodge, Maarten Sap, Ana Marasovi\u0107, William Agnew, Gabriel Ilharco, Dirk Groeneveld, Margaret Mitchell, and Matt Gardner. 2021. Documenting large webtext corpora: A case study on the colos- sal clean crawled corpus. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1286-1305, Online and Punta Cana, Dominican Republic. 
Association for Computational Linguistics.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Purity and danger: an analysis of the concepts of pollution and taboo", "authors": [ { "first": "Mary", "middle": [], "last": "Douglas", "suffix": "" } ], "year": 1978, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mary Douglas. 1978. Purity and danger: an analysis of the concepts of pollution and taboo, repr edition. Routledge, London. OCLC: 248038797.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Mapping languages: the Corpus of Global Language Use", "authors": [ { "first": "Jonathan", "middle": [], "last": "Dunn", "suffix": "" } ], "year": 2020, "venue": "Language Resources and Evaluation", "volume": "54", "issue": "4", "pages": "999--1018", "other_ids": { "DOI": [ "10.1007/s10579-020-09489-2" ] }, "num": null, "urls": [], "raw_text": "Jonathan Dunn. 2020. Mapping languages: the Corpus of Global Language Use. Language Resources and Evaluation, 54(4):999-1018.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Equalizing gender bias in neural machine translation with word embeddings techniques", "authors": [ { "first": "Joel", "middle": [ "Escud\u00e9" ], "last": "Font", "suffix": "" }, { "first": "Marta", "middle": [ "R" ], "last": "Costa-Juss\u00e0", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the First Workshop on Gender Bias in Natural Language Processing", "volume": "", "issue": "", "pages": "147--154", "other_ids": { "DOI": [ "10.18653/v1/W19-3821" ] }, "num": null, "urls": [], "raw_text": "Joel Escud\u00e9 Font and Marta R. Costa-juss\u00e0. 2019. Equalizing gender bias in neural machine translation with word embeddings techniques. In Proceedings of the First Workshop on Gender Bias in Natural Lan- guage Processing, pages 147-154, Florence, Italy. 
Association for Computational Linguistics.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "A Survey of Race, Racism, and Anti-Racism in NLP", "authors": [ { "first": "Anjalie", "middle": [], "last": "Field", "suffix": "" }, { "first": "Su", "middle": [ "Lin" ], "last": "Blodgett", "suffix": "" }, { "first": "Zeerak", "middle": [], "last": "Waseem", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/2021.acl-long.149" ] }, "num": null, "urls": [], "raw_text": "Anjalie Field, Su Lin Blodgett, Zeerak Waseem, and Yulia Tsvetkov. 2021. A Survey of Race, Racism, and Anti-Racism in NLP. In Proceedings of the 59th", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", "authors": [], "year": null, "venue": "", "volume": "1", "issue": "", "pages": "1905--1925", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Confer- ence on Natural Language Processing (Volume 1: Long Papers), pages 1905-1925, Online. Association for Computational Linguistics.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Rethinking the Public Sphere: A Contribution to the Critique of", "authors": [ { "first": "Nancy", "middle": [], "last": "Fraser", "suffix": "" } ], "year": 1990, "venue": "Actually Existing Democracy. Social Text", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.2307/466240" ] }, "num": null, "urls": [], "raw_text": "Nancy Fraser. 1990. Rethinking the Public Sphere: A Contribution to the Critique of Actually Existing Democracy. 
Social Text, (25/26):56.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "The (Im)possibility of fairness: different value systems require different mechanisms for fair decision making", "authors": [ { "first": "Sorelle", "middle": [ "A" ], "last": "Friedler", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Scheidegger", "suffix": "" }, { "first": "Suresh", "middle": [], "last": "Venkatasubramanian", "suffix": "" } ], "year": 2021, "venue": "Communications of the ACM", "volume": "64", "issue": "4", "pages": "136--143", "other_ids": { "DOI": [ "10.1145/3433949" ] }, "num": null, "urls": [], "raw_text": "Sorelle A. Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian. 2021. The (Im)possibility of fairness: different value systems require different mechanisms for fair decision making. Communica- tions of the ACM, 64(4):136-143.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "The Pile: An 800GB Dataset of Diverse Text for Language Modeling", "authors": [ { "first": "Leo", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Stella", "middle": [], "last": "Biderman", "suffix": "" }, { "first": "Sid", "middle": [], "last": "Black", "suffix": "" }, { "first": "Laurence", "middle": [], "last": "Golding", "suffix": "" }, { "first": "Travis", "middle": [], "last": "Hoppe", "suffix": "" }, { "first": "Charles", "middle": [], "last": "Foster", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Phang", "suffix": "" }, { "first": "Horace", "middle": [], "last": "He", "suffix": "" }, { "first": "Anish", "middle": [], "last": "Thite", "suffix": "" }, { "first": "Noa", "middle": [], "last": "Nabeshima", "suffix": "" }, { "first": "Shawn", "middle": [], "last": "Presser", "suffix": "" }, { "first": "Connor", "middle": [], "last": "Leahy", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ 
"arXiv:2101.00027[cs].ArXiv:2101.00027" ] }, "num": null, "urls": [], "raw_text": "Leo Gao, Stella Biderman, Sid Black, Laurence Gold- ing, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2020. The Pile: An 800GB Dataset of Diverse Text for Language Model- ing. arXiv:2101.00027 [cs]. ArXiv: 2101.00027.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Making pre-trained language models better few-shot learners", "authors": [ { "first": "Tianyu", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Fisch", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "3816--3830", "other_ids": { "DOI": [ "10.18653/v1/2021.acl-long.295" ] }, "num": null, "urls": [], "raw_text": "Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Mak- ing pre-trained language models better few-shot learn- ers. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 3816-3830, Online. Association for Computational Linguistics.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "He is very intelligent, she is very beautiful? 
On Mitigating Social Biases in Language Modelling and Generation", "authors": [ { "first": "Aparna", "middle": [], "last": "Garimella", "suffix": "" }, { "first": "Akhash", "middle": [], "last": "Amarnath", "suffix": "" }, { "first": "Kiran", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Akash", "middle": [], "last": "Pramod Yalla", "suffix": "" }, { "first": "N", "middle": [], "last": "Anandhavelu", "suffix": "" }, { "first": "Niyati", "middle": [], "last": "Chhaya", "suffix": "" }, { "first": "Balaji Vasan", "middle": [], "last": "Srinivasan", "suffix": "" } ], "year": 2021, "venue": "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", "volume": "", "issue": "", "pages": "4534--4545", "other_ids": { "DOI": [ "10.18653/v1/2021.findings-acl.397" ] }, "num": null, "urls": [], "raw_text": "Aparna Garimella, Akhash Amarnath, Kiran Kumar, Akash Pramod Yalla, Anandhavelu N, Niyati Chhaya, and Balaji Vasan Srinivasan. 2021. He is very intel- ligent, she is very beautiful? On Mitigating Social Biases in Language Modelling and Generation. In Findings of the Association for Computational Lin- guistics: ACL-IJCNLP 2021, pages 4534-4545, On- line. 
Association for Computational Linguistics.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "Datasheets for Datasets", "authors": [ { "first": "Timnit", "middle": [], "last": "Gebru", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Morgenstern", "suffix": "" }, { "first": "Briana", "middle": [], "last": "Vecchione", "suffix": "" }, { "first": "Jennifer", "middle": [ "Wortman" ], "last": "Vaughan", "suffix": "" }, { "first": "Hanna", "middle": [], "last": "Wallach", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" }, { "first": "Kate", "middle": [], "last": "Crawford", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1803.09010[cs].ArXiv:1803.09010" ] }, "num": null, "urls": [], "raw_text": "Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daum\u00e9 III, and Kate Crawford. 2018. Datasheets for Datasets. arXiv:1803.09010 [cs]. ArXiv: 1803.09010.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "RealToxi-cityPrompts: Evaluating Neural Toxic Degeneration in Language Models", "authors": [ { "first": "Suchin", "middle": [], "last": "Samuel Gehman", "suffix": "" }, { "first": "Maarten", "middle": [], "last": "Gururangan", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Sap", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Choi", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "3356--3369", "other_ids": { "DOI": [ "10.18653/v1/2020.findings-emnlp.301" ] }, "num": null, "urls": [], "raw_text": "Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. 2020. 
RealToxi- cityPrompts: Evaluating Neural Toxic Degeneration in Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3356-3369, Online. Association for Computational Linguistics.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "Raw data is an oxymoron", "authors": [ { "first": "Lisa", "middle": [], "last": "Gitelman", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lisa Gitelman. 2013. Raw data is an oxymoron. MIT press.", "links": null }, "BIBREF63": { "ref_id": "b63", "title": "Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them", "authors": [ { "first": "Hila", "middle": [], "last": "Gonen", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North", "volume": "", "issue": "", "pages": "609--614", "other_ids": { "DOI": [ "10.18653/v1/N19-1061" ] }, "num": null, "urls": [], "raw_text": "Hila Gonen and Yoav Goldberg. 2019. Lipstick on a Pig: Debiasing Methods Cover up Systematic Gen- der Biases in Word Embeddings But do not Remove Them. In Proceedings of the 2019 Conference of the North, pages 609-614, Minneapolis, Minnesota. 
Association for Computational Linguistics.", "links": null }, "BIBREF64": { "ref_id": "b64", "title": "Toward fairness in ai for people with disabilities: A research roadmap", "authors": [ { "first": "Anhong", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Ece", "middle": [], "last": "Kamar", "suffix": "" }, { "first": "Jennifer", "middle": [ "Wortman" ], "last": "Vaughan", "suffix": "" }, { "first": "Hanna", "middle": [], "last": "Wallach", "suffix": "" }, { "first": "Meredith Ringel", "middle": [], "last": "Morris", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anhong Guo, Ece Kamar, Jennifer Wortman Vaughan, Hanna Wallach, and Meredith Ringel Morris. 2019. Toward fairness in ai for people with disabilities: A research roadmap.", "links": null }, "BIBREF65": { "ref_id": "b65", "title": "Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases", "authors": [ { "first": "Wei", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Aylin", "middle": [], "last": "Caliskan", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society", "volume": "", "issue": "", "pages": "122--133", "other_ids": { "DOI": [ "10.1145/3461702.3462536" ] }, "num": null, "urls": [], "raw_text": "Wei Guo and Aylin Caliskan. 2021. Detecting Emergent Intersectional Biases: Contextualized Word Embed- dings Contain a Distribution of Human-like Biases. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pages 122-133, Virtual Event USA. ACM.", "links": null }, "BIBREF66": { "ref_id": "b66", "title": "Whose Language Counts as High Quality? 
Measuring Language Ideologies in Text Data Selection", "authors": [ { "first": "Suchin", "middle": [], "last": "Gururangan", "suffix": "" }, { "first": "Dallas", "middle": [], "last": "Card", "suffix": "" }, { "first": "Sarah", "middle": [ "K" ], "last": "Dreier", "suffix": "" }, { "first": "Emily", "middle": [ "K" ], "last": "Gade", "suffix": "" }, { "first": "Leroy", "middle": [ "Z" ], "last": "Wang", "suffix": "" }, { "first": "Zeyu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2022, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2201.10474[cs].ArXiv:2201.10474" ] }, "num": null, "urls": [], "raw_text": "Suchin Gururangan, Dallas Card, Sarah K. Dreier, Emily K. Gade, Leroy Z. Wang, Zeyu Wang, Luke Zettlemoyer, and Noah A. Smith. 2022. Whose Lan- guage Counts as High Quality? Measuring Language Ideologies in Text Data Selection. arXiv:2201.10474 [cs]. ArXiv: 2201.10474.", "links": null }, "BIBREF67": { "ref_id": "b67", "title": "Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective", "authors": [ { "first": "Donna", "middle": [], "last": "Haraway", "suffix": "" } ], "year": 1988, "venue": "Feminist Studies", "volume": "14", "issue": "3", "pages": "575--599", "other_ids": { "DOI": [ "10.2307/3178066" ] }, "num": null, "urls": [], "raw_text": "Donna Haraway. 1988. Situated Knowledges: The Sci- ence Question in Feminism and the Privilege of Par- tial Perspective. Feminist Studies, 14(3):575-599.", "links": null }, "BIBREF68": { "ref_id": "b68", "title": "Five sources of bias in natural language processing. 
Language and Linguistics Compass", "authors": [ { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Shrimai", "middle": [], "last": "Prabhumoye", "suffix": "" } ], "year": 2021, "venue": "", "volume": "15", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1111/lnc3.12432" ] }, "num": null, "urls": [], "raw_text": "Dirk Hovy and Shrimai Prabhumoye. 2021. Five sources of bias in natural language processing. Lan- guage and Linguistics Compass, 15(8):e12432.", "links": null }, "BIBREF69": { "ref_id": "b69", "title": "Reducing sentiment bias in language models via counterfactual evaluation", "authors": [ { "first": "Po Sen", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Huan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Ray", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Stanforth", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Welbl", "suffix": "" }, { "first": "Jack", "middle": [ "W" ], "last": "Rae", "suffix": "" }, { "first": "Vishal", "middle": [], "last": "Maini", "suffix": "" }, { "first": "Dani", "middle": [], "last": "Yogatama", "suffix": "" }, { "first": "Pushmeet", "middle": [], "last": "Kohli", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics Findings of ACL: EMNLP 2020", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/2020.findings-emnlp.7" ] }, "num": null, "urls": [], "raw_text": "Po Sen Huang, Huan Zhang, Ray Jiang, Robert Stan- forth, Johannes Welbl, Jack W. Rae, Vishal Maini, Dani Yogatama, and Pushmeet Kohli. 2020. Reduc- ing sentiment bias in language models via counter- factual evaluation. 
In Findings of the Association for Computational Linguistics Findings of ACL: EMNLP 2020.", "links": null }, "BIBREF70": { "ref_id": "b70", "title": "Social biases in NLP models as barriers for persons with disabilities", "authors": [ { "first": "Ben", "middle": [], "last": "Hutchinson", "suffix": "" }, { "first": "Vinodkumar", "middle": [], "last": "Prabhakaran", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Denton", "suffix": "" }, { "first": "Kellie", "middle": [], "last": "Webster", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Denuyl", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.487" ] }, "num": null, "urls": [], "raw_text": "Ben Hutchinson, Vinodkumar Prabhakaran, Emily Den- ton, Kellie Webster, Yu Zhong, and Stephen Denuyl. 2020. Social biases in NLP models as barriers for persons with disabilities. In Proceedings of the 58th", "links": null }, "BIBREF71": { "ref_id": "b71", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "5491--5501", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 5491-5501, Online. Association for Computational Linguistics.", "links": null }, "BIBREF72": { "ref_id": "b72", "title": "Measurement and Fairness", "authors": [ { "first": "Abigail", "middle": [ "Z" ], "last": "Jacobs", "suffix": "" }, { "first": "Hanna", "middle": [], "last": "Wallach", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency", "volume": "", "issue": "", "pages": "375--385", "other_ids": { "DOI": [ "10.1145/3442188.3445901" ] }, "num": null, "urls": [], "raw_text": "Abigail Z. Jacobs and Hanna Wallach. 2021. 
Measurement and Fairness. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 375-385, Virtual Event Canada. ACM.", "links": null }, "BIBREF73": { "ref_id": "b73", "title": "Kaggle's Toxicity Comment Classification competition", "authors": [], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jigsaw. 2017. Kaggle's Toxicity Comment Classification competition.", "links": null }, "BIBREF74": { "ref_id": "b74", "title": "On transferability of bias mitigation effects in language model fine-tuning", "authors": [ { "first": "Xisen", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Francesco", "middle": [], "last": "Barbieri", "suffix": "" }, { "first": "Brendan", "middle": [], "last": "Kennedy", "suffix": "" }, { "first": "Aida", "middle": [ "Mostafazadeh" ], "last": "Davani", "suffix": "" }, { "first": "Leonardo", "middle": [], "last": "Neves", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Ren", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "3770--3783", "other_ids": { "DOI": [ "10.18653/v1/2021.naacl-main.296" ] }, "num": null, "urls": [], "raw_text": "Xisen Jin, Francesco Barbieri, Brendan Kennedy, Aida Mostafazadeh Davani, Leonardo Neves, and Xiang Ren. 2021. On transferability of bias mitigation effects in language model fine-tuning. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3770-3783, Online.
Association for Computational Linguistics.", "links": null }, "BIBREF75": { "ref_id": "b75", "title": "The state and fate of linguistic diversity and inclusion in the NLP world", "authors": [ { "first": "Pratik", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Sebastin", "middle": [], "last": "Santy", "suffix": "" }, { "first": "Amar", "middle": [], "last": "Budhiraja", "suffix": "" }, { "first": "Kalika", "middle": [], "last": "Bali", "suffix": "" }, { "first": "Monojit", "middle": [], "last": "Choudhury", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "6282--6293", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.560" ] }, "num": null, "urls": [], "raw_text": "Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP world. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6282-6293, Online. Association for Computational Linguistics.", "links": null }, "BIBREF76": { "ref_id": "b76", "title": "Deduplicating Training Data Mitigates Privacy Risks in Language Models", "authors": [ { "first": "Nikhil", "middle": [], "last": "Kandpal", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Wallace", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" } ], "year": 2022, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2202.06539[cs].ArXiv:2202.06539" ] }, "num": null, "urls": [], "raw_text": "Nikhil Kandpal, Eric Wallace, and Colin Raffel. 2022. Deduplicating Training Data Mitigates Privacy Risks in Language Models. arXiv:2202.06539 [cs]. 
ArXiv: 2202.06539.", "links": null }, "BIBREF77": { "ref_id": "b77", "title": "Preventing fairness gerrymandering: Auditing and learning for subgroup fairness", "authors": [ { "first": "Michael", "middle": [], "last": "Kearns", "suffix": "" }, { "first": "Seth", "middle": [], "last": "Neel", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Zhiwei Steven", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 35th International Conference on Machine Learning", "volume": "80", "issue": "", "pages": "2564--2572", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Kearns, Seth Neel, Aaron Roth, and Zhi- wei Steven Wu. 2018. Preventing fairness gerryman- dering: Auditing and learning for subgroup fairness. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2564-2572. PMLR.", "links": null }, "BIBREF78": { "ref_id": "b78", "title": "Your Pants Won't Save You\": Why Black Youth Challenge Race-Based Police Surveillance and the Demands of Black Respectability Politics", "authors": [ { "first": "Erin", "middle": [ "M" ], "last": "Kerrison", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Cobbina", "suffix": "" }, { "first": "Kimberly", "middle": [], "last": "Bender", "suffix": "" } ], "year": 2018, "venue": "Race and Justice", "volume": "8", "issue": "1", "pages": "7--26", "other_ids": { "DOI": [ "10.1177/2153368717734291" ] }, "num": null, "urls": [], "raw_text": "Erin M. Kerrison, Jennifer Cobbina, and Kimberly Ben- der. 2018. \"Your Pants Won't Save You\": Why Black Youth Challenge Race-Based Police Surveillance and the Demands of Black Respectability Politics. 
Race and Justice, 8(1):7-26.", "links": null }, "BIBREF79": { "ref_id": "b79", "title": "You keep using that word: Ways of thinking about gender in computing research", "authors": [ { "first": "Os", "middle": [], "last": "Keyes", "suffix": "" }, { "first": "Chandler", "middle": [], "last": "May", "suffix": "" }, { "first": "Annabelle", "middle": [], "last": "Carrell", "suffix": "" } ], "year": 2021, "venue": "Proc. ACM Hum.-Comput. Interact", "volume": "5", "issue": "CSCW1", "pages": "", "other_ids": { "DOI": [ "10.1145/3449113" ] }, "num": null, "urls": [], "raw_text": "Os Keyes, Chandler May, and Annabelle Carrell. 2021. You keep using that word: Ways of thinking about gender in computing research. Proc. ACM Hum.- Comput. Interact., 5(CSCW1).", "links": null }, "BIBREF80": { "ref_id": "b80", "title": "Oghenefego Ahia, Sweta Agrawal, and Mofetoluwa Adeyemi. 2022. Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets. Transactions of the Association for Computational Linguistics", "authors": [ { "first": "Julia", "middle": [], "last": "Kreutzer", "suffix": "" }, { "first": "Isaac", "middle": [], "last": "Caswell", "suffix": "" }, { "first": "Lisa", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ahsan", "middle": [], "last": "Wahab", "suffix": "" }, { "first": "Nasanbayar", "middle": [], "last": "Daan Van Esch", "suffix": "" }, { "first": "Allahsera", "middle": [], "last": "Ulzii-Orshikh", "suffix": "" }, { "first": "Nishant", "middle": [], "last": "Tapo", "suffix": "" }, { "first": "Artem", "middle": [], "last": "Subramani", "suffix": "" }, { "first": "Claytone", "middle": [], "last": "Sokolov", "suffix": "" }, { "first": "Monang", "middle": [], "last": "Sikasote", "suffix": "" }, { "first": "Supheakmungkol", "middle": [], "last": "Setyawan", "suffix": "" }, { "first": "Sokhar", "middle": [], "last": "Sarin", "suffix": "" }, { "first": "Beno\u00eet", "middle": [], "last": "Samb", "suffix": "" }, { "first": "Clara", "middle": [], "last": 
"Sagot", "suffix": "" }, { "first": "Annette", "middle": [], "last": "Rivera", "suffix": "" }, { "first": "Isabel", "middle": [], "last": "Rios", "suffix": "" }, { "first": "Salomey", "middle": [], "last": "Papadimitriou", "suffix": "" }, { "first": "Pedro", "middle": [ "Ortiz" ], "last": "Osei", "suffix": "" }, { "first": "Iroro", "middle": [], "last": "Suarez", "suffix": "" }, { "first": "Kelechi", "middle": [], "last": "Orife", "suffix": "" }, { "first": "Andre", "middle": [ "Niyongabo" ], "last": "Ogueji", "suffix": "" }, { "first": "Toan", "middle": [ "Q" ], "last": "Rubungo", "suffix": "" }, { "first": "", "middle": [], "last": "Nguyen", "suffix": "" } ], "year": null, "venue": "", "volume": "10", "issue": "", "pages": "50--72", "other_ids": { "DOI": [ "10.1162/tacl_a_00447" ] }, "num": null, "urls": [], "raw_text": "Julia Kreutzer, Isaac Caswell, Lisa Wang, Ahsan Wahab, Daan van Esch, Nasanbayar Ulzii-Orshikh, Allah- sera Tapo, Nishant Subramani, Artem Sokolov, Clay- tone Sikasote, Monang Setyawan, Supheakmungkol Sarin, Sokhar Samb, Beno\u00eet Sagot, Clara Rivera, An- nette Rios, Isabel Papadimitriou, Salomey Osei, Pe- dro Ortiz Suarez, Iroro Orife, Kelechi Ogueji, An- dre Niyongabo Rubungo, Toan Q. Nguyen, Math- ias M\u00fcller, Andr\u00e9 M\u00fcller, Shamsuddeen Hassan Muhammad, Nanda Muhammad, Ayanda Mnyak- eni, Jamshidbek Mirzakhalov, Tapiwanashe Matan- gira, Colin Leong, Nze Lawson, Sneha Kudugunta, Yacine Jernite, Mathias Jenny, Orhan Firat, Bonaven- ture F. P. Dossou, Sakhile Dlamini, Nisansa de Silva, Sakine \u00c7abuk Ball\u0131, Stella Biderman, Alessia Bat- tisti, Ahmed Baruwa, Ankur Bapna, Pallavi Baljekar, Israel Abebe Azime, Ayodele Awokoya, Duygu Ata- man, Orevaoghene Ahia, Oghenefego Ahia, Sweta Agrawal, and Mofetoluwa Adeyemi. 2022. Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets. 
Transactions of the Association for Compu- tational Linguistics, 10:50-72.", "links": null }, "BIBREF81": { "ref_id": "b81", "title": "Measuring Bias in Contextualized Word Representations", "authors": [ { "first": "Keita", "middle": [], "last": "Kurita", "suffix": "" }, { "first": "Nidhi", "middle": [], "last": "Vyas", "suffix": "" }, { "first": "Ayush", "middle": [], "last": "Pareek", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the First Workshop on Gender Bias in Natural Language Processing", "volume": "", "issue": "", "pages": "166--172", "other_ids": { "DOI": [ "10.18653/v1/W19-3823" ] }, "num": null, "urls": [], "raw_text": "Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring Bias in Con- textualized Word Representations. In Proceedings of the First Workshop on Gender Bias in Natural Lan- guage Processing, pages 166-172, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF82": { "ref_id": "b82", "title": "The social stratification of (r) in new york city department stores", "authors": [ { "first": "William", "middle": [], "last": "Labov", "suffix": "" } ], "year": 1986, "venue": "Dialect and Language Variation", "volume": "", "issue": "", "pages": "304--329", "other_ids": { "DOI": [ "10.1016/B978-0-12-051130-3.50029-X" ] }, "num": null, "urls": [], "raw_text": "William Labov. 1986. The social stratification of (r) in new york city department stores. In Harold B. Allen and Michael D. Linn, editors, Dialect and Language Variation, pages 304-329. 
Academic Press, Boston.", "links": null }, "BIBREF83": { "ref_id": "b83", "title": "Artificial intelligence language models and the false fantasy of participatory language policies", "authors": [ { "first": "Mandy", "middle": [], "last": "Lau", "suffix": "" } ], "year": 2021, "venue": "Working papers in Applied Linguistics and Linguistics at York", "volume": "1", "issue": "", "pages": "4--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mandy Lau. 2021. Artificial intelligence language mod- els and the false fantasy of participatory language policies. Working papers in Applied Linguistics and Linguistics at York, 1:4-15.", "links": null }, "BIBREF84": { "ref_id": "b84", "title": "The hard problem of aligning AI to human values", "authors": [ { "first": "Connor", "middle": [], "last": "Leahy", "suffix": "" }, { "first": "Stella", "middle": [], "last": "Biderman", "suffix": "" } ], "year": 2021, "venue": "The Montreal AI Ethics Institute", "volume": "4", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Connor Leahy and Stella Biderman. 2021. The hard problem of aligning AI to human values. In The State of AI Ethics Report (Volume 4). The Montreal AI Ethics Institute.", "links": null }, "BIBREF85": { "ref_id": "b85", "title": "No insides on the outsides", "authors": [ { "first": "Josh", "middle": [], "last": "Lepawsky", "suffix": "" } ], "year": 2019, "venue": "Discard Studies", "volume": "", "issue": "0", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Josh Lepawsky. 2019. No insides on the outsides. 
Dis- card Studies, 0(0).", "links": null }, "BIBREF86": { "ref_id": "b86", "title": "Collecting a large-scale gender bias dataset for coreference resolution and machine translation", "authors": [ { "first": "Shahar", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Koren", "middle": [], "last": "Lazar", "suffix": "" }, { "first": "Gabriel", "middle": [], "last": "Stanovsky", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shahar Levy, Koren Lazar, and Gabriel Stanovsky. 2021. Collecting a large-scale gender bias dataset for coref- erence resolution and machine translation.", "links": null }, "BIBREF87": { "ref_id": "b87", "title": "Towards Understanding and Mitigating Social Biases in Language Models", "authors": [ { "first": "Chiyu", "middle": [], "last": "Paul Pu Liang", "suffix": "" }, { "first": "Louis-Philippe", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Morency", "suffix": "" }, { "first": "", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 38th International Conference on Machine Learning", "volume": "139", "issue": "", "pages": "6565--6576", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Pu Liang, Chiyu Wu, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2021. Towards Understanding and Mitigating Social Biases in Language Models. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 6565-6576. 
PMLR.", "links": null }, "BIBREF88": { "ref_id": "b88", "title": "Enabling value sensitive ai systems through participatory design fictions", "authors": [ { "first": "Q", "middle": [], "last": "", "suffix": "" }, { "first": "Vera", "middle": [], "last": "Liao", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Muller", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Q. Vera Liao and Michael Muller. 2019. Enabling value sensitive ai systems through participatory design fic- tions.", "links": null }, "BIBREF89": { "ref_id": "b89", "title": "TruthfulQA: Measuring How Models Mimic Human Falsehoods", "authors": [ { "first": "Stephanie", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Hilton", "suffix": "" }, { "first": "Owain", "middle": [], "last": "Evans", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2109.07958[cs].ArXiv:2109.07958" ] }, "num": null, "urls": [], "raw_text": "Stephanie Lin, Jacob Hilton, and Owain Evans. 2021. TruthfulQA: Measuring How Models Mimic Hu- man Falsehoods. arXiv:2109.07958 [cs]. ArXiv: 2109.07958.", "links": null }, "BIBREF90": { "ref_id": "b90", "title": "Generate your counterfactuals: Towards controlled counterfactual generation for text", "authors": [ { "first": "Nishtha", "middle": [], "last": "Madaan", "suffix": "" }, { "first": "Inkit", "middle": [], "last": "Padhi", "suffix": "" }, { "first": "Naveen", "middle": [], "last": "Panwar", "suffix": "" }, { "first": "Diptikalyan", "middle": [], "last": "Saha", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "35", "issue": "", "pages": "13516--13524", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nishtha Madaan, Inkit Padhi, Naveen Panwar, and Dip- tikalyan Saha. 2021. 
Generate your counterfactuals: Towards controlled counterfactual generation for text. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15):13516-13524.", "links": null }, "BIBREF91": { "ref_id": "b91", "title": "Intersectional Bias in Causal Language Models", "authors": [ { "first": "Liam", "middle": [], "last": "Magee", "suffix": "" }, { "first": "Lida", "middle": [], "last": "Ghahremanlou", "suffix": "" }, { "first": "Karen", "middle": [], "last": "Soldatic", "suffix": "" }, { "first": "Shanthi", "middle": [], "last": "Robertson", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2107.07691" ] }, "num": null, "urls": [], "raw_text": "Liam Magee, Lida Ghahremanlou, Karen Soldatic, and Shanthi Robertson. 2021. Intersectional Bias in Causal Language Models. arXiv:2107.07691 [cs].", "links": null }, "BIBREF92": { "ref_id": "b92", "title": "Socially aware bias measurements for hindi language representations", "authors": [ { "first": "Vijit", "middle": [], "last": "Malik", "suffix": "" }, { "first": "Sunipa", "middle": [], "last": "Dev", "suffix": "" }, { "first": "Akihiro", "middle": [], "last": "Nishi", "suffix": "" }, { "first": "Nanyun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Kai-Wei", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vijit Malik, Sunipa Dev, Akihiro Nishi, Nanyun Peng, and Kai-Wei Chang. 2021. 
Socially aware bias measurements for hindi language representations.", "links": null }, "BIBREF93": { "ref_id": "b93", "title": "On measuring social biases in sentence encoders", "authors": [ { "first": "Chandler", "middle": [], "last": "May", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Shikha", "middle": [], "last": "Bordia", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Rudinger", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "622--628", "other_ids": { "DOI": [ "10.18653/v1/N19-1063" ] }, "num": null, "urls": [], "raw_text": "Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 622-628, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF94": { "ref_id": "b94", "title": "van Strien, and Yacine Jernite. 2022.
Documenting Geographically and Contextually Diverse Data Sources: The BigScience Catalogue of Language Data and Resources", "authors": [ { "first": "Angelina", "middle": [], "last": "Mcmillan-Major", "suffix": "" }, { "first": "Zaid", "middle": [], "last": "Alyafeai", "suffix": "" }, { "first": "Stella", "middle": [], "last": "Biderman", "suffix": "" }, { "first": "Kimbo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Francesco", "middle": [ "De" ], "last": "Toni", "suffix": "" }, { "first": "G\u00e9rard", "middle": [], "last": "Dupont", "suffix": "" }, { "first": "Hady", "middle": [], "last": "Elsahar", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Emezue", "suffix": "" }, { "first": "Alham", "middle": [], "last": "Fikri Aji", "suffix": "" }, { "first": "Suzana", "middle": [], "last": "Ili\u0107", "suffix": "" }, { "first": "Nurulaqilla", "middle": [], "last": "Khamis", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Leong", "suffix": "" }, { "first": "Maraim", "middle": [], "last": "Masoud", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2201.10066[cs].ArXiv:2201.10066" ] }, "num": null, "urls": [], "raw_text": "Angelina McMillan-Major, Zaid Alyafeai, Stella Bi- derman, Kimbo Chen, Francesco De Toni, G\u00e9rard Dupont, Hady Elsahar, Chris Emezue, Alham Fikri Aji, Suzana Ili\u0107, Nurulaqilla Khamis, Colin Leong, Maraim Masoud, Aitor Soroa, Pedro Ortiz Suarez, Zeerak Talat, Daniel van Strien, and Yacine Jernite. 2022. Documenting Geographically and Contextu- ally Diverse Data Sources: The BigScience Catalogue of Language Data and Resources. arXiv:2201.10066 [cs]. ArXiv: 2201.10066.", "links": null }, "BIBREF95": { "ref_id": "b95", "title": "Studying Up Machine Learning Data: Why Talk About Bias When We Mean Power? 
Proceedings of the ACM on Human-Computer Interaction", "authors": [ { "first": "Milagros", "middle": [], "last": "Miceli", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Posada", "suffix": "" }, { "first": "Tianling", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2022, "venue": "", "volume": "6", "issue": "", "pages": "1--14", "other_ids": { "DOI": [ "10.1145/3492853" ] }, "num": null, "urls": [], "raw_text": "Milagros Miceli, Julian Posada, and Tianling Yang. 2022. Studying Up Machine Learning Data: Why Talk About Bias When We Mean Power? Proceed- ings of the ACM on Human-Computer Interaction, 6(GROUP):1-14.", "links": null }, "BIBREF96": { "ref_id": "b96", "title": "Model Cards for Model Reporting", "authors": [ { "first": "Margaret", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Simone", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Zaldivar", "suffix": "" }, { "first": "Parker", "middle": [], "last": "Barnes", "suffix": "" }, { "first": "Lucy", "middle": [], "last": "Vasserman", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Hutchinson", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Spitzer", "suffix": "" }, { "first": "Deborah", "middle": [], "last": "Inioluwa", "suffix": "" }, { "first": "Timnit", "middle": [], "last": "Raji", "suffix": "" }, { "first": "", "middle": [], "last": "Gebru", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT* '19", "volume": "", "issue": "", "pages": "220--229", "other_ids": { "DOI": [ "10.1145/3287560.3287596" ] }, "num": null, "urls": [], "raw_text": "Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. Model Cards for Model Reporting. 
In Proceedings of the Conference on Fairness, Account- ability, and Transparency, FAT* '19, pages 220-229, Atlanta, GA, USA. Association for Computing Ma- chinery.", "links": null }, "BIBREF97": { "ref_id": "b97", "title": "Ruined by design: how designers destroyed the world, and what we can do to fix it. Mule Design", "authors": [ { "first": "Mike", "middle": [], "last": "Monteiro", "suffix": "" }, { "first": "Vivianne", "middle": [], "last": "Castillo", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Monteiro and Vivianne Castillo. 2019. Ruined by design: how designers destroyed the world, and what we can do to fix it. Mule Design, Fresno.", "links": null }, "BIBREF98": { "ref_id": "b98", "title": "Stereoset: Measuring stereotypical bias in pretrained language models", "authors": [ { "first": "Moin", "middle": [], "last": "Nadeem", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Bethke", "suffix": "" }, { "first": "Siva", "middle": [], "last": "Reddy", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moin Nadeem, Anna Bethke, and Siva Reddy. 2020. 
Stereoset: Measuring stereotypical bias in pretrained language models.", "links": null }, "BIBREF99": { "ref_id": "b99", "title": "CrowS-pairs: A challenge dataset for measuring social biases in masked language models", "authors": [ { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Vania", "suffix": "" }, { "first": "Rasika", "middle": [], "last": "Bhalerao", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1953--1967", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.154" ] }, "num": null, "urls": [], "raw_text": "Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R. Bowman. 2020. CrowS-pairs: A chal- lenge dataset for measuring social biases in masked language models. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1953-1967, Online. As- sociation for Computational Linguistics.", "links": null }, "BIBREF100": { "ref_id": "b100", "title": "Blessing Sibanda, Blessing Bassey, Ayodele Olabiyi, Arshath Ramkilowan, Alp \u00d6ktem, Adewale Akinfaderin, and Abdallah Bashir. 2020. 
Participatory research for low-resourced machine translation: A case study in African languages", "authors": [ { "first": "Wilhelmina", "middle": [], "last": "Nekoto", "suffix": "" }, { "first": "Vukosi", "middle": [], "last": "Marivate", "suffix": "" }, { "first": "Tshinondiwa", "middle": [], "last": "Matsila", "suffix": "" }, { "first": "Timi", "middle": [], "last": "Fasubaa", "suffix": "" }, { "first": "Taiwo", "middle": [], "last": "Fagbohungbe", "suffix": "" }, { "first": "Shamsuddeen", "middle": [], "last": "Solomon Oluwole Akinola", "suffix": "" }, { "first": "Salomon", "middle": [ "Kabongo" ], "last": "Muhammad", "suffix": "" }, { "first": "Salomey", "middle": [], "last": "Kabenamualu", "suffix": "" }, { "first": "Freshia", "middle": [], "last": "Osei", "suffix": "" }, { "first": "Rubungo", "middle": [ "Andre" ], "last": "Sackey", "suffix": "" }, { "first": "Ricky", "middle": [], "last": "Niyongabo", "suffix": "" }, { "first": "Perez", "middle": [], "last": "Macharm", "suffix": "" }, { "first": "Orevaoghene", "middle": [], "last": "Ogayo", "suffix": "" }, { "first": "Musie", "middle": [], "last": "Ahia", "suffix": "" }, { "first": "Mofetoluwa", "middle": [], "last": "Meressa Berhe", "suffix": "" }, { "first": "Masabata", "middle": [], "last": "Adeyemi", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Mokgesi-Selinga", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Okegbemi", "suffix": "" }, { "first": "Kolawole", "middle": [], "last": "Martinus", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Tajudeen", "suffix": "" }, { "first": "Kelechi", "middle": [], "last": "Degila", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Ogueji", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Siminyu", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Kreutzer", "suffix": "" }, { "first": "Jamiil Toure", "middle": [], "last": "Webster", "suffix": "" }, { "first": "Jade", "middle": [], "last": "Ali", 
"suffix": "" }, { "first": "Iroro", "middle": [], "last": "Abbott", "suffix": "" }, { "first": "Ignatius", "middle": [], "last": "Orife", "suffix": "" }, { "first": "", "middle": [], "last": "Ezeani", "suffix": "" }, { "first": "Abdulkadir", "middle": [], "last": "Idris", "suffix": "" }, { "first": "Herman", "middle": [], "last": "Dangana", "suffix": "" }, { "first": "Hady", "middle": [], "last": "Kamper", "suffix": "" }, { "first": "Goodness", "middle": [], "last": "Elsahar", "suffix": "" }, { "first": "Ghollah", "middle": [], "last": "Duru", "suffix": "" }, { "first": "Murhabazi", "middle": [], "last": "Kioko", "suffix": "" }, { "first": "", "middle": [], "last": "Espoir", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Elan Van Biljon", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Whitenack", "suffix": "" }, { "first": "", "middle": [], "last": "Onyefuluchi", "suffix": "" } ], "year": null, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "2144--2160", "other_ids": { "DOI": [ "10.18653/v1/2020.findings-emnlp.195" ] }, "num": null, "urls": [], "raw_text": "Wilhelmina Nekoto, Vukosi Marivate, Tshinondiwa Matsila, Timi Fasubaa, Taiwo Fagbohungbe, Solomon Oluwole Akinola, Shamsuddeen Muham- mad, Salomon Kabongo Kabenamualu, Salomey Osei, Freshia Sackey, Rubungo Andre Niyongabo, Ricky Macharm, Perez Ogayo, Orevaoghene Ahia, Musie Meressa Berhe, Mofetoluwa Adeyemi, Masabata Mokgesi-Selinga, Lawrence Okegbemi, Laura Martinus, Kolawole Tajudeen, Kevin Degila, Kelechi Ogueji, Kathleen Siminyu, Julia Kreutzer, Jason Webster, Jamiil Toure Ali, Jade Abbott, Iroro Orife, Ignatius Ezeani, Idris Abdulkadir Dan- gana, Herman Kamper, Hady Elsahar, Goodness Duru, Ghollah Kioko, Murhabazi Espoir, Elan van Biljon, Daniel Whitenack, Christopher Onyefuluchi, Chris Chinenye Emezue, Bonaventure F. P. 
Dossou, Blessing Sibanda, Blessing Bassey, Ayodele Olabiyi, Arshath Ramkilowan, Alp \u00d6ktem, Adewale Akin- faderin, and Abdallah Bashir. 2020. Participatory re- search for low-resourced machine translation: A case study in African languages. In Findings of the Asso- ciation for Computational Linguistics: EMNLP 2020, pages 2144-2160, Online. Association for Computa- tional Linguistics.", "links": null }, "BIBREF101": { "ref_id": "b101", "title": "Algorithms of oppression: how search engines reinforce racism", "authors": [ { "first": "Noble", "middle": [], "last": "Safiya Umoja", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Safiya Umoja Noble. 2018. Algorithms of oppression: how search engines reinforce racism. New York Uni- versity Press, New York.", "links": null }, "BIBREF102": { "ref_id": "b102", "title": "Phu Mon Htut, and Samuel R. Bowman. 2021. Bbq: A hand-built bias benchmark for question answering", "authors": [ { "first": "Alicia", "middle": [], "last": "Parrish", "suffix": "" }, { "first": "Angelica", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Vishakh", "middle": [], "last": "Padmakumar", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Phang", "suffix": "" }, { "first": "Jana", "middle": [], "last": "Thompson", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, and Samuel R. Bowman. 2021. Bbq: A hand-built bias benchmark for question answering.", "links": null }, "BIBREF103": { "ref_id": "b103", "title": "2021. 
AI and the Everything in the Whole Wide World Benchmark", "authors": [ { "first": "Emily", "middle": [ "M" ], "last": "Inioluwa Deborah Raji", "suffix": "" }, { "first": "Amandalynne", "middle": [], "last": "Bender", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Paullada", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Denton", "suffix": "" }, { "first": "", "middle": [], "last": "Hanna", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2111.15366[cs].ArXiv:2111.15366" ] }, "num": null, "urls": [], "raw_text": "Inioluwa Deborah Raji, Emily M. Bender, Amanda- lynne Paullada, Emily Denton, and Alex Hanna. 2021. AI and the Everything in the Whole Wide World Benchmark. arXiv:2111.15366 [cs]. ArXiv: 2111.15366.", "links": null }, "BIBREF104": { "ref_id": "b104", "title": "Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products", "authors": [ { "first": "Deborah", "middle": [], "last": "Inioluwa", "suffix": "" }, { "first": "Joy", "middle": [], "last": "Raji", "suffix": "" }, { "first": "", "middle": [], "last": "Buolamwini", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1145/3306618.3314244" ] }, "num": null, "urls": [], "raw_text": "Inioluwa Deborah Raji and Joy Buolamwini. 2019. Ac- tionable Auditing: Investigating the Impact of Pub- licly Naming Biased Performance Results of Com- mercial AI Products. In Proceedings of the 2019", "links": null }, "BIBREF105": { "ref_id": "b105", "title": "AAAI/ACM Conference on AI, Ethics, and Society", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "429--435", "other_ids": {}, "num": null, "urls": [], "raw_text": "AAAI/ACM Conference on AI, Ethics, and Society, pages 429-435, Honolulu HI USA. 
ACM.", "links": null }, "BIBREF106": { "ref_id": "b106", "title": "Gender bias in coreference resolution", "authors": [ { "first": "Rachel", "middle": [], "last": "Rudinger", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Naradowsky", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Leonard", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "8--14", "other_ids": { "DOI": [ "10.18653/v1/N18-2002" ] }, "num": null, "urls": [], "raw_text": "Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 8-14, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF107": { "ref_id": "b107", "title": "Re-imagining Algorithmic Fairness in India and Beyond", "authors": [ { "first": "Nithya", "middle": [], "last": "Sambasivan", "suffix": "" }, { "first": "Erin", "middle": [], "last": "Arnesen", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Hutchinson", "suffix": "" }, { "first": "Tulsee", "middle": [], "last": "Doshi", "suffix": "" }, { "first": "Vinodkumar", "middle": [], "last": "Prabhakaran", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency", "volume": "", "issue": "", "pages": "315--328", "other_ids": { "DOI": [ "10.1145/3442188.3445896" ] }, "num": null, "urls": [], "raw_text": "Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Tulsee Doshi, and Vinodkumar Prabhakaran. 2021. Re-imagining Algorithmic Fairness in India and Beyond.
In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 315-328, Virtual Event Canada. ACM.", "links": null }, "BIBREF108": { "ref_id": "b108", "title": "Tali Bers, Thomas Wolf, and Alexander M. Rush. 2021. Multitask Prompted Training Enables Zero-Shot Task Generalization", "authors": [ { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Albert", "middle": [], "last": "Webson", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Stephen", "middle": [ "H" ], "last": "Bach", "suffix": "" }, { "first": "Lintang", "middle": [], "last": "Sutawika", "suffix": "" }, { "first": "Zaid", "middle": [], "last": "Alyafeai", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Chaffin", "suffix": "" }, { "first": "Arnaud", "middle": [], "last": "Stiegler", "suffix": "" }, { "first": "Teven", "middle": [ "Le" ], "last": "Scao", "suffix": "" }, { "first": "Arun", "middle": [], "last": "Raja", "suffix": "" }, { "first": "Manan", "middle": [], "last": "Dey", "suffix": "" }, { "first": "M", "middle": [], "last": "Bari", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Urmish", "middle": [], "last": "Thakker", "suffix": "" }, { "first": "Shanya", "middle": [], "last": "Sharma Sharma", "suffix": "" }, { "first": "Eliza", "middle": [], "last": "Szczechla", "suffix": "" }, { "first": "Taewoon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Gunjan", "middle": [], "last": "Chhablani", "suffix": "" }, { "first": "Nihal", "middle": [], "last": "Nayak", "suffix": "" }, { "first": "Debajyoti", "middle": [], "last": "Datta", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Tian-Jian", "suffix": "" }, { "first": "Han", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Matteo", "middle": [], "last": "Wang", "suffix": "" }, { "first": 
"Sheng", "middle": [], "last": "Manica", "suffix": "" }, { "first": "", "middle": [], "last": "Shen", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2110.08207[cs].ArXiv:2110.08207" ] }, "num": null, "urls": [], "raw_text": "Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, Manan Dey, M. Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Stella Biderman, Leo Gao, Tali Bers, Thomas Wolf, and Alexander M. Rush. 2021. Multitask Prompted Training Enables Zero-Shot Task Generalization. arXiv:2110.08207 [cs]. ArXiv: 2110.08207.", "links": null }, "BIBREF109": { "ref_id": "b109", "title": "Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection", "authors": [ { "first": "Maarten", "middle": [], "last": "Sap", "suffix": "" }, { "first": "Swabha", "middle": [], "last": "Swayamdipta", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Vianna", "suffix": "" }, { "first": "Xuhui", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2111.07997[cs].ArXiv:2111.07997" ] }, "num": null, "urls": [], "raw_text": "Maarten Sap, Swabha Swayamdipta, Laura Vianna, Xuhui Zhou, Yejin Choi, and Noah A. Smith. 2021. Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection.
arXiv:2111.07997 [cs]. ArXiv: 2111.07997.", "links": null }, "BIBREF110": { "ref_id": "b110", "title": "Principles to Practices for Responsible AI: Closing the Gap", "authors": [ { "first": "Daniel", "middle": [], "last": "Schiff", "suffix": "" } ], "year": 2020, "venue": "European Conference on AI Workshop on Advancing Towards the SDGs", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Schiff. 2020. Principles to Practices for Responsible AI: Closing the Gap. 2020 European Conference on AI Workshop on Advancing Towards the SDGs.", "links": null }, "BIBREF111": { "ref_id": "b111", "title": "Evaluating gender bias in natural language inference", "authors": [ { "first": "Shanya", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Manan", "middle": [], "last": "Dey", "suffix": "" }, { "first": "Koustuv", "middle": [], "last": "Sinha", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shanya Sharma, Manan Dey, and Koustuv Sinha. 2021. Evaluating gender bias in natural language inference.", "links": null }, "BIBREF112": { "ref_id": "b112", "title": "Societal Biases in Language Generation: Progress and Challenges", "authors": [ { "first": "Emily", "middle": [], "last": "Sheng", "suffix": "" }, { "first": "Kai-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Prem", "middle": [], "last": "Natarajan", "suffix": "" }, { "first": "Nanyun", "middle": [], "last": "Peng", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "4275--4293", "other_ids": { "DOI": [ "10.18653/v1/2021.acl-long.330" ] }, "num": null, "urls": [], "raw_text": "Emily Sheng, Kai-Wei Chang, Prem Natarajan, and Nanyun Peng. 2021.
Societal Biases in Language Generation: Progress and Challenges. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4275-4293, Online. Association for Computational Linguistics.", "links": null }, "BIBREF113": { "ref_id": "b113", "title": "The Woman Worked as a Babysitter: On Biases in Language Generation", "authors": [ { "first": "Emily", "middle": [], "last": "Sheng", "suffix": "" }, { "first": "Kai-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Premkumar", "middle": [], "last": "Natarajan", "suffix": "" }, { "first": "Nanyun", "middle": [], "last": "Peng", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3405--3410", "other_ids": { "DOI": [ "10.18653/v1/D19-1339" ] }, "num": null, "urls": [], "raw_text": "Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The Woman Worked as a Babysitter: On Biases in Language Generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3405-3410, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF114": { "ref_id": "b114", "title": "Saurabh Tiwary, and Bryan Catanzaro. 2022.
Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model", "authors": [ { "first": "Shaden", "middle": [], "last": "Smith", "suffix": "" }, { "first": "Mostofa", "middle": [], "last": "Patwary", "suffix": "" }, { "first": "Brandon", "middle": [], "last": "Norick", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Legresley", "suffix": "" }, { "first": "Samyam", "middle": [], "last": "Rajbhandari", "suffix": "" }, { "first": "Jared", "middle": [], "last": "Casper", "suffix": "" }, { "first": "Zhun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Shrimai", "middle": [], "last": "Prabhumoye", "suffix": "" }, { "first": "George", "middle": [], "last": "Zerveas", "suffix": "" }, { "first": "Vijay", "middle": [], "last": "Korthikanti", "suffix": "" }, { "first": "Elton", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "Reza", "middle": [ "Yazdani" ], "last": "Aminabadi", "suffix": "" }, { "first": "Julie", "middle": [], "last": "Bernauer", "suffix": "" }, { "first": "Xia", "middle": [], "last": "Song", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Shoeybi", "suffix": "" }, { "first": "Yuxiong", "middle": [], "last": "He", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Houston", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2201.11990" ] }, "num": null, "urls": [], "raw_text": "Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, Elton Zhang, Rewon Child, Reza Yazdani Aminabadi, Julie Bernauer, Xia Song, Mohammad Shoeybi, Yuxiong He, Michael Houston, Saurabh Tiwary, and Bryan Catanzaro. 2022. Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model.
arXiv:2201.11990 [cs].", "links": null }, "BIBREF115": { "ref_id": "b115", "title": "Normal Life: Administrative Violence, Critical Trans Politics, and the Limits of Law", "authors": [ { "first": "Dean", "middle": [], "last": "Spade", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1215/9780822374794" ] }, "num": null, "urls": [], "raw_text": "Dean Spade. 2015. Normal Life: Administrative Violence, Critical Trans Politics, and the Limits of Law. Duke University Press.", "links": null }, "BIBREF116": { "ref_id": "b116", "title": "2021. A Survey on Gender Bias in Natural Language Processing", "authors": [ { "first": "Karolina", "middle": [], "last": "Stanczak", "suffix": "" }, { "first": "Isabelle", "middle": [], "last": "Augenstein", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2112.14168[cs].ArXiv:2112.14168" ] }, "num": null, "urls": [], "raw_text": "Karolina Stanczak and Isabelle Augenstein. 2021. A Survey on Gender Bias in Natural Language Processing. arXiv:2112.14168 [cs]. ArXiv: 2112.14168.", "links": null }, "BIBREF117": { "ref_id": "b117", "title": "Evaluating gender bias in machine translation", "authors": [ { "first": "Gabriel", "middle": [], "last": "Stanovsky", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1679--1684", "other_ids": { "DOI": [ "10.18653/v1/P19-1164" ] }, "num": null, "urls": [], "raw_text": "Gabriel Stanovsky, Noah A. Smith, and Luke Zettlemoyer. 2019. Evaluating gender bias in machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1679-1684, Florence, Italy.
Association for Computational Linguistics.", "links": null }, "BIBREF118": { "ref_id": "b118", "title": "Allennlp: Fairness and bias mitigation", "authors": [ { "first": "Arjun", "middle": [], "last": "Subramonian", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arjun Subramonian. 2021. Allennlp: Fairness and bias mitigation.", "links": null }, "BIBREF119": { "ref_id": "b119", "title": "Mitigating Gender Bias in Natural Language Processing: Literature Review", "authors": [ { "first": "Tony", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Gaut", "suffix": "" }, { "first": "Shirlyn", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Yuxin", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Mai", "middle": [], "last": "Elsherief", "suffix": "" }, { "first": "Jieyu", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Diba", "middle": [], "last": "Mirza", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Belding", "suffix": "" }, { "first": "Kai-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "William", "middle": [ "Yang" ], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1630--1640", "other_ids": { "DOI": [ "10.18653/v1/P19-1159" ] }, "num": null, "urls": [], "raw_text": "Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 2019. Mitigating Gender Bias in Natural Language Processing: Literature Review. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1630-1640, Florence, Italy.
Association for Computational Linguistics.", "links": null }, "BIBREF120": { "ref_id": "b120", "title": "Joachim Bingel, and Isabelle Augenstein. 2021. Disembodied Machine Learning: On the Illusion of Objectivity in NLP", "authors": [ { "first": "Zeerak", "middle": [], "last": "Talat", "suffix": "" }, { "first": "Smarika", "middle": [], "last": "Lulz", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zeerak Talat, Smarika Lulz, Joachim Bingel, and Isabelle Augenstein. 2021. Disembodied Machine Learning: On the Illusion of Objectivity in NLP. ArXiv: 2101.11974.", "links": null }, "BIBREF121": { "ref_id": "b121", "title": "It's morphin' time! Combating linguistic discrimination with inflectional perturbations", "authors": [ { "first": "Samson", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Shafiq", "middle": [], "last": "Joty", "suffix": "" }, { "first": "Min-Yen", "middle": [], "last": "Kan", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2920--2935", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.263" ] }, "num": null, "urls": [], "raw_text": "Samson Tan, Shafiq Joty, Min-Yen Kan, and Richard Socher. 2020. It's morphin' time! Combating linguistic discrimination with inflectional perturbations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2920-2935, Online. Association for Computational Linguistics.", "links": null }, "BIBREF122": { "ref_id": "b122", "title": "2022.
LaMDA: Language Models for Dialog Applications", "authors": [ { "first": "Romal", "middle": [], "last": "Thoppilan", "suffix": "" }, { "first": "Daniel", "middle": [ "De" ], "last": "Freitas", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Apoorv", "middle": [], "last": "Kulshreshtha", "suffix": "" }, { "first": "", "middle": [], "last": "Heng-Tze", "suffix": "" }, { "first": "Alicia", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Taylor", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Leslie", "middle": [], "last": "Bos", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Baker", "suffix": "" }, { "first": "Yaguang", "middle": [], "last": "Du", "suffix": "" }, { "first": "Hongrae", "middle": [], "last": "Li", "suffix": "" }, { "first": "", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Amin", "middle": [], "last": "Huaixiu Steven Zheng", "suffix": "" }, { "first": "Marcelo", "middle": [], "last": "Ghafouri", "suffix": "" }, { "first": "Yanping", "middle": [], "last": "Menegali", "suffix": "" }, { "first": "Maxim", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Dmitry", "middle": [], "last": "Krikun", "suffix": "" }, { "first": "James", "middle": [], "last": "Lepikhin", "suffix": "" }, { "first": "Dehao", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Yuanzhong", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Maarten", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Bosma", "suffix": "" }, { "first": "Yanqi", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Chung-Ching", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Igor", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Will", 
"middle": [], "last": "Krivokon", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Rusch", "suffix": "" }, { "first": "Pranesh", "middle": [], "last": "Pickett", "suffix": "" }, { "first": "Laichee", "middle": [], "last": "Srinivasan", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Man", "suffix": "" }, { "first": "", "middle": [], "last": "Meier-Hellstern", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2201.08239[cs].ArXiv:2201.08239" ] }, "num": null, "urls": [], "raw_text": "Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, YaGuang Li, Hongrae Lee, Huaixiu Steven Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Vincent Zhao, Yanqi Zhou, Chung-Ching Chang, Igor Krivokon, Will Rusch, Marc Pickett, Pranesh Srinivasan, Laichee Man, Kathleen Meier-Hellstern, Meredith Ringel Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Soraker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Olson, Alejandra Molina, Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravi Rajakumar, Alena Butryna, Matthew Lamm, Viktoriya Kuzmina, Joe Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise Aguera-Arcas, Claire Cui, Marian Croak, Ed Chi, and Quoc Le. 2022. LaMDA: Language Models for Dialog Applications. arXiv:2201.08239 [cs].
ArXiv: 2201.08239.", "links": null }, "BIBREF123": { "ref_id": "b123", "title": "Detecting 'Dirt' and 'Toxicity': Rethinking Content Moderation as Pollution Behaviour", "authors": [ { "first": "Nanna", "middle": [], "last": "Thylstrup", "suffix": "" }, { "first": "Zeerak", "middle": [], "last": "Talat", "suffix": "" } ], "year": 2020, "venue": "SSRN Electronic Journal", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.2139/ssrn.3709719" ] }, "num": null, "urls": [], "raw_text": "Nanna Thylstrup and Zeerak Talat. 2020. Detecting 'Dirt' and 'Toxicity': Rethinking Content Moderation as Pollution Behaviour. SSRN Electronic Journal.", "links": null }, "BIBREF124": { "ref_id": "b124", "title": "Fairness for Unobserved Characteristics: Insights from Technological Impacts on Queer Communities", "authors": [ { "first": "Nenad", "middle": [], "last": "Tomasev", "suffix": "" }, { "first": "Kevin", "middle": [ "R" ], "last": "Mckee", "suffix": "" }, { "first": "Jackie", "middle": [], "last": "Kay", "suffix": "" }, { "first": "Shakir", "middle": [], "last": "Mohamed", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "254--265", "other_ids": { "DOI": [ "10.1145/3461702.3462540" ] }, "num": null, "urls": [], "raw_text": "Nenad Tomasev, Kevin R. McKee, Jackie Kay, and Shakir Mohamed. 2021. Fairness for Unobserved Characteristics: Insights from Technological Impacts on Queer Communities, page 254-265. 
Association for Computing Machinery, New York, NY, USA.", "links": null }, "BIBREF125": { "ref_id": "b125", "title": "Multilingual is not enough: BERT for Finnish", "authors": [ { "first": "Antti", "middle": [], "last": "Virtanen", "suffix": "" }, { "first": "Jenna", "middle": [], "last": "Kanerva", "suffix": "" }, { "first": "Rami", "middle": [], "last": "Ilo", "suffix": "" }, { "first": "Jouni", "middle": [], "last": "Luoma", "suffix": "" }, { "first": "Juhani", "middle": [], "last": "Luotolahti", "suffix": "" }, { "first": "Tapio", "middle": [], "last": "Salakoski", "suffix": "" }, { "first": "Filip", "middle": [], "last": "Ginter", "suffix": "" }, { "first": "Sampo", "middle": [], "last": "Pyysalo", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1912.07076[cs].ArXiv:1912.07076" ] }, "num": null, "urls": [], "raw_text": "Antti Virtanen, Jenna Kanerva, Rami Ilo, Jouni Luoma, Juhani Luotolahti, Tapio Salakoski, Filip Ginter, and Sampo Pyysalo. 2019. Multilingual is not enough: BERT for Finnish. arXiv:1912.07076 [cs]. 
ArXiv: 1912.07076.", "links": null }, "BIBREF126": { "ref_id": "b126", "title": "Universal adversarial triggers for attacking and analyzing NLP", "authors": [ { "first": "Eric", "middle": [], "last": "Wallace", "suffix": "" }, { "first": "Shi", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Kandpal", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2153--2162", "other_ids": { "DOI": [ "10.18653/v1/D19-1221" ] }, "num": null, "urls": [], "raw_text": "Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing NLP. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2153-2162, Hong Kong, China.
Association for Computational Linguistics.", "links": null }, "BIBREF127": { "ref_id": "b127", "title": "Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations", "authors": [ { "first": "Tianlu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jieyu", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Yatskar", "suffix": "" }, { "first": "Kai-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Vicente", "middle": [], "last": "Ordonez", "suffix": "" } ], "year": 2019, "venue": "2019 IEEE/CVF International Conference on Computer Vision (ICCV)", "volume": "", "issue": "", "pages": "5309--5318", "other_ids": { "DOI": [ "10.1109/ICCV.2019.00541" ] }, "num": null, "urls": [], "raw_text": "Tianlu Wang, Jieyu Zhao, Mark Yatskar, Kai-Wei Chang, and Vicente Ordonez. 2019. Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 5309-5318, Seoul, Korea (South). IEEE.", "links": null }, "BIBREF128": { "ref_id": "b128", "title": "Sean Legassick, Geoffrey Irving, and Iason Gabriel. 2021a.
Ethical and social risks of harm from Language Models", "authors": [ { "first": "Laura", "middle": [], "last": "Weidinger", "suffix": "" }, { "first": "John", "middle": [], "last": "Mellor", "suffix": "" }, { "first": "Maribeth", "middle": [], "last": "Rauh", "suffix": "" }, { "first": "Conor", "middle": [], "last": "Griffin", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Uesato", "suffix": "" }, { "first": "Po-Sen", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Myra", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Mia", "middle": [], "last": "Glaese", "suffix": "" }, { "first": "Borja", "middle": [], "last": "Balle", "suffix": "" }, { "first": "Atoosa", "middle": [], "last": "Kasirzadeh", "suffix": "" }, { "first": "Zac", "middle": [], "last": "Kenton", "suffix": "" }, { "first": "Sasha", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Will", "middle": [], "last": "Hawkins", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2112.04359[cs].ArXiv:2112.04359" ] }, "num": null, "urls": [], "raw_text": "Laura Weidinger, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, Mia Glaese, Borja Balle, Atoosa Kasirzadeh, Zac Kenton, Sasha Brown, Will Hawkins, Tom Stepleton, Courtney Biles, Abeba Birhane, Julia Haas, Laura Rimell, Lisa Anne Hendricks, William Isaac, Sean Legassick, Geoffrey Irving, and Iason Gabriel. 2021a. Ethical and social risks of harm from Language Models. arXiv:2112.04359 [cs]. ArXiv: 2112.04359.", "links": null }, "BIBREF129": { "ref_id": "b129", "title": "Atoosa Kasirzadeh, et al. 2021b.
Ethical and social risks of harm from language models", "authors": [ { "first": "Laura", "middle": [], "last": "Weidinger", "suffix": "" }, { "first": "John", "middle": [], "last": "Mellor", "suffix": "" }, { "first": "Maribeth", "middle": [], "last": "Rauh", "suffix": "" }, { "first": "Conor", "middle": [], "last": "Griffin", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Uesato", "suffix": "" }, { "first": "Po-Sen", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Myra", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Mia", "middle": [], "last": "Glaese", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2112.04359" ] }, "num": null, "urls": [], "raw_text": "Laura Weidinger, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, Mia Glaese, Borja Balle, Atoosa Kasirzadeh, et al. 2021b. Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359.", "links": null }, "BIBREF130": { "ref_id": "b130", "title": "Discriminating Systems: Gender, Race, and Power in AI", "authors": [ { "first": "Sarah", "middle": [], "last": "West", "suffix": "" }, { "first": "Meredith", "middle": [], "last": "Whittaker", "suffix": "" }, { "first": "Kate", "middle": [], "last": "Crawford", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sarah West, Meredith Whittaker, and Kate Crawford. 2019. Discriminating Systems: Gender, Race, and Power in AI. Technical report, AI Now Institute, New York.", "links": null }, "BIBREF131": { "ref_id": "b131", "title": "Do artifacts have politics? Daedalus", "authors": [ { "first": "Langdon", "middle": [], "last": "Winner", "suffix": "" } ], "year": 1980, "venue": "", "volume": "", "issue": "", "pages": "121--136", "other_ids": {}, "num": null, "urls": [], "raw_text": "Langdon Winner. 1980. 
Do artifacts have politics? Daedalus, pages 121-136.", "links": null }, "BIBREF132": { "ref_id": "b132", "title": "Gender bias in coreference resolution: Evaluation and debiasing methods", "authors": [ { "first": "Jieyu", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Tianlu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Yatskar", "suffix": "" }, { "first": "Vicente", "middle": [], "last": "Ordonez", "suffix": "" }, { "first": "Kai-Wei", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "15--20", "other_ids": { "DOI": [ "10.18653/v1/N18-2003" ] }, "num": null, "urls": [], "raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15-20, New Orleans, Louisiana.
Association for Computational Linguistics.", "links": null }, "BIBREF133": { "ref_id": "b133", "title": "Examining Gender Bias in Languages with Grammatical Gender", "authors": [ { "first": "Pei", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Weijia", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Jieyu", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Kuan-Hao", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Muhao", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Cotterell", "suffix": "" }, { "first": "Kai-Wei", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "5275--5283", "other_ids": { "DOI": [ "10.18653/v1/D19-1531" ] }, "num": null, "urls": [], "raw_text": "Pei Zhou, Weijia Shi, Jieyu Zhao, Kuan-Hao Huang, Muhao Chen, Ryan Cotterell, and Kai-Wei Chang. 2019. Examining Gender Bias in Languages with Grammatical Gender. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5275-5283, Hong Kong, China.
Association for Computational Linguistics.", "links": null }, "BIBREF134": { "ref_id": "b134", "title": "Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology", "authors": [ { "first": "Ran", "middle": [], "last": "Zmigrod", "suffix": "" }, { "first": "Sabrina", "middle": [], "last": "Mielke", "suffix": "" }, { "first": "Hanna", "middle": [], "last": "Wallach", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Cotterell", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1651--1661", "other_ids": { "DOI": [ "10.18653/v1/P19-1161" ] }, "num": null, "urls": [], "raw_text": "Ran Zmigrod, Sabrina Mielke, Hanna Wallach, and Ryan Cotterell. 2019. Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1651-1661, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF135": { "ref_id": "b135", "title": "to determine and outline author contributions", "authors": [ { "first": "Allen", "middle": [], "last": "", "suffix": "" } ], "year": 2019, "venue": "Stella Biderman: Writing - Original Draft (Section 4), Writing - Review & Editing. Miruna Clinciu: Conceptualization, Writing - Original Draft", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "We follow the recommendations and taxonomy provided by Allen et al. (2019) to determine and outline author contributions. Stella Biderman: Writing - Original Draft (Section 4), Writing - Review & Editing.
Miruna Clinciu: Conceptualization, Writing -Original Draft (Section 3), Writing -Review & Editing (Section 3.5).", "links": null }, "BIBREF136": { "ref_id": "b136", "title": "Writing -Original draft preparation (Section 5), Writing -Review and Editing. Shayne Longpre: Writing -Original draft preparation (Section 1-2)", "authors": [ { "first": "Manan", "middle": [], "last": "Dey", "suffix": "" } ], "year": null, "venue": "Writing -Review & Editing (Section", "volume": "3", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manan Dey: Writing -Original draft preparation (Section 5), Writing -Review and Editing. Shayne Longpre: Writing -Original draft preparation (Section 1-2), Writing -Review & Editing (Section 3).", "links": null }, "BIBREF137": { "ref_id": "b137", "title": "Writing -Review & Editing. Maraim Masoud: Conceptualization, Writing -Original draft preparation (Section 4), Writing -Review & Editing", "authors": [ { "first": "Alexandra", "middle": [], "last": "Sasha", "suffix": "" }, { "first": "Luccioni", "middle": [], "last": "", "suffix": "" } ], "year": null, "venue": "Writing -Original Draft (Section 4)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexandra Sasha Luccioni: Writing -Original Draft (Section 4), Writing -Review & Editing. 
Maraim Masoud: Conceptualization, Writing -Original draft preparation (Section 4), Writing -Review & Editing (Section 4).", "links": null }, "BIBREF138": { "ref_id": "b138", "title": "Writing -Original draft & Review & Editing", "authors": [ { "first": "Margaret", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Margaret Mitchell: Writing -Original draft & Review & Editing.", "links": null }, "BIBREF139": { "ref_id": "b139", "title": "Aur\u00e9lie N\u00e9v\u00e9ol: Supervision, Writing -Original draft preparation (Abstract, Section 3), Writing -Review & Editing", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aur\u00e9lie N\u00e9v\u00e9ol: Supervision, Writing -Original draft preparation (Abstract, Section 3), Writing -Review & Editing.", "links": null }, "BIBREF140": { "ref_id": "b140", "title": "Writing -Original draft & Review & Editing. Shanya Sharma: Writing -Original draft preparation", "authors": [ { "first": "Dragomir", "middle": [], "last": "Radev", "suffix": "" } ], "year": null, "venue": "Writing -Review and Editing (Sections 3 & 5)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dragomir Radev: Writing -Original draft & Review & Editing. 
Shanya Sharma: Writing -Original draft preparation (Section 5), Writing -Review and Editing (Sections 3 & 5).", "links": null }, "BIBREF141": { "ref_id": "b141", "title": "Writing -Original draft preparation (Sections 2, 3, & 5), Writing -Review & Editing", "authors": [ { "first": "Arjun", "middle": [], "last": "Subramonian", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arjun Subramonian: Writing -Original draft preparation (Sections 2, 3, & 5), Writing -Review & Editing.", "links": null }, "BIBREF142": { "ref_id": "b142", "title": "Writing -Review & Editing. Deepak Tunuguntla: Conceptualization, Writing -Original draft preparation (Section 1-2), Writing -Review & Editing. Oskar van der Wal: Conceptualization, Writing -Original Draft", "authors": [ { "first": "Jaesung", "middle": [], "last": "Tae", "suffix": "" } ], "year": null, "venue": "Writing -Original draft preparation (Section 1)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jaesung Tae: Writing -Original draft preparation (Section 1), Writing -Review & Editing. Zeerak Talat: Supervision, Conceptualization, Writing -Original draft preparation (Abstract, Section 1-2,4,6), Writing -Review & Editing. Samson Tan: Supervision, Conceptualization, Writing -Original draft preparation (Sections 3.2 & 4.2), Writing -Review & Editing. Deepak Tunuguntla: Conceptualization, Writing -Original draft preparation (Section 1-2), Writing -Review & Editing. Oskar van der Wal: Conceptualization, Writing -Original Draft (Section 3), Writing -Review & Editing (Section 3).", "links": null } }, "ref_entries": {} } }