{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:12:07.379889Z"
},
"title": "An Overview of Fairness in Data -Illuminating the Bias in Data Pipeline",
"authors": [
{
"first": "Senthil",
"middle": [],
"last": "Kumar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tamil Nadu",
"location": {}
},
"email": "senthil@ssn.edu.in"
},
{
"first": "Chandrabose",
"middle": [],
"last": "Aravindan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tamil Nadu",
"location": {}
},
"email": ""
},
{
"first": "Bharathi",
"middle": [
"Raja"
],
"last": "Chakravarthi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National University of Ireland",
"location": {
"settlement": "Galway"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Data in general encodes human biases by default; being aware of this is a good start, and the research around how to handle it is ongoing. The term 'bias' is extensively used in various contexts in NLP systems. In our research the focus is specific to biases such as gender, racism, religion, demographic and other intersectional views on biases that prevail in text processing systems responsible for systematically discriminating specific population, which is not ethical in NLP. These biases exacerbate the lack of equality, diversity and inclusion of specific population while utilizing the NLP applications. The tools and technology at the intermediate level utilize biased data, and transfer or amplify this bias to the downstream applications. However, it is not enough to be colourblind, gender-neutral alone when designing a unbiased technologyinstead, we should take a conscious effort by designing a unified framework to measure and benchmark the bias. In this paper, we recommend six measures and one augment measure based on the observations of the bias in data, annotations, text representations and debiasing techniques.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Data in general encodes human biases by default; being aware of this is a good start, and the research around how to handle it is ongoing. The term 'bias' is extensively used in various contexts in NLP systems. In our research the focus is specific to biases such as gender, racism, religion, demographic and other intersectional views on biases that prevail in text processing systems responsible for systematically discriminating specific population, which is not ethical in NLP. These biases exacerbate the lack of equality, diversity and inclusion of specific population while utilizing the NLP applications. The tools and technology at the intermediate level utilize biased data, and transfer or amplify this bias to the downstream applications. However, it is not enough to be colourblind, gender-neutral alone when designing a unbiased technologyinstead, we should take a conscious effort by designing a unified framework to measure and benchmark the bias. In this paper, we recommend six measures and one augment measure based on the observations of the bias in data, annotations, text representations and debiasing techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Irrespective of the data sources, the majority of the bias prevails in data itself (Lam et al., 2011; Fine et al., 2014; Jones et al., 2020) . The inherent bias in data affects the core NLP tasks such as Part-Of-Speech (POS) tagging, POS Chunking (Manzini et al., 2019) , and dependency parsing (Garimella et al., 2019) . Other than data bias, the techniques used to represent the data also pose a threat to NLP systems (Bolukbasi et al., 2016; Caliskan et al., 2017; Ethayarajh et al., 2019; Zhou et al., 2019) . Eventually the bias magnifies itself due to these biased data representation in downstream applications such as Named Entity Recognition (NER) (Manzini et al., 2019) , coreference resolution (Zhao et al., 2017; Rudinger et al., 2018; , sentiment analysis (Kiritchenko and Mohammad, 2018) , machine translation (Stanovsky et al., 2019; Escud\u00e9 Font et al., 2019) , social data analysis (Waseem and Hovy, 2016; Davidson et al., 2017; Sap et al., 2019; Hasanuzzaman et al., 2017) , and bio-medical language processing (Rios et al., 2020) . However,a very little attention is given to data collection and processing. Wikipedia seems like a diverse data source but fewer than 18% of the site's biographical entries represents women (Wagner et al., 2015) . Olteanu et al. (2019) observed that the dataset extracted from social media data has its own bias in various aspects such as age, gender, racism, location, job, and religion. Existing research classified biases at various levels, including the bias in data -source itself, the bias in the data analysis pipeline, and the biased data in building the systems. For example, using a biased abusive language detection system may result in discrimination against a group of minority peoples such as African-American (Waseem and Hovy, 2016) .",
"cite_spans": [
{
"start": 83,
"end": 101,
"text": "(Lam et al., 2011;",
"ref_id": "BIBREF25"
},
{
"start": 102,
"end": 120,
"text": "Fine et al., 2014;",
"ref_id": "BIBREF12"
},
{
"start": 121,
"end": 140,
"text": "Jones et al., 2020)",
"ref_id": "BIBREF21"
},
{
"start": 247,
"end": 269,
"text": "(Manzini et al., 2019)",
"ref_id": "BIBREF27"
},
{
"start": 295,
"end": 319,
"text": "(Garimella et al., 2019)",
"ref_id": "BIBREF15"
},
{
"start": 420,
"end": 444,
"text": "(Bolukbasi et al., 2016;",
"ref_id": "BIBREF2"
},
{
"start": 445,
"end": 467,
"text": "Caliskan et al., 2017;",
"ref_id": "BIBREF4"
},
{
"start": 468,
"end": 492,
"text": "Ethayarajh et al., 2019;",
"ref_id": "BIBREF11"
},
{
"start": 493,
"end": 511,
"text": "Zhou et al., 2019)",
"ref_id": "BIBREF50"
},
{
"start": 657,
"end": 679,
"text": "(Manzini et al., 2019)",
"ref_id": "BIBREF27"
},
{
"start": 705,
"end": 724,
"text": "(Zhao et al., 2017;",
"ref_id": "BIBREF48"
},
{
"start": 725,
"end": 747,
"text": "Rudinger et al., 2018;",
"ref_id": "BIBREF33"
},
{
"start": 769,
"end": 801,
"text": "(Kiritchenko and Mohammad, 2018)",
"ref_id": "BIBREF24"
},
{
"start": 824,
"end": 848,
"text": "(Stanovsky et al., 2019;",
"ref_id": "BIBREF39"
},
{
"start": 849,
"end": 874,
"text": "Escud\u00e9 Font et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 898,
"end": 921,
"text": "(Waseem and Hovy, 2016;",
"ref_id": "BIBREF45"
},
{
"start": 922,
"end": 944,
"text": "Davidson et al., 2017;",
"ref_id": "BIBREF8"
},
{
"start": 945,
"end": 962,
"text": "Sap et al., 2019;",
"ref_id": "BIBREF34"
},
{
"start": 963,
"end": 989,
"text": "Hasanuzzaman et al., 2017)",
"ref_id": "BIBREF19"
},
{
"start": 1028,
"end": 1047,
"text": "(Rios et al., 2020)",
"ref_id": "BIBREF32"
},
{
"start": 1240,
"end": 1261,
"text": "(Wagner et al., 2015)",
"ref_id": "BIBREF43"
},
{
"start": 1264,
"end": 1285,
"text": "Olteanu et al. (2019)",
"ref_id": "BIBREF30"
},
{
"start": 1774,
"end": 1797,
"text": "(Waseem and Hovy, 2016)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The survey by Blodgett and O'Connor (2017) on bias in NLP systems found that works related to bias failed to explain why the system behaviours are 'biased', harmful, what kind of behaviours lead to bias, in what ways, to whom and why. The paper also discusses the need to conceptualise bias better by linking the languages and social hierarchies in the society. Another survey by Sun et al. (2019) analyses the bias framework and issues with the current debiasing methods. Their study reveals that the gender bias in NLP is still at a nascent level and lacks unified metrics and benchmark evaluation methods. This paper audits or surveys the present situation in analysing the bias at various levels of data.",
"cite_spans": [
{
"start": 380,
"end": 397,
"text": "Sun et al. (2019)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper has been divided into four section. In section 2, we analyse the bias in language resources or corpora, in word representations, in pre-trained language models. In section 3, based on the observation of bias in corpora level, social data, text representations, metric to measure and mitigate bias, we infer six recommendations and one measure augmenting the existing one. In section 4, we conclude with the need of practicing standard procedures across the data pipeline in NLP systems to support ethical and fairness usage of data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Language resources are present in two forms: 1) Online digital data 2) Annotated corpus. In fact language technologies are data-driven methods based on statistical learning techniques. There needs to be a framework to monitor the flow of data from the source to the NLP system and to measure the bias emanating at each level. The dataset can be representative of a specific genre or domain and a balanced one to avoid selectional bias as noted by S\u00f8gaard et al. (2014) . The bias also emanates from labelling the corpus by annotators. Dickinson and Meurers (2003) observed bias in widely-used Penn Treebank annotated corpora which contain many errors. Apart from the linguistic experts, using non-experts in the annotation process through crowd-sourcing platforms also leads to considerable bias in data (Waseem, 2016) . Figure. 1 shows the effects of bias in data which leads to bias in gender, bias in syntactic and shallow parsing across domains, bias due to the disparity in language resources represented by tree hierarchy.",
"cite_spans": [
{
"start": 447,
"end": 468,
"text": "S\u00f8gaard et al. (2014)",
"ref_id": "BIBREF37"
},
{
"start": 535,
"end": 563,
"text": "Dickinson and Meurers (2003)",
"ref_id": "BIBREF9"
},
{
"start": 804,
"end": 818,
"text": "(Waseem, 2016)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [
{
"start": 821,
"end": 828,
"text": "Figure.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Bias",
"sec_num": "2"
},
{
"text": "As language resources or corpora are crucial for any NLP systems, the following show the gender and domain bias in data itself, occurred due to tagging and parsing, annotation issues in corpus creation. Bias in Diversity and Inclusion of languages:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fallacies in Language resources or corpora",
"sec_num": "2.1"
},
{
"text": "\u2022 The quantitative analysis by reveals the disparity amongst language resources. The taxonomy of languages in the aspect of resource distributions shows that among six classes of languages, the Class-0, which contains the largest section of languages -2,191 languages -are still ignored in the aspect of language technologies, and there is no resource present.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fallacies in Language resources or corpora",
"sec_num": "2.1"
},
{
"text": "This non-availability of language resources aggravates the disparity in language resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fallacies in Language resources or corpora",
"sec_num": "2.1"
},
{
"text": "\u2022 Hence the Diversity and Inclusion (DI) score should be recommended to measure the diversity of NLP system methods to apply for different languages and the system's contributions to the inclusivity of poorly resourced or less represented languages .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fallacies in Language resources or corpora",
"sec_num": "2.1"
},
{
"text": "\u2022 As languages are dynamic, warned the consequence of using ageold limited training data to train a model which could be evaluated on new data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fallacies in Language resources or corpora",
"sec_num": "2.1"
},
{
"text": "Bias across domain:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fallacies in Language resources or corpora",
"sec_num": "2.1"
},
{
"text": "\u2022 Fine et al. (2014) investigated bias across five annotated corpora: Google Ngram, BNC (written, spoken), CELEX (written, spoken), Switchboard (Penn Treebank) using psycholinguistic measures identified domain bias in the corpora. Google appears to be familiar with terminology dealing with adult material and technology related terms. Similarly, BNC is biased towards Briticisms. The Switchboard corpus overestimates how quickly humans will react to colloquialisms and backchannels during telephone conversations.",
"cite_spans": [
{
"start": 2,
"end": 20,
"text": "Fine et al. (2014)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fallacies in Language resources or corpora",
"sec_num": "2.1"
},
{
"text": "Bias in annotating social data:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fallacies in Language resources or corpora",
"sec_num": "2.1"
},
{
"text": "\u2022 Davidson et al. (2017) found a substantial racial bias that exists among all datasets and recommended the way to measure the origin of bias during collection and labelling.",
"cite_spans": [
{
"start": 2,
"end": 24,
"text": "Davidson et al. (2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fallacies in Language resources or corpora",
"sec_num": "2.1"
},
{
"text": "\u2022 The bias in social data cripples during the labelling process as noted by Waseem (2016) . The author criticized the decision by annotators by labelling the words from African-American English (AAE) as offensive since it is being used widely among its users. The models built on expert annotations perform comparatively better than the amateur annotators as they are more likely to label an item as hate speech. Indeed annotating task for social data is complex if the task is to categorize the abusive behaviour, as there is no standard procedure, and what qualifies as abusive is still not clear (Founta et al., 2018 ).",
"cite_spans": [
{
"start": 76,
"end": 89,
"text": "Waseem (2016)",
"ref_id": "BIBREF44"
},
{
"start": 599,
"end": 619,
"text": "(Founta et al., 2018",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fallacies in Language resources or corpora",
"sec_num": "2.1"
},
{
"text": "\u2022 The dialect-based bias in the corpora was observed by Sap et al. (2019) . Amazon Mechanical Turk annotators were asked to find whether Twitter user's dialect and race are offensive to themselves or others. The result showed that dialect and race are less significant to label AAE tweet as offensive to someone. Thus the racial bias emanates from skew annotation process can be removed by not having skewed demographics of annotators.",
"cite_spans": [
{
"start": 56,
"end": 73,
"text": "Sap et al. (2019)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fallacies in Language resources or corpora",
"sec_num": "2.1"
},
{
"text": "Gender bias in data / corpus: \u2022 How gender bias has developed in the Wikipedia was analysed by Schmahl et al. (2020) using four categories: Career, Science, Family and Arts. The stereotypical gender bias in the categories family and science is decreasing, and art-related words are becoming more biased towards females. Their findings also reveal that the selection of corpus for word embedding depends on the task. For example, to have a gender-neutral word embedding related to Science, one may best use the corpus of 2018.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fallacies in Language resources or corpora",
"sec_num": "2.1"
},
{
"text": "\u2022 Analysis on the word embedding representation of the Google Ngram corpus shows that stereotypical gender associations in languages have decreased over time but still exists (Jones et al., 2020) .",
"cite_spans": [
{
"start": 175,
"end": 195,
"text": "(Jones et al., 2020)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fallacies in Language resources or corpora",
"sec_num": "2.1"
},
{
"text": "Bias in syntactic and shallow parsing:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fallacies in Language resources or corpora",
"sec_num": "2.1"
},
{
"text": "\u2022 Apart from the labelling bias in Penn Treebank, Garimella et al. 2019observed a gender bias in syntactic labelling and parsing for the Wall Street Journal (WSJ) articles from Penn Treebank with gender information. The POS taggers, syntactic parsers trained on any data performed well when tested with femaleauthored articles. Whereas the male writings performed better only on sufficient maleauthored articles as training data. This highlights the grammatical differences between gender and the need for better tools for syntactic annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fallacies in Language resources or corpora",
"sec_num": "2.1"
},
{
"text": "\u2022 Off-the-shelf POS taggers show differences in performance for languages of the people in different age groups, and the linguistic differences are not solely on lexical but also grammatical .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fallacies in Language resources or corpora",
"sec_num": "2.1"
},
{
"text": "Bias is pervasive in word embeddings and neural models in general. Apart from analysing the language resources and corpora, the bias in the language can also be studied by using the word embeddings -distributed representation of words in vector form. Biases in word embeddings may, in turn, have unwanted consequences in downstream applications. Bias in word representations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fallacies in Word Representations",
"sec_num": "2.2"
},
{
"text": "\u2022 A pioneer work in the detection of bias on word embeddings by Bolukbasi et al. (2016) proposed two debiasing techniques: hard and soft-debias. They showed that gender bias could be captured by identifying the direction in embedding subspace and could be neutralised. The hard-debiasing effectively reduce gender stereotypes when compared with the soft-debiasing technique.",
"cite_spans": [
{
"start": 64,
"end": 87,
"text": "Bolukbasi et al. (2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fallacies in Word Representations",
"sec_num": "2.2"
},
{
"text": "\u2022 Caliskan et al. (2017) measured the bias in the Word2Vec embeddings on Google News corpus and pre-trained GloVe using WEAT, WEFAT score. The authors noted that the gender association strength of occupation words is highly correlated.",
"cite_spans": [
{
"start": 2,
"end": 24,
"text": "Caliskan et al. (2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fallacies in Word Representations",
"sec_num": "2.2"
},
{
"text": "\u2022 Using the notion of bias subspace, the WEAT shows systemic error in overestimating the bias (Ethayarajh et al., 2019) . Even though Bolukbasi et al. (2016) shown that removing bias by subspace projection method, there is no theoretical guarantee that the debiased vectors are entirely unbiased or the method works for embedding models other than the skipgram with negative sampling (SGNS). Proposed the measure relational inner product association (RIPA) that analyses how much gender association in embedding space is due to embedding model and training corpus. For further notion on the impact of bias, subspace and debiasing refer Ethayarajh et al. (2019) .",
"cite_spans": [
{
"start": 94,
"end": 119,
"text": "(Ethayarajh et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 134,
"end": 157,
"text": "Bolukbasi et al. (2016)",
"ref_id": "BIBREF2"
},
{
"start": 636,
"end": 660,
"text": "Ethayarajh et al. (2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fallacies in Word Representations",
"sec_num": "2.2"
},
{
"text": "Representation bias in social data:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fallacies in Word Representations",
"sec_num": "2.2"
},
{
"text": "\u2022 The Twitter data was analysed using demographic embeddings (location, age, location), which shows that the gender variable has a small impact when compared with the location variable in classifying the racism tweets (Hasanuzzaman et al., 2017) .",
"cite_spans": [
{
"start": 218,
"end": 245,
"text": "(Hasanuzzaman et al., 2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fallacies in Word Representations",
"sec_num": "2.2"
},
{
"text": "\u2022 The Word2Vec trained on L2-Reddit shows bias in the dataset using multi-class for a race, gender and religion (Manzini et al., 2019) . Embeddings are debiased using hard, and softdebias and its downstream effect shows decreased performance by POS tagging and an increase in NER and Chunking.",
"cite_spans": [
{
"start": 112,
"end": 134,
"text": "(Manzini et al., 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fallacies in Word Representations",
"sec_num": "2.2"
},
{
"text": "\u2022 To detect an unintended bias in word embeddings Sweeney and Najafian (2019) proposed a Relative Negative Sentiment Bias (RNSB) framework. WEAT score shows that the Word2vec and GloVe word embeddings are biased with respect to national origin, whereas RNSB measure the discrimination with respect to more than two demographics within a protected group. The study also reveals that the unintended bias is more in GloVE embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representation bias in applications:",
"sec_num": null
},
{
"text": "Representation bias in language resources:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representation bias in applications:",
"sec_num": null
},
{
"text": "\u2022 Brunet et al. (2019) used PPMI, Word2Vec and GloVe embeddings to train the two corpora Wikipedia and NYT, used WEAT metric to quantify the amount of bias contributed by the data in training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representation bias in applications:",
"sec_num": null
},
{
"text": "\u2022 Using Word2vec trained on Google News, GloVe on NYT and COHA embeddings for temporal analysis Garg et al. (2018) found the occupational and demographic shifts over time. The embeddings are leveraged to quantify the attitude towards women and ethnic minorities.",
"cite_spans": [
{
"start": 96,
"end": 114,
"text": "Garg et al. (2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Representation bias in applications:",
"sec_num": null
},
{
"text": "\u2022 reported that bias gets amplified while training the biased dataset. For example, some of the training set verbs have small bias and are heavily biased towards females after training. Kitchen and technologyoriented categories in MS-COCO are aligned towards females and males, respectively that have been amplified during training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representation bias in applications:",
"sec_num": null
},
{
"text": "Instead of using the word embeddings directly from the models for classification, the embeddings from pre-trained language models (LM) such as GloVe, ELMo, BERT, GPT can be used to train the taskspecific datasets using transfer learning. Pre-trained LM in MT:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fallacies in Pre-trained Language Models",
"sec_num": "2.3"
},
{
"text": "\u2022 Escud\u00e9 Font et al. (2019) found gender bias in the translation of English-Spanish in the news domain. The baseline system is trained with GloVe and use hard-debias along with GN-GloVE embeddings for debiasing. The BLEU score increased for the pre-trained embedding and improved slightly for the debiased model using transfer learning which means the translation is preserved while enhancing the gender debias. Neutralization are used to fine-tune the embeddings, which reduces bias.",
"cite_spans": [
{
"start": 9,
"end": 27,
"text": "Font et al. (2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fallacies in Pre-trained Language Models",
"sec_num": "2.3"
},
{
"text": "Pre-trained LM in Social data analysis:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fallacies in Pre-trained Language Models",
"sec_num": "2.3"
},
{
"text": "\u2022 Park and Fung (2017) trained the Twitter dataset on CNN, GRU and Bi-directional GRU with attention models using FastText, Word2vec pre-trained embeddings. The gender bias on models trained with sexist and abusive tweets was mitigated by debiased word embeddings, data augmentation and finetuning with a large corpus. Result concluded that debiased word embeddings alone do not effectively correct the bias, while fine-tuning bias with a less biased source dataset greatly improves the performance with drop-in bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fallacies in Pre-trained Language Models",
"sec_num": "2.3"
},
{
"text": "The biases that are much extensively experimented can be categorized as: fine-grained such as gender, racism, religion, and location; coarse-grained such as demographic and gender+occupation. Most experiments or analysis are performed on observing gender bias and its stereotypes from data and through word representations. The other types of bias, such as racism, sexism, and religion, are extensively studied in social data. The other biases from intersectional bias attributes such as demographic (age, gender, location), gender+racism, and gender+location+job are not extensively studied. Much attention is needed to study the other types and bias based on the intersectional view.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Observations and Inferences",
"sec_num": "3"
},
{
"text": "A standard process has to be defined on how data are collected, processed and organized. Gebru et al. (2020) recommended that every dataset should have datasheets that are metadata that documents its motivation, composition, collection process, recommended uses, and so on. Every training dataset should be accompanied by information on how the data were collected and annotated. If data contain information about people, then summary statistics on geography, gender, ethnicity and other demographic information should be provided. Specifically, steps should be taken to ensure that such datasets are diverse and do not represent any particular groups. The model performance may hinder if the dataset reflects unwanted biases or its deployment domain does not match its training or evaluation domain. Inter-Annotator agreement (Waseem, 2016) , (Sap et al., 2019) annotations for Twitter data collection Misleading label in financial (Chen et al., 2020) tweets by annotators (Gebru et al., 2020) . Bender and Friedman (2018) proposed data statements to be included in all NLP systems' publication and documentation. These data statements will address exclusion and bias in language technologies that do not misrepresent the users from others.",
"cite_spans": [
{
"start": 89,
"end": 108,
"text": "Gebru et al. (2020)",
"ref_id": null
},
{
"start": 827,
"end": 841,
"text": "(Waseem, 2016)",
"ref_id": "BIBREF44"
},
{
"start": 844,
"end": 874,
"text": "(Sap et al., 2019) annotations",
"ref_id": null
},
{
"start": 933,
"end": 952,
"text": "(Chen et al., 2020)",
"ref_id": "BIBREF5"
},
{
"start": 974,
"end": 994,
"text": "(Gebru et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bias in data / corpora",
"sec_num": "3.1"
},
{
"text": "2. D&I metric for corpora: The metric to measure the language diversity and inclusivity is necessary for each corpus that helps to measure the language support by the corpora and its power to include poor-resource languages .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bias in data / corpora",
"sec_num": "3.1"
},
{
"text": "The schema-level constructs such as Wino-Bias, Winogender for coreference systems, WinoMT for MT and EEC for sentiment analysis are used to measure and mitigate the bias for task / application-specific purposes. These schemas provide semantic cues in sentences apart from syntactic cues to detect the bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data design -Schema templates:",
"sec_num": "3."
},
{
"text": "4. Metric for subset selection -The subset selection is a fundamental task that does not consider the metric to select the set of instances from a more extensive set. Mitchell et al. (2019) formalized how these subsets can be measured for diversity and inclusion for image datasets. There should be a similar kind of metrics that should be formulated for text datasets too.",
"cite_spans": [
{
"start": 167,
"end": 189,
"text": "Mitchell et al. (2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data design -Schema templates:",
"sec_num": "3."
},
{
"text": "Intersectional view -Bias also emanates due to the cross-section of the gender and racism. Solomon et al. (2018) demands the study of bias through the intersection of race and gender, which would alleviate the problem experienced by Black Women. This is done by analysing multiple identities through an intersectional lens. Research shows that Black or women are not inclusive of Black women's unique perspectives.",
"cite_spans": [
{
"start": 91,
"end": 112,
"text": "Solomon et al. (2018)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bias through an",
"sec_num": "5."
},
{
"text": "Our recommendations based on the above:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bias through an",
"sec_num": "5."
},
{
"text": "\u2022 Recommendation-1: Research on various semantic-level constructs specific to applications is needed. Like data design practiced in database systems to capture the semantics of data using schema representations, schemalevel constructs are needed for task-specific applications to mitigate bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bias through an",
"sec_num": "5."
},
{
"text": "\u2022 Recommendation-2: The need for Taxonomy of bias for LT w.r.t EDI -which focus other than binary gender such as transgender,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bias through an",
"sec_num": "5."
},
{
"text": "LGBTQ, and bias through an intersectional view.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bias through an",
"sec_num": "5."
},
{
"text": "\u2022 Recommendation-3: Biases defined for linguistic resources can not be used or may unmatch for the data from other domains such as bio-medical, science and law. Need of domain-wise bias study as it may differ in its own perspectives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bias through an",
"sec_num": "5."
},
{
"text": "If the data labelling is done through crowdsourcing, then necessary information about the crowd participants should be included, alongside the exact request or instructions that they were given. To help identify sources of bias, annotators should systematically label the content of training data sets with standardised metadata. Table 2 shows the analysis of bias in corpus labelling and social data annotations. Prior work cautions against the naive use of social data in NLP tasks (J\u00f8rgensen et al., 2015). Olteanu et al. (2019) categorised four biases that occur at the source level and how they manifest themselves. Applications using social data must be validated and audited to effectively identify biases introduced at various levels of the data pipeline. Other harmful blind spots along a data analysis pipeline further require better auditing and evaluation frameworks.",
"cite_spans": [
{
"start": 484,
"end": 508,
"text": "(J\u00f8rgensen et al., 2015)",
"ref_id": "BIBREF22"
},
{
"start": 510,
"end": 531,
"text": "Olteanu et al. (2019)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 330,
"end": 337,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Bias in corpus labelling or annotations",
"sec_num": "3.2"
},
{
"text": "Our recommendation based on the above:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bias in corpus labelling or annotations",
"sec_num": "3.2"
},
{
"text": "\u2022 Recommendation-4: The NLP research community needs to study bias in social data: social data contains diverse dialects, which is not the norm in language resources. Social data poses methodological limitations and pitfalls (J\u00f8rgensen et al., 2015) as well as ethical boundaries and unexpected consequences (Olteanu et al., 2019). Hence the propagation and amplification of bias in the data pipeline differs for social data, and this needs to be addressed.",
"cite_spans": [
{
"start": 225,
"end": 249,
"text": "(J\u00f8rgensen et al., 2015)",
"ref_id": "BIBREF22"
},
{
"start": 308,
"end": 330,
"text": "(Olteanu et al., 2019)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bias in corpus labelling or annotations",
"sec_num": "3.2"
},
{
"text": "Word embeddings contain and amplify biases present in the data, such as stereotypes. Embeddings trained with different models produce different results. Bias in Word2Vec embeddings was noted by (Bolukbasi et al., 2016), who demonstrated stereotypical associations between word pairs using word analogies. Recently, embeddings designed to neutralize gender (GN-GloVe) and embeddings built on demographic attributes (GeoSGLM) have been used to detect and mitigate bias in word representations. Table 3 shows the analysis of bias in text representations for monolingual and bilingual embeddings and for embeddings that neutralize bias.",
"cite_spans": [
{
"start": 194,
"end": 218,
"text": "(Bolukbasi et al., 2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 492,
"end": 499,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Bias in text representations",
"sec_num": "3.3"
},
{
"text": "1. Limitations in Transformer models - Limitations of these models have been noted in computational abilities (Hahn, 2020), in multilingual embeddings produced by BERT (Singh et al., 2019) and mBERT (Wu and Dredze, 2020), and in bilingual embeddings by MUSE (Zhou et al., 2019).",
"cite_spans": [
{
"start": 110,
"end": 122,
"text": "(Hahn, 2020)",
"ref_id": "BIBREF18"
},
{
"start": 168,
"end": 188,
"text": "(Singh et al., 2019)",
"ref_id": "BIBREF36"
},
{
"start": 199,
"end": 220,
"text": "(Wu and Dredze, 2020)",
"ref_id": "BIBREF46"
},
{
"start": 258,
"end": 277,
"text": "(Zhou et al., 2019)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bias in text representations",
"sec_num": "3.3"
},
{
"text": "2. Fairness in pre-trained LM - The model cards proposed by Mitchell et al. (2019) are recommended for pre-trained LMs; they can substantiate the context in which a model can be used and provide benchmarked evaluations of various bias types. Currently, the GPT-2 model card 2 does not mention the types of bias in the dataset used for training or the model's fitness for specific applications.",
"cite_spans": [
{
"start": 60,
"end": 82,
"text": "Mitchell et al. (2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bias in text representations",
"sec_num": "3.3"
},
{
"text": "3. Is pre-trained LM bias-free? - Dale (2021) found that the output of pre-trained LMs such as GPT-3 embodies all the biases that might be found in their training data.",
"cite_spans": [
{
"start": 34,
"end": 45,
"text": "Dale (2021)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bias in text representations",
"sec_num": "3.3"
},
{
"text": "Metrics proposed to evaluate bias include WEAT for the SGNS model (Caliskan et al., 2017), relative norm distance (Garg et al., 2018), WEAT1 and WEAT2 for bias between embeddings (Brunet et al., 2019), SEAT for bias in sentence embeddings (May et al., 2019), RIPA for any embedding model (Ethayarajh et al., 2019), and MWEAT for bias in MT (Zhou et al.).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bias in text representations",
"sec_num": "3.3"
},
{
"text": "4. Is transfer learning ecological? - Training large language models is financially and environmentally costly, and can emit more carbon dioxide than the manufacturing and lifetime use of a car (Strubell et al., 2019).",
"cite_spans": [
{
"start": 194,
"end": 217,
"text": "(Strubell et al., 2019)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bias in text representations",
"sec_num": "3.3"
},
{
"text": "Our recommendation and augmentation based on the above:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bias in text representations",
"sec_num": "3.3"
},
{
"text": "\u2022 Recommendation-5: A semantic-aware neural architecture to generate debiased embeddings for monolingual, cross-lingual and multi-lingual applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bias in text representations",
"sec_num": "3.3"
},
{
"text": "\u2022 Augmentation-1: We propose adopting and extending the Datasheets of Gebru et al. (2020) for language resources and annotated corpora, and the Model Cards of Mitchell et al. (2019) for the algorithms used in pre-trained LMs and for debiasing techniques.",
"cite_spans": [
{
"start": 70,
"end": 89,
"text": "Gebru et al. (2020)",
"ref_id": null
},
{
"start": 159,
"end": 181,
"text": "Mitchell et al. (2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bias in text representations",
"sec_num": "3.3"
},
{
"text": "Since plurality in languages inherently carries bias, it is essential to analyze the bias of the normative procedures themselves. Because of the complexity and variety of bias types, detecting and quantifying bias is not always possible using formal mathematical techniques. Table 4 shows the metrics or methods used to measure and mitigate bias.",
"cite_spans": [],
"ref_spans": [
{
"start": 275,
"end": 282,
"text": "Table 4",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Measuring and mitigating bias",
"sec_num": "3.4"
},
{
"text": "1. Fairness in bias metric - Gonen and Goldberg (2019) observed that the bias emanating from word stereotypes and learned from the corpus is ingrained much more deeply in the word embeddings. Ethayarajh et al. (2019) found that the commonly used WEAT does not measure the bias appropriately for the embeddings other than the SGNS model. May et al. (2019) proposed SEAT to measure bias in sentence embeddings. This implies the need for a specific algorithm or method that can be used across all the embeddings to measure the bias (Gonen and Goldberg, 2019; Davidson et al., 2017).",
"cite_spans": [
{
"start": 192,
"end": 216,
"text": "Ethayarajh et al. (2019)",
"ref_id": "BIBREF11"
},
{
"start": 337,
"end": 354,
"text": "May et al. (2019)",
"ref_id": "BIBREF28"
},
{
"start": 529,
"end": 555,
"text": "(Gonen and Goldberg, 2019;",
"ref_id": "BIBREF17"
},
{
"start": 556,
"end": 578,
"text": "Davidson et al., 2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring and mitigating bias",
"sec_num": "3.4"
},
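{
"text": "For concreteness (a sketch of the metric as defined in the cited work, not a result of this paper), the WEAT statistic (Caliskan et al., 2017) for target word sets X, Y and attribute word sets A, B is s(X, Y, A, B) = sum_{x in X} s(x, A, B) - sum_{y in Y} s(y, A, B), where s(w, A, B) = mean_{a in A} cos(w, a) - mean_{b in B} cos(w, b) and cos is the cosine similarity between word embeddings; a permutation test over partitions of X union Y gives the significance of the measured association.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring and mitigating bias",
"sec_num": "3.4"
},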
{
"text": "2. Application-specific bias - Bias can also be measured specific to an application. For example, RNSB and TGBI (Cho et al., 2019) have been proposed to measure bias in sentiment analysis and MT, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring and mitigating bias",
"sec_num": "3.4"
},
{
"text": "3. Fairness in debiasing - Experiments on GN-GloVe and hard-debias reveal systematic bias in the embeddings that persists independent of the gender direction (Gonen and Goldberg, 2019). Even though many systems use hard-debias or soft-debias as de-facto standards to mitigate bias, they do not effectively remove it from the word representations. This requires a standardized framework that effectively measures and mitigates bias across domains and applications.",
"cite_spans": [
{
"start": 158,
"end": 184,
"text": "(Gonen and Goldberg, 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring and mitigating bias",
"sec_num": "3.4"
},
{
"text": "Our recommendation based on the above:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring and mitigating bias",
"sec_num": "3.4"
},
{
"text": "\u2022 Recommendation-6: A Unified and End-to-End Framework - there is a need for a unified framework to measure and mitigate bias, based on benchmark metrics and methods, at the various levels of the data pipeline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring and mitigating bias",
"sec_num": "3.4"
},
{
"text": "The above analysis at various levels reveals that most NLP systems consider bias only at a nascent level. Computer scientists should strive to develop algorithms that are more robust to biases in the data, and various approaches are being pursued. Such debiasing approaches are promising, but they need to be refined and evaluated in the real world. Everyone involved needs to think about appropriate notions of fairness in data. Should the data be representative of the world as it is, or of a world that many would aspire to? To address these questions and evaluate the broader impact of training data and algorithms, machine-learning researchers must engage with social scientists and experts in other domains. Based on the observations of methods used at various levels, we recommend six measures and augment one measure to support ethical practice and fairness in the usage of data. Practising and adopting normative procedures across the data pipeline in NLP systems would enhance the Equality, Diversity and Inclusion of different subgroups of people and their languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "https://en.wikipedia.org/wiki/Wikipedia:Neutral_point_of_view",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/openai/gpt-2/blob/master/model_card.md",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Data statements for natural language processing: Toward mitigating system bias and enabling better science",
"authors": [
{
"first": "Emily",
"middle": [
"M"
],
"last": "Bender",
"suffix": ""
},
{
"first": "Batya",
"middle": [],
"last": "Friedman",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "587--604",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00041"
]
},
"num": null,
"urls": [],
"raw_text": "Emily M. Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587-604.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Racial disparity in natural language processing: A case study of social media african-american english",
"authors": [
{
"first": "Lin",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Brendan O'",
"middle": [],
"last": "Blodgett",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Connor",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Su Lin Blodgett and Brendan O'Connor. 2017. Racial disparity in natural language processing: A case study of social media african-american english.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Man is to computer programmer as woman is to homemaker? debiasing word embeddings",
"authors": [
{
"first": "Tolga",
"middle": [],
"last": "Bolukbasi",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Zou",
"suffix": ""
},
{
"first": "Venkatesh",
"middle": [],
"last": "Saligrama",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Kalai",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS'16",
"volume": "",
"issue": "",
"pages": "4356--4364",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Pro- ceedings of the 30th International Conference on Neural Information Processing Systems, NIPS'16, page 4356-4364, Red Hook, NY, USA. Curran Associates Inc.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Understanding the origins of bias in word embeddings",
"authors": [
{
"first": "Marc-Etienne",
"middle": [],
"last": "Brunet",
"suffix": ""
},
{
"first": "Colleen",
"middle": [],
"last": "Alkalay-Houlihan",
"suffix": ""
},
{
"first": "Ashton",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zemel",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 36th International Conference on Machine Learning",
"volume": "97",
"issue": "",
"pages": "803--811",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc-Etienne Brunet, Colleen Alkalay-Houlihan, Ash- ton Anderson, and Richard Zemel. 2019. Under- standing the origins of bias in word embeddings. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Ma- chine Learning Research, pages 803-811. PMLR.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Semantics derived automatically from language corpora contain human-like biases",
"authors": [
{
"first": "Aylin",
"middle": [],
"last": "Caliskan",
"suffix": ""
},
{
"first": "Joanna",
"middle": [
"J"
],
"last": "Bryson",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2017,
"venue": "Science",
"volume": "356",
"issue": "6334",
"pages": "183--186",
"other_ids": {
"DOI": [
"10.1126/science.aal4230"
]
},
"num": null,
"urls": [],
"raw_text": "Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Issues and perspectives from 10,000 annotated financial social media data",
"authors": [
{
"first": "Chung-Chi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hen-Hsen",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Hsin-Hsi",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "6106--6110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chung-Chi Chen, Hen-Hsen Huang, and Hsin-Hsi Chen. 2020. Issues and perspectives from 10,000 annotated financial social media data. In Proceed- ings of the 12th Language Resources and Evaluation Conference, pages 6106-6110, Marseille, France. European Language Resources Association.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "On measuring gender bias in translation of gender-neutral pronouns",
"authors": [
{
"first": "Ji",
"middle": [
"Won"
],
"last": "Won Ik Cho",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Seok",
"suffix": ""
},
{
"first": "Nam",
"middle": [
"Soo"
],
"last": "Kim",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the First Workshop on Gender Bias in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "173--181",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3824"
]
},
"num": null,
"urls": [],
"raw_text": "Won Ik Cho, Ji Won Kim, Seok Min Kim, and Nam Soo Kim. 2019. On measuring gender bias in translation of gender-neutral pronouns. In Proceed- ings of the First Workshop on Gender Bias in Natu- ral Language Processing, pages 173-181, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Gpt-3: What's it good for?",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Dale",
"suffix": ""
}
],
"year": 2021,
"venue": "Natural Language Engineering",
"volume": "27",
"issue": "1",
"pages": "113--118",
"other_ids": {
"DOI": [
"10.1017/S1351324920000601"
]
},
"num": null,
"urls": [],
"raw_text": "Robert Dale. 2021. Gpt-3: What's it good for? Natural Language Engineering, 27(1):113-118.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automated hate speech detection and the problem of offensive language",
"authors": [
{
"first": "T",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Warmsley",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Macy",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2017,
"venue": "ICWSM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Davidson, Dana Warmsley, M. Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In ICWSM.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Detecting errors in part-of-speech annotation",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Dickinson",
"suffix": ""
},
{
"first": "W. Detmar",
"middle": [],
"last": "Meurers",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Tenth Conference on European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "107--114",
"other_ids": {
"DOI": [
"10.3115/1067807.1067823"
]
},
"num": null,
"urls": [],
"raw_text": "Markus Dickinson and W. Detmar Meurers. 2003. Detecting errors in part-of-speech annotation. In Proceedings of the Tenth Conference on European Chapter of the Association for Computational Lin- guistics -Volume 1, EACL '03, page 107-114, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Equalizing gender bias in neural machine translation with word embeddings techniques",
"authors": [
{
"first": "Joel",
"middle": [
"Escud\u00e9"
],
"last": "Font",
"suffix": ""
},
{
"first": "Marta",
"middle": [],
"last": "Costa-Jussa",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the First Workshop on Gender Bias in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "147--154",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3821"
]
},
"num": null,
"urls": [],
"raw_text": "Joel Escud\u00e9 Font, Marta Costa-jussa, and R. 2019. Equalizing gender bias in neural machine transla- tion with word embeddings techniques. In Proceed- ings of the First Workshop on Gender Bias in Natu- ral Language Processing, pages 147-154, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Understanding undesirable word embedding associations",
"authors": [
{
"first": "Kawin",
"middle": [],
"last": "Ethayarajh",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Duvenaud",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1696--1705",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1166"
]
},
"num": null,
"urls": [],
"raw_text": "Kawin Ethayarajh, David Duvenaud, and Graeme Hirst. 2019. Understanding undesirable word embedding associations. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 1696-1705, Florence, Italy. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Biases in predicting the human language model",
"authors": [
{
"first": "Alex",
"middle": [
"B"
],
"last": "Fine",
"suffix": ""
},
{
"first": "Austin",
"middle": [
"F"
],
"last": "Frank",
"suffix": ""
},
{
"first": "T",
"middle": [
"Florian"
],
"last": "Jaeger",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "7--12",
"other_ids": {
"DOI": [
"10.3115/v1/P14-2002"
]
},
"num": null,
"urls": [],
"raw_text": "Alex B. Fine, Austin F. Frank, T. Florian Jaeger, and Benjamin Van Durme. 2014. Biases in predicting the human language model. In Proceedings of the 52nd Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), pages 7-12, Baltimore, Maryland. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Large scale crowdsourcing and characterization of twitter abusive behavior",
"authors": [
{
"first": "Antigoni-Maria",
"middle": [],
"last": "Founta",
"suffix": ""
},
{
"first": "Constantinos",
"middle": [],
"last": "Djouvas",
"suffix": ""
},
{
"first": "Despoina",
"middle": [],
"last": "Chatzakou",
"suffix": ""
},
{
"first": "Ilias",
"middle": [],
"last": "Leontiadis",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Blackburn",
"suffix": ""
},
{
"first": "Gianluca",
"middle": [],
"last": "Stringhini",
"suffix": ""
},
{
"first": "Athena",
"middle": [],
"last": "Vakali",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Sirivianos",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Kourtellis",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antigoni-Maria Founta, Constantinos Djouvas, De- spoina Chatzakou, Ilias Leontiadis, Jeremy Black- burn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large scale crowdsourcing and characterization of twitter abusive behavior. CoRR, abs/1802.00393.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of",
"authors": [
{
"first": "Nikhil",
"middle": [],
"last": "Garg",
"suffix": ""
},
{
"first": "Londa",
"middle": [],
"last": "Schiebinger",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Zou",
"suffix": ""
}
],
"year": 2018,
"venue": "Sciences",
"volume": "115",
"issue": "16",
"pages": "3635--3644",
"other_ids": {
"DOI": [
"10.1073/pnas.1720347115"
]
},
"num": null,
"urls": [],
"raw_text": "Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Pro- ceedings of the National Academy of Sciences, 115(16):E3635-E3644.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Women's syntactic resilience and men's grammatical luck: Gender-bias in part-ofspeech tagging and dependency parsing",
"authors": [
{
"first": "Aparna",
"middle": [],
"last": "Garimella",
"suffix": ""
},
{
"first": "Carmen",
"middle": [],
"last": "Banea",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3493--3498",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1339"
]
},
"num": null,
"urls": [],
"raw_text": "Aparna Garimella, Carmen Banea, Dirk Hovy, and Rada Mihalcea. 2019. Women's syntactic resilience and men's grammatical luck: Gender-bias in part-of- speech tagging and dependency parsing. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3493-3498, Florence, Italy. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Datasheets for datasets",
"authors": [
{
"first": "Timnit",
"middle": [],
"last": "Gebru",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Morgenstern",
"suffix": ""
},
{
"first": "Briana",
"middle": [],
"last": "Vecchione",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [
"Wortman"
],
"last": "Vaughan",
"suffix": ""
},
{
"first": "Hanna",
"middle": [],
"last": "Wallach",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timnit Gebru, Jamie Morgenstern, Briana Vec- chione, Jennifer Wortman Vaughan, Hanna Wal- lach, Hal Daum\u00e9 III au2, and Kate Crawford. 2020. Datasheets for datasets.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them",
"authors": [
{
"first": "Hila",
"middle": [],
"last": "Gonen",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "609--614",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1061"
]
},
"num": null,
"urls": [],
"raw_text": "Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 609-614, Minneapolis, Minnesota. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Theoretical limitations of selfattention in neural sequence models",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Hahn",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "156--171",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00306"
]
},
"num": null,
"urls": [],
"raw_text": "Michael Hahn. 2020. Theoretical limitations of self- attention in neural sequence models. Transactions of the Association for Computational Linguistics, 8:156-171.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Demographic word embeddings for racism detection on Twitter",
"authors": [
{
"first": "Mohammed",
"middle": [],
"last": "Hasanuzzaman",
"suffix": ""
},
{
"first": "Ga\u00ebl",
"middle": [],
"last": "Dias",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "926--936",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammed Hasanuzzaman, Ga\u00ebl Dias, and Andy Way. 2017. Demographic word embeddings for racism detection on Twitter. In Proceedings of the Eighth International Joint Conference on Natu- ral Language Processing (Volume 1: Long Papers), pages 926-936, Taipei, Taiwan.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Tagging performance correlates with author age",
"authors": [
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "483--488",
"other_ids": {
"DOI": [
"10.3115/v1/P15-2079"
]
},
"num": null,
"urls": [],
"raw_text": "Dirk Hovy and Anders S\u00f8gaard. 2015. Tagging perfor- mance correlates with author age. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 483-488, Beijing, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Stereotypical gender associations in language have decreased over time",
"authors": [
{
"first": "Jason",
"middle": [
"Jeffrey"
],
"last": "Jones",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Amin",
"suffix": ""
},
{
"first": "Jessica",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Skiena",
"suffix": ""
}
],
"year": 2020,
"venue": "Sociological Science",
"volume": "7",
"issue": "",
"pages": "1--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Jeffrey Jones, M. Amin, Jessica Kim, and S. Skiena. 2020. Stereotypical gender associations in language have decreased over time. Sociological Science, 7:1-35.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Challenges of studying and processing dialects in social media",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "J\u00f8rgensen",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Workshop on Noisy User-generated Text",
"volume": "",
"issue": "",
"pages": "9--18",
"other_ids": {
"DOI": [
"10.18653/v1/W15-4302"
]
},
"num": null,
"urls": [],
"raw_text": "Anna J\u00f8rgensen, Dirk Hovy, and Anders S\u00f8gaard. 2015. Challenges of studying and processing dialects in social media. In Proceedings of the Workshop on Noisy User-generated Text, pages 9-18, Beijing, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "The state and fate of linguistic diversity and inclusion in the NLP world",
"authors": [
{
"first": "Pratik",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Sebastin",
"middle": [],
"last": "Santy",
"suffix": ""
},
{
"first": "Amar",
"middle": [],
"last": "Budhiraja",
"suffix": ""
},
{
"first": "Kalika",
"middle": [],
"last": "Bali",
"suffix": ""
},
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.560"
]
},
"num": null,
"urls": [],
"raw_text": "Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP world. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Examining gender and race bias in two hundred sentiment analysis systems",
"authors": [
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "43--53",
"other_ids": {
"DOI": [
"10.18653/v1/S18-2005"
]
},
"num": null,
"urls": [],
"raw_text": "Svetlana Kiritchenko and Saif Mohammad. 2018. Ex- amining gender and race bias in two hundred sen- timent analysis systems. In Proceedings of the Seventh Joint Conference on Lexical and Compu- tational Semantics, pages 43-53, New Orleans, Louisiana. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Wp:clubhouse? an exploration of wikipedia's gender imbalance",
"authors": [
{
"first": "Shyong",
"middle": [
"(Tony) K."
],
"last": "Lam",
"suffix": ""
},
{
"first": "Anuradha",
"middle": [],
"last": "Uduwage",
"suffix": ""
},
{
"first": "Zhenhua",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Shilad",
"middle": [],
"last": "Sen",
"suffix": ""
},
{
"first": "David",
"middle": [
"R"
],
"last": "Musicant",
"suffix": ""
},
{
"first": "Loren",
"middle": [],
"last": "Terveen",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Riedl",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 7th International Symposium on Wikis and Open Collaboration, WikiSym '11",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {
"DOI": [
"10.1145/2038558.2038560"
]
},
"num": null,
"urls": [],
"raw_text": "Shyong (Tony) K. Lam, Anuradha Uduwage, Zhenhua Dong, Shilad Sen, David R. Musicant, Loren Ter- veen, and John Riedl. 2011. Wp:clubhouse? an exploration of wikipedia's gender imbalance. In Proceedings of the 7th International Symposium on Wikis and Open Collaboration, WikiSym '11, page 1-10, New York, NY, USA. Association for Comput- ing Machinery.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Towards debiasing sentence representations",
"authors": [
{
"first": "Paul",
"middle": [
"Pu"
],
"last": "Liang",
"suffix": ""
},
{
"first": "Irene",
"middle": [
"Mengze"
],
"last": "Li",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Yao",
"middle": [
"Chong"
],
"last": "Lim",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5502--5515",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.488"
]
},
"num": null,
"urls": [],
"raw_text": "Paul Pu Liang, Irene Mengze Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, and Louis- Philippe Morency. 2020. Towards debiasing sen- tence representations. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 5502-5515, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Manzini",
"suffix": ""
},
{
"first": "Lim",
"middle": [
"Yao"
],
"last": "Chong",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "615--621",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1062"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Manzini, Lim Yao Chong, Alan W Black, and Yulia Tsvetkov. 2019. Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 615-621, Minneapo- lis, Minnesota. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "On measuring social biases in sentence encoders",
"authors": [
{
"first": "Chandler",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Shikha",
"middle": [],
"last": "Bordia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "622--628",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1063"
]
},
"num": null,
"urls": [],
"raw_text": "Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measur- ing social biases in sentence encoders. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 622-628, Minneapo- lis, Minnesota. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Model cards for model reporting",
"authors": [
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Zaldivar",
"suffix": ""
},
{
"first": "Parker",
"middle": [],
"last": "Barnes",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vasserman",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Hutchinson",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Spitzer",
"suffix": ""
},
{
"first": "Inioluwa",
"middle": [
"Deborah"
],
"last": "Raji",
"suffix": ""
},
{
"first": "Timnit",
"middle": [],
"last": "Gebru",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT* '19",
"volume": "",
"issue": "",
"pages": "220--229",
"other_ids": {
"DOI": [
"10.1145/3287560.3287596"
]
},
"num": null,
"urls": [],
"raw_text": "Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. Model cards for model reporting. In Proceedings of the Conference on Fairness, Account- ability, and Transparency, FAT* '19, page 220-229, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Social data: Biases, methodological pitfalls, and ethical boundaries",
"authors": [
{
"first": "Alexandra",
"middle": [],
"last": "Olteanu",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Castillo",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Diaz",
"suffix": ""
},
{
"first": "Emre",
"middle": [],
"last": "Kiciman",
"suffix": ""
}
],
"year": 2019,
"venue": "Frontiers in Big Data",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandra Olteanu, Carlos Castillo, Fernando Diaz, and Emre Kiciman. 2019. Social data: Bi- ases, methodological pitfalls, and ethical boundaries. Frontiers in Big Data, 2.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "One-step and two-step classification for abusive language detection on Twitter",
"authors": [
{
"first": "Ji",
"middle": [
"Ho"
],
"last": "Park",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "41--45",
"other_ids": {
"DOI": [
"10.18653/v1/W17-3006"
]
},
"num": null,
"urls": [],
"raw_text": "Ji Ho Park and Pascale Fung. 2017. One-step and two-step classification for abusive language detec- tion on Twitter. In Proceedings of the First Work- shop on Abusive Language Online, pages 41-45, Vancouver, BC, Canada. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Quantifying 60 years of gender bias in biomedical research with word embeddings",
"authors": [
{
"first": "Anthony",
"middle": [],
"last": "Rios",
"suffix": ""
},
{
"first": "Reenam",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Hejin",
"middle": [],
"last": "Shin",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing",
"volume": "",
"issue": "",
"pages": "1--13",
"other_ids": {
"DOI": [
"10.18653/v1/2020.bionlp-1.1"
]
},
"num": null,
"urls": [],
"raw_text": "Anthony Rios, Reenam Joshi, and Hejin Shin. 2020. Quantifying 60 years of gender bias in biomedical research with word embeddings. In Proceedings of the 19th SIGBioMed Workshop on Biomedical Lan- guage Processing, pages 1-13, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Gender bias in coreference resolution",
"authors": [
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Naradowsky",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Leonard",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "8--14",
"other_ids": {
"DOI": [
"10.18653/v1/N18-2002"
]
},
"num": null,
"urls": [],
"raw_text": "Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 8-14, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "The risk of racial bias in hate speech detection",
"authors": [
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Dallas",
"middle": [],
"last": "Card",
"suffix": ""
},
{
"first": "Saadia",
"middle": [],
"last": "Gabriel",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1668--1678",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1163"
]
},
"num": null,
"urls": [],
"raw_text": "Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 1668-1678, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Is Wikipedia succeeding in reducing gender bias? assessing changes in gender bias in Wikipedia using word embeddings",
"authors": [
{
"first": "Katja",
"middle": [
"Geertruida"
],
"last": "Schmahl",
"suffix": ""
},
{
"first": "Tom",
"middle": [
"Julian"
],
"last": "Viering",
"suffix": ""
},
{
"first": "Stavros",
"middle": [],
"last": "Makrodimitris",
"suffix": ""
},
{
"first": "Arman",
"middle": [
"Naseri"
],
"last": "Jahfari",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Tax",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Loog",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science",
"volume": "",
"issue": "",
"pages": "94--103",
"other_ids": {
"DOI": [
"10.18653/v1/2020.nlpcss-1.11"
]
},
"num": null,
"urls": [],
"raw_text": "Katja Geertruida Schmahl, Tom Julian Viering, Stavros Makrodimitris, Arman Naseri Jahfari, David Tax, and Marco Loog. 2020. Is Wikipedia succeeding in reducing gender bias? assessing changes in gen- der bias in Wikipedia using word embeddings. In Proceedings of the Fourth Workshop on Natural Lan- guage Processing and Computational Social Sci- ence, pages 94-103, Online. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "BERT is not an interlingua and the bias of tokenization",
"authors": [
{
"first": "Jasdeep",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Mccann",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP",
"volume": "",
"issue": "",
"pages": "47--55",
"other_ids": {
"DOI": [
"10.18653/v1/D19-6106"
]
},
"num": null,
"urls": [],
"raw_text": "Jasdeep Singh, Bryan McCann, Richard Socher, and Caiming Xiong. 2019. BERT is not an interlingua and the bias of tokenization. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), pages 47-55, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Selection bias, label bias, and bias in ground truth",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Tutorial Abstracts",
"volume": "",
"issue": "",
"pages": "11--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders S\u00f8gaard, Barbara Plank, and Dirk Hovy. 2014. Selection bias, label bias, and bias in ground truth. In Proceedings of COLING 2014, the 25th Inter- national Conference on Computational Linguistics: Tutorial Abstracts, pages 11-13, Dublin, Ireland. Dublin City University and Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Not just black and not just a woman: Black women belonging in computing",
"authors": [
{
"first": "A",
"middle": [],
"last": "Solomon",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Moon",
"suffix": ""
},
{
"first": "A",
"middle": [
"L"
],
"last": "Roberts",
"suffix": ""
},
{
"first": "J",
"middle": [
"E"
],
"last": "Gilbert",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 Research on Equity and Sustained Participation in Engineering, Computing, and Technology (RESPECT)",
"volume": "",
"issue": "",
"pages": "1--5",
"other_ids": {
"DOI": [
"10.1109/RESPECT.2018.8491700"
]
},
"num": null,
"urls": [],
"raw_text": "A. Solomon, D. Moon, A. L. Roberts, and J. E. Gilbert. 2018. Not just black and not just a woman: Black women belonging in computing. In 2018 Research on Equity and Sustained Participation in Engineer- ing, Computing, and Technology (RESPECT), pages 1-5.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Evaluating gender bias in machine translation",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Stanovsky",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1679--1684",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1164"
]
},
"num": null,
"urls": [],
"raw_text": "Gabriel Stanovsky, Noah A. Smith, and Luke Zettle- moyer. 2019. Evaluating gender bias in machine translation. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 1679-1684, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Energy and policy considerations for deep learning in NLP",
"authors": [
{
"first": "Emma",
"middle": [],
"last": "Strubell",
"suffix": ""
},
{
"first": "Ananya",
"middle": [],
"last": "Ganesh",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3645--3650",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1355"
]
},
"num": null,
"urls": [],
"raw_text": "Emma Strubell, Ananya Ganesh, and Andrew McCal- lum. 2019. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 3645-3650, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Mitigating gender bias in natural language processing: Literature review",
"authors": [
{
"first": "Tony",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Gaut",
"suffix": ""
},
{
"first": "Shirlyn",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Yuxin",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Mai",
"middle": [],
"last": "Elsherief",
"suffix": ""
},
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Diba",
"middle": [],
"last": "Mirza",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Belding",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1630--1640",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1159"
]
},
"num": null,
"urls": [],
"raw_text": "Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 2019. Mitigating gender bias in natural language processing: Literature review. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 1630-1640, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "A transparent framework for evaluating unintended demographic bias in word embeddings",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Sweeney",
"suffix": ""
},
{
"first": "Maryam",
"middle": [],
"last": "Najafian",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1662--1667",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Chris Sweeney and Maryam Najafian. 2019. A trans- parent framework for evaluating unintended demo- graphic bias in word embeddings. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 1662-1667, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "It's a man's wikipedia? assessing gender inequality in an online encyclopedia",
"authors": [
{
"first": "Claudia",
"middle": [],
"last": "Wagner",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Garc\u00eda",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Jadidi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Strohmaier",
"suffix": ""
}
],
"year": 2015,
"venue": "ICWSM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claudia Wagner, D. Garc\u00eda, M. Jadidi, and M. Strohmaier. 2015. It's a man's wikipedia? assessing gender inequality in an online encyclope- dia. In ICWSM.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Are you a racist or am I seeing things? annotator influence on hate speech detection on Twitter",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Workshop on NLP and Computational Social Science",
"volume": "",
"issue": "",
"pages": "138--142",
"other_ids": {
"DOI": [
"10.18653/v1/W16-5618"
]
},
"num": null,
"urls": [],
"raw_text": "Zeerak Waseem. 2016. Are you a racist or am I seeing things? annotator influence on hate speech detection on Twitter. In Proceedings of the First Workshop on NLP and Computational Social Science, pages 138- 142, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Hateful symbols or hateful people? predictive features for hate speech detection on Twitter",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the NAACL Student Research Workshop",
"volume": "",
"issue": "",
"pages": "88--93",
"other_ids": {
"DOI": [
"10.18653/v1/N16-2013"
]
},
"num": null,
"urls": [],
"raw_text": "Zeerak Waseem and Dirk Hovy. 2016. Hateful sym- bols or hateful people? predictive features for hate speech detection on Twitter. In Proceedings of the NAACL Student Research Workshop, pages 88-93, San Diego, California. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Are all languages created equal in multilingual BERT?",
"authors": [
{
"first": "Shijie",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 5th Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "120--130",
"other_ids": {
"DOI": [
"10.18653/v1/2020.repl4nlp-1.16"
]
},
"num": null,
"urls": [],
"raw_text": "Shijie Wu and Mark Dredze. 2020. Are all languages created equal in multilingual BERT? In Proceedings of the 5th Workshop on Representation Learning for NLP, pages 120-130, Online. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Gender bias in contextualized word embeddings",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tianlu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "629--634",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1064"
]
},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cot- terell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender bias in contextualized word embeddings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 629-634, Minneapolis, Minnesota. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Men also like shopping: Reducing gender bias amplification using corpus-level constraints",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tianlu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2979--2989",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1323"
]
},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or- donez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Proceedings of the 2017 Conference on Empirical Methods in Natural Lan- guage Processing, pages 2979-2989, Copenhagen, Denmark. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Gender bias in coreference resolution: Evaluation and debiasing methods",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tianlu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "15--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or- donez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 2 (Short Papers), pages 15-20. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Examining gender bias in languages with grammatical gender",
"authors": [
{
"first": "Pei",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Weijia",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Kuan-Hao",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Muhao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5276--5284",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1531"
]
},
"num": null,
"urls": [],
"raw_text": "Pei Zhou, Weijia Shi, Jieyu Zhao, Kuan-Hao Huang, Muhao Chen, Ryan Cotterell, and Kai-Wei Chang. 2019. Examining gender bias in languages with grammatical gender. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5276-5284, Hong Kong, China. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Data bias tree hierarchy",
"num": null,
"uris": null
},
"TABREF2": {
"text": "",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF3": {
"text": "",
"type_str": "table",
"content": "<table><tr><td>shows the analysis of</td></tr></table>",
"html": null,
"num": null
},
"TABREF4": {
"text": "",
"type_str": "table",
"content": "<table><tr><td>: Analysis of bias in labelling / annotation</td></tr><tr><td>bias in data.</td></tr><tr><td>1. Datasheets &amp; Data Statements: Datasheet</td></tr><tr><td>for data should be documented well, to avoid</td></tr><tr><td>the misuse of data sets for unmatched do-</td></tr><tr><td>mains and tasks</td></tr></table>",
"html": null,
"num": null
},
"TABREF6": {
"text": "Analysis of bias in text representations",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF8": {
"text": "Metric/Methods used to evaluate bias and debias",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
}
}
}
}