{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:43:59.785345Z"
},
"title": "Fair Embedding Engine: A Library for Analyzing and Mitigating Gender Bias in Word Embeddings",
"authors": [
{
"first": "Vaibhav",
"middle": [],
"last": "Kumar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Delhi Technological University Delhi",
"location": {
"country": "India"
}
},
"email": "kumar.vaibhav1o1@gmail.com"
},
{
"first": "Tenzin",
"middle": [],
"last": "Singhay Bhotia",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Delhi Technological University Delhi",
"location": {
"country": "India"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Non-contextual word embedding models have been shown to inherit human-like stereotypical biases of gender, race and religion from the training corpora. To counter this issue, a large body of research has emerged which aims to mitigate these biases while keeping the syntactic and semantic utility of embeddings intact. This paper describes Fair Embedding Engine (FEE), a library for analysing and mitigating gender bias in word embeddings. FEE combines various state-of-the-art techniques for quantifying, visualising and mitigating gender bias in word embeddings under a standard abstraction. FEE will aid practitioners in fast-track analysis of existing debiasing methods on their embedding models. Further, it will allow rapid prototyping of new methods by evaluating their performance on a suite of standard metrics.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Non-contextual word embedding models have been shown to inherit human-like stereotypical biases of gender, race and religion from the training corpora. To counter this issue, a large body of research has emerged which aims to mitigate these biases while keeping the syntactic and semantic utility of embeddings intact. This paper describes Fair Embedding Engine (FEE), a library for analysing and mitigating gender bias in word embeddings. FEE combines various state-of-the-art techniques for quantifying, visualising and mitigating gender bias in word embeddings under a standard abstraction. FEE will aid practitioners in fast-track analysis of existing debiasing methods on their embedding models. Further, it will allow rapid prototyping of new methods by evaluating their performance on a suite of standard metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Non-contextual word embedding models such as Word2Vec (Mikolov et al., 2013b,a) , GloVe (Pennington et al., 2014) and FastText (Bojanowski et al., 2017) have been established as the cornerstone of modern natural language processing (NLP) techniques. The ease of usage followed by performance improvements (Turian et al., 2010) have made word embeddings pervasive across various NLP tasks. However, as with most things, the gains come at a cost: word embeddings also pose the risk of introducing unwanted stereotypical biases in the downstream tasks. Bolukbasi et al. (2016a) showed that a Word2Vec model trained on the Google news corpus, when evaluated for the analogy man:computer programmer :: woman:? results in the answer homemaker, reflecting the stereotypical biases towards women. Further, Zhao et al. (2018a) showed that models operating on biased word embeddings can leverage stereotypical cues in downstream tasks like co-reference resolution as heuristics to make their final predictions.",
"cite_spans": [
{
"start": 54,
"end": 79,
"text": "(Mikolov et al., 2013b,a)",
"ref_id": null
},
{
"start": 88,
"end": 113,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF17"
},
{
"start": 127,
"end": 152,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF1"
},
{
"start": 305,
"end": 326,
"text": "(Turian et al., 2010)",
"ref_id": "BIBREF20"
},
{
"start": 550,
"end": 574,
"text": "Bolukbasi et al. (2016a)",
"ref_id": "BIBREF2"
},
{
"start": 789,
"end": 817,
"text": "Further, Zhao et al. (2018a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Addressing the issues of unwanted biases in learned word representations, recent years have seen a surge in the development of word embedding debiasing procedures. The fundamental aim of a debiasing procedure is to mitigate stereotypical biases while introducing minimal semantic offset, hence maintaining the usability of embeddings. Based upon the mode of operation, the debiasing methods can be classified into two categories: First, post-processing methods, which operate upon pre-trained word vectors (Bolukbasi et al., 2016a; Kaneko and Bollegala, 2019; Yang and Feng, 2020) . Second, learning based methods, which involve re-training the word embedding models by either making changes to the training data or to the training objective. (Zhao et al., 2018b; Lu et al., 2018; Bordia and Bowman, 2019) . Along with the development of debiasing procedures, numerous metrics to evaluate the efficacy of each debiasing procedure have also been proposed (Zhao et al., 2018b; Bolukbasi et al., 2016a; Kumar et al., 2020) . Although the domain has largely benefited from the contributions of different researchers, it still lacks open source software projects that unify such diverse but fundamentally similar methods in an organized and standard manner. As a result, the domain has a high barrier to entry for newcomers, and domain experts may still need to put in the extra effort of building and maintaining their own codebases.",
"cite_spans": [
{
"start": 506,
"end": 531,
"text": "(Bolukbasi et al., 2016a;",
"ref_id": "BIBREF2"
},
{
"start": 532,
"end": 559,
"text": "Kaneko and Bollegala, 2019;",
"ref_id": "BIBREF10"
},
{
"start": 560,
"end": 580,
"text": "Yang and Feng, 2020)",
"ref_id": "BIBREF22"
},
{
"start": 743,
"end": 763,
"text": "(Zhao et al., 2018b;",
"ref_id": "BIBREF24"
},
{
"start": 764,
"end": 780,
"text": "Lu et al., 2018;",
"ref_id": "BIBREF12"
},
{
"start": 781,
"end": 805,
"text": "Bordia and Bowman, 2019)",
"ref_id": "BIBREF4"
},
{
"start": 954,
"end": 974,
"text": "(Zhao et al., 2018b;",
"ref_id": "BIBREF24"
},
{
"start": 975,
"end": 999,
"text": "Bolukbasi et al., 2016a;",
"ref_id": "BIBREF2"
},
{
"start": 1000,
"end": 1019,
"text": "Kumar et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To solve this problem, we introduce Fair Embedding Engine (FEE), a library which combines state-of-the-art techniques for debiasing, quantifying and visualizing gender bias in non-contextual word embeddings for the English language. The goal of FEE is to serve as a unified framework towards the analysis of biases in word embeddings and the efficient development of better debiasing and bias evaluation methods. Conforming to the common style of implementation in some existing research works (Bolukbasi et al., 2016b; Zhao et al., 2018b) , we use NumPy (Oliphant, 2006; Van Der Walt et al., 2011) arrays to store word vectors while keeping an index mapping to the strings of corresponding words. Further, we use the PyTorch (Paszke et al., 2017) Autograd engine for gradient based optimization and Matplotlib (Hunter, 2007) for generating plots. FEE is made available at: https://github.com/FEE-Fair-Embedding-Engine/FEE.",
"cite_spans": [
{
"start": 494,
"end": 519,
"text": "(Bolukbasi et al., 2016b;",
"ref_id": "BIBREF3"
},
{
"start": 520,
"end": 539,
"text": "Zhao et al., 2018b)",
"ref_id": "BIBREF24"
},
{
"start": 555,
"end": 571,
"text": "(Oliphant, 2006;",
"ref_id": "BIBREF15"
},
{
"start": 572,
"end": 598,
"text": "Van Der Walt et al., 2011)",
"ref_id": "BIBREF21"
},
{
"start": 726,
"end": 747,
"text": "(Paszke et al., 2017)",
"ref_id": "BIBREF16"
},
{
"start": 811,
"end": 825,
"text": "(Hunter, 2007)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since the study of stereotypical biases in NLP has received attention only in the recent years, the domain has not been a part of open source efforts that attempt to integrate diverse sets of independent methods. The only relevant open source software (OSS) that we came across during our investigation was Word Embedding Fairness Evaluation (WEFE) framework (Badilla et al., 2020) . For a given collection of pretrained word embeddings and a set of fairness criteria, WEFE ranks the embeddings based on their performance on an encapsulation of the fairness metrics. In order to achieve this ranking over an otherwise disparate set of fairness metrics (WEAT (Caliskan et al., 2017a) , RND (Garg et al., 2018) , and RNSB (Sweeney and Najafian, 2019)) WEFE introduces an abstraction which generalizes over the metrics using a set of target (the intended social class for which fairness is to be evaluated) and attribute words (the traits over which bias might exist for the selected target words). Moreover, (Badilla et al., 2020) conclude that while existing fairness metrics show a strong correlation when used for evaluating gender bias, only a weak correlation results when evaluating biases like religion and race.",
"cite_spans": [
{
"start": 359,
"end": 381,
"text": "(Badilla et al., 2020)",
"ref_id": "BIBREF0"
},
{
"start": 658,
"end": 682,
"text": "(Caliskan et al., 2017a)",
"ref_id": "BIBREF5"
},
{
"start": 689,
"end": 708,
"text": "(Garg et al., 2018)",
"ref_id": "BIBREF7"
},
{
"start": 1006,
"end": 1028,
"text": "(Badilla et al., 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Therefore, the focus of WEFE is limited to the evaluation of pre-trained word vectors on a suite of fairness metrics, lacking any support for debiasing methods. Further, only those evaluation metrics which comply with the abstraction can be used. FEE, on the other hand, provides holistic functionality by supplying a suite of evaluation and debiasing methods, along with a flexible design to assist researchers in developing new solutions. FEE currently offers three debiasing methods as a part of its debiasing module: HardDebias (Bolukbasi et al., 2016b) , HSRDebias (Yang and Feng, 2020) , and RANDebias (Kumar et al., 2020) . The bias metrics module offers the following set: SemBias (Zhao et al., 2018b) , direct and indirect bias (Bolukbasi et al., 2016a) , Gender-biased Illicit Proximity Estimate (GIPE) and Proximity bias (Kumar et al., 2020), Percent Male Neighbours (PMN) (Gonen and Goldberg, 2019) and Word Embedding Association Test (WEAT) (Caliskan et al., 2017b) .",
"cite_spans": [
{
"start": 532,
"end": 557,
"text": "(Bolukbasi et al., 2016b)",
"ref_id": "BIBREF3"
},
{
"start": 570,
"end": 591,
"text": "(Yang and Feng, 2020)",
"ref_id": "BIBREF22"
},
{
"start": 608,
"end": 628,
"text": "(Kumar et al., 2020)",
"ref_id": "BIBREF11"
},
{
"start": 689,
"end": 709,
"text": "(Zhao et al., 2018b)",
"ref_id": "BIBREF24"
},
{
"start": 737,
"end": 762,
"text": "(Bolukbasi et al., 2016a)",
"ref_id": "BIBREF2"
},
{
"start": 884,
"end": 910,
"text": "(Gonen and Goldberg, 2019)",
"ref_id": "BIBREF8"
},
{
"start": 954,
"end": 978,
"text": "(Caliskan et al., 2017b)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The core functionality of FEE is governed by five modules, namely Loader, Debias, Bias Metrics, Visualization, and Report. Figure 1 illustrates the components of each module of FEE. In the following subsections, we describe the implementation of each module along with the motivation for its development.",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 131,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Fair Embedding Engine",
"sec_num": "3"
},
{
"text": "Motivation: The foremost step in the analysis of word embeddings is to load them into memory. However, different formats of local embedding files and heterogeneous pre-trained embedding sources may entail disparate forms of access, making the loading process non-trivial. The loader module abstracts this preprocessing step and provides users with standardized, object-based access to word embeddings. Working: The workhorse of the loader module is its word embedding class, WE. Any version of a word embedding model can be treated as a unique instance of the WE class. It provides a user-accessible loader() method that takes either the name of a pre-trained word embedding or a local embedding file path as input and returns an initialized WE object. We integrate the well-established Gensim (\u0158eh\u016f\u0159ek and Sojka, 2010) API in our loader module to provide access to several pre-trained embeddings. However, since FEE focuses on the bias domain, it also provides the functionality to either store the debiased counterparts of Gensim-loaded embeddings or load an externally downloaded debiased embedding file. For flexibility, the loader module supports three prominent file formats, i.e., .txt, .bin, and .vocab (words) + .npy (vectors). Once a WE object is initialized with an embedding version via the loader() method, a user can obtain the vector representation of a word by calling its vector method v() with that word as its argument. All subsequent modules of FEE operate on the WE object to achieve their objectives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Loader Module",
"sec_num": "3.1"
},
{
"text": "Motivation: The domain of bias in word representations considers effective debiasing methods one of its ultimate objectives, and much effort has been made in recent years to develop good debiasing methods. However, most works demonstrate the efficacy of their debiasing procedures by applying them to only a limited number of pre-trained embeddings. We hope that future works will evaluate both new and existing methods on a wider range of embeddings. However, such a task involves refactoring and modifying individually tailored prior works. The debiasing module of FEE re-implements these diverse algorithms and provides standardized access to users while facilitating reproducible research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Debiasing Module",
"sec_num": "3.2"
},
{
"text": "Working: The debiasing module of FEE currently provides access to some of the postprocessing debiasing procedures proposed in the past, as shown in Figure 1 . Each debiasing method is represented by a unique class in the module. For instance, the Hard Debias method, proposed by Bolukbasi et al. (2016a), is assigned a class named HardDebias. Since all debiasing methods are fundamentally applied to a word embedding, the class of each debiasing method is initialized by a WE object. Each debiasing class has a common method called run() that takes in a list of words as an argument and runs the entire debiasing procedure on it. As the debiasing procedure operates on the WE object, the engineering effort of dealing with different embedding formats is mitigated.",
"cite_spans": [
{
"start": 279,
"end": 303,
"text": "Bolukbasi et al. (2016a)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 148,
"end": 156,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Debiasing Module",
"sec_num": "3.2"
},
{
"text": "Motivation: Evaluation metrics provide the necessary quantitative support for comparing and contrasting between different debiasing methods. However, different research articles often show different results for the same metric despite having theoretically similar configurations. The bias metrics module of FEE is aimed at filling this gap: it provides a suite of bias metrics built on a common framework for facilitating reliable inference. Working: The bias metrics module of FEE currently provides access to a number of evaluation metrics, as shown in Figure 1 . Each evaluation metric is represented by a unique class in the module. Each metric class depends on some common utilities and consists of multiple methods that implement its unique evaluation procedure. Similar to the debiasing module, each metric class's instance is initialized by a WE object. The metrics operate either on a single word, a pair of words, or a list of words. Each metric class has a common method called compute() that returns the final result by accepting different arguments corresponding to the type of metric. The unified design of the bias metrics module fosters standardized access to any bias-based evaluation metric and facilitates reproducible research.",
"cite_spans": [],
"ref_spans": [
{
"start": 555,
"end": 563,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Bias Metrics Module",
"sec_num": "3.3"
},
{
"text": "Motivation: Visualizations provide useful insights into the behaviour of a set of data points. Many prior debiasing methods (Bolukbasi et al., 2016a; Kumar et al., 2020) have strongly motivated their work by illustrating certain undesirable associations prevalent in standard word embeddings. Thus, through FEE we also provide off-the-shelf visualization capabilities that might help users to build reliable intuitions and uncover hidden biases in their models. Working: In this module, we implement a separate class for each visualization type. Just like other modules, a visualization class object is initialized by WE object, and makes use of some common utilities. Each visualization class has a run() method that takes in a word list and other optional arguments for producing the final visualizations. Figure 1 illustrates some off-the-shelf visualization options provided by FEE.",
"cite_spans": [
{
"start": 124,
"end": 149,
"text": "(Bolukbasi et al., 2016a;",
"ref_id": "BIBREF2"
},
{
"start": 150,
"end": 169,
"text": "Kumar et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 808,
"end": 817,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Visualization Module",
"sec_num": "3.4"
},
{
"text": "Motivation: The bias metrics and visualization modules incorporate a plethora of components which provide an exhaustive set of results. However, sometimes a specific combination of their components can provide the needed information more succinctly. Accordingly, the report module aims to provide a descriptive summary of bias in word embeddings at the word and global levels. Working: The report module comprises two separate classes that represent a word-level and a global-level report, respectively. Both classes operate on a WE-initialized embedding object and implement a common generate() method that creates a descriptive report. The WordReport class is useful for providing abridged information about a single word vector in terms of bias. A call to the generate() method of WordReport utilizes the components of other modules and instantly reports the direct bias, proximity bias, neighbour analysis (NeighboursAnalysis), a neighbour plot and a neighbour word cloud for a word. The GlobalReport class, in contrast, creates a concise report at the level of the entire embedding. Unlike the word-level report, the GlobalReport class does not make use of the other modules, since it obtains all the required content from the embedding object. The generate() method of GlobalReport provides information about the n most and least biased words in a word embedding space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Report Module",
"sec_num": "3.5"
},
{
"text": "Despite the development of a large number of debiasing methods, the issue of bias in word representations still persists (Gonen and Goldberg, 2019) making it an active area of research. We believe that the design and wide variety of tools provided by FEE can play a significant role in assisting practitioners and researchers to develop better debiasing and evaluation methods. Figure 2 portrays FEE-assisted workflows which abstract the routine engineering tasks and allow users to invest more time in the intellectually demanding questions.",
"cite_spans": [
{
"start": 121,
"end": 147,
"text": "(Gonen and Goldberg, 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 378,
"end": 386,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Developing new methods with FEE",
"sec_num": "4"
},
{
"text": "In this paper, we described Fair Embedding Engine (FEE), a Python library which provides central access to the state-of-the-art techniques for quantifying, mitigating and visualizing gender bias in non-contextual word embedding models. We believe that FEE will facilitate the development and testing of debiasing methods for word embeddings. Further, it will make it easier to visualize the existing bias present in word vectors. In the future, we would like to expand the capabilities of FEE towards contextual word vectors and also provide support towards biases other than gender and languages other than English. We also look forward to integrating OSS such as WEFE (Badilla et al., 2020) to enhance the bias evaluation capabilities of FEE.",
"cite_spans": [
{
"start": 670,
"end": 692,
"text": "(Badilla et al., 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Wefe: The word embeddings fairness evaluation framework",
"authors": [
{
"first": "Pablo",
"middle": [],
"last": "Badilla",
"suffix": ""
},
{
"first": "Felipe",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "P\u00e9rez",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20",
"volume": "",
"issue": "",
"pages": "430--436",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pablo Badilla, Felipe Bravo-Marquez, and Jorge P\u00e9rez. 2020. Wefe: The word embeddings fairness evalua- tion framework. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelli- gence, IJCAI-20, pages 430-436. International Joint Conferences on Artificial Intelligence Organization.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Man is to computer programmer as woman is to homemaker? debiasing word embeddings",
"authors": [
{
"first": "Tolga",
"middle": [],
"last": "Bolukbasi",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "James",
"suffix": ""
},
{
"first": "Venkatesh",
"middle": [],
"last": "Zou",
"suffix": ""
},
{
"first": "Adam",
"middle": [
"T"
],
"last": "Saligrama",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kalai",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "4349--4357",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016a. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Ad- vances in neural information processing systems, pages 4349-4357.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Man is to computer programmer as woman is to homemaker? debiasing word embeddings",
"authors": [
{
"first": "Tolga",
"middle": [],
"last": "Bolukbasi",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "James",
"suffix": ""
},
{
"first": "Venkatesh",
"middle": [],
"last": "Zou",
"suffix": ""
},
{
"first": "Adam",
"middle": [
"T"
],
"last": "Saligrama",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kalai",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "4349--4357",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016b. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Ad- vances in neural information processing systems, pages 4349-4357.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Identifying and reducing gender bias in word-level language models",
"authors": [
{
"first": "Shikha",
"middle": [],
"last": "Bordia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shikha Bordia and Samuel R. Bowman. 2019. Identify- ing and reducing gender bias in word-level language models. CoRR, abs/1904.03035.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Semantics derived automatically from language corpora contain human-like biases",
"authors": [
{
"first": "Aylin",
"middle": [],
"last": "Caliskan",
"suffix": ""
},
{
"first": "Joanna",
"middle": [
"J"
],
"last": "Bryson",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2017,
"venue": "Science",
"volume": "356",
"issue": "6334",
"pages": "183--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017a. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Semantics derived automatically from language corpora contain human-like biases",
"authors": [
{
"first": "Aylin",
"middle": [],
"last": "Caliskan",
"suffix": ""
},
{
"first": "Joanna",
"middle": [
"J"
],
"last": "Bryson",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2017,
"venue": "Science",
"volume": "356",
"issue": "6334",
"pages": "183--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017b. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of",
"authors": [
{
"first": "Nikhil",
"middle": [],
"last": "Garg",
"suffix": ""
},
{
"first": "Londa",
"middle": [],
"last": "Schiebinger",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Zou",
"suffix": ""
}
],
"year": 2018,
"venue": "Sciences",
"volume": "115",
"issue": "16",
"pages": "3635--3644",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Pro- ceedings of the National Academy of Sciences, 115(16):E3635-E3644.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them",
"authors": [
{
"first": "Hila",
"middle": [],
"last": "Gonen",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "609--614",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), page 609-614.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Matplotlib: A 2d graphics environment",
"authors": [
{
"first": "J",
"middle": [
"D"
],
"last": "Hunter",
"suffix": ""
}
],
"year": 2007,
"venue": "Computing in Science & Engineering",
"volume": "9",
"issue": "3",
"pages": "90--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. D. Hunter. 2007. Matplotlib: A 2d graphics en- vironment. Computing in Science & Engineering, 9(3):90-95.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Gender-preserving debiasing for pre-trained word embeddings",
"authors": [
{
"first": "Masahiro",
"middle": [],
"last": "Kaneko",
"suffix": ""
},
{
"first": "Danushka",
"middle": [],
"last": "Bollegala",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1641--1650",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masahiro Kaneko and Danushka Bollegala. 2019. Gender-preserving debiasing for pre-trained word embeddings. Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, page 1641-1650.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Nurse is closer to woman than surgeon? mitigating genderbiased proximities in word embeddings",
"authors": [
{
"first": "Vaibhav",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Tenzin",
"middle": [],
"last": "Singhay Bhotia",
"suffix": ""
},
{
"first": "Vaibhav",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Tanmoy",
"middle": [],
"last": "Chakraborty",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "486--503",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vaibhav Kumar, Tenzin Singhay Bhotia, Vaibhav Ku- mar, and Tanmoy Chakraborty. 2020. Nurse is closer to woman than surgeon? mitigating gender- biased proximities in word embeddings. Transac- tions of the Association for Computational Linguis- tics, 8:486-503.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Preetam Amancharla, and Anupam Datta",
"authors": [
{
"first": "Kaiji",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Mardziel",
"suffix": ""
},
{
"first": "Fangjing",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2018,
"venue": "Gender bias in neural natural language processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1807.11714"
]
},
"num": null,
"urls": [],
"raw_text": "Kaiji Lu, Piotr Mardziel, Fangjing Wu, Preetam Aman- charla, and Anupam Datta. 2018. Gender bias in neural natural language processing. arXiv preprint arXiv:1807.11714.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "1st International Conference on Learning Representations, ICLR 2013,Workshop Track Proceedings",
"volume": "",
"issue": "",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word represen- tations in vector space. 1st International Conference on Learning Representations, ICLR 2013,Workshop Track Proceedings, pages 1-12.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A guide to NumPy",
"authors": [
{
"first": "Travis",
"middle": [
"E"
],
"last": "Oliphant",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Travis E Oliphant. 2006. A guide to NumPy, volume 1. Trelgol Publishing USA.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Automatic differentiation in pytorch",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Software Framework for Topic Modelling with Large Corpora",
"authors": [
{
"first": "Radim",
"middle": [],
"last": "\u0158eh\u016f\u0159ek",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Sojka",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks",
"volume": "",
"issue": "",
"pages": "45--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radim\u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Frame- work for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Val- letta, Malta. ELRA.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A transparent framework for evaluating unintended demographic bias in word embeddings",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Sweeney",
"suffix": ""
},
{
"first": "Maryam",
"middle": [],
"last": "Najafian",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1662--1667",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Sweeney and Maryam Najafian. 2019. A trans- parent framework for evaluating unintended demo- graphic bias in word embeddings. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 1662-1667, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Word representations: A simple and general method for semi-supervised learning",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Turian",
"suffix": ""
},
{
"first": "Lev-Arie",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "384--394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Turian, Lev-Arie Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Proceed- ings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 384-394, Up- psala, Sweden. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The numpy array: a structure for efficient numerical computation",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Van Der Walt",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Colbert",
"suffix": ""
},
{
"first": "Gael",
"middle": [],
"last": "Varoquaux",
"suffix": ""
}
],
"year": 2011,
"venue": "Computing in Science & Engineering",
"volume": "13",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Van Der Walt, S Chris Colbert, and Gael Varo- quaux. 2011. The numpy array: a structure for effi- cient numerical computation. Computing in Science & Engineering, 13(2):22.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A causal inference method for reducing gender bias in word embedding relations",
"authors": [
{
"first": "Zekun",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Feng",
"suffix": ""
}
],
"year": 2020,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "9434--9441",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zekun Yang and Juan Feng. 2020. A causal inference method for reducing gender bias in word embedding relations. In AAAI, pages 9434-9441.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Gender bias in coreference resolution: Evaluation and debiasing methods",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tianlu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.06876"
]
},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or- donez, and Kai-Wei Chang. 2018a. Gender bias in coreference resolution: Evaluation and debiasing methods. arXiv preprint arXiv:1804.06876.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Learning gender-neutral word embeddings",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yichao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zeyu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4847--4853",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai- Wei Chang. 2018b. Learning gender-neutral word embeddings. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, page 4847-4853.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "An inventory of implemented methods under four major modules that constitute FEE. Out of the box, FEE provides a subset of the prominent methods in each the modules. Further, each module can be easily extended to incorporate latest methods.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "(a) Developing novel debiasing methods (b) Developing novel bias metrics",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "FEE serves as a centralized resource for practitioners and researchers to develop novel debiasing methods and bias evaluation metrics.Figure (a)and (b) illustrate the possible workflow associated with each of the tasks respectively all made possible by the powerful abstraction provided by FEE.",
"uris": null,
"type_str": "figure",
"num": null
}
}
}
}