{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:58:54.200609Z"
},
"title": "AraWEAT: Multidimensional Analysis of Biases in Arabic Word Embeddings",
"authors": [
{
"first": "Anne",
"middle": [],
"last": "Lauscher",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Mannheim",
"location": {}
},
"email": ""
},
{
"first": "Rafik",
"middle": [],
"last": "Takieddin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Mannheim",
"location": {}
},
"email": "rafik.takieddin@gmail.com"
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Mannheim",
"location": {}
},
"email": ""
},
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Mannheim",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recent work has shown that distributional word vector spaces often encode human biases like sexism or racism. In this work, we conduct an extensive analysis of biases in Arabic word embeddings by applying a range of recently introduced bias tests on a variety of embedding spaces induced from corpora in Arabic. We measure the presence of biases across several dimensions, namely: embedding models (SKIP-GRAM, CBOW, and FASTTEXT) and vector sizes, types of text (encyclopedic text, and news vs. user-generated content), dialects (Egyptian Arabic vs. Modern Standard Arabic), and time (diachronic analyses over corpora from different time periods). Our analysis yields several interesting findings, e.g., that implicit gender bias in embeddings trained on Arabic news corpora steadily increases over time (between 2007 and 2017). We make the Arabic bias specifications (AraWEAT) publicly available.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Recent work has shown that distributional word vector spaces often encode human biases like sexism or racism. In this work, we conduct an extensive analysis of biases in Arabic word embeddings by applying a range of recently introduced bias tests on a variety of embedding spaces induced from corpora in Arabic. We measure the presence of biases across several dimensions, namely: embedding models (SKIP-GRAM, CBOW, and FASTTEXT) and vector sizes, types of text (encyclopedic text, and news vs. user-generated content), dialects (Egyptian Arabic vs. Modern Standard Arabic), and time (diachronic analyses over corpora from different time periods). Our analysis yields several interesting findings, e.g., that implicit gender bias in embeddings trained on Arabic news corpora steadily increases over time (between 2007 and 2017). We make the Arabic bias specifications (AraWEAT) publicly available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recent research offered evidence that distributional word representations (i.e., word embeddings) induced from human-created text corpora exhibit a range of human biases, such as racism and sexism (Bolukbasi et al., 2016; Caliskan et al., 2017) . With word embeddings ubiquitously used as input for (neural) natural language processing (NLP) models, this brings about the jeopardy of introducing stereotypical unfairness into NLP models, which can reinforce existing social hierarchies, and therefore be harmful in practical applications. For instance, consider the seminal gender bias example \"Man is to computer programmer as woman is to homemaker\", which is algebraically encoded in the embedding space with the analogical relation man\u2212 computer programmer \u2248 woman\u2212 homemaker (Bolukbasi et al., 2016) . The existence of such biases in word embeddings stems from the combination of (1) human biases manifesting themselves in terms of word co-occurrences (e.g., the word woman appearing in a training corpus much more often in the context of homemaker than together with computer programmer) and (2) the distributional nature of the word embedding models (Mikolov et al., 2013; Pennington et al., 2014; Bojanowski et al., 2017) , which induce word vectors precisely by exploiting word co-occurrences, i.e., thus also encoding the human biases as a (negative) side-effect, which represents, expressed according to the taxnomy of harms proposed by Blodgett et al. (2020), a representational harm, more specifically, stereotyping. In order to quantify the amount of bias in word embeddings, Caliskan et al. (2017) proposed the Word Embedding Association Test (WEAT), which is based on the associative difference in terms of semantic similarity between two sets of target terms, e.g., male and female terms, towards two sets of attribute terms, e.g., career and family terms. 
Most recently, the WEAT test, measuring the degree of explicit bias in the distributional space, has been coupled with other tests, aiming to measure other aspects of bias, such as the amount of implicit bias (Gonen and Goldberg, 2019) or the presence of the analogical bias (Lauscher et al., 2020) .",
"cite_spans": [
{
"start": 197,
"end": 221,
"text": "(Bolukbasi et al., 2016;",
"ref_id": "BIBREF3"
},
{
"start": 222,
"end": 244,
"text": "Caliskan et al., 2017)",
"ref_id": "BIBREF4"
},
{
"start": 779,
"end": 803,
"text": "(Bolukbasi et al., 2016)",
"ref_id": "BIBREF3"
},
{
"start": 1156,
"end": 1178,
"text": "(Mikolov et al., 2013;",
"ref_id": "BIBREF17"
},
{
"start": 1179,
"end": 1203,
"text": "Pennington et al., 2014;",
"ref_id": "BIBREF20"
},
{
"start": 1204,
"end": 1228,
"text": "Bojanowski et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 1589,
"end": 1611,
"text": "Caliskan et al. (2017)",
"ref_id": "BIBREF4"
},
{
"start": 2082,
"end": 2108,
"text": "(Gonen and Goldberg, 2019)",
"ref_id": "BIBREF10"
},
{
"start": 2148,
"end": 2171,
"text": "(Lauscher et al., 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While there is evidence that distributional vectors often encode human biases, the amount of biases does not seem to be universal across different languages and corpora, as recently shown by Lauscher and Glava\u0161 (2019) in the analysis of distributional biases across seven different languages. In this work, we focus on the multi-dimensional analysis of biases in Arabic word embeddings. The motivation for this work is twofold: (1) Arabic is one of the most widely spoken languages in the world: 1 this means that the biases encoded in language technology for Arabic have the potential for affecting more people than for most other languages; (2) language resources for Arabic -large corpora (Goldhahn et al., 2012) , pretrained word embeddings (Mohammad et al., 2017; Bojanowski et al., 2017) , and datasets for measuring semantic quality of Arabic embeddings (Elrazzaz et al., 2017; Cer et al., 2017) -are publicly available, allowing for the analyses of biases that these resources potentially hide.",
"cite_spans": [
{
"start": 191,
"end": 217,
"text": "Lauscher and Glava\u0161 (2019)",
"ref_id": "BIBREF13"
},
{
"start": 692,
"end": 715,
"text": "(Goldhahn et al., 2012)",
"ref_id": "BIBREF9"
},
{
"start": 745,
"end": 768,
"text": "(Mohammad et al., 2017;",
"ref_id": "BIBREF18"
},
{
"start": 769,
"end": 793,
"text": "Bojanowski et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 861,
"end": 884,
"text": "(Elrazzaz et al., 2017;",
"ref_id": "BIBREF7"
},
{
"start": 885,
"end": 902,
"text": "Cer et al., 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As a first step in the analysis of language technology biases for Arabic, we present ARAWEAT, an Arabic extension to the multilingual XWEAT framework (Lauscher and Glava\u0161, 2019) . Because the WEAT test (Caliskan et al., 2017) , though it has the notable advantage of drawing inspiration from psychology literature, has recently been shown to systematically overestimate the bias present in an embedding space (Ethayarajh et al., 2019) , in this work, we couple it with several other bias tests, designed to capture and quantify other aspects of human biases: Embedding Coherence Test (Dev and Phillips, 2019) , Bias Analogy Test (Lauscher et al., 2020) and Implicit Bias Tests (Gonen and Goldberg, 2019) .",
"cite_spans": [
{
"start": 150,
"end": 177,
"text": "(Lauscher and Glava\u0161, 2019)",
"ref_id": "BIBREF13"
},
{
"start": 202,
"end": 225,
"text": "(Caliskan et al., 2017)",
"ref_id": "BIBREF4"
},
{
"start": 409,
"end": 434,
"text": "(Ethayarajh et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 584,
"end": 608,
"text": "(Dev and Phillips, 2019)",
"ref_id": "BIBREF6"
},
{
"start": 629,
"end": 652,
"text": "(Lauscher et al., 2020)",
"ref_id": "BIBREF14"
},
{
"start": 677,
"end": 703,
"text": "(Gonen and Goldberg, 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our work, which is to the best of our knowledge the first study on quantifying biases in Arabic distributional word vector spaces, yields some interesting findings: biases seem more prominent in vectors trained on texts written in Egyptian Arabic than those written in Modern Standard Arabic (MSA). Also, the implicit gender bias in Arabic news corpora seems to be steadily on the rise over the ten year period between 2007 and 2017. Finally, we find evidence that the explicit bias effects, as measured by the WEAT test, in embeddings trained on the entire Arabic news corpus roughly correspond to averaging the biases measured across embeddings trained on temporally disjunct subsets of the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We present ARAWEAT, our framework allowing for multi-dimensional analysis of bias in Arabic distributional word vector spaces.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AraWEAT",
"sec_num": "2"
},
{
"text": "At the core of our extension are the Arabic bias test specifications, which are based on the original English WEAT test data. WEAT is an adaptation of the Implicit Association Test (Nosek et al., 2002) , which quantifies biases as association differences measured in terms of response times of human subjects when exposed to different sets of stimuli. WEAT, in turn, measures the association differences in terms of the difference in semantic similarity between two sets of target terms towards two sets of attribute terms.",
"cite_spans": [
{
"start": 181,
"end": 201,
"text": "(Nosek et al., 2002)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data for Measuring Bias",
"sec_num": "2.1"
},
{
"text": "T1 math algebra geometry calculus equations computation numbers addition T2 poetry art dance literature novel symphony drama sculpture A1 male man boy brother he him his son A2 female woman girl sister she her hers daughter T1 T2 A1 Our creation of WEAT tests for Arabic starts with automatically translating, using Google Translate, the term sets (i.e., the terms from each of two target and two attribute lists) from the English WEAT tests. We then hired a native speaker of modern standard Arabic (MSA), who manually verified and, when needed, corrected the translations. As Arabic is a language with grammatical genders, we made sure to account for both genders when translating the terms so that we do not artificially introduce a bias in our test specifications (e.g., we translated the genderless English word engineer as both (engineer m.) and (engineer f.). While initially considered, we did not translate WEAT test specifications to the different Arabic dialects, as the differences between the MSA translations and dialectal translations for the terms from the WEAT test were observed only in a negligible fraction of Test Bias Type Target Set #1 Target Set #2 Attribute Set #1 Attribute Set #2 1 Universal Flowers (e.g., aster) Insects (e.g., ant, flea) Pleasant (e.g., health) Unpleasant (e.g., abuse) 2 Militant Instruments (e.g., cello)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data for Measuring Bias",
"sec_num": "2.1"
},
{
"text": "Table 2: WEAT bias tests. Test 1, Universal bias: Flowers (e.g., aster) vs. Insects (e.g., ant, flea), attributes Pleasant (e.g., health) vs. Unpleasant (e.g., abuse). Test 2, Militant bias: Instruments (e.g., cello) vs. Weapons (e.g., gun), attributes Pleasant vs. Unpleasant. Test 7, Gender bias: Math (e.g., algebra, geometry) vs. Arts (e.g., poetry), attributes Male (e.g., brother, son) vs. Female (e.g., woman, sister). Test 8, Gender bias: Science (e.g., experiment) vs. Arts, attributes Male vs. Female. Test 9, Disease bias: Physical (e.g., virus) vs. Mental (e.g., sad), attributes Long-term (e.g., always) vs. Short-term (e.g., occasional).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data for Measuring Bias",
"sec_num": "2.1"
},
{
"text": "cases; and even in those cases the MSA translation is also in usage in other Arabic dialects. 2 We further omitted WEAT tests 3-6 and 10 as they are based on proper names. While it has been shown that names are a good proxy for identifying and removing bias towards specific groups of people (Hall Maudslay et al., 2019) , it is difficult to \"translate\" them. 3 As an example of the resulting AraWEAT test, Table 1 Table 3 : Bias scores for 300-dimensional pretrained FastText (FT) and AraVec (AV) n-gram distributional word vector spaces. We omitted test 9 as less than 20% of the test vocabulary was found in the FT and AV embedding spaces. We report explicit bias scores in terms of WEAT (W), ECT, and BAT, and implicit bias in terms of KMeans++ accuracy (KM). For AV, we report results for the Skip-gram (SG) and CBOW (CB) models. Asterisks indicate WEAT bias effects or pearson correlation scores that are insignificant at \u03b1 < 0.05.",
"cite_spans": [
{
"start": 94,
"end": 95,
"text": "2",
"ref_id": null
},
{
"start": 292,
"end": 320,
"text": "(Hall Maudslay et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 360,
"end": 361,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 407,
"end": 414,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 415,
"end": 422,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "A2",
"sec_num": null
},
{
"text": "Aiming towards a holistic picture of biases encoded in Arabic word vectors, we put together several bias tests that quantify both implicit and explicit biases: (1) WEAT (Caliskan et al., 2017) , (2) Embedding Coherence Test (ECT) (Dev and Phillips, 2019), (3) Bias Analogy Test (BAT) (Lauscher et al., 2020) , and (4) Implicit Bias Test with K-Means++ (KM) (Gonen and Goldberg, 2019) . For all bias tests, we adopt the notion of implicit and explicit bias specifications as proposed by Lauscher et al. (2020) : an explicit bias specification consists here of two sets of target terms and two sets of attribute terms",
"cite_spans": [
{
"start": 169,
"end": 192,
"text": "(Caliskan et al., 2017)",
"ref_id": "BIBREF4"
},
{
"start": 284,
"end": 307,
"text": "(Lauscher et al., 2020)",
"ref_id": "BIBREF14"
},
{
"start": 357,
"end": 383,
"text": "(Gonen and Goldberg, 2019)",
"ref_id": "BIBREF10"
},
{
"start": 486,
"end": 508,
"text": "Lauscher et al. (2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bias Evaluation Methodology",
"sec_num": "2.2"
},
{
"text": "B E (T 1 , T 2 , A 1 , A 2 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bias Evaluation Methodology",
"sec_num": "2.2"
},
{
"text": "The idea is to measure the bias between the target sets, e.g., science and art, towards the attribute sets, e.g., male vs. female terms, or vice versa. In contrast, an implicit bias specification consists of target terms only, i.e., B E (T 1 , T 2 ). Accordingly, the intuition is to measure bias between the target term representations only, and not its explicit manifestation with regard to other concepts. Furthermore, we report the semantic quality for all word embedding spaces we induced ourselves: to this end, we report the scores on predicting sentence-level semantic similarity for Arabic (SemEval 2017 Task 1; we obtain sentence embeddings as averages of word embeddings) (Cer et al., 2017) .",
"cite_spans": [
{
"start": 683,
"end": 701,
"text": "(Cer et al., 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bias Evaluation Methodology",
"sec_num": "2.2"
},
{
"text": "Word Embedding Association Test (WEAT). Let B E (T 1 , T 2 , A 1 , A 2 ) be an explicit bias specification consisting of two sets of target terms T 1 and T 2 , and two sets of attribute terms, A 1 and A 2 . Caliskan Table 6 : Explicit (WEAT, W) and implicit (K-Means++, KM) bias scores for 300-dim. embedding spaces induced using CBOW on the Arabic portions of the Leipzig News Corpora of 1M sentences between 2007 and 2017; comparison of averaged biases over temporally non-overlapping portions (AVG) with those in embeddings induced on the whole corpus (CONC). Asterisks: insignificant bias effects (\u03b1 < 0.05).",
"cite_spans": [],
"ref_spans": [
{
"start": 216,
"end": 223,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bias Evaluation Methodology",
"sec_num": "2.2"
},
{
"text": "et al. 2017define the WEAT test statistic s(T 1 , T 2 , A 1 , A 2 ) as the association difference that T 1 and T 2 exhibit w.r.t. A 1 and A 2 -the association is measured as the average semantic similarity of T 1 /T 2 terms with terms from A 1 and A 2 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bias Evaluation Methodology",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s(T1, T2, A1, A2) = t 1 \u2208T 1 s(t1, A1, A2) \u2212 t 2 \u2208T 2 s(t2, A1, A2) ,",
"eq_num": "(1)"
}
],
"section": "Bias Evaluation Methodology",
"sec_num": "2.2"
},
{
"text": "with associative difference for a term t computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bias Evaluation Methodology",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s(t, A1, A2) = 1 |A1| a 1 \u2208A 1 cos(t, a1) \u2212 1 |A2| a 2 \u2208A 2 cos(t, a2) ,",
"eq_num": "(2)"
}
],
"section": "Bias Evaluation Methodology",
"sec_num": "2.2"
},
{
"text": "with t as the distributional vector of term t and cos as the cosine of the angle between two vectors. The significance of the test statistic is measured by the non-parametric permutation test in which the",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bias Evaluation Methodology",
"sec_num": "2.2"
},
{
"text": "s(T 1 , T 2 , A 1 , A 2 ) is compared to s(X 1 , X 2 , A 1 , A 2 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bias Evaluation Methodology",
"sec_num": "2.2"
},
{
"text": ", where (X 1 , X 2 ) denotes a random, equally-sized split of the terms in T 1 \u222a T 2 . A larger WEAT effect size indicates a larger bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bias Evaluation Methodology",
"sec_num": "2.2"
},
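The WEAT statistic of Eqs. (1) and (2) and its permutation test can be sketched in a few lines of numpy. This is a toy illustration, not the authors' implementation: the embedding dictionary `E` and all function names are ours, and the sketch enumerates all equal-sized re-splits exactly, whereas for larger term sets one would sample permutations.

```python
import itertools
import numpy as np

def cos_sim(u, v):
    # cosine of the angle between two vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def assoc(t, A1, A2, E):
    # Eq. (2): mean similarity of t to A1 minus mean similarity of t to A2
    return (np.mean([cos_sim(E[t], E[a]) for a in A1])
            - np.mean([cos_sim(E[t], E[a]) for a in A2]))

def weat_statistic(T1, T2, A1, A2, E):
    # Eq. (1): difference of the summed association scores of the target sets
    return (sum(assoc(t, A1, A2, E) for t in T1)
            - sum(assoc(t, A1, A2, E) for t in T2))

def permutation_p(T1, T2, A1, A2, E):
    # one-sided permutation test: fraction of equal-sized re-splits (X1, X2)
    # of T1 u T2 whose statistic exceeds the observed one
    s_obs = weat_statistic(T1, T2, A1, A2, E)
    union = T1 + T2
    exceed = total = 0
    for X1 in itertools.combinations(union, len(T1)):
        X2 = [w for w in union if w not in X1]
        total += 1
        exceed += weat_statistic(list(X1), X2, A1, A2, E) > s_obs
    return exceed / total
```

With a maximally gender-biased toy space (male and math vectors identical, female and art vectors identical), the statistic is maximal and no re-split exceeds it.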
{
"text": "Embedding Coherence Test (ECT). Given an ARAWEAT explicit bias specification B E (T 1 , T 2 , A 1 , A 2 ), ECT operates on the bias specification which \"collapses\" the two AraWEAT attribute sets into a single set:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bias Evaluation Methodology",
"sec_num": "2.2"
},
{
"text": "B E (T 1 , T 2 , A = A 1 \u222a A 2 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bias Evaluation Methodology",
"sec_num": "2.2"
},
{
"text": ". Next, as proposed by Dev and Phillips (2019), we compute the vectors t 1 and t 2 as averages of the word vectors of terms in T 1 and T 2 , respectively. Then, we obtain two vectors of similarities by computing the cosine similarity between the vector of each term in A and the mean vectors t 1 and t 2 . The ECT score is finally the Spearman correlation between the two obtained similarity vectors. The intuition is to assess, whether the similarities of the average vectors t 1 and t 2 , which represent the two target term sets, with the attribute terms are correlating. The larger the ECT correlation, the lower the bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bias Evaluation Methodology",
"sec_num": "2.2"
},
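The ECT computation above can be sketched as follows. This is an illustrative numpy-only version (names are ours, not Dev and Phillips' code); Spearman correlation is computed as the Pearson correlation of ranks, with ties broken arbitrarily, which suffices for a sketch (a library routine such as `scipy.stats.spearmanr` handles ties properly).

```python
import numpy as np

def cos_sim(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def ranks(x):
    # ranks 1..n of the values in x (ties broken arbitrarily)
    order = np.argsort(x)
    r = np.empty(len(x))
    r[order] = np.arange(1, len(x) + 1)
    return r

def ect(T1, T2, A, E):
    # mean vectors representing the two target term sets
    t1 = np.mean([E[w] for w in T1], axis=0)
    t2 = np.mean([E[w] for w in T2], axis=0)
    # similarity of each attribute term to each mean vector
    sims1 = np.array([cos_sim(t1, E[a]) for a in A])
    sims2 = np.array([cos_sim(t2, E[a]) for a in A])
    # Spearman rho = Pearson correlation of the rank vectors
    return float(np.corrcoef(ranks(sims1), ranks(sims2))[0, 1])
```

If the two target-set mean vectors coincide, the two similarity vectors are identical and ECT is 1 (no bias by this measure).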
{
"text": "Bias Analogy Test (BAT). Inspired by Bolukbasi et al. (2016) 's famous anology, the idea behind BAT is to quantify the fraction of biased analogies that result from querying the embedding space. Given an AraWEAT test B E (T 1 , T 2 , A 1 , A 2 ), following Lauscher et al. (2020), we create all possible biased analogies t 1 \u2212 t 2 \u2248 a 1 \u2212 a 2 for (t 1 , t 2 , a 1 , a 2 ) \u2208 T 1 \u00d7 T 2 \u00d7 A 1 \u00d7 A 2 . Next we create two query vectors -q 1 = t 1 \u2212 t 2 + a 2 and q 2 = a 1 \u2212 t 1 + t 2 -for each tuple (t 1 , t 2 , a 1 , a 2 ). We then rank the vectors in the vector space according to the Euclidean distance with q 1 and q 2 , respectively, and report the percentage of cases where: a 1 is ranked higher than a term a 2 \u2208 A 2 \\ {a 2 } for q 1 and a 2 is ranked higher than a term a 1 \u2208 A 1 \\ {a 1 } for q 2 . The higher the BAT score, the higher the bias.",
"cite_spans": [
{
"start": 37,
"end": 60,
"text": "Bolukbasi et al. (2016)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bias Evaluation Methodology",
"sec_num": "2.2"
},
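The BAT procedure can be sketched directly from the description above. A toy illustration with our own names, not the authors' code; for compactness it compares distances to the embedded terms of the test vocabulary only, rather than ranking against the full vector space.

```python
import itertools
import numpy as np

def bat(T1, T2, A1, A2, E):
    # fraction of analogy queries for which the bias-consistent attribute
    # outranks the compared attribute from the opposite set
    hits = total = 0
    for t1, t2, a1, a2 in itertools.product(T1, T2, A1, A2):
        # the analogy t1 - t2 ~ a1 - a2 yields the two query vectors
        q1 = np.asarray(E[t1]) - np.asarray(E[t2]) + np.asarray(E[a2])  # should retrieve a1
        q2 = np.asarray(E[a1]) - np.asarray(E[t1]) + np.asarray(E[t2])  # should retrieve a2
        for a2p in A2:  # is a1 ranked above each a2' in A2 \ {a2} for q1?
            if a2p != a2:
                total += 1
                hits += np.linalg.norm(q1 - E[a1]) < np.linalg.norm(q1 - np.asarray(E[a2p]))
        for a1p in A1:  # is a2 ranked above each a1' in A1 \ {a1} for q2?
            if a1p != a1:
                total += 1
                hits += np.linalg.norm(q2 - E[a2]) < np.linalg.norm(q2 - np.asarray(E[a1p]))
    return hits / total
```

In a space where the attribute sets cluster tightly around the two target directions, every analogy query is answered in the biased way and the score is 1.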
{
"text": "Implicit Bias Test: K-Means++ (KM). Sometimes, bias is not expressed explicitely, i.e., as bias between two target term sets in explicit relation towards certain attribute sets, but manifests implicitely. To also reflect this type of bias in our study, We follow Gonen and Goldberg (2019) and test the Arabic word vector spaces for the amount of implicit bias by clustering terms from T 1 and T 2 with KMeans++ (Arthur and Vassilvitskii, 2007) . The higher the clustering accuracy, the higher the bias. We report the averaged accuracy over 20 independent runs.",
"cite_spans": [
{
"start": 263,
"end": 288,
"text": "Gonen and Goldberg (2019)",
"ref_id": "BIBREF10"
},
{
"start": 411,
"end": 443,
"text": "(Arthur and Vassilvitskii, 2007)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bias Evaluation Methodology",
"sec_num": "2.2"
},
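The implicit bias test can be sketched as clustering the union of the two target sets into two clusters and measuring how well the clusters recover set membership. The study uses KMeans++ (e.g., as in scikit-learn); this self-contained sketch substitutes a plain 2-means with deterministic farthest-point seeding, and all names are illustrative.

```python
import numpy as np

def two_means(X, iters=50):
    # deterministic seeding: the first point and the point farthest from it
    c1 = X[0]
    c2 = X[np.argmax(np.linalg.norm(X - c1, axis=1))]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d1 = np.linalg.norm(X - c1, axis=1)
        d2 = np.linalg.norm(X - c2, axis=1)
        labels = (d2 < d1).astype(int)
        if labels.min() == labels.max():  # degenerate split: stop
            break
        c1 = X[labels == 0].mean(axis=0)
        c2 = X[labels == 1].mean(axis=0)
    return labels

def cluster_accuracy(T1, T2, E):
    # accuracy of recovering target-set membership from the clustering
    X = np.array([E[w] for w in T1 + T2], dtype=float)
    gold = np.array([0] * len(T1) + [1] * len(T2))
    acc = float(np.mean(two_means(X) == gold))
    return max(acc, 1.0 - acc)  # cluster ids are arbitrary, take the better mapping
```

An accuracy near 0.5 means the target sets are not separable in the space (little implicit bias); an accuracy near 1 means they are (strong implicit bias).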
{
"text": "Semantic Quality (SQ). For the embedding models we train ourselves, we additionally report the semantic quality of the space by predicting sentence-level semantic similarity on the SemEval 2017 Task 1 for Arabic (ar-ar) (Cer et al., 2017) . Let s a = e a1 , ..., e an be the set of embeddings of words in sentence a and let s b = e b1 , ..., e am be the sequence of embedding representations for individual words in sentence b. We obtain aggregated sentence representations, by averaging the embeddings of words in the sentence: s = 1 l l i=1 e i and finally predict the similarity score as cos(s a , s b ). 4 We report Pearson correlation between our predicitions and the gold similarity annotations.",
"cite_spans": [
{
"start": 220,
"end": 238,
"text": "(Cer et al., 2017)",
"ref_id": "BIBREF5"
},
{
"start": 608,
"end": 609,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bias Evaluation Methodology",
"sec_num": "2.2"
},
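The semantic-quality prediction described above reduces to three small functions: average the word vectors of a sentence, take the cosine of the two sentence vectors, and correlate the predictions with the gold scores. A minimal sketch with illustrative names:

```python
import numpy as np

def sentence_vec(words, E):
    # s = (1/l) * sum_i e_i : average of the word vectors of the sentence
    return np.mean([E[w] for w in words], axis=0)

def sent_sim(sent_a, sent_b, E):
    # predicted similarity: cosine of the averaged sentence vectors
    sa, sb = sentence_vec(sent_a, E), sentence_vec(sent_b, E)
    return float(np.dot(sa, sb) / (np.linalg.norm(sa) * np.linalg.norm(sb)))

def pearson(pred, gold):
    # Pearson correlation between predicted and gold similarity scores
    return float(np.corrcoef(pred, gold)[0, 1])
```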
{
"text": "Dimensions of Bias Analysis. We run our tests along 5 different dimensions: (1) embedding methods: we compare embeddings induced using SKIP-GRAM, CBOW and FASTTEXT embedding models; 2source text types: we analyze vector spaces induced from corpora originating from different sources (Wikipedia, news, Twitter); 5 (3) vector sizes and preprocessing: we hypothesize that biases might be more prominent in higher-dimensional vectors. To this end, we compare 100vs. 300-dimensional embeddings. Furthermore, we analyze the effect of unigram vs. n-gram preprocessing of Arabic text, as offered by pretrained vectors AraVec (Mohammad et al., 2017); (4) corpus size: Lauscher and Glava\u0161 (2019) hypothesize that biases might be more expressed in bigger corpora. To further investigate this, we run several experiments controlling for corpus size; (5) temporal intervals: lastly, we conduct a diachronic bias analysis by training embeddings on corpora from different time periods.",
"cite_spans": [
{
"start": 311,
"end": 312,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bias Evaluation Methodology",
"sec_num": "2.2"
},
{
"text": "Distributional Word Vector Spaces. We conduct our analysis on (a) pretrained distributional word vector spaces from AraVec 6 (Mohammad et al., 2017) and FastText 7 (Bojanowski et al., 2017) and (b) embedding spaces we trained in order to be able to control for corpora size and preprocessing. In (b), we use Arabic corpora from the Leipzig Corpora Collection 8 (Goldhahn et al., 2012) .",
"cite_spans": [
{
"start": 164,
"end": 189,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 361,
"end": 384,
"text": "(Goldhahn et al., 2012)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bias Evaluation Methodology",
"sec_num": "2.2"
},
{
"text": "We present and discuss the findings of our analysis employing ARAWEAT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Findings",
"sec_num": "3"
},
{
"text": "Embedding Methods, Text Sources, and Dialects. Bias scores for 300-dimensional pretrained Fasttext (FT) and AraVec (AV) embedding spaces are shown in Table 3 . For both (FT) and (AV), we evaluated all available spaces, pretrained on different corpora. For FT, we investigate two models, one trained on the portions of Wikipedia and CommonCrawl corpora written in Modern Standard Arabic (MS) and the other on portions written in Egyptian Arabic. 9 We evaluate the four variants of ARAVEC vectors: (a) trained using either Skip-Gram (SG) or CBOW (CB) on (b) either Wikipedia (WIKI) or Twitter (TWITTER) text. Interestingly, most of these embedding spaces fail to exhibit significant explicit gender biases according to WEAT tests T 7 and T 8. However, the gender biases seem to be rather present implicitly (KM) in most spaces. Comparing FT ARABIC versus FT EGYPTIAN, both implicit and explicit bias seems to be slightly more pronounced in the Egyptian than in the MSA corpus. Results of comparison over text types support the unexpected finding for other languages (Lauscher and Glava\u0161, 2019) : embeddings built from user-generated content on average do not encode more bias than their counterparts trained on Wikipedia.",
"cite_spans": [
{
"start": 445,
"end": 446,
"text": "9",
"ref_id": null
},
{
"start": 1064,
"end": 1091,
"text": "(Lauscher and Glava\u0161, 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 150,
"end": 157,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Findings",
"sec_num": "3"
},
{
"text": "Embedding dimensionality and preprocessing. Next, we evaluate the effects of specific hyperparameter settings using the ARAVEC pretrained vector spaces. ARAWEAT bias effect sizes for different embedding dimensionalities and model types are listed in Table 5 . For the ARAWEAT test specifications T 1, T 7, and T 8, we did not observe prominent variance in the amount of explicit bias w.r.t. the vector dimensionality or pre-processing type. For the remaining test -T 2 -the explicit bias (according to the WEAT test) is somewhat more pronounced in the lower-dimensional embeddings and in the n-gram versions of the AraVec embeddings.",
"cite_spans": [],
"ref_spans": [
{
"start": 250,
"end": 257,
"text": "Table 5",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Findings",
"sec_num": "3"
},
{
"text": "Diachronic Analysis and Corpora Sizes. Table 4 displays WEAT effect sizes for test T 7 (gender bias) in MSA 300-dimensional distributional word vector spaces we trained on the (temporally) disjunctive Arabic portions of the Leipzig News Corpora of sizes 300K and 1M sentences, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 39,
"end": 46,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Findings",
"sec_num": "3"
},
{
"text": "The smaller corpus, consisting of 300K sentences, exhibits no significant bias effect sizes across all years. This finding is in line with previous observations Lauscher and Glava\u0161 (2019) that biases might be more expressed in embedding spaces obtained on bigger corpora. This could be a reflection of the overall quality of distributional vectors, which is lower when vectors are trained on smaller corpora (as supported by the corresponding STS scores). In the spaces obtained on the larger corpora segments, consisting of 1M-sentences, significant explicit (W) gender biases are present in years 2009, 2015, and 2017, with very similar effect sizes (between .92 and .97). The implicit gender bias (KM), on the other hand, steadily rises over the entire period under investigation (2007) (2008) (2009) (2010) (2011) (2012) (2013) (2014) (2015) (2016) (2017) .",
"cite_spans": [
{
"start": 783,
"end": 789,
"text": "(2007)",
"ref_id": null
},
{
"start": 790,
"end": 796,
"text": "(2008)",
"ref_id": null
},
{
"start": 797,
"end": 803,
"text": "(2009)",
"ref_id": null
},
{
"start": 804,
"end": 810,
"text": "(2010)",
"ref_id": null
},
{
"start": 811,
"end": 817,
"text": "(2011)",
"ref_id": null
},
{
"start": 818,
"end": 824,
"text": "(2012)",
"ref_id": null
},
{
"start": 825,
"end": 831,
"text": "(2013)",
"ref_id": null
},
{
"start": 832,
"end": 838,
"text": "(2014)",
"ref_id": null
},
{
"start": 839,
"end": 845,
"text": "(2015)",
"ref_id": null
},
{
"start": 846,
"end": 852,
"text": "(2016)",
"ref_id": null
},
{
"start": 853,
"end": 859,
"text": "(2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Findings",
"sec_num": "3"
},
{
"text": "Finally, we investigate how the biases in the embedding space induced on the whole Arabic Leipzig News corpus (2007 relates to the biases detected in embedding spaces induced from its different, temporally non-overlapping subportions. To this end, we average the biases measured on embeddings trained on its yearly subsets (AVG). The correlation results, over all four tests and two measures (W, KM), are shown in Table 6 . Indeed, the biases of the whole corpus (CONC) seem to be highly correlated with the averages of biases of subcorpora (AVG). In fact, we measure a substantial Pearson correlation of 66% between the two sets of scores (AVG and CONC). This suggests that one can roughly predict the biases of (embeddings trained on) a large corpus by aggregating the biases of (embeddings trained on) its (non-overlapping) subsets. Bolukbasi et al. (2016) were the first to study bias in distributional word vector spaces. Using an analogy test, they demonstrate gender stereotypes manifesting in word embeddings and propose the notion of the bias direction, upon which they base a debiasing method called hard-debiasing. Caliskan et al. (2017) adapt the Implicit Association Test (IAT) (Nosek et al., 2002) from psychology for studying biases in distributional word vector spaces. The test, dubbed Word Embedding Association Test (WEAT), measures associations between words in an embedding space in terms of cosine similarity between the vectors. They propose 10 stimuli sets, which we adapt in our work. Later, McCurdy and Serbetci (2017) extend the analysis to three more languages, Dutch, German, and Spanish, but only focus on gender bias. XWEAT, the cross-lingual and multilingual WEAT framework (Lauscher and Glava\u0161, 2019) , covers German, Spanish, Italian, Russian, Croatian, and Turkish. XWEAT analyses also focused on other relevant dimensions such as embedding method and similarity measures. Zhou et al. 
(2019) focus on measuring bias in languages with grammatical gender. Several research efforts produced new bias tests: Dev and Phillips (2019) propose the Embedding Coherence Test (ECT), with the intuition of capturing whether two sets of target terms are coherently distant from a set of attribute terms. They also propose several debiasing methods. Gonen and Goldberg (2019) show that many debiasing methods only mask but do not fully remove the biases present in embedding spaces. They propose to additionally test for implicit biases by trying to classify or cluster the sets of target terms. Lauscher et al. (2020) unify the different notions of bias into explicit and implicit bias specifications, based on which they propose methods for quantifying and removing biases. While there is some effort to account for gender awareness in Arabic machine translation (Habash et al., 2019), we are, to the best of our knowledge, the first to measure bias in Arabic language technology.",
"cite_spans": [
{
"start": 110,
"end": 115,
"text": "(2007",
"ref_id": "BIBREF16"
},
{
"start": 836,
"end": 859,
"text": "Bolukbasi et al. (2016)",
"ref_id": "BIBREF3"
},
{
"start": 1126,
"end": 1148,
"text": "Caliskan et al. (2017)",
"ref_id": "BIBREF4"
},
{
"start": 1191,
"end": 1211,
"text": "(Nosek et al., 2002)",
"ref_id": "BIBREF19"
},
{
"start": 1517,
"end": 1544,
"text": "McCurdy and Serbetci (2017)",
"ref_id": "BIBREF15"
},
{
"start": 1706,
"end": 1733,
"text": "(Lauscher and Glava\u0161, 2019)",
"ref_id": "BIBREF13"
},
{
"start": 1908,
"end": 1926,
"text": "Zhou et al. (2019)",
"ref_id": "BIBREF21"
},
{
"start": 2039,
"end": 2062,
"text": "Dev and Phillips (2019)",
"ref_id": "BIBREF6"
},
{
"start": 2270,
"end": 2295,
"text": "Gonen and Goldberg (2019)",
"ref_id": "BIBREF10"
},
{
"start": 2517,
"end": 2539,
"text": "Lauscher et al. (2020)",
"ref_id": "BIBREF14"
},
{
"start": 2788,
"end": 2809,
"text": "(Habash et al., 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 414,
"end": 421,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Findings",
"sec_num": "3"
},
{
"text": "Language technologies should avoid reflecting negative human biases such as racism and sexism. Yet the ubiquitous word embeddings, used as input for many NLP models, seem to encode many such biases. In this work, we extensively quantify and analyze the biases in different vector spaces built from text in Arabic, a major world language with close to 300M native speakers. To this end, we translate existing bias specifications from English to Arabic and investigate biases in embedding spaces that differ along several dimensions of analysis: embedding models, corpus sizes, types of text, dialectal vs. standard Arabic, and time periods. Our analysis yields interesting results. First, we confirm some of the previous findings for other languages, e.g., that biases are generally not more pronounced in user-generated text and that embeddings trained on larger corpora exhibit more prominent biases. Second, our results suggest that more bias is present in dialectal (Egyptian) Arabic corpora than in Modern Standard Arabic corpora. Next, our diachronic analysis suggests that the implicit gender bias in Arabic news text steadily increases over time. Finally, we show that the bias effects of a whole corpus can be predicted from the bias effects of its subcorpora. We hope that ARAWEAT, our framework for multidimensional analysis of stereotypical bias in Arabic text representations, fuels more research on bias in Arabic language technology.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "According to Mikael (2007), Arabic is the fifth most spoken language in the world, with close to 300 million native speakers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Albeit possibly less frequently than the dialectal translation. Furthermore, WEAT tests 3-5 are tailored to test racial biases towards African-Americans, which are arguably much less prominent in the Arabic cultural area.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This method was used as the simple aggregation baseline in the corresponding SemEval shared task. While Arabic Wikipedia is dominantly written in MSA, TWITTER is likely to exhibit non-negligible amounts of dialectal and colloquial Arabic. https://github.com/bakrianoo/aravec https://fasttext.cc/docs/en/crawl-vectors.html http://wortschatz.uni-leipzig.de/en/download/ The language identification was performed automatically using the FT Language Detector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Anne Lauscher and Goran Glava\u0161 are supported by the Eliteprogramm of the Baden-W\u00fcrttemberg Stiftung (AGREE grant). We would like to thank the anonymous reviewers for their helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "K-means++: The advantages of careful seeding",
"authors": [
{
"first": "David",
"middle": [],
"last": "Arthur",
"suffix": ""
},
{
"first": "Sergei",
"middle": [],
"last": "Vassilvitskii",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of SODA",
"volume": "",
"issue": "",
"pages": "1027--1035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Arthur and Sergei Vassilvitskii. 2007. K-means++: The advantages of careful seeding. In Proceedings of SODA, pages 1027-1035.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Language (technology) is power: A critical survey of \"bias\" in NLP",
"authors": [
{
"first": "Su Lin",
"middle": [],
"last": "Blodgett",
"suffix": ""
},
{
"first": "Solon",
"middle": [],
"last": "Barocas",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
},
{
"first": "Hanna",
"middle": [],
"last": "Wallach",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5454--5476",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Su Lin Blodgett, Solon Barocas, Hal Daum\u00e9 III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of \"bias\" in NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5454-5476, Online, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the ACL",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the ACL, 5:135-146.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Man is to computer programmer as woman is to homemaker? debiasing word embeddings",
"authors": [
{
"first": "Tolga",
"middle": [],
"last": "Bolukbasi",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Zou",
"suffix": ""
},
{
"first": "Venkatesh",
"middle": [],
"last": "Saligrama",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Kalai",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "4356--4364",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Proceedings of NIPS, pages 4356- 4364.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Semantics derived automatically from language corpora contain human-like biases",
"authors": [
{
"first": "Aylin",
"middle": [],
"last": "Caliskan",
"suffix": ""
},
{
"first": "Joanna",
"middle": [
"J"
],
"last": "Bryson",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2017,
"venue": "Science",
"volume": "356",
"issue": "6334",
"pages": "183--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "I\u00f1igo",
"middle": [],
"last": "Lopez-Gazpio",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "1--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Mona Diab, Eneko Agirre, I\u00f1igo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1-14, August.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Attenuating bias in word vectors",
"authors": [
{
"first": "Sunipa",
"middle": [],
"last": "Dev",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Phillips",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of AISTATS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sunipa Dev and Jeff Phillips. 2019. Attenuating bias in word vectors. In Proceedings of AISTATS.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Methodical evaluation of Arabic word embeddings",
"authors": [
{
"first": "Mohammed",
"middle": [],
"last": "Elrazzaz",
"suffix": ""
},
{
"first": "Shady",
"middle": [],
"last": "Elbassuoni",
"suffix": ""
},
{
"first": "Khaled",
"middle": [],
"last": "Shaban",
"suffix": ""
},
{
"first": "Chadi",
"middle": [],
"last": "Helwe",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "454--458",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammed Elrazzaz, Shady Elbassuoni, Khaled Shaban, and Chadi Helwe. 2017. Methodical evaluation of Arabic word embeddings. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 454-458, Vancouver, Canada, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Understanding undesirable word embedding associations",
"authors": [
{
"first": "Kawin",
"middle": [],
"last": "Ethayarajh",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Duvenaud",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kawin Ethayarajh, David Duvenaud, and Graeme Hirst. 2019. Understanding undesirable word embedding associations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Building large monolingual dictionaries at the Leipzig corpora collection: From 100 to 200 languages",
"authors": [
{
"first": "Dirk",
"middle": [],
"last": "Goldhahn",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Eckart",
"suffix": ""
},
{
"first": "Uwe",
"middle": [],
"last": "Quasthoff",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC '12)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dirk Goldhahn, Thomas Eckart, and Uwe Quasthoff. 2012. Building large monolingual dictionaries at the Leipzig corpora collection: From 100 to 200 languages. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC '12).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them",
"authors": [
{
"first": "Hila",
"middle": [],
"last": "Gonen",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "609--614",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In Proceedings of NAACL-HLT, pages 609-614.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Automatic gender identification and reinflection in Arabic",
"authors": [
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Houda",
"middle": [],
"last": "Bouamor",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Chung",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the First Workshop on Gender Bias in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "155--165",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nizar Habash, Houda Bouamor, and Christine Chung. 2019. Automatic gender identification and reinflection in Arabic. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 155-165, Florence, Italy, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "It's all in the name: Mitigating gender bias with name-based counterfactual data substitution",
"authors": [
{
"first": "Rowan",
"middle": [],
"last": "Hall Maudslay",
"suffix": ""
},
{
"first": "Hila",
"middle": [],
"last": "Gonen",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Teufel",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5270--5278",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rowan Hall Maudslay, Hila Gonen, Ryan Cotterell, and Simone Teufel. 2019. It's all in the name: Mitigating gender bias with name-based counterfactual data substitution. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5270-5278.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Are we consistently biased? multidimensional analysis of biases in distributional word vectors",
"authors": [
{
"first": "Anne",
"middle": [],
"last": "Lauscher",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019)",
"volume": "",
"issue": "",
"pages": "85--91",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anne Lauscher and Goran Glava\u0161. 2019. Are we consistently biased? Multidimensional analysis of biases in distributional word vectors. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019), pages 85-91.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A general framework for implicit and explicit debiasing of distributional word vector spaces",
"authors": [
{
"first": "Anne",
"middle": [],
"last": "Lauscher",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "8131--8138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anne Lauscher, Goran Glava\u0161, Simone Paolo Ponzetto, and Ivan Vuli\u0107. 2020. A general framework for implicit and explicit debiasing of distributional word vector spaces. In Proceedings of AAAI, pages 8131-8138.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Grammatical gender associations outweigh topical gender bias in crosslinguistic word embeddings",
"authors": [
{
"first": "Katherine",
"middle": [],
"last": "McCurdy",
"suffix": ""
},
{
"first": "Oguz",
"middle": [],
"last": "Serbetci",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of WiNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katherine McCurdy and Oguz Serbetci. 2017. Grammatical gender associations outweigh topical gender bias in crosslinguistic word embeddings. In Proceedings of WiNLP.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "V\u00e4rldens 100 st\u00f6rsta spr\u00e5k 2007 (the world's 100 largest languages in 2007)",
"authors": [
{
"first": "Parkvall",
"middle": [],
"last": "Mikael",
"suffix": ""
}
],
"year": 2007,
"venue": "Nationalencyklopedin",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Parkvall Mikael. 2007. V\u00e4rldens 100 st\u00f6rsta spr\u00e5k 2007 (the world's 100 largest languages in 2007). Nationalencyklopedin.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS, pages 3111-3119.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "AraVec: A set of Arabic word embedding models for use in Arabic NLP",
"authors": [
{
"first": "Abu Bakr",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Kareem",
"middle": [],
"last": "Eissa",
"suffix": ""
},
{
"first": "Samhaa",
"middle": [],
"last": "El-Beltagy",
"suffix": ""
}
],
"year": 2017,
"venue": "Procedia Computer Science",
"volume": "117",
"issue": "",
"pages": "256--265",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abu Bakr Mohammad, Kareem Eissa, and Samhaa El-Beltagy. 2017. AraVec: A set of Arabic word embedding models for use in Arabic NLP. Procedia Computer Science, 117:256-265.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Harvesting implicit group attitudes and beliefs from a demonstration web site",
"authors": [
{
"first": "Brian",
"middle": [
"A"
],
"last": "Nosek",
"suffix": ""
},
{
"first": "Anthony",
"middle": [
"G"
],
"last": "Greenwald",
"suffix": ""
},
{
"first": "Mahzarin",
"middle": [
"R"
],
"last": "Banaji",
"suffix": ""
}
],
"year": 2002,
"venue": "Group Dynamics",
"volume": "6",
"issue": "",
"pages": "101--115",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian A. Nosek, Anthony G. Greenwald, and Mahzarin R. Banaji. 2002. Harvesting implicit group attitudes and beliefs from a demonstration web site. Group Dynamics, 6:101-115.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representa- tion. In Proceedings of EMNLP, pages 1532-1543.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Examining gender bias in languages with grammatical gender",
"authors": [
{
"first": "Pei",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Weijia",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Kuan-Hao",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Muhao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5279--5287",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pei Zhou, Weijia Shi, Jieyu Zhao, Kuan-Hao Huang, Muhao Chen, Ryan Cotterell, and Kai-Wei Chang. 2019. Examining gender bias in languages with grammatical gender. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5279-5287.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"html": null,
"text": "Original English version and MSA translation of the WEAT Test 7 specification.",
"num": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF1": {
"html": null,
"text": "list the Arabic translation of WEAT test T7. An overview of the remaining tests with their respective target and attribute term sets is provided in Table 2.",
"num": null,
"content": "<table><tr><td/><td/><td>T1</td><td>T2</td><td>T7</td><td>T8</td></tr><tr><td>Model</td><td>Lang</td><td colspan=\"3\">W ECT BAT KM W ECT BAT KM W ECT BAT KM W ECT BAT KM</td></tr><tr><td>FT ARABIC</td><td>MSA</td><td colspan=\"3\">0.85 0.69 0.47 0.71 0.51* 0.62 0.43 0.63 -0.15* 0.17* 0.5 0.56 0.05* 0.02* 0.44 0.56</td></tr><tr><td>FT EGYPT</td><td colspan=\"4\">Egyptian 1.17 0.45 0.49 0.95 0.97 0.56 0.51 0.65 0.65* 0.54 0.51 0.6 0.09* 0.63 0.47 0.6</td></tr><tr><td>AV SG WIKI</td><td>MSA</td><td colspan=\"3\">0.27* 0.82 0.49 0.62 0.98 0.61 0.43 0.92 0.22* -0.6 0.57 0.88 0.13* -0.53* 0.64 0.72</td></tr><tr><td colspan=\"2\">AV SG TWITTER Mixed</td><td colspan=\"3\">1.21 0.27* 0.50 0.63 0.87 0.33 0.41 0.75 0.38* 0.05* 0.46 0.70 -0.98* 0.50* 0.42 0.60</td></tr><tr><td>AV CB WIKI</td><td>MSA</td><td colspan=\"3\">0.43* 0.91 0.45 0.67 1.21 0.53 0.45 0.52 0.57* -0.35* 0.57 0.75 -0.38* 0.26* 0.53 0.58</td></tr><tr><td colspan=\"2\">AV CB TWITTER Mixed</td><td colspan=\"3\">1.00 0.53 0.39 0.78 0.92 0.54 0.43 0.71 0.41* 0.48* 0.31 0.72 -0.49* 0.83 0.40 0.76</td></tr></table>",
"type_str": "table"
},
"TABREF2": {
"html": null,
"text": "Sent W KM STS W KM STS W KM STS W KM STS W KM STS W KM STS W KM STS W KM STS 300K -.35* .75 .38 -.33* .76 .38 -.52* .53 .33 -.45* .50 .34 .06* .64 .35 -.02* .75 .38 .30* .80 .37 .16* .75 .36 1M- --.08* .53 .44 .92 .55 .40 .32* .58 .38 .50* .65 .37 .97 .71 .42 .72* .71 .43 .95 .72 .42",
"num": null,
"content": "<table><tr><td>2007</td><td>2008</td><td>2009</td><td>2010</td><td>2011</td><td>2015</td><td>2016</td><td>2017</td></tr><tr><td>#</td><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"type_str": "table"
},
"TABREF3": {
"html": null,
"text": "Gender bias over time: WEAT test 7 bias effects and KMeans++ accuracy scores for 300-dimensional distributional word vector spaces induced using CBOW on Leipzig news corpora consisting of 300K and 1M sentences, between 2007 and 2017. Asterisks indicate bias effects that are insignificant at \u03b1 < 0.05.",
"num": null,
"content": "<table><tr><td>Model</td><td>Dim. T1 T2 T7</td><td>T8</td></tr><tr><td colspan=\"3\">ARAVEC SG UNIGRAM 100 0.04* 0.91 0.66* -0.50*</td></tr><tr><td colspan=\"3\">ARAVEC SG UNIGRAM 300 0.18* 0.66 0.13* -0.19*</td></tr><tr><td colspan=\"3\">ARAVEC SG N-GRAM 100 0.49* 1.19 0.54* -0.49*</td></tr><tr><td colspan=\"3\">ARAVEC SG N-GRAM 300 0.27* 0.98 0.22* 0.13*</td></tr></table>",
"type_str": "table"
},
"TABREF4": {
"html": null,
"text": "",
"num": null,
"content": "<table><tr><td>T1</td><td>T2</td><td>T7</td><td>T8</td></tr><tr><td colspan=\"4\">W KM W KM W KM W KM STS</td></tr><tr><td colspan=\"4\">AVG 0.68 0.91 1.03 0.79 0.69 0.63 -0.47 0.71 0.41</td></tr><tr><td colspan=\"4\">CONC 0.65 0.75 1.28 0.68 0.56* 0.72 -0.73 0.77 0.52</td></tr></table>",
"type_str": "table"
}
}
}
}