{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:02:54.144251Z"
},
"title": "Unsupervised Mitigation of Gender Bias by Character Components: A Case Study of Chinese Word Embedding",
"authors": [
{
"first": "Xiuying",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "KAUST",
"location": {}
},
"email": "xiuying.chen@kaust.edu.sa"
},
{
"first": "Mingzhe",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "Ant Group",
"institution": "",
"location": {}
},
"email": "li_mingzhe@pku.edu.cn"
},
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Renmin University of China",
"location": {}
},
"email": ""
},
{
"first": "Xin",
"middle": [],
"last": "Gao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "KAUST",
"location": {}
},
"email": ""
},
{
"first": "Xiangliang",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "KAUST",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Word embeddings learned from massive text collections have demonstrated significant levels of discriminative biases. However, debiasing on the Chinese language, one of the most widely spoken languages, has been less explored. Meanwhile, existing literature relies on manually created supplementary data, which is time- and energy-consuming. In this work, we propose the first Chinese Gender-neutral word Embedding model (CGE) based on Word2vec, which learns gender-neutral word embeddings without any labeled data. Concretely, CGE utilizes and emphasizes the rich feminine and masculine information contained in radicals, i.e., a kind of component in Chinese characters, during the training procedure. This consequently alleviates discriminative gender biases. Experimental results show that our unsupervised method outperforms the state-of-the-art supervised debiased word embedding models without sacrificing the functionality of the embedding model.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Word embeddings learned from massive text collections have demonstrated significant levels of discriminative biases. However, debiasing on the Chinese language, one of the most widely spoken languages, has been less explored. Meanwhile, existing literature relies on manually created supplementary data, which is time- and energy-consuming. In this work, we propose the first Chinese Gender-neutral word Embedding model (CGE) based on Word2vec, which learns gender-neutral word embeddings without any labeled data. Concretely, CGE utilizes and emphasizes the rich feminine and masculine information contained in radicals, i.e., a kind of component in Chinese characters, during the training procedure. This consequently alleviates discriminative gender biases. Experimental results show that our unsupervised method outperforms the state-of-the-art supervised debiased word embedding models without sacrificing the functionality of the embedding model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Investigations into representation learning have revealed that word embeddings are often prone to exhibit discriminative gender stereotype biases (Caliskan et al., 2017) . Consequently, these biased word embeddings affect downstream applications (Dinan et al., 2020; Blodgett et al., 2020) . Mitigating gender stereotypes in word embeddings has become a research hotspot due to its potential applications, and a number of existing debiasing works are dedicated to the English language (Zhao et al., 2018a; Kaneko and Bollegala, 2019) . However, debiasing on Chinese, one of the most widely spoken languages, has drawn less attention.",
"cite_spans": [
{
"start": 145,
"end": 168,
"text": "(Caliskan et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 254,
"end": 274,
"text": "(Dinan et al., 2020;",
"ref_id": "BIBREF5"
},
{
"start": 275,
"end": 297,
"text": "Blodgett et al., 2020)",
"ref_id": "BIBREF0"
},
{
"start": 496,
"end": 516,
"text": "(Zhao et al., 2018a;",
"ref_id": "BIBREF23"
},
{
"start": 517,
"end": 544,
"text": "Kaneko and Bollegala, 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the Chinese language, a \"radical\" is a graphical component of Chinese characters, which serves as an indexing component in Chinese dictionaries. A radical can suggest part of the meaning of a character due to the phono-semantic nature of Chinese characters. For example, \"\u6c35(water)\" is the radical of \"\u6cb3 (river), \u6e56 (lake)\". Consequently, a series of works have shown that radicals can enhance word embedding quality (Chen et al., 2015; Yin et al., 2016; Chen and Hu, 2018) . As part of the radical system, the gender-related radicals, i.e., \"\u5973(female)\" and \"\u4ebb(man)\", contain gender information about the corresponding character. Specifically, the radical \"\u5973(female)\" can denote female, and \"\u4ebb(man)\" can denote people, which includes male gender information. For example, the characters \"\u59d0(sister),\u5987(wife),\u5988(mother),\u59e5(grandma)\" all have the radical \"\u5973(female)\", demonstrating that these are feminine words. Hence, we assume that radicals are a natural information source for capturing feminine and masculine information, and such information can help the model learn the definition of gender. Once the model learns the definition of gender, it can identify gender biases that are not actually relevant to gender.",
"cite_spans": [
{
"start": 472,
"end": 491,
"text": "(Chen et al., 2015;",
"ref_id": "BIBREF3"
},
{
"start": 492,
"end": 509,
"text": "Yin et al., 2016;",
"ref_id": "BIBREF20"
},
{
"start": 510,
"end": 528,
"text": "Chen and Hu, 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To this end, we propose our Chinese Gender-neutral word Embedding model (CGE), which is based on the classic Word2vec model, where the basic idea is to predict the target word given its context words. CGE has two variations, i.e., Radical-added CGE and Radical-enhanced CGE. Radical-added CGE emphasizes the gender definition information by directly adding the radical embedding to the word embedding. We next propose Radical-enhanced CGE, where radical embeddings are employed to predict the target word instead of being added to the word embedding. This is a more flexible approach, where the gradients of the embeddings of words and radicals can differ during training. Note that the radical can be extracted from the character itself; hence, our model can learn gender-neutral word embeddings in an unsupervised fashion. Experimental results show that our methods outperform the supervised models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Chinese Word Embedding. Different from the English language, where words are usually taken as basic semantic units, Chinese words have complicated composition structures revealing their semantic meanings (Li et al., 2020, 2021) . More specifically, a Chinese word is often composed of several characters, and most of the characters themselves can be further divided into components such as radicals. Chen et al. (2015) first presented a character-enhanced word embedding model (CWE). Following this work, Yin et al. (2016) proposed multi-granularity embedding (MGE), which enriches word embeddings by incorporating finer-grained semantics from characters and radicals. Another work (Yu et al., 2017) proposed to jointly embed Chinese words as well as their characters and fine-grained sub-character components. Chen and Hu (2018) used radical escaping mechanisms to extract the intrinsic information in the Chinese corpus. None of the above works deals with the gender bias phenomena in Chinese word embeddings. Gender-Biased Tasks. Gender biases have been identified in downstream NLP tasks (Hendricks et al., 2018; Holstein et al., 2019) . Zhao et al. (2018a) demonstrated that coreference resolution systems carry the risk of relying on societal stereotypes present in training data and introduced a new benchmark, WinoBias, for coreference resolution focused on gender bias. Gender bias also exists in machine translation (Prates et al., 2018), e.g., translating nurses as females and programmers as males, regardless of context. Stanovsky et al. (2019) presented the first challenge set and evaluation protocol for the analysis of gender bias in machine translation. Notable examples also include visual SRL (cooking is stereotypically done by women, construction workers are stereotypically men (Zhao et al., 2017) ), lexical semantics (\"man is to computer programmer as woman is to homemaker\" (Bolukbasi et al., 2016) ), and so on. Gender-neutral Word Embedding.
Previous works demonstrated that word embeddings can encode sexist stereotypes (Caliskan et al., 2017) . To reduce the gender stereotypes embedded inside word representations, Bolukbasi et al. (2016) projected gender-neutral words to a subspace, which is orthogonal to the gender dimension defined by a list of gender-definitional words. Concretely, they proposed a hard-debiasing method, where the gender direction is computed as the vector difference between the embeddings of the corresponding gender-definitional words, and a soft-debiasing method, which balances debiasing against preserving the inner products between the original word embeddings. Zhao et al. (2018a) aimed to preserve gender information in certain dimensions of word vectors while compelling other dimensions to be free of gender influence. Kaneko and Bollegala (2019) debiased pre-trained word embeddings considering four types of information: feminine, masculine, gender-neutral, and stereotypical. Following this work, Kaneko and Bollegala (2021) applied the debiasing technique to pre-trained contextualized embedding models.",
"cite_spans": [
{
"start": 203,
"end": 219,
"text": "(Li et al., 2020",
"ref_id": "BIBREF12"
},
{
"start": 220,
"end": 238,
"text": "(Li et al., , 2021",
"ref_id": "BIBREF13"
},
{
"start": 411,
"end": 429,
"text": "Chen et al. (2015)",
"ref_id": "BIBREF3"
},
{
"start": 515,
"end": 532,
"text": "Yin et al. (2016)",
"ref_id": "BIBREF20"
},
{
"start": 691,
"end": 708,
"text": "(Yu et al., 2017)",
"ref_id": "BIBREF21"
},
{
"start": 819,
"end": 837,
"text": "Chen and Hu (2018)",
"ref_id": "BIBREF4"
},
{
"start": 1102,
"end": 1126,
"text": "(Hendricks et al., 2018;",
"ref_id": "BIBREF7"
},
{
"start": 1127,
"end": 1149,
"text": "Holstein et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 1152,
"end": 1171,
"text": "Zhao et al. (2018a)",
"ref_id": "BIBREF23"
},
{
"start": 1544,
"end": 1567,
"text": "Stanovsky et al. (2019)",
"ref_id": "BIBREF19"
},
{
"start": 1812,
"end": 1831,
"text": "(Zhao et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 1912,
"end": 1936,
"text": "(Bolukbasi et al., 2016)",
"ref_id": "BIBREF1"
},
{
"start": 2060,
"end": 2083,
"text": "(Caliskan et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 2157,
"end": 2180,
"text": "Bolukbasi et al. (2016)",
"ref_id": "BIBREF1"
},
{
"start": 2633,
"end": 2652,
"text": "Zhao et al. (2018a)",
"ref_id": "BIBREF23"
},
{
"start": 2794,
"end": 2821,
"text": "Kaneko and Bollegala (2019)",
"ref_id": "BIBREF10"
},
{
"start": 2975,
"end": 3002,
"text": "Kaneko and Bollegala (2021)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Compared with previous works, our work is focused on the Chinese language, and utilizes radicals, a special component of Chinese character.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We take CBOW as an example and demonstrate our frameworks based on it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "As shown in Figure 1 (a), CBOW predicts the target word, given context words in a sliding window. Concretely, given the word sequence D = (x_1, x_2, ..., x_T), the ultimate goal is to maximize the average log probability:",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "CBOW",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\frac{1}{T} \\sum_{t=c}^{T-c} \\log P(x_t \\mid x_{t-c}, \\ldots, x_{t+c}),",
"eq_num": "(1)"
}
],
"section": "CBOW",
"sec_num": "3.1"
},
{
"text": "where c is the size of the training context. The prediction probability of x t based on its context word is defined using softmax function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CBOW",
"sec_num": "3.1"
},
{
"text": "P(x_t \\mid x_{t-c}, \\ldots, x_{t+c}) = \\frac{\\exp(x_o^{\\top} \\cdot x_t)}{\\sum_{x_{t'} \\in W} \\exp(x_o^{\\top} \\cdot x_{t'})},",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CBOW",
"sec_num": "3.1"
},
{
"text": "where W is the vocabulary, x_t is the embedding of word x_t, and x_o is the average of all context word vectors:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CBOW",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x_o = \\frac{1}{2c} \\sum_{-c \\le j \\le c,\\, j \\ne 0} x_{t+j},",
"eq_num": "(2)"
}
],
"section": "CBOW",
"sec_num": "3.1"
},
{
"text": "Since this formulation is impractical because of the training cost, hierarchical softmax and negative sampling are used when training CBOW (Mikolov et al., 2013b) . Figure 1 : Illustrations of baseline model and two proposed models. Radical-added CGE directly adds radical embedding to word embedding; Radical-enhanced CGE incorporates radical information to predict the target word.",
"cite_spans": [
{
"start": 139,
"end": 162,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 165,
"end": 173,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "CBOW",
"sec_num": "3.1"
},
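Editor's illustration: the CBOW prediction step described above (context averaging per Equation 2, then a softmax over the vocabulary) can be sketched as follows. This is a toy example with a hypothetical vocabulary size and random vectors, not the authors' implementation, which would use hierarchical softmax or negative sampling for efficiency.

```python
import numpy as np

# Toy CBOW forward pass: all sizes and vectors are hypothetical.
rng = np.random.default_rng(0)
V, d = 5, 4                       # vocabulary size, embedding dimension
W_in = rng.normal(size=(V, d))    # context (input) embeddings
W_out = rng.normal(size=(V, d))   # target (output) embeddings

def cbow_predict(context_ids):
    """P(x_t | context): average the context vectors (Eq. 2), then softmax."""
    x_o = W_in[context_ids].mean(axis=0)   # x_o, the context average
    logits = W_out @ x_o                   # x_o . x for every candidate word
    p = np.exp(logits - logits.max())      # numerically stable softmax
    return p / p.sum()

probs = cbow_predict([0, 1, 3, 4])  # a window of four context words
```

In practice the normalization over the full vocabulary is exactly the cost the paper notes is impractical, which motivates the sampling-based approximations.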
{
"text": "(Figure 1 panels: (a) CBOW, (b) Radical-added CGE, (c) Radical-enhanced CGE, illustrated with the example words \u5979 (she), \u5f8b\u5e08 (lawyer), \u662f (is).)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CBOW",
"sec_num": "3.1"
},
{
"text": "Since radicals contain rich semantic and gender information, our model exploits radical information to improve gender-neutral word embeddings. In Radical-added CGE, we directly add the radical vector representation to the word vector, as shown in Figure 1 (b).",
"cite_spans": [],
"ref_spans": [
{
"start": 246,
"end": 254,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Radical-added CGE",
"sec_num": "3.2"
},
{
"text": "The pivotal idea of Radical-added CGE is to replace the stored vectors x_t in CBOW with real-time compositions of w_t and r_t, while sharing the same objective as Equation 1. Formally, a context word embedding x_t is represented as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Radical-added CGE",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x_t = \\frac{1}{2} \\left( w_t + \\frac{1}{N_t} \\sum_{k=1}^{N_t} r_t^k \\right),",
"eq_num": "(3)"
}
],
"section": "Radical-added CGE",
"sec_num": "3.2"
},
{
"text": "where N_t is the number of radicals in word x_t, w_t is the word vector of x_t, and r_t^k is the vector of the k-th radical in x_t. Taking Figure 1 (b) as an example, when predicting the word \"\u662f(is)\", we add the radical vector of \"\u5973\" to the word embedding of \"\u5979(she)\", and add the average radical vector of \"\u5f73,\u5dfe\" to the word embedding of \"\u5f8b\u5e08(lawyer)\".",
"cite_spans": [],
"ref_spans": [
{
"start": 144,
"end": 152,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Radical-added CGE",
"sec_num": "3.2"
},
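Editor's illustration: Equation 3, as reconstructed here, averages the word vector with the mean of its radical vectors. A minimal sketch with hypothetical toy lookup tables (in the real model these vectors are trained parameters, and the 1/2 weighting follows the reconstruction of Eq. 3):

```python
import numpy as np

d = 4  # hypothetical embedding dimension
# Toy lookup tables; in the real model these are learned during training.
word_vec = {"she": np.full(d, 2.0), "lawyer": np.full(d, 2.0)}
radical_vecs = {
    "she": [np.full(d, 4.0)],                     # one radical
    "lawyer": [np.full(d, 4.0), np.full(d, 6.0)]  # two radicals
}

def radical_added(word):
    """Eq. (3): x_t = 1/2 * (w_t + mean of the N_t radical vectors)."""
    r_avg = np.mean(radical_vecs[word], axis=0)
    return 0.5 * (word_vec[word] + r_avg)

x = radical_added("lawyer")  # 0.5 * (2 + mean(4, 6)) = 3.5 in each dimension
```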
{
"text": "In Radical-added CGE, the context word embedding is the sum of the word vector and the radical vector, which ensures that the context word embedding contains the radical information. In this subsection, we propose a more flexible gender-neutral model, i.e., Radical-enhanced CGE, where the radical embedding and the word embedding are separated, and the former is utilized to enhance the latter. The overview of Radical-enhanced CGE is shown in Figure 1(c) .",
"cite_spans": [],
"ref_spans": [
{
"start": 443,
"end": 454,
"text": "Figure 1(c)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Radical-enhanced CGE",
"sec_num": "3.3"
},
{
"text": "Concretely, the context word embedding x_t now equals w_t, which means that it does not contain the radical embedding. Instead, we use context word vectors as well as context radical vectors to predict target words. Following the setting in CBOW, we use x_o to denote the average of context word vectors, and r_o to denote the average of context radical vectors:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Radical-enhanced CGE",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x_o = \\frac{1}{2c} \\sum_{-c \\le j \\le c,\\, j \\ne 0} x_{t+j},",
"eq_num": "(4)"
}
],
"section": "Radical-enhanced CGE",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "r_o = \\frac{1}{2c} \\sum_{-c \\le j \\le c,\\, j \\ne 0} \\frac{1}{N_{t+j}} \\sum_{k=1}^{N_{t+j}} r_{t+j}^k,",
"eq_num": "(5)"
}
],
"section": "Radical-enhanced CGE",
"sec_num": "3.3"
},
{
"text": "where c is the size of the context window. Next, x_o is used to calculate the predicted probability P(x_t | x_{t-c}, ..., x_{t+c}). Similarly, r_o is also used to obtain the context radical prediction probability, which is represented as P(x_t | r_{t-c}, ..., r_{t+c}):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Radical-enhanced CGE",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(x_t \\mid x_{t-c}, \\ldots, x_{t+c}) = \\frac{\\exp(x_o^{\\top} \\cdot x_t)}{\\sum_{x_{t'} \\in W} \\exp(x_o^{\\top} \\cdot x_{t'})},",
"eq_num": "(6)"
}
],
"section": "Radical-enhanced CGE",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(x_t \\mid r_{t-c}, \\ldots, r_{t+c}) = \\frac{\\exp(r_o^{\\top} \\cdot x_t)}{\\sum_{x_{t'} \\in W} \\exp(r_o^{\\top} \\cdot x_{t'})}.",
"eq_num": "(7)"
}
],
"section": "Radical-enhanced CGE",
"sec_num": "3.3"
},
{
"text": "Finally, the optimization target is to maximize:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Radical-enhanced CGE",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\frac{1}{T} \\sum_{t=c}^{T-c} \\big( \\log P(x_t \\mid x_{t-c}, \\ldots, x_{t+c}) + \\log P(x_t \\mid r_{t-c}, \\ldots, r_{t+c}) \\big).",
"eq_num": "(8)"
}
],
"section": "Radical-enhanced CGE",
"sec_num": "3.3"
},
{
"text": "The intuition behind this model is that the contextual radical embedding r_t interacts with and predicts the target word embedding x_t, so that the gender-related information in radicals is implicitly introduced into the word embeddings. During backpropagation, the gradients of the embeddings of words and radical components can be different, while they are the same in Radical-added CGE. Thus, the representations of words and radical components are decoupled and can be better trained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Radical-enhanced CGE",
"sec_num": "3.3"
},
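Editor's illustration: per position t, the Radical-enhanced objective sums two log-probability terms for the same target word, one scored from the word-context average x_o and one from the radical-context average r_o (Equation 8). A hedged sketch with random toy vectors (not trained embeddings):

```python
import numpy as np

rng = np.random.default_rng(1)
V, d = 6, 4                      # hypothetical vocabulary size and dimension
W_out = rng.normal(size=(V, d))  # target-word embeddings

def log_prob(ctx_vec, target_id):
    """Log-softmax score of the target word given a context vector."""
    logits = W_out @ ctx_vec
    logits = logits - logits.max()  # numerical stability
    return float(logits[target_id] - np.log(np.exp(logits).sum()))

x_o = rng.normal(size=d)  # average of context word vectors (Eq. 4)
r_o = rng.normal(size=d)  # average of context radical vectors (Eq. 5)

# One term of Eq. (8): both predictions target the same word x_t.
objective_t = log_prob(x_o, target_id=2) + log_prob(r_o, target_id=2)
```

Because the two terms share only the output embeddings, gradients reach the word table and the radical table separately, which is the decoupling the paragraph above describes.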
{
"text": "We adopt the 1GB Chinese Wikipedia Dump 1 as our training corpus. We follow Yu et al. (2017) in pre-processing the dataset, removing pure digits and non-Chinese characters. JIEBA 2 is used for Chinese word segmentation and POS tagging. We add all words in CSemBias to the tokenizer vocabulary to ensure that the gender-related words are successfully recognized. Each character is associated with its radical, and we crawled the radical information of each character from HTTPCN 3 . We obtained 20,879 characters and 218 radicals, of which 214 characters are identical to their radicals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "We compare our method against several baselines. GloVe: a global log-bilinear regression model proposed in (Pennington et al., 2014) . Word2vec: introduced by Mikolov et al. (2013a), which either predicts the current word based on the context or predicts surrounding words given the current word. We chose the CBOW model following Chen et al. (2015) ; Yu et al. (2017) . The above two models denote non-debiased versions of the word embeddings. Hard-GloVe: we use the implementation of the hard-debiasing (Bolukbasi et al., 2016) method to produce a debiased version of GloVe embeddings. GN-GloVe: preserves gender information in certain dimensions of embeddings (Zhao et al., 2018b) . GP(GloVe) and GP(GN): aim to remove gender biases from the pre-trained word embeddings GloVe and GN-GloVe (Kaneko and Bollegala, 2019) .",
"cite_spans": [
{
"start": 107,
"end": 132,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF17"
},
{
"start": 331,
"end": 349,
"text": "Chen et al. (2015)",
"ref_id": "BIBREF3"
},
{
"start": 352,
"end": 368,
"text": "Yu et al. (2017)",
"ref_id": "BIBREF21"
},
{
"start": 500,
"end": 524,
"text": "(Bolukbasi et al., 2016)",
"ref_id": "BIBREF1"
},
{
"start": 658,
"end": 678,
"text": "(Zhao et al., 2018b)",
"ref_id": "BIBREF24"
},
{
"start": 784,
"end": 812,
"text": "(Kaneko and Bollegala, 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparisons",
"sec_num": "4.2"
},
{
"text": "The above three models all rely on additional labeled seed words, including feminine, masculine, gender-neutral, and stereotype word lists. We translate their original word lists and adapt them to the Chinese domain. Namely, we add 22 out of 24 word pairs in the test dataset into the supplementary data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparisons",
"sec_num": "4.2"
},
{
"text": "To compare our model with other structure-based Chinese embedding models, we include the performance of other models that also incorporate component information: CWE is a character-enhanced word embedding model presented in Chen et al. (2015) ; MGE and JWE are multi-granularity embedding models that make full use of word-character-radical composition (Yin et al., 2016; Yu et al., 2017) ; RECWE is a radical-enhanced word embedding model (Chen and Hu, 2018) . These baselines include radical information in the word embedding construction process, but also take other information sources, such as character-level information, into consideration, which diminishes the importance and effectiveness of gender-related radicals. The purpose of this comparison is to demonstrate that existing structure-based Chinese word embedding models still suffer from gender bias problems.",
"cite_spans": [
{
"start": 222,
"end": 240,
"text": "Chen et al. (2015)",
"ref_id": "BIBREF3"
},
{
"start": 349,
"end": 367,
"text": "(Yin et al., 2016;",
"ref_id": "BIBREF20"
},
{
"start": 368,
"end": 384,
"text": "Yu et al., 2017)",
"ref_id": "BIBREF21"
},
{
"start": 436,
"end": 455,
"text": "(Chen and Hu, 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparisons",
"sec_num": "4.2"
},
{
"text": "For all models, we use the same parameter settings. Following Yu et al. (2017) , we set the word vector dimension to 200, the window size to 5, the number of training iterations to 100, the initial learning rate to 0.025, and the subsampling parameter to 10^{-4}. Words with a frequency of less than 5 were ignored during training. We used negative sampling with 10 negative samples for optimization. The whole training process takes about six hours.",
"cite_spans": [
{
"start": 62,
"end": 78,
"text": "Yu et al. (2017)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.3"
},
{
"text": "CSemBias Dataset. To evaluate the debiasing performance of our model, we construct a new dataset named CSemBias (Chinese SemBias). Concretely, we hire three native Chinese speakers to translate the original English SemBias (Zhao et al., 2018b) dataset into Chinese. Each instance in CSemBias consists of four word pairs: a gender-definition word pair (Definition; e.g., \"\u795e\u7236-\u4fee\u5973(priest-nun)\"), a gender-stereotype word pair (Stereotype; e.g., \"\u533b\u751f-\u62a4\u58eb(doctor-nurse)\"), and two other word pairs that have similar meanings but no gender relation (None; e.g., \"\u72d7-\u732b(dog-cat)\", \"\u8336\u676f-\u76d6\u5b50(cup-lid)\"). CSemBias contains 20 gender-stereotype word pairs and 22 gender-definitional word pairs, and we use their Cartesian product to generate 440 instances. In the annotation process, for the translatable words, the annotators obtain the same translation results, which are included in CSemBias. For untranslatable words, each annotator comes up with a Chinese word belonging to the same category, and they decide on the final word together. Examples are shown in Table 1 . Since some of the baselines follow the supervised style, we split CSemBias into training and test datasets. Among the 22 gender-definitional word pairs, 20 word pairs are used for training, and the remaining 2 pairs are used for the test dataset. We name the out-of-domain test dataset CSemBias-subset.",
"cite_spans": [
{
"start": 222,
"end": 242,
"text": "(Zhao et al., 2018b)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 1050,
"end": 1057,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Evaluating Debiasing Performance",
"sec_num": "5.1"
},
{
"text": "Debias Evaluation. To study the quality of the gender information present in each model, we follow Jurgens et al. (2012) in using the analogy dataset, CSemBias, with the goal of identifying the correct analogy of \"he-she\" from four pairs of words. We measure the relational similarity between the (\u4ed6(he), \u5979(she)) word pair and a word pair (a, b) in CSemBias using the cosine similarity between the he - she gender directional vector and the a - b directional vector. We select the word pair with the highest cosine similarity with he - she as the predicted answer. If the trained embeddings are gender-neutral, the percentage of gender-definition pairs selected is expected to be 100%.",
"cite_spans": [
{
"start": 99,
"end": 120,
"text": "Jurgens et al. (2012)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Debiasing Performance",
"sec_num": "5.1"
},
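Editor's illustration: the selection rule above can be sketched as follows, with hypothetical three-dimensional toy vectors (the paper evaluates trained 200-dimensional Chinese embeddings; English stand-in words are used here for readability):

```python
import numpy as np

# Hypothetical toy embeddings constructed so Definition aligns with gender.
emb = {
    "he": np.array([1.0, 0.0, 0.0]),     "she": np.array([-1.0, 0.0, 0.0]),
    "priest": np.array([0.9, 0.1, 0.0]), "nun": np.array([-0.9, 0.1, 0.0]),
    "doctor": np.array([0.2, 0.9, 0.0]), "nurse": np.array([-0.1, 0.8, 0.1]),
    "dog": np.array([0.0, 0.0, 1.0]),    "cat": np.array([0.0, 0.1, 0.9]),
}

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def pick_pair(pairs):
    """Choose the pair (a, b) whose a - b direction best matches he - she."""
    gender_dir = emb["he"] - emb["she"]
    return max(pairs, key=lambda ab: cos(emb[ab[0]] - emb[ab[1]], gender_dir))

# A gender-neutral embedding should select the Definition pair.
best = pick_pair([("priest", "nun"), ("doctor", "nurse"), ("dog", "cat")])
```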
{
"text": "From Table 2 , we can see that our models achieve the best performance on both datasets. In terms of CSemBias-subset, component-based Chinese word embedding models achieve better performance than plain GloVe or Word2vec, which demonstrates that component information is indeed useful in alleviating gender bias. To our surprise, debias models perform poorly on CSemBias-subset, indicating that they do not generalize well to out-of-domain tests. Comparing the performance on CSemBias-subset and CSemBias, we find that the performance of the supervised baseline models relies heavily on labeled gender-related word sets. As for our models, both Radical-added CGE and Radical-enhanced CGE achieve comparable and even better performance than the state-of-the-art GN-GloVe model, and perform significantly better than Hard-GloVe. Radical-added CGE outperforms Radical-enhanced CGE by a small margin, because it directly stores radical information in the word embedding, emphasizing gender information explicitly. Since both of our models are unsupervised, this result means that the radical semantic information in Chinese is especially useful for alleviating gender discrimination, and our models can successfully utilize such information. We use Clopper-Pearson confidence intervals, following Kaneko and Bollegala (2019) , for the significance test.",
"cite_spans": [
{
"start": 1288,
"end": 1315,
"text": "Kaneko and Bollegala (2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 5,
"end": 12,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Evaluating Debiasing Performance",
"sec_num": "5.1"
},
{
"text": "Apart from examining the quality of the gender information present in each model, it is also important that other information that is unrelated to gender biases is preserved. Otherwise, the performance of downstream tasks that use these embeddings might be influenced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preservation of Word Semantics",
"sec_num": "5.2"
},
{
"text": "Semantic Similarity Measurement. This task evaluates the ability of word embeddings by their capacity to uncover the semantic relatedness of word pairs. We select two different Chinese word similarity datasets, i.e., Wordsim-240 and Wordsim-295, provided by Chen et al. (2015) . Wordsim-240 contains 240 pairs of Chinese words and their corresponding human-labeled similarity scores, and the same holds for Wordsim-295. Previous work (Kaneko and Bollegala, 2019) noted that gender biases exist even in the English word similarity test dataset. However, we confirm that no stereotype examples exist in the Chinese Wordsim-240 and Wordsim-295. The similarity score for a word pair is computed as the cosine similarity of their embeddings. We compute the Spearman correlation (Myers et al., 2010) between the human-labeled scores and the similarity scores computed from embeddings. A higher correlation denotes better quality. From Table 3 , we can see that Radical-added CGE obtains the best performance on the Wordsim-240 dataset, outperforming the best baseline, Word2vec, by 0.0111. A possible reason is that radical information is also useful in semantic similarity tests. Generally, the two CGE models perform comparably to Word2vec, indicating that the information encoded in Word2vec is preserved while stereotypical gender bias is removed. Analogy Detection. This task examines the quality of word embeddings by their ability to discover linguistic regularities between pairs of words. Take the tuple \"\u7f57\u9a6c(Rome):\u610f\u5927\u5229(Italy)-\u67cf\u6797(Berlin):\u5fb7\u56fd(Germany)\": the model answers correctly if the nearest vector representation to Italy - Rome + Berlin among all words except Rome, Italy, and Berlin is Germany.
More generally, given an analogy tuple \"a : b - c : d\", the model answers the analogy question \"a : b - c : ?\" by finding x such that: We use the same dataset as in (Yu et al., 2017) , which consists of 1,124 tuples of words, and each tuple contains 4 words. There are three categories in this dataset, i.e., \"Capital\" (677 tuples), \"State\" (175 tuples), and \"Family\" (272 tuples). The percentage of correctly solved analogy questions is shown in Table 4 . We can see that there is no significant degradation of performance in our model or the debias baselines. Specifically, Radical-enhanced CGE performs better than Radical-added CGE. One possible reason is that, in Capital- and State-related words, the semantic meanings cannot be directly revealed by radicals.",
"cite_spans": [
{
"start": 257,
"end": 275,
"text": "Chen et al. (2015)",
"ref_id": "BIBREF3"
},
{
"start": 435,
"end": 463,
"text": "(Kaneko and Bollegala, 2019)",
"ref_id": "BIBREF10"
},
{
"start": 786,
"end": 806,
"text": "(Myers et al., 2010)",
"ref_id": "BIBREF16"
},
{
"start": 1854,
"end": 1871,
"text": "(Yu et al., 2017)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 934,
"end": 941,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 2135,
"end": 2142,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Preservation of Word Semantics",
"sec_num": "5.2"
},
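The word-similarity evaluation described above (cosine similarity of embedding pairs, scored by Spearman correlation against human judgments) can be sketched as follows. This is a minimal illustration with hypothetical toy vectors and scores, not the paper's evaluation code; the Spearman implementation assumes no tied scores.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def spearman(x, y):
    # Spearman correlation = Pearson correlation of the ranks
    # (simplified: assumes no tied scores).
    rank = lambda a: np.argsort(np.argsort(a)).astype(float)
    return float(np.corrcoef(rank(x), rank(y))[0, 1])

# Hypothetical toy embeddings and human-labeled pair scores.
emb = {"king": np.array([0.9, 0.1]), "queen": np.array([0.8, 0.3]),
       "apple": np.array([0.1, 0.9])}
pairs = [("king", "queen", 8.5), ("king", "apple", 2.0), ("queen", "apple", 3.0)]
human = [s for _, _, s in pairs]
model = [cosine(emb[a], emb[b]) for a, b, _ in pairs]
print(spearman(human, model))  # 1.0: the model ranks pairs exactly like humans
```

A higher value means the embedding's similarity ordering agrees more closely with the human ordering, which is the quantity reported in Table 3.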
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\mathop{\\arg\\max}_{x \\neq a, x \\neq b, x \\neq c} \\cos(\\vec{b} \\u2212 \\vec{a} + \\vec{c}, \\vec{x})",
"eq_num": "(9)"
}
],
"section": "Preservation of Word Semantics",
"sec_num": "5.2"
},
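The objective in Eq. (9) can be implemented as a brute-force search over the vocabulary. The sketch below uses hypothetical toy vectors (real evaluations search the full trained vocabulary) and follows the additive formulation shown above:

```python
import numpy as np

def solve_analogy(a, b, c, vocab):
    # Answer "a : b - c : ?" by maximizing cos(b - a + c, x)
    # over all words x except a, b, and c, as in Eq. (9).
    target = vocab[b] - vocab[a] + vocab[c]
    target = target / np.linalg.norm(target)
    best_word, best_sim = None, -2.0
    for word, vec in vocab.items():
        if word in (a, b, c):
            continue
        sim = float(np.dot(target, vec) / np.linalg.norm(vec))
        if sim > best_sim:
            best_word, best_sim = word, sim
    return best_word

# Toy vectors where each country = its capital + a shared "country" offset.
vocab = {
    "Rome":    np.array([1.0, 0.0, 0.0]),
    "Italy":   np.array([1.0, 0.0, 1.0]),
    "Berlin":  np.array([0.0, 1.0, 0.0]),
    "Germany": np.array([0.0, 1.0, 1.0]),
    "Paris":   np.array([0.0, 0.0, 1.0]),
}
print(solve_analogy("Rome", "Italy", "Berlin", vocab))  # Germany
```

The reported analogy accuracy is simply the fraction of tuples for which this argmax returns the held-out fourth word.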
{
"text": "In this paper, we proposed two methods for the unsupervised training of gender-neutral Chinese word embeddings, which emphasize the gender information stored in Chinese radicals in explicit and implicit ways. Our first model directly incorporates radical embeddings into its word embeddings, and the second implicitly utilizes radical information. Experimental results show that our unsupervised methods outperform supervised debiased word embedding models without sacrificing the functionality of the embedding model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "In this paper, we study stereotypical associations between the male and female genders and professional occupations in word embeddings. We regard a system as biased if the word embeddings of a specific gender are more related to certain professions. When such representations are used in downstream NLP applications, there is an additional risk of unequal performance across genders (Gonen and Webster, 2020) . We believe that the observed correlations between genders and occupations in word embeddings are a symptom of an inadequate training process, and that decorrelating genders and occupations would enable systems to counteract rather than reinforce existing gender imbalances.",
"cite_spans": [
{
"start": 398,
"end": 423,
"text": "(Gonen and Webster, 2020)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bias Statement",
"sec_num": "7"
},
{
"text": "In this work, we focus on evaluating binary gender bias. However, gender bias can take many forms, and we look forward to evaluating bias in Chinese word embeddings with a broader range of methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bias Statement",
"sec_num": "7"
},
{
"text": "1 http://download.wikipedia.com/zhwiki 2 https://github.com/fxsjy/jieba 3 http://tool.httpcn.com/zi/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their constructive comments. The work was supported by King Abdullah University of Science and Technology (KAUST) through grant awards Nos. BAS/1/1624-01, FCC/1/1976-18-01, FCC/1/1976-23-01, FCC/1/1976-25-01, FCC/1/1976 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Language (technology) is power: A critical survey of bias in nlp",
"authors": [
{
"first": "",
"middle": [],
"last": "Su Lin",
"suffix": ""
},
{
"first": "Solon",
"middle": [],
"last": "Blodgett",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Barocas",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Hanna",
"middle": [],
"last": "Wallach",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5454--5476",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Su Lin Blodgett, Solon Barocas, Hal Daum\u00e9 III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of bias in nlp. In Proceed- ings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5454-5476.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Man is to computer programmer as woman is to homemaker? debiasing word embeddings",
"authors": [
{
"first": "Tolga",
"middle": [],
"last": "Bolukbasi",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "James",
"middle": [
"Y"
],
"last": "Zou",
"suffix": ""
},
{
"first": "Venkatesh",
"middle": [],
"last": "Saligrama",
"suffix": ""
},
{
"first": "Adam Tauman",
"middle": [],
"last": "Kalai",
"suffix": ""
}
],
"year": 2016,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam Tauman Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In NIPS.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Semantics derived automatically from language corpora contain human-like biases",
"authors": [
{
"first": "Aylin",
"middle": [],
"last": "Caliskan",
"suffix": ""
},
{
"first": "Joanna",
"middle": [
"J"
],
"last": "Bryson",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2017,
"venue": "Science",
"volume": "356",
"issue": "",
"pages": "183--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from lan- guage corpora contain human-like biases. Science, 356:183-186.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Joint learning of character and word embeddings",
"authors": [
{
"first": "Xinxiong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Huanbo",
"middle": [],
"last": "Luan",
"suffix": ""
}
],
"year": 2015,
"venue": "Twenty-Fourth International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinxiong Chen, Lei Xu, Zhiyuan Liu, Maosong Sun, and Huanbo Luan. 2015. Joint learning of character and word embeddings. In Twenty-Fourth Interna- tional Joint Conference on Artificial Intelligence.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Radical enhanced chinese word embedding",
"authors": [
{
"first": "Zheng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Keqi",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2018,
"venue": "Chinese computational linguistics and natural language processing based on naturally annotated big data",
"volume": "",
"issue": "",
"pages": "3--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zheng Chen and Keqi Hu. 2018. Radical enhanced chinese word embedding. In Chinese computational linguistics and natural language processing based on naturally annotated big data, pages 3-11. Springer.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Queens are powerful too: Mitigating gender bias in dialogue generation",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Urbanek",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "8173--8188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Dinan, Angela Fan, Adina Williams, Jack Ur- banek, Douwe Kiela, and Jason Weston. 2020. Queens are powerful too: Mitigating gender bias in dialogue generation. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 8173-8188.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automatically identifying gender issues in machine translation using perturbations",
"authors": [
{
"first": "Hila",
"middle": [],
"last": "Gonen",
"suffix": ""
},
{
"first": "Kellie",
"middle": [],
"last": "Webster",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "1991--1995",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hila Gonen and Kellie Webster. 2020. Automatically identifying gender issues in machine translation us- ing perturbations. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1991-1995.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Women also snowboard: Overcoming bias in captioning models (extended abstract)",
"authors": [
{
"first": "Lisa",
"middle": [
"Anne"
],
"last": "Hendricks",
"suffix": ""
},
{
"first": "Kaylee",
"middle": [],
"last": "Burns",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Saenko",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Darrell",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rohrbach",
"suffix": ""
}
],
"year": 2018,
"venue": "ECCV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lisa Anne Hendricks, Kaylee Burns, Kate Saenko, Trevor Darrell, and Anna Rohrbach. 2018. Women also snowboard: Overcoming bias in captioning mod- els (extended abstract). In ECCV.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Improving fairness in machine learning systems: What do industry practitioners need? ArXiv",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Holstein",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [
"Wortman"
],
"last": "Vaughan",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Miroslav",
"middle": [],
"last": "Dud\u00edk",
"suffix": ""
},
{
"first": "Hanna",
"middle": [
"M"
],
"last": "Wallach",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daum\u00e9, Miroslav Dud\u00edk, and Hanna M. Wallach. 2019. Improving fairness in machine learning sys- tems: What do industry practitioners need? ArXiv, abs/1812.05239.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Semeval-2012 task 2: Measuring degrees of relational similarity",
"authors": [
{
"first": "David",
"middle": [],
"last": "Jurgens",
"suffix": ""
},
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Turney",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Holyoak",
"suffix": ""
}
],
"year": 2012,
"venue": "* SEM 2012: The First Joint Conference on Lexical and Computational Semantics",
"volume": "1",
"issue": "",
"pages": "356--364",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Jurgens, Saif Mohammad, Peter Turney, and Keith Holyoak. 2012. Semeval-2012 task 2: Measur- ing degrees of relational similarity. In * SEM 2012: The First Joint Conference on Lexical and Compu- tational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 356- 364.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Gender-preserving debiasing for pre-trained word embeddings",
"authors": [
{
"first": "Masahiro",
"middle": [],
"last": "Kaneko",
"suffix": ""
},
{
"first": "Danushka",
"middle": [],
"last": "Bollegala",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1641--1650",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masahiro Kaneko and Danushka Bollegala. 2019. Gender-preserving debiasing for pre-trained word embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 1641-1650.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Debiasing pre-trained contextualised embeddings",
"authors": [
{
"first": "Masahiro",
"middle": [],
"last": "Kaneko",
"suffix": ""
},
{
"first": "Danushka",
"middle": [],
"last": "Bollegala",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "1256--1266",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masahiro Kaneko and Danushka Bollegala. 2021. De- biasing pre-trained contextualised embeddings. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Lin- guistics: Main Volume, pages 1256-1266.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Vmsmo: Learning to generate multimodal summary for videobased news articles",
"authors": [
{
"first": "Mingzhe",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiuying",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Shen",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Zhangming",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "Dongyan",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "9360--9369",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mingzhe Li, Xiuying Chen, Shen Gao, Zhangming Chan, Dongyan Zhao, and Rui Yan. 2020. Vmsmo: Learning to generate multimodal summary for video- based news articles. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 9360-9369.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The stylecontent duality of attractiveness: Learning to write eye-catching headlines via disentanglement",
"authors": [
{
"first": "Mingzhe",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiuying",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Shen",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Dongyan",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "35",
"issue": "",
"pages": "13252--13260",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mingzhe Li, Xiuying Chen, Min Yang, Shen Gao, Dongyan Zhao, and Rui Yan. 2021. The style- content duality of attractiveness: Learning to write eye-catching headlines via disentanglement. In Pro- ceedings of the AAAI Conference on Artificial Intelli- gence, volume 35, pages 13252-13260.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013b. Distributed repre- sentations of words and phrases and their composi- tionality. In NIPS.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Research design and statistical analysis",
"authors": [
{
"first": "L",
"middle": [],
"last": "Jerome",
"suffix": ""
},
{
"first": "Arnold",
"middle": [],
"last": "Myers",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"Frederick"
],
"last": "Well",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lorch",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jerome L Myers, Arnold Well, and Robert Frederick Lorch. 2010. Research design and statistical analy- sis. Routledge.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Assessing gender bias in machine translation: a case study with google translate",
"authors": [
{
"first": "O",
"middle": [
"R"
],
"last": "Marcelo",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Prates",
"suffix": ""
},
{
"first": "H",
"middle": [
"C"
],
"last": "Pedro",
"suffix": ""
},
{
"first": "Lu\u00eds",
"middle": [
"C"
],
"last": "Avelar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lamb",
"suffix": ""
}
],
"year": 2018,
"venue": "Neural Computing and Applications",
"volume": "",
"issue": "",
"pages": "1--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcelo O. R. Prates, Pedro H. C. Avelar, and Lu\u00eds C. Lamb. 2018. Assessing gender bias in machine trans- lation: a case study with google translate. Neural Computing and Applications, pages 1-19.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Evaluating gender bias in machine translation",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Stanovsky",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Luke",
"middle": [
"S"
],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriel Stanovsky, Noah A. Smith, and Luke S. Zettle- moyer. 2019. Evaluating gender bias in machine translation. In ACL.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Multi-granularity chinese word embedding",
"authors": [
{
"first": "Rongchao",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Quan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "981--986",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rongchao Yin, Quan Wang, Peng Li, Rui Li, and Bin Wang. 2016. Multi-granularity chinese word em- bedding. In Proceedings of the 2016 conference on empirical methods in natural language processing, pages 981-986.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Joint embeddings of chinese words, characters, and fine-grained subcharacter components",
"authors": [
{
"first": "Jinxing",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Xun",
"middle": [],
"last": "Jian",
"suffix": ""
},
{
"first": "Yangqiu",
"middle": [],
"last": "Hao Xin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Song",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "286--291",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinxing Yu, Xun Jian, Hao Xin, and Yangqiu Song. 2017. Joint embeddings of chinese words, characters, and fine-grained subcharacter components. In Proceed- ings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 286-291.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Men also like shopping: Reducing gender bias amplification using corpus-level constraints",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tianlu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2017,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or- donez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In EMNLP.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Gender bias in coreference resolution: Evaluation and debiasing methods",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tianlu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or- donez, and Kai-Wei Chang. 2018a. Gender bias in coreference resolution: Evaluation and debiasing methods. In NAACL.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Learning gender-neutral word embeddings",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yichao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zeyu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4847--4853",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai- Wei Chang. 2018b. Learning gender-neutral word embeddings. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 4847-4853.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "Representative cases in CSemBias dataset. English words with wavy lines are untranslatable and we replace them with new Chinese words belonging to the same category."
},
"TABREF2": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td colspan=\"3\">CSemBias-subset Definition \u2191 GloVe Embeddings 40.0 37.5 Word2vec 47.5 30.0</td><td>22.5 22.5</td><td>49.1 72.5</td><td>CSemBias 31.4 17.7</td><td>19.5 9.8</td></tr><tr><td>CWE JWE RECWE MGE</td><td>45.5 45.0 50.0 57.5</td><td>27.5 25.0 25.0 32.5</td><td>27.0 30.0 25.0 10.0</td><td>57.3 52.3 60.4 63.6</td><td>25.2 25.9 21.4 30.7</td><td>17.5 21.8 18.2 5.7</td></tr><tr><td>Hard-GloVe GN-GloVe GP(GloVe) GP(GN)</td><td>17.5 17.5 15.0 12.5</td><td>57.5 50.0 52.5 50.0</td><td>25.0 32.5 32.5 37.5</td><td>73.6 92.5 71.1 90.4</td><td>15.7 4.5 16.4 7.3</td><td>10.7 3.0 12.5 2.3</td></tr><tr><td>Radical-added CGE Radical-enhanced CGE</td><td>82.5 \u2020 * 75.0 \u2020 *</td><td>15.0 \u2020 * 17.5 \u2020 *</td><td>2.5 \u2020 * 7.5 \u2020 *</td><td>93.4 \u2020 * 86.8 \u2020 *</td><td>3.9 \u2020 * 10.0 \u2020 *</td><td>2.7 \u2020 * 3.2 \u2020 *</td></tr></table>",
"text": "Stereotype \u2193 None \u2193 Definition \u2191 Stereotype \u2193 None \u2193"
},
"TABREF3": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>Model</td><td colspan=\"2\">Wordsim-240 Wordsim-295</td></tr><tr><td>GloVe Word2vec Hard-GloVe GN-GloVe GP(GloVe) GP(GN) Radical-added CGE Radical-enhanced CGE</td><td>0.5078 0.5009 0.5046 0.5026 0.4959 0.4959 0.5120 0.5067</td><td>0.4419 0.5985 0.4378 0.4400 0.4451 0.4451 0.5875 0.5821</td></tr></table>",
"text": "Prediction accuracies for gender relational analogies. \u2020 and * indicate statistically significant differences against Word2vec and Hard-GloVe respectively."
},
"TABREF4": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "Results on word similarity evaluation."
},
"TABREF6": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "Results on word analogy reasoning."
}
}
}
}