mishig HF Staff committed on
Commit
6ac77d8
·
verified ·
1 Parent(s): efe2fc9

Add 1 files

Browse files
Files changed (1) hide show
  1. 2406/2406.12165.md +3259 -0
2406/2406.12165.md ADDED
@@ -0,0 +1,3259 @@
Title: Statistical Uncertainty in Word Embeddings: GloVe-V

URL Source: https://arxiv.org/html/2406.12165

Markdown Content:
License: CC BY 4.0
arXiv:2406.12165v1 [cs.CL] 18 Jun 2024

Statistical Uncertainty in Word Embeddings: GloVe-V

Andrea Vallebueno† (Stanford University, avaimar@stanford.edu)
Cassandra Handan-Nader† (Stanford University, slnader@stanford.edu)
Christopher D. Manning (Stanford University, manning@cs.stanford.edu)
Daniel E. Ho† (Stanford University, deho@stanford.edu)
†Equal contribution. Corresponding author.

Abstract

Static word embeddings are ubiquitous in computational social science applications and contribute to practical decision-making in a variety of fields including law and healthcare. However, assessing the statistical uncertainty in downstream conclusions drawn from word embedding statistics has remained challenging. When using only point estimates for embeddings, researchers have no streamlined way of assessing the degree to which their model selection criteria or scientific conclusions are subject to noise due to sparsity in the underlying data used to generate the embeddings. We introduce a method to obtain approximate, easy-to-use, and scalable reconstruction error variance estimates for GloVe (Pennington et al., 2014), one of the most widely used word embedding models, using an analytical approximation to a multivariate normal model. To demonstrate the value of embeddings with variance (GloVe-V), we illustrate how our approach enables principled hypothesis testing in core word embedding tasks, such as comparing the similarity between different word pairs in vector space, assessing the performance of different models, and analyzing the relative degree of ethnic or gender bias in a corpus using different word lists.
Figure 1: Conceptual diagram of the GloVe-V method for one word. The top two rows illustrate the structural form and estimation of the original GloVe model (Pennington et al., 2014), which models each row of a logged, weighted co-occurrence matrix as the product of a word vector and context vectors, plus constant terms. As shown in the third row, GloVe-V creates a distribution for the optimal GloVe word vector using the reconstruction error found through the GloVe minimization procedure. These distributions can be efficiently computed word-by-word by assuming conditional independence between words given the optimal context vectors and constants.
1 Introduction

Over the past decade, vector representations of words, or "word embeddings," have become standard ways to quantify word meaning and semantic relationships due to their high performance on natural language tasks (Mikolov et al., 2013b; Pennington et al., 2014; Levy et al., 2015). Word embeddings are now ubiquitous in a wide variety of downstream computational social science applications, including charting the semantic evolution of words over time (Hamilton et al., 2016), generating simplifications of scientific terminology (Kim et al., 2016), comparing the information density of languages (Aceves and Evans, 2024), assisting in legal interpretation (Choi, 2024), and detecting societal biases in educational texts (Lucy et al., 2020), historical corpora (Garg et al., 2018; Charlesworth et al., 2022), legal documents (Matthews et al., 2022; Sevim et al., 2023), political writing (Knoche et al., 2019), and annotator judgments (Davani et al., 2023). Task performance metrics using word embeddings also factor prominently into the evaluation of more sophisticated, multimodal artificial intelligence systems, such as brain-computer interfaces (Tang et al., 2023) and adversarial text-to-image generation (Liu et al., 2023).

Though a vast amount of research has relied on a relatively narrow set of word embedding models, no unified framework has emerged for representing statistical uncertainty in how accurately the word embeddings reconstruct the relationships implied by the sample of word co-occurrences. Intuitively, we should be less certain about a word's position in vector space the less data we have on its co-occurrences in the raw text (Ethayarajh et al., 2019a, b). While it is generally standard practice in the social and natural sciences to check results for statistical significance, the vast majority of applications have relied exclusively on point estimates of embeddings, ignoring uncertainty even when training vectors over smaller corpora (e.g., Sevim et al., 2023; Knoche et al., 2019). In select applications, approaches have ranged from a bootstrap on the documents in the training corpus (Lucy et al., 2020), to permutations of the word list or lexicon (Caliskan et al., 2017; Garg et al., 2018). While useful, such bootstrap and permutation approaches are computationally intractable on large datasets. Moreover, they address uncertainty from document or lexicon selection, even though embeddings are parameters of a data generating process on word co-occurrences, not on collections of documents or sets of words (Ethayarajh et al., 2019b). Until now, accounting for the fundamental uncertainty from the data-generating process has eluded the NLP community.

To fill this gap, we develop GloVe-V, a scalable, easy-to-use, computationally efficient method for approximating reconstruction error variances for the GloVe model (Pennington et al., 2014), one of the most widely used word embedding models. Our approach leverages the core insight that if context vectors and constant terms are held fixed at optimal values, GloVe word embeddings are the optimal parameters for a multivariate normal probability model on a weighted log transformation of the rows of the co-occurrence matrix. If we assume that the rows of the co-occurrence matrix are independent given their context vectors and constant terms, the word embedding variances according to this likelihood are computationally tractable on large vocabularies. This assumption is reasonable and is also employed in other settings, such as measuring the influence of particular documents on downstream embedding statistics (Brunet et al., 2019). Such variance estimates enable researchers to conduct rigorous assessments of model performance and principled statistical hypothesis tests on downstream tasks, responding to the need to account for statistical uncertainty and significance testing in natural language processing and machine learning (Card et al., 2020; Dror et al., 2020; Liao et al., 2021; Bowman and Dahl, 2021; Ulmer et al., 2022).

Our contributions are threefold: (a) we provide the statistical foundations for a principled notion of reconstruction error variances for GloVe word embeddings; (b) we show that incorporating uncertainty can change conclusions about textual similarities, model selection, and textual bias; (c) we provide a data release including pre-computed word embeddings and variances for the most frequently occurring words in the Corpus of Historical American English (COHA), the largest corpus of historical American English that is widely used to track the usage and linguistic evolution of English terms over time (e.g., Ng et al., 2015; Newberry et al., 2017; Garg et al., 2018; Xiao et al., 2023; Charlesworth and Hatzenbuehler, 2024).1

2 Background on GloVe

We use upper case bold letters for matrices $\mathbf{X}$, lower case bold letters for vectors $\mathbf{x}$, and regular non-bolded letters for scalars $x$, except when indexing into a matrix or vector (i.e., the $ij$th entry of the matrix $\mathbf{X}$ is $\mathbf{X}_{ij}$). Sets are represented by script letters $\mathcal{X}$.

Word embedding models learn a shared vector space representation of words in a corpus. The training data are word co-occurrences in the corpus, which can be represented by a $V \times V$ co-occurrence matrix $\mathbf{X}$, where $\mathbf{X}_{ij}$ is the weighted number of times word $j$ appears in the context of word $i$,2 and $V$ is the number of words in the vocabulary. A word embedding is a vector representation of a given word that emerges from the model.

The GloVe word embedding model (Pennington et al., 2014) learns two embeddings $\mathbf{w}_k, \mathbf{v}_k \in \mathbb{R}^D$ for each word $k$ by minimizing the following cost function:

$$J = \sum_{i=1}^{V} \sum_{j=1}^{V} f(\mathbf{X}_{ij}) \left( \mathbf{w}_i^T \mathbf{v}_j + b_i + c_j - \log \mathbf{X}_{ij} \right)^2 \qquad (1)$$

where $f(\mathbf{X}_{ij})$ is a non-negative weighting function with properties that ensure that very rare or very frequent co-occurrences do not receive too much weight, and $b_i$ and $c_j$ are constant terms associated with words $i$ and $j$ respectively.3 The vectors $\mathbf{v}_k$ are called "context" vectors and $\mathbf{w}_k$ are called "center" vectors, representing that word co-occurrences are defined based on words that appear within a fixed context window around a center word. The original implementation computed $\mathbf{w}_k + \mathbf{v}_k$ in a post-processing step to obtain a single embedding for a word $k$. In this paper, we focus on the center vector $\mathbf{w}_k$ as the embedding of interest for word $k$.4

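As a concrete illustration of the cost in Equation 1, the following is a minimal sketch (not the authors' released code) that evaluates the GloVe objective on a toy co-occurrence matrix; the weighting function uses the standard defaults $x_{\max} = 100$ and $\alpha = 0.75$ from Pennington et al. (2014), and all variable names are illustrative.

```python
import numpy as np

def glove_weight(x, x_max=100.0, alpha=0.75):
    """Weighting function f: down-weights rare co-occurrences, caps frequent ones."""
    return np.where(x < x_max, (x / x_max) ** alpha, 1.0)

def glove_cost(X, W, V, b, c):
    """Equation 1: weighted squared error between w_i^T v_j + b_i + c_j
    and log X_ij, summed over non-zero co-occurrences (f(0) = 0)."""
    cost = 0.0
    for i, j in zip(*np.nonzero(X)):
        resid = W[i] @ V[j] + b[i] + c[j] - np.log(X[i, j])
        cost += glove_weight(X[i, j]) * resid ** 2
    return cost

# toy example: 4-word vocabulary, 2-dimensional embeddings
rng = np.random.default_rng(0)
X = rng.integers(0, 20, size=(4, 4)).astype(float)
W, V = rng.normal(size=(4, 2)), rng.normal(size=(4, 2))
b, c = rng.normal(size=4), rng.normal(size=4)
print(glove_cost(X, W, V, b, c))
```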
3 Variance Derivation for GloVe-V

We now derive the GloVe-V variance estimator by recasting the optimization problem and recovering a probabilistic interpretation of GloVe embeddings.

3.1 Reformulating the GloVe optimization problem

The GloVe optimization problem using the cost in Equation 1 can be written in matrix form as

$$\min_{\mathbf{b}, \mathbf{c}, \mathbf{W}, \mathbf{V}} \; \|\mathbf{F} \odot \mathbf{R}\|_F \quad \text{s.t.} \quad \mathbf{R} = \log \mathbf{X} - \mathbf{W}^T \mathbf{V} - \mathbf{b}\mathbf{1}^T - \mathbf{1}\mathbf{c}^T, \quad \operatorname{rank}(\mathbf{W}), \operatorname{rank}(\mathbf{V}) \le D \qquad (2)$$

where $\mathbf{F}_{ij} = f(\mathbf{X}_{ij})$, $\mathbf{w}_i$ and $\mathbf{v}_j$ are the $i$th and $j$th columns of matrices $\mathbf{W}$ and $\mathbf{V}$ respectively, $b_i$ and $c_j$ are the $i$th and $j$th elements of vectors $\mathbf{b}$ and $\mathbf{c}$ respectively, and $\odot$ is the element-wise product. Equation 2 is an element-wise weighted low-rank approximation problem that can be solved in two steps (e.g., Markovsky, 2012):

$$\min_{\mathbf{b} \in \mathbb{R}^V,\; \mathbf{c} \in \mathbb{R}^V,\; \mathbf{V} \in \mathbb{R}^{V \times D}} \;\; \min_{\mathbf{W} \in \mathbb{R}^{V \times D}} \; \|\mathbf{F} \odot \mathbf{R}\|_F \qquad (3)$$

That is, holding the choice of $(\mathbf{b}, \mathbf{c}, \mathbf{V})$ fixed at their globally optimal values $(\mathbf{b}^*, \mathbf{c}^*, \mathbf{V}^*)$, the inner minimization to find the optimal $\mathbf{W}$ decomposes into $V$ weighted least squares projections with solutions:

$$\mathbf{w}_i^* = \left( \mathbf{V}_{\mathcal{K}}^{*T} \mathbf{D}_{\mathcal{K}} \mathbf{V}_{\mathcal{K}}^{*} \right)^{-1} \mathbf{V}_{\mathcal{K}}^{*T} \mathbf{D}_{\mathcal{K}} \left( \log \mathbf{x}_i - b_i^{*} \mathbf{1} - \mathbf{c}^{*} \right) \qquad (4)$$

where $\mathcal{K}$ is the set of column indices with non-zero co-occurrences for word $i$, $\mathbf{V}_{\mathcal{K}}^{*}$ is a matrix whose columns belong to the set $\{\mathbf{v}_j^{*} : j \in \mathcal{K}\}$, $\mathbf{D}_{\mathcal{K}} = \operatorname{diag}(\{\mathbf{F}_{ij}^2 : j \in \mathcal{K}\})$, and $\mathbf{x}_i$ is the $i$th row of $\mathbf{X}$.
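To make Equation 4 concrete, here is a minimal sketch of the per-word weighted least squares solution, holding the context vectors and constants fixed at (hypothetical) optimal values. The context vectors are stored as rows of `V_star` for convenience (the paper writes them as columns), and all names are ours.

```python
import numpy as np

def wls_word_vector(x_i, V_star, b_i, c_star, f_weights):
    """Equation 4: w_i* = (V_K^T D_K V_K)^{-1} V_K^T D_K (log x_i - b_i 1 - c*),
    restricted to the columns K where word i has non-zero co-occurrences."""
    K = np.nonzero(x_i)[0]               # non-zero co-occurrence columns
    V_K = V_star[K]                      # |K| x D block of context vectors
    D_K = np.diag(f_weights[K] ** 2)     # squared weights, as in the paper
    y = np.log(x_i[K]) - b_i - c_star[K] # weighted regression target
    H = V_K.T @ D_K @ V_K                # D x D normal-equations matrix
    return np.linalg.solve(H, V_K.T @ D_K @ y)
```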
3.2 A probabilistic model for an approximate problem

Recasting the optimization problem in this fashion allows us to state the natural probabilistic model for which $\mathbf{w}_i^*$ in Equation 4 is optimal. Conditional on the optimal context vector subspace spanned by $\mathbf{V}^*$ and optimal constant vectors $(\mathbf{b}^*, \mathbf{c}^*)$, we write the following weighted multivariate normal model for the rows of $\mathbf{X}$:

$$\log \mathbf{x}_i = b_i^{*} \mathbf{1} + \mathbf{c}^{*} + \mathbf{w}_i^T \mathbf{V}_{\mathcal{K}}^{*} + \mathbf{e}_i, \qquad \mathbf{e}_i \sim \mathcal{N}\left( \mathbf{0}, \; \mathbf{D}_{\mathcal{K}}^{-1} \sigma_i^2 \right) \qquad (5)$$

where we always have $(\mathbf{D}_{\mathcal{K}})_{ii} > 0$ due to the fact that $\mathbf{X}_{ij} > 0$ for $j \in \mathcal{K}$, and we assume that the rows of $\log \mathbf{X}$ are independent given the optimal parameters, i.e., $\log \mathbf{x}_i \mid \mathbf{b}^*, \mathbf{c}^*, \mathbf{V}_{\mathcal{K}}^* \;\perp\; \log \mathbf{x}_j \mid \mathbf{b}^*, \mathbf{c}^*, \mathbf{V}_{\mathcal{K}}^*$ for $i \neq j$.5 Then, under standard assumptions for weighted least squares estimators (e.g., Romano and Wolf, 2017), the covariance matrix for $\mathbf{W}$ simplifies into a $VD \times VD$ block diagonal matrix with the $i$th $D \times D$ block given by:

$$\boldsymbol{\Sigma}_i = \sigma_i^2 \left( \sum_{j \in \mathcal{K}} f(\mathbf{X}_{ij}) \, \mathbf{v}_j^{*} (\mathbf{v}_j^{*})^T \right)^{-1} \qquad (6)$$

We can then estimate $\sigma_i^2$, the reconstruction error for word $i$, with the plug-in estimator:

$$\hat{\sigma}_i^2 = \frac{1}{|\mathcal{K}| - D} \sum_{j \in \mathcal{K}} f(\mathbf{X}_{ij}) \left( \log \mathbf{X}_{ij} - b_i^{*} - c_j^{*} - \mathbf{w}_i^{*T} \mathbf{v}_j^{*} \right)^2 \qquad (7)$$

Figure 2: Uncertainty in word embedding locations. Two-dimensional representations of GloVe word embeddings trained on COHA (1900–1999), along with ellipses drawn around 100 draws from the estimated multivariate normal distribution from Equation 5 for a random subset of words. Lower frequency words like "rigs" and "illumination" have more uncertainty in their estimated positions in the vector space than high frequency words like "she" and "large."
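Before turning to estimation details, a sketch of the resulting per-word covariance (Equations 6–7), reusing the row-wise layout and the hypothetical inputs of the previous block; it assumes $|\mathcal{K}| > D$ and is illustrative rather than the released implementation.

```python
import numpy as np

def glove_v_covariance(x_i, w_i, V_star, b_i, c_star, f_weights, D):
    """Equations 6-7: plug-in reconstruction error and D x D covariance block."""
    K = np.nonzero(x_i)[0]
    V_K, f_K = V_star[K], f_weights[K]
    resid = np.log(x_i[K]) - b_i - c_star[K] - V_K @ w_i   # per-context residuals
    sigma2_hat = np.sum(f_K * resid ** 2) / (len(K) - D)   # Equation 7
    H_i = (V_K * f_K[:, None]).T @ V_K                     # sum_j f(X_ij) v_j v_j^T
    return sigma2_hat * np.linalg.inv(H_i)                 # Equation 6
```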
3.3 Estimation

The covariance estimator in Equation 6 is only valid for words that co-occur with a greater number of unique context words $|\mathcal{K}|$ than the dimensionality of the vectors $D$. A simple way to increase the coverage of the variances in smaller corpora is to reduce the dimensionality of the word vectors. However, when $|\mathcal{K}| \approx D$, numerical problems with computing the inverse in Equation 6 are likely to occur even if it is technically possible to compute an inverse. To address these numerical issues, for each word whose Hessian block $\mathbf{H}_i = \sum_{j \in \mathcal{K}} f(\mathbf{X}_{ij}) \mathbf{v}_j^{*} (\mathbf{v}_j^{*})^T$ has a condition number that implies numerical error in excess of 1e-10 in its inverse, we instead compute the Moore-Penrose pseudo-inverse of $\mathbf{H}_i$ as $\mathbf{V} \boldsymbol{\Lambda}^{+} \mathbf{U}^T$ (Golub and Van Loan, 2013), where $\mathbf{H}_i = \mathbf{U} \boldsymbol{\Lambda} \mathbf{V}^T$ is the singular value decomposition of $\mathbf{H}_i$ and $\boldsymbol{\Lambda}^{+}_{jj} = 1/\boldsymbol{\Lambda}_{jj}$ if $\boldsymbol{\Lambda}_{jj} >$ 1e-3 $\times \max_j \boldsymbol{\Lambda}_{jj}$ and $0$ otherwise. This technique effectively drops dimensions that are predominantly noise in the Hessian block when computing the inverse.

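A minimal sketch of that safeguard, under the assumption that the condition-number check approximates the paper's 1e-10 implied-error criterion via machine epsilon; tolerances and names are illustrative.

```python
import numpy as np

def safe_inverse(H_i, err_tol=1e-10, sv_rel_tol=1e-3):
    """Invert a Hessian block, falling back to a truncated Moore-Penrose
    pseudo-inverse when the implied numerical error is too large."""
    # rough heuristic: relative inversion error ~ condition number * machine eps
    if np.linalg.cond(H_i) * np.finfo(float).eps < err_tol:
        return np.linalg.inv(H_i)
    U, s, Vt = np.linalg.svd(H_i)
    s_inv = np.where(s > sv_rel_tol * s.max(), 1.0 / s, 0.0)  # drop noisy dimensions
    return Vt.T @ np.diag(s_inv) @ U.T                        # V Lambda^+ U^T
```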
3.4 Propagating uncertainty

With this derivation in hand, propagating variance to downstream tasks is straightforward. For differentiable test statistics, such as the cosine similarity between two word embeddings, the most computationally efficient approach is to use the delta method for asymptotic variances (van der Vaart, 2000). Using a first-order Taylor series approximation to the test statistic, the delta method states that if $\sqrt{n}(\mathbf{W} - \hat{\mathbf{W}})$ converges to $\mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma})$, then $\sqrt{n}(\phi(\mathbf{W}) - \phi(\hat{\mathbf{W}}))$ converges to $\mathcal{N}(\mathbf{0}, \phi'(\mathbf{W})^T \boldsymbol{\Sigma} \, \phi'(\mathbf{W}))$, where $\phi(\cdot)$ is a differentiable function of $\mathbf{W}$ and $\phi'(\cdot)$ is its gradient with respect to $\mathbf{W}$. If the test statistic only depends on a subset of words in the vocabulary, the computation is quite efficient due to the fact that the gradient will be sparse.

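For instance, a sketch of a delta-method standard error for the cosine similarity of two word vectors, treating the two words' covariance blocks as independent (consistent with the block-diagonal structure above); the analytic gradient and names are our simplifications, not the authors' code.

```python
import numpy as np

def cosine_similarity_se(w_a, w_b, Sigma_a, Sigma_b):
    """Delta-method standard error of cos(w_a, w_b) given the two D x D
    GloVe-V covariance blocks, assumed independent."""
    na, nb = np.linalg.norm(w_a), np.linalg.norm(w_b)
    cos = w_a @ w_b / (na * nb)
    grad_a = w_b / (na * nb) - cos * w_a / na ** 2   # d cos / d w_a
    grad_b = w_a / (na * nb) - cos * w_b / nb ** 2   # d cos / d w_b
    var = grad_a @ Sigma_a @ grad_a + grad_b @ Sigma_b @ grad_b
    return cos, np.sqrt(var)
```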
For a broader class of test statistics, researchers can repeatedly draw from the estimated multivariate normal distribution in Equation 6 and recalculate the test statistic of interest (Tanner, 1996), which is much more computationally efficient than a bootstrap on the full embedding model. In our code repository (github.com/reglab/glove-v), we provide a tutorial and starter code to apply the GloVe-V framework to any downstream test statistic using this method.
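The sampling recipe described above can be sketched as follows: draw each word vector from its estimated normal, recompute the statistic, and read an interval off the draws. The dictionaries `means`/`covs` and the 95% interval are illustrative assumptions, not the repository's API.

```python
import numpy as np

def propagate_by_sampling(stat_fn, means, covs, n_draws=1000, seed=0):
    """Draw each word vector from N(w*, Sigma) and recompute the statistic."""
    rng = np.random.default_rng(seed)
    draws = []
    for _ in range(n_draws):
        sampled = {w: rng.multivariate_normal(means[w], covs[w]) for w in means}
        draws.append(stat_fn(sampled))
    return np.percentile(draws, [2.5, 97.5])  # e.g., a 95% uncertainty interval

# usage sketch: interval for the cosine similarity of two (hypothetical) words
# ci = propagate_by_sampling(
#     lambda v: v["doctor"] @ v["nurse"]
#               / (np.linalg.norm(v["doctor"]) * np.linalg.norm(v["nurse"])),
#     means, covs)
```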

Figure 3: Word-level relationship between GloVe-V variances and frequency on COHA (1900–1999). L2-norm of the diagonal of $\hat{\boldsymbol{\Sigma}}$ from Equation 6 ($x$-axis, on a $\log_{10}$ scale) plotted against logged word frequencies ($y$-axis, on a $\log_{10}$ scale) for a subset of 5,000 words randomly sampled in proportion to word frequency. The variances for words colored in orange are computed as discussed in Section 3.3.

4 Results

Figure 4: Comparison between document bootstrap and GloVe-V standard errors for cosine similarity. The average standard error of the cosine similarity between 1,600 randomly sampled word pairs ($y$-axis) as a function of the frequency of the word pair ($x$-axis, with word frequency ranges in brackets), using the document bootstrap approach and GloVe-V using the delta method. The GloVe-V standard errors are more sensitive to word frequency and are more efficient to compute.

To build intuition and demonstrate the usefulness of GloVe-V variances, we now provide empirical results using the Corpus of Historical American English (COHA) for the 20th century, which contains English-language texts from a balanced set of genres (fiction, non-fiction, magazines, and newspapers) from 1900–1999 (Davies, 2012).6 For all examples, we use 300-dimensional GloVe embeddings using a symmetric context window of 8 words.7 The empirical examples show how GloVe-V variances can move researchers in NLP towards emerging best practices for incorporating hypothesis testing in natural language tasks and downstream analyses (Card et al., 2020).

4.1 Uncertainty in word embedding locations

We first show that the variances can represent reconstruction error uncertainty in the locations of individual words in vector space due to data sparsity. Figure 2 plots a two-dimensional representation of the word embeddings, with ellipses drawn around 100 draws from the estimated multivariate normal distribution in Equation 5 for a random subset of words. The size of the ellipses reflects the fact that, based on the underlying co-occurrence matrix, we are more certain about the positions of higher frequency words like "she" and "large" than lower frequency words like "illumination" and "rigs." The higher uncertainty for lower frequency words is a structural feature of the estimated covariance matrices themselves. Figure 3 demonstrates this feature by plotting the word-level frequency ($x$-axis, on a $\log_{10}$ scale) against the L2-norm of the diagonal of the estimated $\hat{\boldsymbol{\Sigma}}$ from Equation 6 ($y$-axis, on a $\log_{10}$ scale) for a random subset of words (sampled in proportion to their frequency). The magnitude of the variances for the word embedding parameters decreases smoothly as the word frequency increases. Where $|\mathcal{K}| \approx D$ (highlighted in orange in Figure 3), the estimation approach described in Section 3.3 provides a reasonable estimate of the variance.

4.2 Comparison to document bootstrap
985
+
986
+ We now provide intuition for why the GloVe-V variances may in many instances be preferable to the document bootstrap for hypothesis testing on downstream test statistics. The document bootstrap is a computationally intensive approach to capturing word embedding instability that repeatedly resamples documents from the corpus, recomputes the word embeddings, and recalculates the test statistic of interest (Antoniak and Mimno, 2018). To re-purpose this approach in order to conduct a hypothesis test, we must subscribe to the uncertainty framework that the corpus itself is randomly sampled from a hypothetical population of documents. However, as described in SectionΒ 3, this framework does not match the statistical micro-foundations under which the embeddings themselves are estimated, causing a mismatch between the notion of document-level uncertainty and the estimation target of the embeddings.
987
+
988
+ FigureΒ 4 shows that document-level uncertainty can either underestimate or overestimate the variance of a downstream test statistic compared to the reconstruction error uncertainty given by GloVe-V, depending on the distribution of words across documents. A word that is used infrequently but in the same way across many documents may have low document-level uncertainty because each bootstrap sample will yield similar (but sparse) co-occurrence counts for that word, even if the reconstruction error remains high for each bootstrapped estimate. Conversely, a word that is extremely common in only a few documents may have high document-level uncertainty because many bootstrap samples will drop the majority of documents containing that word, even if the reconstruction error is low when all documents are included. Rather than choose one over the other, researchers can use each method for different purposes: tools like the document bootstrap or more computationally efficient analogs (e.g., Brunet etΒ al., 2019) can be used to assess sensitivity of results to particular documents, while GloVe-V can be used to conduct hypothesis tests under a coherent statistical framework, holding the corpus fixed.
989
+
990
5 GloVe-V enables principled significance testing
991
+
992
We now show how GloVe-V enables statistical significance testing, addressing the increasing recognition that NLP needs to move beyond point estimates alone (Card et al., 2020; Liao et al., 2021). GloVe-V can also help researchers assess when a corpus is underpowered for specific inferences, as we illustrate below.

5.1 Uncertainty in $k$ nearest neighbors

Figure 5: Nearest neighbors with uncertainty. Healthcare occupations ($y$-axis) ranked by their cosine similarity with "doctor" ($x$-axis), with the nearest neighbor ranking based on the point estimate above each point, and 95% GloVe-V uncertainty intervals.

Word similarity, including $k$ nearest neighbor lists, informs performance evaluation for both embedding models and more sophisticated artificial intelligence systems (e.g., Mikolov et al., 2013a; Levy and Goldberg, 2014a; Linzen, 2016; Borah et al., 2021; Tang et al., 2023; Liu et al., 2023). Using only point estimates to evaluate word similarity, however, leaves the researcher with no sense of which word similarities are inherently less certain because they are based on less co-occurrence data in the underlying corpus. Uncertainty in neighbor rankings is particularly consequential for word similarity tasks, which depend on the ranking of different word pairs, and for word analogy tasks, which are typically solved by finding nearest neighbors in the embedding space.8 As an example of this dilemma, Figure 5 plots the cosine similarity between "doctor" and a list of healthcare occupation words along with GloVe-V uncertainty intervals. Based on the point estimates, we can assign a nearest-neighbor rank to each word by their proximity to "doctor" (printed above the point estimate in Figure 5). However, for the top three neighbors (and between neighbors 4 through 10), we cannot statistically distinguish the ranks (e.g., $p = 0.10$ for the difference in the cosine similarity of "doctor" and "surgeon" relative to that of "doctor" and "dentist"). Which neighbor is the "nearest" is therefore subject to considerable uncertainty that would be invisible without incorporation of the GloVe-V variances.
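One way such a rank comparison can be formalized is a paired test on sampled draws: because both similarities share the "doctor" vector, all three words are sampled jointly on every draw before differencing. This is a sketch under our assumed `means`/`covs` data structures; the paper's own tests may instead use the delta method.

```python
import numpy as np

def similarity_difference_pvalue(anchor, word1, word2, means, covs,
                                 n_draws=2000, seed=0):
    """Two-sided p-value for cos(anchor, word1) - cos(anchor, word2) = 0,
    sampling all three word vectors jointly on every draw."""
    rng = np.random.default_rng(seed)
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    diffs = []
    for _ in range(n_draws):
        v = {w: rng.multivariate_normal(means[w], covs[w])
             for w in (anchor, word1, word2)}
        diffs.append(cos(v[anchor], v[word1]) - cos(v[anchor], v[word2]))
    diffs = np.asarray(diffs)
    # two-sided p-value from the share of draws on the other side of zero
    return diffs.mean(), 2 * min(np.mean(diffs <= 0), np.mean(diffs >= 0))
```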

5.2 Uncertainty in model performance

Figure 6: Accounting for uncertainty in word embedding performance assessments using the SemEval-2012 Task 2 of Jurgens et al. (2012) on COHA 1900–1999. The task measures the degree of relational similarity of word pairs using the Spearman correlation and the MaxDiff choice procedure on a taxonomy that comprises 79 types of relations across 10 different classes (e.g., contrast, part-whole, cause-purpose). We present GloVe-V results on a subset of relations, along with a Random baseline that randomly rates the word pairs in each relation.

Performance on analogy tasks is a canonical approach to word embedding model evaluation (Mikolov et al., 2013b; Pennington et al., 2014; Levy et al., 2015). Closed-list relational similarity tasks such as SemEval-2012 (Jurgens et al., 2012) are structured so that models can be benchmarked against random baselines, in which pairs of words are randomly related to each other to establish a lower bound on expected performance. Figure 6 presents the performance of GloVe compared to a random benchmark using two evaluation metrics on four relational similarity tasks (see Jurgens et al., 2012, for details on the tasks and metrics). While the point estimates for performance suggest that GloVe outperforms the random baseline on all tasks, adding uncertainty to both point estimates reveals that we can only claim significantly higher performance than random on two of the four relations ($p = 0.08$, $p < 0.001$, $p = 0.02$, $p = 0.08$ for relations 1–4, respectively, for the MaxDiff metric). The GloVe-V intervals also allow us to distinguish the performance of GloVe across different relational similarity tasks. While we can say that GloVe performs better on "Contrast" vs. "Cause-Purpose" ($p = 0.03$ for the MaxDiff metric), we cannot claim that it does better on "Cause-Purpose" than "Part-Whole" ($p = 0.92$), even though the point estimates suggest better performance on the former task.9

5.3 Uncertainty in word embedding bias

Figure 7: Ethnicity and gender bias scores. a) Average Asian bias scores with GloVe-V uncertainty using the cosine bias score of Garg et al. (2018) in COHA 1900–1999 for different Asian surname lists. The gray line and shaded gray area represent the point estimate and 95% GloVe-V uncertainty interval, respectively, of the bias score on a master list of Asian surnames. The points and error bars in blue represent the bias score computed on different subsets of the list, grouped according to the number of times they appear in the corpus. b) Gender bias scores for three types of bias tests using the WEAT effect size of Caliskan et al. (2017) with GloVe-V uncertainty.
1045
+
1046
+ Measurement of societal biases in text is a popular downstream application using word embeddings (e.g., Garg etΒ al., 2018; Lucy etΒ al., 2020; Charlesworth etΒ al., 2022; Matthews etΒ al., 2022; Sevim etΒ al., 2023). These types of studies compare similarity between curated sets of words to test the prevalence of societal biases in text. For example, if a set of female-oriented words is closer to a set of family-oriented words than to career-oriented words, and this relationship is stronger relative to the same comparison using male-oriented words, that is evidence of a gender bias (Bolukbasi etΒ al., 2016; Caliskan etΒ al., 2017). To represent uncertainty in these comparisons, researchers typically use a permutation test or bootstrap on the words included in each set (e.g., Caliskan etΒ al., 2017; Garg etΒ al., 2018), but others have noted that these types of uncertainty measures (which account for uncertainty in word selection) are not designed to account for the sparsity of word co-occurrences that form the basis for the comparisons (Ethayarajh etΒ al., 2019b).
1047
+
It is especially important to account for uncertainty due to sparsity in applications where the analysis relies on infrequently occurring words such as surnames, which are often used to measure demographic bias (e.g., Caliskan et al., 2017; Garg et al., 2018; Swinger et al., 2019). Researchers typically drop lower frequency surnames altogether from their analyses (e.g., Garg et al., 2018) because they have no way of representing the higher uncertainty in the embedded positions for lower frequency surnames using only point estimates. But such curation runs the risk of sacrificing the representativeness of the word lists involved and the generalizability of the conclusions (Antoniak and Mimno, 2021). Using a measure of anti-Asian bias in the COHA corpus based on Garg et al. (2018), the left panel of Figure 7 shows how GloVe-V variances can automatically provide information on co-occurrence sparsity for researchers.10 The bias measure on the $y$-axis computes the average cosine similarity between a set of Asian surnames and a set of 20 Otherization words,11 relative to a set of White surnames. A more positive bias score indicates that Asian surnames are more closely related to these negative Otherization words compared to White surnames.

Figure 7 shows that the anti-Asian bias estimate in COHA becomes more positive for more frequently appearing surnames, such that relying only on the most frequent surnames produces an exaggerated result relative to the full set of Asian surnames.12 This is likely due to the fact that the most frequently occurring surnames tend to be from historical figures such as "Ghandi," "Mao," and "Mohammed," which are clearly not representative of the entire class of Asian surnames. Using GloVe-V, low and high frequency words can be seamlessly combined into a single bias interval that represents the combined uncertainty in all the estimated word positions (shown as a gray interval on the plot), without having to drop any surnames and sacrifice generalizability.13

GloVe-V intervals can also be useful for studying high frequency word lists because they allow researchers to make statistical comparisons between types of bias. The right panel of Figure 7 provides an example of three gender bias queries with GloVe-V intervals for the Word Embedding Association Test (WEAT) effect size, a cosine-similarity-based test (Caliskan et al., 2017): (a) male vs. female names and words related to career vs. family; (b) male vs. female terms and words related to math vs. arts; and (c) male vs. female terms and words related to science vs. arts. While the point estimate for type (a) is higher than those for both (b) and (c), the GloVe-V intervals allow us to reject the null that (a) is equal to (b) ($p < 0.001$), but not the null that (a) is equal to (c) ($p = 0.11$). In this case, the GloVe-V intervals guard against making unsubstantiated claims about which types of bias are strongest.
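As an illustration of how such group-level bias statistics compose with the propagation machinery from Section 3.4, a sketch of a relative cosine bias score in the style of Garg et al. (2018); the word lists and the `propagate_by_sampling` helper are assumptions from earlier in this document, not the paper's released code.

```python
import numpy as np

def relative_bias_score(vectors, target_words, group1, group2):
    """Average cosine similarity of target words to group1 surnames minus the
    same average for group2 surnames (higher = closer to group1)."""
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    avg = lambda grp: np.mean([cos(vectors[t], vectors[s])
                               for t in target_words for s in grp])
    return avg(group1) - avg(group2)

# an uncertainty interval then follows by passing this statistic to the
# sampling helper sketched in Section 3.4, e.g.
# ci = propagate_by_sampling(
#     lambda v: relative_bias_score(v, otherization_words, asian_surnames,
#                                   white_surnames), means, covs)
```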

6 Conclusion
1065
+
1066
+ In this paper, we have derived, computed, and demonstrated the utility of GloVe-V, a new method to represent uncertainty in word embedding locations using the seminal GloVe model. The GloVe-V variances provide researchers with the ability to easily propagate uncertainty in the word embeddings to downstream test statistics of interest, such as word similarity metrics that are used in both evaluation and analyses involving word embeddings. Unlike methods such as the document and word list bootstrap, the method is computationally efficient even on large corpora and represents uncertainty due to sparsity in the underlying co-occurrence matrix, which is often invisible in downstream analyses that use only the estimated word embeddings.
1067
+
1068
As we have shown in Section 4, incorporating uncertainty into downstream analyses can have consequential impacts on the conclusions researchers draw, and should be a best practice moving forward for studies that use word embeddings to infer semantic meaning. Finally, while outside the scope of the current study, we note that the contextual word and passage representations of transformer large language models are also point estimates, and similar questions of embedding uncertainty apply when using such models as well.
1069
+
1070
+ Limitations
1071
+
While useful in many applications, the GloVe-V method comes with certain limitations. First, the variances can only be computed for words whose number of context words exceeds the embedding dimensionality. This limitation can easily be minimized by reducing the dimensionality of the vectors for small corpora; for example, variances can be computed for 96% of the word embeddings in the relatively small New York Times Annotated Corpus (NYT) with 50-dimensional vectors, compared to 36% of embeddings with 300-dimensional vectors. Second, researchers need access to the co-occurrence matrix if they wish to compute the variances themselves, since it relies on an empirical estimate of the reconstruction error. Third, the methodology in this paper applies solely to the GloVe embedding model because of its statistical foundations. That said, this model is one of the most-cited word embedding models in current use and has been shown to have better stability and more intuitive geometric properties than competing models (Mimno and Thompson, 2017; Wendlandt et al., 2018).
1079
+
1080
Finally, the uncertainty captured by GloVe-V intervals is due to sparsity in the underlying co-occurrence matrix, which is only one of many types of uncertainty one could consider in embedded locations for words. Other types of uncertainty that are held fixed in GloVe-V include instability due to the documents included in the corpus (e.g., Antoniak and Mimno, 2018), uncertainty due to the hyper-parameters of the model (e.g., Borah et al., 2021), and statistical uncertainty in the estimated context vector positions and bias terms, which are treated as constants in the variance computation for computational tractability. Along with the conditional independence assumption on words, treating these terms as constants is necessary to reduce the number of free parameters in the model and allow a tractable variance computation. These sorts of independence assumptions are becoming standard practice to enable computational efficiency for models with a large number of parameters – the same assumption, for example, has been successfully employed to develop a computationally efficient approximation to a document bootstrap (Brunet et al., 2019).
1081
+
1082
Despite these limitations, we found by trying a number of other approaches (detailed in Appendix A.4) that GloVe-V strikes a desirable balance between maintaining the model's probabilistic foundations for enhanced statistical rigor, and preserving computational tractability for practical purposes.
1083
+
1084
+ Acknowledgements
1085
+
1086
+ We thank Rishi Bommasani, Matthew Dahl, Neel Guha, Peter Henderson, Varun Magesh, Joel Niklaus, Derek Ouyang, Dilara Soylu, Faiz Surani, Mirac Suzgun, Lucia Zheng, and participants at the 2024 Stanford Data Science Conference for helpful comments and discussions.
1087
+
References

Pedro Aceves and James A Evans. 2024. Human languages with greater information density have higher communication speed but lower conversation breadth. Nature Human Behaviour, 8:1–13.
Maria Antoniak and David Mimno. 2018. Evaluating the stability of embedding-based word similarities. Transactions of the Association for Computational Linguistics, 6:107–119.
Maria Antoniak and David Mimno. 2021. Bad seeds: Evaluating lexical methods for bias measurement. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1889–1904.
Zhenisbek Assylbekov and Rustem Takhanov. 2019. Context vectors are reflections of word vectors in half the dimensions. Journal of Artificial Intelligence Research, 66:225–242.
Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. 2015. Weight uncertainty in neural network. In International Conference on Machine Learning, pages 1613–1622. PMLR.
Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc.
Jonathan W. Book. 2020. Training Bayesian neural networks: A study of improvements to training algorithms. Master thesis.
Angana Borah, Manash Pratim Barman, and Amit Awekar. 2021. Are word embedding methods stable and should we care about it? In Proceedings of the 32nd ACM Conference on Hypertext and Social Media, pages 45–55.
Samuel R. Bowman and George E. Dahl. 2021. What will it take to fix benchmarking in natural language understanding? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4843–4855.
Marc-Etienne Brunet, Colleen Alkalay-Houlihan, Ashton Anderson, and Richard Zemel. 2019. Understanding the origins of bias in word embeddings. In International Conference on Machine Learning, pages 803–811. PMLR.
Elia Bruni, Gemma Boleda, Marco Baroni, and Nam-Khanh Tran. 2012. Distributional semantics in technicolor. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 136–145.
Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186.
Dallas Card, Peter Henderson, Urvashi Khandelwal, Robin Jia, Kyle Mahowald, and Dan Jurafsky. 2020. With little power comes great responsibility. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9263–9274.
Tessa ES Charlesworth, Aylin Caliskan, and Mahzarin R Banaji. 2022. Historical representations of social groups across 200 years of word embeddings from Google Books. Proceedings of the National Academy of Sciences, 119(28):e2121798119.
Tessa ES Charlesworth and Mark L Hatzenbuehler. 2024. Mechanisms upholding the persistence of stigma across 100 years of historical text. Scientific Reports, 14(1):11069.
Jonathan H Choi. 2024. Measuring clarity in legal text. University of Chicago Law Review, 91:1.
Aida Mostafazadeh Davani, Mohammad Atari, Brendan Kennedy, and Morteza Dehghani. 2023. Hate speech classifiers learn normative social stereotypes. Transactions of the Association for Computational Linguistics, 11:300–319.
Mark Davies. 2012. Expanding horizons in historical linguistics with the 400-million word Corpus of Historical American English. Corpora, 7(2):121–157.
Rotem Dror, Lotem Peled-Cohen, Segev Shlomov, and Roi Reichart. 2020. Statistical Significance Testing for Natural Language Processing. Number 45 in Synthesis Lectures on Human Language Technologies. Springer Nature.
Kawin Ethayarajh, David Duvenaud, and Graeme Hirst. 2019a. Towards understanding linear word analogies. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3253–3262, Florence, Italy. Association for Computational Linguistics.
Kawin Ethayarajh, David Duvenaud, and Graeme Hirst. 2019b. Understanding undesirable word embedding associations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1696–1705, Florence, Italy. Association for Computational Linguistics.
Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences, 115(16):E3635–E3644.
Gene H Golub and Charles F Van Loan. 2013. Matrix Computations. JHU Press.
William L. Hamilton, Jure Leskovec, and Dan Jurafsky. 2016. Diachronic word embeddings reveal statistical laws of semantic change. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1489–1501, Berlin, Germany. Association for Computational Linguistics.
Rujun Han, Michael Gill, Arthur Spirling, and Kyunghyun Cho. 2018. Conditional word embedding and hypothesis testing via Bayes-by-backprop. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4890–4895. Association for Computational Linguistics.
David Jurgens, Saif Mohammad, Peter Turney, and Keith Holyoak. 2012. SemEval-2012 task 2: Measuring degrees of relational similarity. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 356–364. Association for Computational Linguistics.
Yea-Seul Kim, Jessica Hullman, Matthew Burgess, and Eytan Adar. 2016. SimpleScience: Lexical simplification of scientific terminology. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1066–1071.
Markus Knoche, Radomir Popović, Florian Lemmerich, and Markus Strohmaier. 2019. Identifying biases in politically biased wikis through word embeddings. In Proceedings of the 30th ACM Conference on Hypertext and Social Media, HT '19, pages 253–257. Association for Computing Machinery. Event-place: Hof, Germany.
Omer Levy and Yoav Goldberg. 2014a. Linguistic regularities in sparse and explicit word representations. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 171–180. Association for Computational Linguistics.
Omer Levy and Yoav Goldberg. 2014b. Neural word embedding as implicit matrix factorization. Advances in Neural Information Processing Systems, 27.
Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211–225.
Thomas Liao, Rohan Taori, Inioluwa Deborah Raji, and Ludwig Schmidt. 2021. Are we learning yet? A meta review of evaluation failures across machine learning. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).
Tal Linzen. 2016. Issues in evaluating semantic spaces using word analogies. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 13–18. Association for Computational Linguistics.
Han Liu, Yuhao Wu, Shixuan Zhai, Bo Yuan, and Ning Zhang. 2023. RIATIG: Reliable and imperceptible adversarial text-to-image generation with natural prompts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20585–20594.
Li Lucy, Dorottya Demszky, Patricia Bromley, and Dan Jurafsky. 2020. Content analysis of textbooks via natural language processing: Findings on gender, race, and ethnicity in Texas US history textbooks. AERA Open, 6(3):2332858420940312.
Ivan Markovsky. 2012. Low Rank Approximation: Algorithms, Implementation, Applications, volume 906 of Communications and Control Engineering. Springer.
Sean Matthews, John Hudzina, and Dawn Sepehr. 2022. Gender and racial stereotype detection in legal opinion word embeddings. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 12026–12033.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv:1301.3781, version number: 3.
Tomáš Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013b. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 746–751.
David Mimno and Laure Thompson. 2017. The strange geometry of skip-gram with negative sampling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2873–2878, Copenhagen, Denmark. Association for Computational Linguistics.
National Archives at San Francisco. Case files by birth country.
Mitchell G Newberry, Christopher A Ahern, Robin Clark, and Joshua B Plotkin. 2017. Detecting evolutionary forces in language change. Nature, 551(7679):223–226.
Reuben Ng, Heather G Allore, Mark Trentalange, Joan K Monin, and Becca R Levy. 2015. Increasing negativity of age stereotypes across 200 years: Evidence from a database of 400 million words. PLoS ONE, 10(2):e0117086.
Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543.
Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2023. GloVe: Global vectors for word representation. GitHub repository: commit a6f2b94.
Joseph P Romano and Michael Wolf. 2017. Resurrecting weighted least squares. Journal of Econometrics, 197(1):1–19.
Nurullah Sevim, Furkan Şahinuç, and Aykut Koç. 2023. Gender bias in legal corpora and debiasing it. Natural Language Engineering, 29(2):449–482.
Nathaniel Swinger, Maria De-Arteaga, Neil Thomas Heffernan IV, Mark DM Leiserson, and Adam Tauman Kalai. 2019. What are the biases in my word embedding? In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, AIES '19, pages 305–311, New York, NY, USA. Association for Computing Machinery.
Jerry Tang, Amanda LeBel, Shailee Jain, and Alexander G Huth. 2023. Semantic reconstruction of continuous language from non-invasive brain recordings. Nature Neuroscience, 26(5):858–866.
Martin A. Tanner. 1996. Tools for Statistical Inference: Methods for the Exploration of Posterior Distributions and Likelihood Functions, 3rd edition. Springer Series in Statistics. Springer New York.
Dennis Ulmer, Elisa Bassignana, Max Müller-Eberstein, Daniel Varab, Mike Zhang, Rob van der Goot, Christian Hardmeier, and Barbara Plank. 2022. Experimental standards for deep learning in natural language processing research. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 2673–2692, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Aad W. van der Vaart. 2000. Asymptotic Statistics, volume 3. Cambridge University Press.
1245
+ Wendlandt etΒ al. (2018)
1246
+ ↑
1247
+ Laura Wendlandt, JonathanΒ K. Kummerfeld, and Rada Mihalcea. 2018.Factors influencing the surprising instability of word embeddings.In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2092–2102, New Orleans, Louisiana. Association for Computational Linguistics.
1248
+ Xiao etΒ al. (2023)
1249
+ ↑
1250
+ YuΒ Xiao, Naomi Baes, Ekaterina Vylomova, and Nick Haslam. 2023.Have the concepts of β€˜anxiety’ and β€˜depression’ been normalized or pathologized? A corpus study of historical semantic change.PLOS one, 18(6):e0288027.
1251
Appendix A: Appendix

A.1 Training details
We trained multiple GloVe models for the experiments presented in Section 4. First, we trained a single model on the unaltered COHA (1900–1999) corpus, which we used for all of the GloVe-V results. Second, we trained 100 GloVe models on document-level bootstrap samples of the COHA (1900–1999) corpus. In both cases, we pre-processed the corpus to lowercase all tokens and drop non-alphabetic characters.

We trained each 300-dimensional GloVe model (132M parameters for a vocabulary size of approximately 219,000 words) for 80 iterations using the following default hyperparameters of the official GloVe implementation (Pennington et al., 2023): an initial learning rate of 0.05, and $\alpha = 0.75$ and $x_{\max} = 100$ for the weighting function. Training each model took about 40 minutes on a workstation equipped with an AMD Milan 7543 CPU @ 2.75 GHz, using 48 CPU cores.
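For concreteness, the co-occurrence weighting function referenced above can be written out directly. The following is a minimal sketch using the hyperparameter values listed; it is not the official GloVe code.

```python
# Minimal sketch (not the authors' code): the GloVe co-occurrence weighting
# function with the default hyperparameters used above.
def glove_weight(x, x_max=100.0, alpha=0.75):
    """Weight applied to a co-occurrence count x in the GloVe loss."""
    return (x / x_max) ** alpha if x < x_max else 1.0

# Example: rare pairs are down-weighted, frequent pairs are capped at 1.
print(glove_weight(10))   # ~0.178
print(glove_weight(500))  # 1.0
```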
A.2 Variance Estimator Derivation for Bias Measures Using the Delta Method

A.2.1 Cosine Similarity Bias

The cosine similarity bias function of Garg et al. (2018) is the following:

$$f(\mathbf{v}_1, \ldots, \mathbf{v}_K, \mathbf{M}_A, \mathbf{M}_W) = \frac{1}{K} \sum_i \cos(\mathbf{v}_i, \mathbf{M}_A) - \frac{1}{K} \sum_i \cos(\mathbf{v}_i, \mathbf{M}_W)$$

where $\mathbf{v}_i$ is the word vector for otherization word $i$, $\mathbf{M}_A$ is the mean word vector over all Asian surnames (using pre-normalized vectors), and $\mathbf{M}_W$ is the mean word vector over all White surnames (using pre-normalized vectors). There are $K$ otherization words.
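A direct implementation of this bias measure follows the definition above. The sketch below uses our own variable names and assumes the vectors are NumPy arrays; it is illustrative rather than the paper's code.

```python
import numpy as np

# Minimal sketch of the cosine similarity bias: mean cosine of otherization
# words with the Asian-surname centroid minus their mean cosine with the
# White-surname centroid. Centroids use pre-normalized surname vectors.
def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def cosine_similarity_bias(otherization_vecs, asian_vecs, white_vecs):
    M_A = np.mean([a / np.linalg.norm(a) for a in asian_vecs], axis=0)
    M_W = np.mean([w / np.linalg.norm(w) for w in white_vecs], axis=0)
    sims_A = [cosine(v, M_A) for v in otherization_vecs]
    sims_W = [cosine(v, M_W) for v in otherization_vecs]
    return np.mean(sims_A) - np.mean(sims_W)
```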
In general, the partial derivative of the cosine similarity between two vectors $\mathbf{a}$ and $\mathbf{b}$ with respect to $\mathbf{a}$ is:

$$\frac{\partial \cos(\mathbf{a}, \mathbf{b})}{\partial \mathbf{a}} = \frac{\partial(\mathbf{a}^T \mathbf{b}) / \partial \mathbf{a}}{\|\mathbf{b}\| \, \|\mathbf{a}\|} - \frac{\cos(\mathbf{a}, \mathbf{b})}{\|\mathbf{a}\|} \cdot \frac{\partial \|\mathbf{a}\|}{\partial \mathbf{a}}$$

If we take $\mathbf{a}$ to be an otherization word $\mathbf{v}_i$, this gives us:

$$\frac{\partial \cos(\mathbf{v}_i, \mathbf{M}_A)}{\partial \mathbf{v}_i} = \frac{\mathbf{M}_A}{\|\mathbf{M}_A\| \, \|\mathbf{v}_i\|} - \cos(\mathbf{v}_i, \mathbf{M}_A) \cdot \frac{\mathbf{v}_i}{\|\mathbf{v}_i\|^2}$$

If we take $\mathbf{a}$ to be an Asian surname vector $\mathbf{a}_j$, where $\mathbf{M}_A = \frac{1}{m} \sum_j \frac{\mathbf{a}_j}{\|\mathbf{a}_j\|}$, this gives us:

$$\frac{\partial \cos(\mathbf{v}_i, \mathbf{M}_A)}{\partial \mathbf{a}_j} = \frac{1}{m} \left( \frac{\mathbf{I}}{\|\mathbf{a}_j\|} - \frac{\mathbf{a}_j \mathbf{a}_j^T}{\|\mathbf{a}_j\|^3} \right) \left[ \frac{\mathbf{v}_i}{\|\mathbf{v}_i\| \, \|\mathbf{M}_A\|} - \cos(\mathbf{v}_i, \mathbf{M}_A) \cdot \frac{\mathbf{M}_A}{\|\mathbf{M}_A\|^2} \right] = \frac{1}{m \, \|\mathbf{a}_j\| \, \|\mathbf{M}_A\|} \, \mathbf{X}_{a_j}^T \left( \tilde{\mathbf{v}}_i - \cos(\mathbf{v}_i, \mathbf{M}_A) \cdot \tilde{\mathbf{M}}_A \right)$$

where $\mathbf{X}_{a_j} = \mathbf{I} - \tilde{\mathbf{a}}_j \tilde{\mathbf{a}}_j^T$.

So the partial derivatives with respect to each type of vector are the following:

$$\frac{\partial f}{\partial \mathbf{v}_i} = \frac{1}{K \, \|\mathbf{v}_i\|} \left( \left[ \tilde{\mathbf{M}}_A - \tilde{\mathbf{M}}_W \right] - \left[ \cos(\mathbf{v}_i, \mathbf{M}_A) - \cos(\mathbf{v}_i, \mathbf{M}_W) \right] \cdot \tilde{\mathbf{v}}_i \right) := \mathbf{d}_v$$

$$\frac{\partial f}{\partial \mathbf{a}_j} = \frac{1}{K m \, \|\mathbf{M}_A\|} \left( \frac{\mathbf{I}}{\|\mathbf{a}_j\|} - \frac{\mathbf{a}_j \mathbf{a}_j^T}{\|\mathbf{a}_j\|^3} \right) \sum_i \left( \tilde{\mathbf{v}}_i - \cos(\mathbf{v}_i, \mathbf{M}_A) \cdot \tilde{\mathbf{M}}_A \right) = \frac{1}{K m \, \|\mathbf{a}_j\| \, \|\mathbf{M}_A\|} \, \mathbf{X}_{a_j}^T \sum_i \left( \tilde{\mathbf{v}}_i - \cos(\mathbf{v}_i, \mathbf{M}_A) \cdot \tilde{\mathbf{M}}_A \right) := \mathbf{d}_a$$

$$\frac{\partial f}{\partial \mathbf{w}_j} = -\frac{1}{K n \, \|\mathbf{M}_W\|} \left( \frac{\mathbf{I}}{\|\mathbf{w}_j\|} - \frac{\mathbf{w}_j \mathbf{w}_j^T}{\|\mathbf{w}_j\|^3} \right) \sum_i \left( \tilde{\mathbf{v}}_i - \cos(\mathbf{v}_i, \mathbf{M}_W) \cdot \tilde{\mathbf{M}}_W \right) = -\frac{1}{K n \, \|\mathbf{w}_j\| \, \|\mathbf{M}_W\|} \, \mathbf{X}_{w_j}^T \sum_i \left( \tilde{\mathbf{v}}_i - \cos(\mathbf{v}_i, \mathbf{M}_W) \cdot \tilde{\mathbf{M}}_W \right) := \mathbf{d}_w$$

where $\tilde{\mathbf{a}}$ is the normalized version of vector $\mathbf{a}$.
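As a sanity check on the cosine-similarity gradient that these expressions build on, the following sketch (our own variable names) compares the closed-form gradient to a finite-difference approximation.

```python
import numpy as np

# Minimal sketch: closed-form gradient of cosine similarity with respect to
# its first argument, checked against central finite differences.
def cos_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def grad_cos_wrt_a(a, b):
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return b / (na * nb) - cos_sim(a, b) * a / na**2

rng = np.random.default_rng(0)
a, b = rng.normal(size=5), rng.normal(size=5)
eps = 1e-6
numeric = np.array([
    (cos_sim(a + eps * e, b) - cos_sim(a - eps * e, b)) / (2 * eps)
    for e in np.eye(5)
])
print(np.allclose(grad_cos_wrt_a(a, b), numeric, atol=1e-6))  # True
```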
A.2.2 WEAT effect size

For two sets of attributes, $\mathbf{V}$ and $\mathbf{Z}$, of equal size ($k$), the WEAT effect size of Caliskan et al. (2017) is the following:

$$f(\mathbf{v}_1, \ldots, \mathbf{v}_k, \mathbf{z}_1, \ldots, \mathbf{z}_k, \mathbf{a}_1, \ldots, \mathbf{a}_A, \mathbf{w}_1, \ldots, \mathbf{w}_W) = \frac{\frac{1}{|V|} \sum_i s(\mathbf{v}_i, \mathbf{W}, \mathbf{A}) - \frac{1}{|Z|} \sum_i s(\mathbf{z}_i, \mathbf{W}, \mathbf{A})}{\operatorname{std.dev}_{x \in V \cup Z} \, s(x, W, A)} := \frac{H}{G}$$

where

$$H = \sum_i \left( \frac{1}{W} \sum_j \cos(v_i, w_j) - \frac{1}{A} \sum_j \cos(v_i, a_j) \right) - \sum_i \left( \frac{1}{W} \sum_j \cos(z_i, w_j) - \frac{1}{A} \sum_j \cos(z_i, a_j) \right) = \sum_{v \in V} s(v, W, A) - \sum_{z \in Z} s(z, W, A)$$

and

$$G^2 = \frac{1}{|V \cup Z| - 1} \sum_{x \in V \cup Z} \left( s(x, W, A) - \frac{1}{|Z \cup V|} \sum_y s(y, W, A) \right)^2 := \frac{1}{|V \cup Z| - 1} \sum_{x \in V \cup Z} \left( s(x, W, A) - E \right)^2$$
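Before turning to its gradients, note that the effect size itself can be computed directly from the definition above. This is a minimal sketch with our own function names, where `s(x, W, A)` is the differential association score used in the text.

```python
import numpy as np

# Minimal sketch of the WEAT effect size: difference in mean association
# scores between the two target sets, divided by the pooled sample
# standard deviation of the scores.
def cos_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def s(x, W, A):
    return np.mean([cos_sim(x, w) for w in W]) - np.mean([cos_sim(x, a) for a in A])

def weat_effect_size(V, Z, W, A):
    s_V = [s(v, W, A) for v in V]
    s_Z = [s(z, W, A) for z in Z]
    pooled = np.array(s_V + s_Z)
    return (np.mean(s_V) - np.mean(s_Z)) / pooled.std(ddof=1)
```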
Let $c'_{\mathbf{a}}(\mathbf{a}, \mathbf{b}) = \frac{\partial \cos(\mathbf{a}, \mathbf{b})}{\partial \mathbf{a}}$. We first define the following derivatives:

$$\frac{\partial s(\mathbf{v}_i, W, A)}{\partial \mathbf{v}_i} = \frac{1}{W} \sum_j c'_{\mathbf{v}_i}(\mathbf{v}_i, \mathbf{w}_j) - \frac{1}{A} \sum_j c'_{\mathbf{v}_i}(\mathbf{v}_i, \mathbf{a}_j)$$

$$\frac{\partial s(\mathbf{z}_i, W, A)}{\partial \mathbf{z}_i} = \frac{1}{W} \sum_j c'_{\mathbf{z}_i}(\mathbf{z}_i, \mathbf{w}_j) - \frac{1}{A} \sum_j c'_{\mathbf{z}_i}(\mathbf{z}_i, \mathbf{a}_j)$$

$$\frac{\partial s(\mathbf{x}, W, A)}{\partial \mathbf{w}_j} = \frac{1}{W} \, c'_{\mathbf{w}_j}(\mathbf{w}_j, \mathbf{x}), \text{ for } \mathbf{x} \in \{\mathbf{v}_i, \mathbf{z}_i\}$$

$$\frac{\partial s(\mathbf{x}, W, A)}{\partial \mathbf{a}_j} = -\frac{1}{A} \, c'_{\mathbf{a}_j}(\mathbf{a}_j, \mathbf{x}), \text{ for } \mathbf{x} \in \{\mathbf{v}_i, \mathbf{z}_i\}$$

The partial derivatives with respect to each type of vector are the following:

$$\frac{\partial f}{\partial \mathbf{v}_i} = \left( \frac{1}{G \, |V|} - \frac{H}{G^{5/2}} \cdot \frac{s(\mathbf{v}_i, W, A) - E}{|V \cup Z|} \right) \frac{\partial s(\mathbf{v}_i, W, A)}{\partial \mathbf{v}_i}$$

$$\frac{\partial f}{\partial \mathbf{z}_i} = \left( -\frac{1}{G \, |Z|} - \frac{H}{G^{5/2}} \cdot \frac{s(\mathbf{z}_i, W, A) - E}{|V \cup Z|} \right) \frac{\partial s(\mathbf{z}_i, W, A)}{\partial \mathbf{z}_i}$$

$$\frac{\partial f}{\partial \mathbf{w}_j} = \frac{1}{G} \cdot \left( \frac{1}{|V|} \sum_v \frac{\partial s(\mathbf{v}_i, W, A)}{\partial \mathbf{w}_j} - \frac{1}{|Z|} \sum_z \frac{\partial s(\mathbf{z}_i, W, A)}{\partial \mathbf{w}_j} \right) - \frac{H}{G^{5/2}} \, \frac{1}{|V \cup Z| - 1} \sum_{x \in V \cup Z} \left( \left( s(\mathbf{x}, W, A) - E \right) \cdot \left( \frac{\partial s(\mathbf{x}, W, A)}{\partial \mathbf{w}_j} - \frac{\partial E}{\partial \mathbf{w}_j} \right) \right)$$

$$\frac{\partial f}{\partial \mathbf{a}_j} = \frac{1}{G} \cdot \left( \frac{1}{|V|} \sum_v \frac{\partial s(\mathbf{v}_i, W, A)}{\partial \mathbf{a}_j} - \frac{1}{|Z|} \sum_z \frac{\partial s(\mathbf{z}_i, W, A)}{\partial \mathbf{a}_j} \right) - \frac{H}{G^{5/2}} \, \frac{1}{|V \cup Z| - 1} \sum_{x \in V \cup Z} \left( \left( s(\mathbf{x}, W, A) - E \right) \cdot \left( \frac{\partial s(\mathbf{x}, W, A)}{\partial \mathbf{a}_j} - \frac{\partial E}{\partial \mathbf{a}_j} \right) \right)$$

where

$$\frac{\partial E}{\partial \mathbf{w}_j} = \frac{1}{|V \cup Z|} \sum_{x \in V \cup Z} \frac{\partial s(\mathbf{x}, W, A)}{\partial \mathbf{w}_j} \qquad \frac{\partial E}{\partial \mathbf{a}_j} = \frac{1}{|V \cup Z|} \sum_{x \in V \cup Z} \frac{\partial s(\mathbf{x}, W, A)}{\partial \mathbf{a}_j}$$
A.2.3 Delta Method

Using the derivatives computed in the sections above, the variance of the bias calculation is:

$$\operatorname{var}(h) = \sum_i (\mathbf{d}_t)_i^T \, \boldsymbol{\Sigma}_i \, (\mathbf{d}_t)_i$$

where $h \in [f, g]$ is the bias function, $t$ is the type of word $i$ (e.g., $t = a$ for Asian surnames, $t = w$ for White surnames, $t = v$ for otherization words in the case of the cosine similarity metric), and $\boldsymbol{\Sigma}_i$ is the variance-covariance matrix for the parameters of word $i$ (Equation 6).
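The resulting computation is a quadratic form per word, summed over all words entering the bias measure. The sketch below (our own names, with illustrative toy inputs) assumes the per-word gradients and variance-covariance matrices have already been computed.

```python
import numpy as np

# Minimal sketch of the delta-method variance var(h) = sum_i d_i^T Sigma_i d_i,
# given per-word gradients d_i of the bias function and per-word
# variance-covariance matrices Sigma_i.
def delta_method_variance(gradients, covariances):
    return sum(d @ Sigma @ d for d, Sigma in zip(gradients, covariances))

# Toy example: two words, 3-dimensional embeddings, diagonal covariances.
d = [np.array([0.1, -0.2, 0.05]), np.array([0.3, 0.0, -0.1])]
Sigma = [np.diag([0.01, 0.02, 0.01]), np.diag([0.05, 0.04, 0.03])]
print(delta_method_variance(d, Sigma))
```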
A.3 Asian Surname List Generation

To explore the behavior of anti-Asian bias scores on words with varying frequencies in the COHA (1900–1999) corpus (Section 5.3), we compile a novel Asian surname list with the objective of capturing a broader and more representative set of surnames that would be present in a historic corpus such as COHA.

We build on two existing and widely used surname lists for ethnic bias measurement. First, a list of 20 Asian last names curated by Garg et al. (2018). This list was designed to include the most common surnames in the United States for this ethnicity, as measured by 2000 Census data, as well as the surnames that had higher average frequencies in the Google Books and COHA corpora studied by the authors (largely covering the 1800–1999 period). As a result of this curation process, the list is solely focused on higher frequency last names, and, by primarily comprising Chinese surnames, may not be wholly representative of the Asian ethnicity and of the historic appearances of Asian surnames in this corpus. Second, a list of 200 Asian Pacific Islander surnames collected by Matthews et al. (2022) from the 2010 U.S. decennial census, sampled from last names that had a probability of 90% or larger of belonging to this ethnicity.

We expand the set of 211 unique Asian last names from the Garg et al. (2018) and Matthews et al. (2022) lists by collecting surnames in immigration arrival cases from the National Archives at San Francisco, California, from 1910–1940 (National Archives at San Francisco). This data includes over 65,000 cases detailing the country of birth, arrival date, first and last name, gender, and date of birth of each person. As ethnicity is not directly reported, we use the country of birth to compile the Asian surname list, and find over 1,300 unique last names belonging to immigrants whose reported birthplace is one of the following locations: China, Japan, Korea, Indo-China, India, Hong Kong, Burma, Philippine Islands, Thailand, Malaysia, and Mongolia.14
A.4 Alternative Approaches to Word Embedding Estimation Uncertainty

We explored several approaches to measuring the reconstruction error of word embeddings, in addition to the probabilistic model for GloVe that we presented in this work.

A.4.1 Implicit matrix factorization in the skip-gram with negative-sampling (SGNS) model

Per Levy and Goldberg (2014b), the SGNS word embedding model implicitly factorizes a matrix that contains the shifted pointwise mutual information (PMI) of word and context vectors. Let $\mathbf{w}$ be a word vector for word $w$, $\mathbf{c}$ be a context vector for word $c$, $V_W$ and $V_C$ be the word and context vocabularies, respectively, $k$ be the number of negative samples, $D$ be the collection of word and context pairs in a corpus, $\#(i)$ be the number of occurrences of word $i$ in the corpus, and $\#(i, j)$ the number of co-occurrences of the words $i$ and $j$ in the corpus. Then, for sufficiently large dimensionality of the embeddings, the optimal vectors according to the SGNS objective are such that:

$$\mathbf{w} \cdot \mathbf{c} = \log\left( \frac{\#(w, c) \cdot |D|}{\#(w) \cdot \#(c)} \right) - \log k = PMI(w, c) - \log k$$
In this manner, a measure of error in the estimation of the word embedding for word $w$ could be obtained by comparing the dot product $\mathbf{w} \cdot \mathbf{c}$ and $PMI(w, c)$ across all context words $c$ appearing with word $w$ in the corpus. In particular, we explored a word-level measure of estimation error for word $w$ captured by the median of $d(w, c)$, the context-level percentage of deviation from the optimal value, over all contexts:

$$d(w, c) = \frac{\mathbf{w} \cdot \mathbf{c} - \left( PMI(w, c) - \log k \right)}{PMI(w, c) - \log k}$$
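A minimal sketch of this word-level measure follows; function and variable names are ours, and the count statistics are assumed to be available from the corpus.

```python
import numpy as np

# Minimal sketch: median, over contexts c, of the relative deviation of the
# dot product w.c from the shifted PMI value it should equal at the SGNS
# optimum.
def shifted_pmi(n_wc, n_w, n_c, n_pairs, k):
    return np.log(n_wc * n_pairs / (n_w * n_c)) - np.log(k)

def estimation_error(w_vec, context_vecs, counts, k=5):
    # counts: one (n_wc, n_w, n_c, n_pairs) tuple per context word.
    devs = []
    for c_vec, (n_wc, n_w, n_c, n_pairs) in zip(context_vecs, counts):
        target = shifted_pmi(n_wc, n_w, n_c, n_pairs, k)
        devs.append((w_vec @ c_vec - target) / target)
    return np.median(devs)
```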
By comparing the distribution of this word-level measure of estimation error over different word lists, we found that higher frequency word lists, such as the set of White surnames of Garg et al. (2018), had much lower estimation errors compared to relatively lower frequency word lists, such as the set of 20 Asian surnames (see Figure 8).

Figure 8: Distribution of word-level estimation error measures in COHA for different word lists. Using Levy and Goldberg's findings for the skip-gram with negative-sampling (SGNS) model, we compute a word-level measure of the estimation error in word embeddings.

Though helpful for assessing the estimation quality of word embeddings in a corpus, an important limitation of this method is that it is not a probabilistic model, and so it offers no obvious way to perform hypothesis testing in downstream applications accounting for this type of error. For this reason, we sought a different approach that could provide not only point estimates for the embeddings, but accompanying uncertainty measures with a probabilistic foundation.
A.4.2 Bayes by Backprop for the SGNS model

In order to obtain distributions over the word embeddings, we explored the Bayes-by-Backprop algorithm of Blundell et al. (2015), a variational inference approach that learns a distribution over neural network weights. Han et al. (2018) adapt this method to obtain approximate posterior distributions for SGNS word embeddings, incorporating metadata on document covariates in order to learn conditional distributions for different corpus subsets (e.g., temporal periods or genres) that share structural information across these partitions. Using a Gaussian mixture prior for the parameters of the word and context vectors, this method computes the following conditional posterior distribution $\mathbf{w}_w | x$ for the word vectors and unconditional posterior distribution $\mathbf{w}_c$ for the context vectors:

$$\mathbf{w}_w | x \sim N\left( f(\mu_w, \mu_x), \, \sigma_{w|c} \right)$$

$$\mathbf{w}_c \sim N\left( \tilde{\mu}_c, \, \tilde{\sigma}_c \right)$$

where $x$ is the subcorpus on which the embedding for word $w$ is estimated, $f$ is an affine transformation that combines corpus-level word vectors $\mu_w$ and embeddings for each subcorpus $\mu_x$, and $\sigma_{w|c}$ and $\tilde{\sigma}_c$ are the diagonal covariances of the word and context vectors, respectively, parameterized as $\sigma_{w|c} = \log(1 + e^{\rho_w})$ and $\tilde{\sigma}_c = \log(1 + e^{\tilde{\rho}_c})$. The Bayes-by-Backprop algorithm initializes parameters $\mu_w$, $\mu_x$, $\tilde{\mu}_c$, $\rho_w$ and $\tilde{\rho}_c$ for all word and context vectors in the vocabulary, and, given $(w, c, x)$ triplets, performs sequential updates to these parameters by computing the gradient of the variational approximation to the posterior.
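The variational parameterization described above can be sketched as follows; names and initial values are ours and purely illustrative, and this is not the implementation of Han et al. (2018).

```python
import numpy as np

# Minimal sketch: posterior standard deviations use a softplus of the free
# parameter rho, and embedding samples are drawn with the reparameterization
# trick used in Bayes by Backprop.
def softplus(rho):
    return np.log1p(np.exp(rho))  # sigma = log(1 + e^rho) > 0

def sample_embedding(mu, rho, rng):
    # One draw from N(mu, diag(softplus(rho)^2)) for a word or context vector.
    return mu + softplus(rho) * rng.normal(size=mu.shape)

rng = np.random.default_rng(0)
mu, rho = np.zeros(300), np.full(300, -3.0)  # illustrative initial values
sample = sample_embedding(mu, rho, rng)
```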
Figure 9: Word-level relationship between posterior standard deviations and frequency in COHA using the Bayes-by-Backprop approach. Lower-frequency words displayed poor convergence, and their posterior standard deviations were effectively unchanged during training.

Unfortunately, training meaningful embeddings with Bayes-by-Backprop turned out to be challenging. On the COHA corpus, our best performance was a mean accuracy of 0.11 on the Google analogy task (Mikolov et al., 2013a) and a mean Pearson similarity statistic of 0.41 on the MEN similarity task (Bruni et al., 2012), relative to benchmarks of 0.24 and 0.51, respectively, using the pre-trained embeddings of Hamilton et al. (2016) across COHA decades. We explored numerous refinements to improve the training of Bayesian neural networks (Book, 2020), including different weight initialization schemes for separate parameter groups (e.g., the uniform Kaiming scheme) and both uniform and dynamic re-weighting of the Kullback-Leibler divergence component of the cost function, with no greater success.

In addition to the lower quality of the embedding posterior means, we encountered an important scaling issue in the trained parameters. After training, the posterior means were one to two orders of magnitude smaller than the posterior standard deviations, effectively barring us from drawing meaningful samples from these distributions. An analysis of the relationship between the posterior standard deviations and word-level frequency indicated an inverse relationship between the two (similar to what we document in Figure 3 for GloVe-V); however, this relationship only held for higher frequency words. For lower frequency words, the posterior standard deviations were all equivalent and effectively unmodified from their initialization value (see Figure 9). Convergence diagnostics on this model confirmed that these parameters ($\rho_w$ and $\tilde{\rho}_c$) were not training correctly. Further work would be required to design priors and parameter-specific weight initialization schemes that lead to proper training of the parameters for this subset of words. Our code for these training runs is made available at the project repository.