{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T11:58:11.829162Z"
},
"title": "WMDecompose: A Framework for Leveraging the Interpretable Properties of Word Mover's Distance in Sociocultural Analysis",
"authors": [
{
"first": "Mikael",
"middle": [],
"last": "Brunila",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "McGill University",
"location": {}
},
"email": "mikael.brunila@gmail.com"
},
{
"first": "Jack",
"middle": [],
"last": "Laviolette",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Columbia University",
"location": {}
},
"email": "jack.laviolette@columbia.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Despite the increasing popularity of NLP in the humanities and social sciences, advances in model performance and complexity have been accompanied by concerns about interpretability and explanatory power for sociocultural analysis. One popular model that balances complexity and legibility is Word Mover's Distance (WMD). Ostensibly adapted for its interpretability, WMD has nonetheless been used and further developed in ways which frequently discard its most interpretable aspect: namely, the word-level distances required for translating a set of words into another set of words. To address this apparent gap, we introduce WMDecompose: a model and Python library that 1) decomposes document-level distances into their constituent word-level distances, and 2) subsequently clusters words to induce thematic elements, such that useful lexical information is retained and summarized for analysis. To illustrate its potential in a social scientific context, we apply it to a longitudinal social media corpus to explore the interrelationship between conspiracy theories and conservative American discourses. Finally, because of the full WMD model's high time-complexity, we additionally suggest a method of sampling document pairs from large datasets in a reproducible way, with tight bounds that prevent extrapolation of unreliable results due to poor sampling practices.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Despite the increasing popularity of NLP in the humanities and social sciences, advances in model performance and complexity have been accompanied by concerns about interpretability and explanatory power for sociocultural analysis. One popular model that balances complexity and legibility is Word Mover's Distance (WMD). Ostensibly adapted for its interpretability, WMD has nonetheless been used and further developed in ways which frequently discard its most interpretable aspect: namely, the word-level distances required for translating a set of words into another set of words. To address this apparent gap, we introduce WMDecompose: a model and Python library that 1) decomposes document-level distances into their constituent word-level distances, and 2) subsequently clusters words to induce thematic elements, such that useful lexical information is retained and summarized for analysis. To illustrate its potential in a social scientific context, we apply it to a longitudinal social media corpus to explore the interrelationship between conspiracy theories and conservative American discourses. Finally, because of the full WMD model's high time-complexity, we additionally suggest a method of sampling document pairs from large datasets in a reproducible way, with tight bounds that prevent extrapolation of unreliable results due to poor sampling practices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "1 Introduction: The Paradox of Word Mover's Distance",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The present paper introduces WMDecompose, an iteration of the Word Mover's Distance (WMD) (Kusner et al., 2015) model commonly used for determining the semantic distances between pairs of documents. Leveraging word vectors from models such as word2vec (Mikolov et al., 2013) , fastText (Bojanowski et al., 2016) and GloVe (Pennington et al., 2014) , WMD was presented as a method that was not only hyper-parameter free and thus easy to use, but also highly interpretable (Kusner et al., 2015) . Arguably, this still makes the model a viable alternative for document similarity tasks, despite the recent and rapid rise of contextual and rich embeddings from Transformer-type models (Vaswani et al., 2017 ) such as BERT (Devlin et al., 2019) . However, WMD is computationally expensive, with the full model running at cubic time complexity (Kusner et al., 2015) . This may explain why, despite its initial pitch as an interpretable alternative, WMD has mainly been further developed and applied in ways that focus on decreasing the model's high runtime, while ignoring or undermining the inherent interpretability of the model (Atasu et al., 2017; Werner and Laber, 2020) .",
"cite_spans": [
{
"start": 90,
"end": 111,
"text": "(Kusner et al., 2015)",
"ref_id": "BIBREF20"
},
{
"start": 252,
"end": 274,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF26"
},
{
"start": 286,
"end": 311,
"text": "(Bojanowski et al., 2016)",
"ref_id": "BIBREF5"
},
{
"start": 322,
"end": 347,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF33"
},
{
"start": 471,
"end": 492,
"text": "(Kusner et al., 2015)",
"ref_id": "BIBREF20"
},
{
"start": 681,
"end": 702,
"text": "(Vaswani et al., 2017",
"ref_id": "BIBREF48"
},
{
"start": 718,
"end": 739,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 838,
"end": 859,
"text": "(Kusner et al., 2015)",
"ref_id": "BIBREF20"
},
{
"start": 1125,
"end": 1145,
"text": "(Atasu et al., 2017;",
"ref_id": "BIBREF0"
},
{
"start": 1146,
"end": 1169,
"text": "Werner and Laber, 2020)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "To our knowledge, no current WMD implementation provides an out-of-the-box means of retaining word-level information, despite its utility to many research agendas. To confront this paradoxical situation, we introduce a set of methods and Python code for retaining the individual word distances that make WMD interpretable while simultaneously suggesting a simple trick for efficiently estimating the distance between large sets of documents. Specifically, we propose using implementation of the \"relaxed\" WMD (Kusner et al., 2015) with linear time complexity (Atasu et al., 2017) to first estimate the distances between a full set of documents, then using optimal pairing with the Gale-Shapley algorithm (Gale and Shapley, 1962) on this estimate to find the pairs between two sets of documents that minimize the average pairwise distance. Next, full WMD is calculated between each pair, while progressively adding the contributions of individual words to the overall distance between document sets. Words are, in turn, grouped using K-means clustering and vector dimensionality reduction to decompose the distances not only by word, but by thematic cluster. Each cluster is defined by its constituent words, and hence highly interpretable.",
"cite_spans": [
{
"start": 509,
"end": 530,
"text": "(Kusner et al., 2015)",
"ref_id": "BIBREF20"
},
{
"start": 559,
"end": 579,
"text": "(Atasu et al., 2017)",
"ref_id": "BIBREF0"
},
{
"start": 704,
"end": 728,
"text": "(Gale and Shapley, 1962)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
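The pairing step described above can be sketched as follows. This is a minimal, self-contained illustration of Gale-Shapley stable matching driven by an estimated (e.g. LC-RWMD) distance matrix; the function name `gale_shapley_pairs` and the proposer convention are ours, not WMDecompose's API.

```python
import numpy as np

def gale_shapley_pairs(D):
    """Stable one-to-one pairing of documents in set A (rows of D) with
    documents in set B (columns), where each side prefers partners with a
    smaller estimated distance. Returns a dict {a_index: b_index}."""
    n = D.shape[0]
    prefs = np.argsort(D, axis=1)                       # A's proposal order: closest B first
    ranks = np.argsort(np.argsort(D, axis=0), axis=0)   # B's ranking of each A (lower = closer)
    next_pick = [0] * n                                 # next B each free A will propose to
    match_of_b = [None] * n
    free_a = list(range(n))
    while free_a:
        a = free_a.pop()
        b = prefs[a][next_pick[a]]
        next_pick[a] += 1
        current = match_of_b[b]
        if current is None:
            match_of_b[b] = a            # b was unmatched: accept
        elif ranks[a, b] < ranks[current, b]:
            match_of_b[b] = a            # b prefers a: swap partners
            free_a.append(current)
        else:
            free_a.append(a)             # b rejects a: a stays free
    return {a: b for b, a in enumerate(match_of_b)}
```

The resulting pairs are deterministic for a given distance matrix, which supports the reproducibility goal stated in the abstract.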
{
"text": "Having introduced WMDecompose, we demonstrate its utility in a social scientific context by exploring interrelated trends over time between two social media corpora: r/conspiracy and r/The_Donald, the primary Reddit communities for self-identified conspiracy theorists and Donald Trump supporters, respectively. While this presents only a cursory and exploratory engagement with these thematically complex data, we hope that it demonstrates the analytic potential of WMDecompose for social science and humanities research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Finally, we also provide a complementary analysis using a well-known Yelp review dataset, to act as a sanity check on the validity of our method. This analysis can be found in Appendix B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Originally proposed by Kusner et al. (2015) , WMD has become a popular metric of document semantic distance in computational linguistics and related subfields. An innovation and special instance of Earth Mover's Distance (Rubner et al., 1998; Werman, 2008, 2009) , WMD leverages word embeddings to compute the minimum distance required to \"move\" the words from one document to another, providing a measure of document-level semantic (dis)similarity as a sum of the distance required to move individual words from one document to another (see Figure 1 ). Since its introduction, many related algorithms and analytic approaches have been proposed for language engineering tasks (e.g. Atasu et al., 2017; Huang et al., 2016; Ren and Liu, 2018) . More recently, WMD and its many variations have also been applied to socioculural analyses of data ranging from survey response data (Taylor and Stoltz, 2020) , to Ancient Greek literature (P\u00f6ckelmann et al., 2020) , to dyadic conversational dynamics (Nasir et al., 2019) .",
"cite_spans": [
{
"start": 23,
"end": 43,
"text": "Kusner et al. (2015)",
"ref_id": "BIBREF20"
},
{
"start": 221,
"end": 242,
"text": "(Rubner et al., 1998;",
"ref_id": "BIBREF39"
},
{
"start": 243,
"end": 262,
"text": "Werman, 2008, 2009)",
"ref_id": null
},
{
"start": 682,
"end": 701,
"text": "Atasu et al., 2017;",
"ref_id": "BIBREF0"
},
{
"start": 702,
"end": 721,
"text": "Huang et al., 2016;",
"ref_id": "BIBREF17"
},
{
"start": 722,
"end": 740,
"text": "Ren and Liu, 2018)",
"ref_id": "BIBREF37"
},
{
"start": 888,
"end": 901,
"text": "Stoltz, 2020)",
"ref_id": "BIBREF45"
},
{
"start": 932,
"end": 957,
"text": "(P\u00f6ckelmann et al., 2020)",
"ref_id": "BIBREF36"
},
{
"start": 994,
"end": 1014,
"text": "(Nasir et al., 2019)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 542,
"end": 550,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Word Mover's Distance",
"sec_num": "2.1"
},
{
"text": "WMD is typically parameterized in the following manner (Kusner et al., 2015; Huang et al., 2016) , using words represented as embeddings produced with algorithms such as word2vec (Mikolov et al., 2013) . Let x i \u2208 R d be the ith embedding in d-dimensional space, drawn from a word embedding matrix X \u2208 R d\u00d7n representing a vocabulary of n words. Let d a and d b be two n-dimensional, normalized bag-of-words (nBOW) vectors for a pair of documents where d a i is the normalized num-ber of times word i occurs in vector d a . WMD then attempts to find a transportation matrix T \u2208 R n\u00d7n that minimizes the total distance required to move all words in the first document to the second document, where T i,j describes how much of the normalized word vector d a i should be transported to the normalized word vector d b j . Formally, WMD returns the minimum distance to move from document d a to document d b , given by summing the product of the optimal \"flow\" T i,j from all words in the two documents with the \"cost\" c(i, j) of moving between each word vector in the two documents:",
"cite_spans": [
{
"start": 55,
"end": 76,
"text": "(Kusner et al., 2015;",
"ref_id": "BIBREF20"
},
{
"start": 77,
"end": 96,
"text": "Huang et al., 2016)",
"ref_id": "BIBREF17"
},
{
"start": 179,
"end": 201,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Mover's Distance",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "WMD(d a , d b ) = min T\u22650 n i,j=1 T i,j c(i, j)",
"eq_num": "(1)"
}
],
"section": "Word Mover's Distance",
"sec_num": "2.1"
},
{
"text": "Furthermore, the equation is subject to the constraint that the entirety of the \"mass\" of d a should be distributed in the flows to d b and vice versa:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Mover's Distance",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "n j=1 T i,j = d a i \u2200i \u2208 {1, ..., n} n i=1 T i,j = d b j \u2200j \u2208 {1, ..., n}",
"eq_num": "(2)"
}
],
"section": "Word Mover's Distance",
"sec_num": "2.1"
},
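As a concrete reading of Eqs. (1)-(2), the optimization is a small transportation linear program. The sketch below solves it with SciPy's `linprog` on toy inputs; the helper name `wmd_lp` and the hand-made cost matrix are ours, and production implementations use specialized EMD solvers instead.

```python
import numpy as np
from scipy.optimize import linprog

def wmd_lp(d_a, d_b, C):
    """Solve Eqs. (1)-(2) directly: minimize sum_ij T_ij * c(i, j)
    subject to T having row sums d_a and column sums d_b, T >= 0."""
    n, m = C.shape
    # Row-sum constraints: word i of document a ships out exactly d_a[i].
    rows = np.zeros((n, n * m))
    for i in range(n):
        rows[i, i * m:(i + 1) * m] = 1.0
    # Column-sum constraints: word j of document b receives exactly d_b[j].
    cols = np.zeros((m, n * m))
    for j in range(m):
        cols[j, j::m] = 1.0
    res = linprog(C.ravel(),
                  A_eq=np.vstack([rows, cols]),
                  b_eq=np.concatenate([d_a, d_b]),
                  bounds=(0, None), method="highs")
    return res.fun, res.x.reshape(n, m)

# Toy nBOW vectors and a hand-made cost matrix.
d_a = np.array([1.0, 0.0])
d_b = np.array([0.0, 1.0])
C = np.array([[0.0, 0.3],
              [0.7, 0.0]])
dist, T = wmd_lp(d_a, d_b, C)   # all mass of word 0 flows to word 1
```

Because every variable T_ij appears in exactly one row constraint and one column constraint, the LP is totally unimodular and the solver returns an exact optimal flow.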
{
"text": "Even though the original implementation of the WMD uses Euclidean distance for the metric c(i, j), the similarity between word embeddings in general and with WMD in particular (Yokoi et al., 2020) is better captured using the cosine distance 1 :",
"cite_spans": [
{
"start": 176,
"end": 196,
"text": "(Yokoi et al., 2020)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Mover's Distance",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c(i, j) = 1 \u2212 x i \u2022 x j x i x j",
"eq_num": "(3)"
}
],
"section": "Word Mover's Distance",
"sec_num": "2.1"
},
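Eq. (3) is straightforward to compute; a minimal NumPy version (function name ours):

```python
import numpy as np

def cosine_cost(x_i, x_j):
    """Eq. (3): c(i, j) = 1 - cosine similarity of the two embeddings."""
    return 1.0 - np.dot(x_i, x_j) / (np.linalg.norm(x_i) * np.linalg.norm(x_j))
```

Identical directions cost 0; orthogonal directions cost 1; opposite directions cost 2.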
{
"text": "Furthermore, d a and d b can be normalized using other techniques, such as Term frequency-Inverse document frequency (Tf-Idf), combined with L1normalization, which allocate more mass to words that are more common in a specific document than the overall vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Mover's Distance",
"sec_num": "2.1"
},
{
"text": "While there is a well-established literature on solving this linear program using the EMD algorithm, doing so is computationally prohibitive. Kusner et al. (2015) note that \"the best average time complexity of solving the WMD optimization problem scales\" O(V 3 log V ), with V denoting the number of unique words in the corpus. Consequently, calculating the distances for large datasets will often prove insurmountable for WMD. Kusner et al. (2015) suggest working around this issue by calculating an approximation of WMD, where the distance from the two pairs are calculated with relaxed constraints, so that T i,j must contain the mass or flow from the source document, but not the target. This \"relaxed\" WMD or R-WMD is then instead parameterized with only one constraint:",
"cite_spans": [
{
"start": 142,
"end": 162,
"text": "Kusner et al. (2015)",
"ref_id": "BIBREF20"
},
{
"start": 428,
"end": 448,
"text": "Kusner et al. (2015)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Mover's Distance",
"sec_num": "2.1"
},
{
"text": "R-WMD(d a , d b ) = min T\u22650 n i,j=1 T i,j c(i, j) subject to: n j=1 T i,j = d i \u2200i \u2208 {1, ..., n} (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Mover's Distance",
"sec_num": "2.1"
},
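Because only one marginal is constrained, the relaxed problem in Eq. (4) has a closed-form optimum: every source word ships all of its mass to its cheapest target word. A sketch of this shortcut, taking the maximum over both directions as the tighter lower bound (helper name ours):

```python
import numpy as np

def rwmd(d_a, d_b, C):
    """Relaxed WMD (Eq. 4): with only the source constraint kept, word i
    optimally ships all of d_a[i] to its nearest target word. The max of
    the two directions is a tighter lower bound on the full WMD."""
    a_to_b = np.sum(d_a * C.min(axis=1))   # relax the target constraint
    b_to_a = np.sum(d_b * C.min(axis=0))   # relax the source constraint
    return max(a_to_b, b_to_a)
```

Note that the `min` over rows (or columns) is why R-WMD runs in quadratic rather than cubic time: no linear program needs to be solved.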
{
"text": "Repeating the process, so that R-WMD is instead calculated from d b to d a , gives us an estimate of the bounds within which the full WMD must be located. While R-WMD gives only an estimate of WMD within certain bounds, it has been shown to be a good approximation of the full WMD (Kusner et al., 2015) . Notably, instead of the cubic time complexity of WMD, R-WMD can be performed with quadratic time complexity O(V 2 ). Furthermore, Atasu et al. (2017) have demonstrated how to calculate the R-WMD so that the time complexity is reduced from quadratic to linear with the Linear Complexity Relaxed WMD (LC-RWMD) algorithm. However, this solution does not retain the distances contributed by individual words to the R-WMD.",
"cite_spans": [
{
"start": 281,
"end": 302,
"text": "(Kusner et al., 2015)",
"ref_id": "BIBREF20"
},
{
"start": 435,
"end": 454,
"text": "Atasu et al. (2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Mover's Distance",
"sec_num": "2.1"
},
{
"text": "While these and other suggested tricks and improvements (e.g. Tithi and Petrini, 2021; Werner and Laber, 2020; Yokoi et al., 2020) for efficient WMD make the algorithm a feasible and powerful tool for comparing the semantic distance between large sets of documents, they have been introduced with little regard to the algorithm's initial claims to intuitive and interpretable explanations for document (dis)similarities. Consider, for example, Figure 1 , introduced by Kusner et al. (2015) as an example of how the distances between three documents could be decomposed into different parts. Indeed, for sophisticated NLP techniques such as WMD to be maximally useful for sociocultural analysis, the possibility to decompose document-level results into interpretable lexical information is key. For example, if the mean WMD document distances between two longitudinal corpora sampled at some time t 0 and again at some later time t 1 shrink, it might indicate that the corpora have become more similar. However, the change in distance by itself would tell the curious analyst little about the particular lexical phenomena responsible for the semantic changes, and (crucially) how these might relate to extralinguistic social and symbolic processes. In order to tell the full story, fluctuations in distance must be decomposed into interpretable parts.",
"cite_spans": [
{
"start": 87,
"end": 110,
"text": "Werner and Laber, 2020;",
"ref_id": "BIBREF49"
},
{
"start": 111,
"end": 130,
"text": "Yokoi et al., 2020)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [
{
"start": 444,
"end": 452,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "WMD and interpretability",
"sec_num": "2.2"
},
{
"text": "To invoke Danilevsky et al.'s (2020) classification scheme of explanation in NLP, WMD provides explanations that are local (i.e. the distance between any pair of documents can be decomposed to individual words) and self-explanatory (i.e., no post hoc processing of the model outputs is necessary). However, these explanations have, to the best of our knowledge, not been leveraged in applied research with WMD, most likely due to the prohibitive computational cost of calculating the full WMD and the neglect of interpretability in more efficient elaborations of the model. Moreover, applied WMD research has to date generally begun with an a priori interest in the relationship between documents and particular \"concept\" words (Stoltz and Taylor, 2019, e.g.) or predefined topics (Wu and Li, 2017, e.g.) . Although such analyses are perfectly valid, a fully inductive relationship to words of interest can be useful when the analyst does not have or desire strong assumptions about what lexical changes underlie the phenomena of interest.",
"cite_spans": [
{
"start": 10,
"end": 36,
"text": "Danilevsky et al.'s (2020)",
"ref_id": null
},
{
"start": 728,
"end": 759,
"text": "(Stoltz and Taylor, 2019, e.g.)",
"ref_id": null
},
{
"start": 781,
"end": 804,
"text": "(Wu and Li, 2017, e.g.)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WMD and interpretability",
"sec_num": "2.2"
},
{
"text": "To these ends of inductive interpretability, we propose what we believe to be a novel analytic pipeline which combines LC-RWMD decomposed at the word level, optional document pairing based on the Gale-Shapley matching algorithm to ensure robustness, and t-SNE (Maaten and Hinton, 2008) reduced word vectors with K-means clustering (Lloyd, 1982; Elkan, 2003) to enhance interpretability by inducing higher-level thematic groupings. To assist future researchers, we additionally provide a Python package and example code notebooks available on Github. (Bracewell, 2021; Hellinger, 2018; Polletta and Callahan, 2019) . Many such studies have been theoretical and/or qualitative (e.g. Barkun, 2017; Stecula and Pickup, 2021) , survey-based (e.g. Federico et al., 2018; Miller et al., 2016; Uscinski et al., 2020) , focused on patterns of \"misinformation\" dissemination on social media (e.g. Benkler et al., 2017; Marwick and Lewis, 2017) , or on the discourse of political elites (e.g., Hornsey et al., 2020; Neville-Shepard, 2019 ). The present paper chooses a different, though not unprecedented, approach of studying large user-generated text corpora from reddit.",
"cite_spans": [
{
"start": 331,
"end": 344,
"text": "(Lloyd, 1982;",
"ref_id": "BIBREF21"
},
{
"start": 345,
"end": 357,
"text": "Elkan, 2003)",
"ref_id": "BIBREF10"
},
{
"start": 550,
"end": 567,
"text": "(Bracewell, 2021;",
"ref_id": "BIBREF6"
},
{
"start": 568,
"end": 584,
"text": "Hellinger, 2018;",
"ref_id": "BIBREF14"
},
{
"start": 585,
"end": 613,
"text": "Polletta and Callahan, 2019)",
"ref_id": "BIBREF35"
},
{
"start": 681,
"end": 694,
"text": "Barkun, 2017;",
"ref_id": "BIBREF1"
},
{
"start": 695,
"end": 720,
"text": "Stecula and Pickup, 2021)",
"ref_id": "BIBREF42"
},
{
"start": 742,
"end": 764,
"text": "Federico et al., 2018;",
"ref_id": "BIBREF11"
},
{
"start": 765,
"end": 785,
"text": "Miller et al., 2016;",
"ref_id": "BIBREF27"
},
{
"start": 786,
"end": 808,
"text": "Uscinski et al., 2020)",
"ref_id": "BIBREF47"
},
{
"start": 887,
"end": 908,
"text": "Benkler et al., 2017;",
"ref_id": "BIBREF4"
},
{
"start": 909,
"end": 933,
"text": "Marwick and Lewis, 2017)",
"ref_id": "BIBREF23"
},
{
"start": 983,
"end": 1004,
"text": "Hornsey et al., 2020;",
"ref_id": "BIBREF16"
},
{
"start": 1005,
"end": 1026,
"text": "Neville-Shepard, 2019",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WMD and interpretability",
"sec_num": "2.2"
},
{
"text": "com. Specifically, we analyze \"self posts\" 3 from the r/The_Donald and r/conspiracy communities (\"subreddits\"); the former was the main subreddit for Donald Trump supporters before being banned in June 2020, while the latter remains the largest subreddit for self-identified conspiracy theorists. Others have examined both 2 https://github.com/maybemkl/ wmdecompose 3 Self posts are forum submissions which contain only text as opposed to links to external sites. We use self posts in lieu of comments, as the latter tend to be shorter and less orderly due to their nested structure. Further, self posts tend to be more regulated by forum moderators, suggesting that they are more likely to reflect the norms of the community. r/The_Donald and its role in alt-right politics (Massachs et al., 2020; Ribeiro et al., 2020; Shepherd, 2020) , as well as conspiracy theorists on the platform (Klein et al., 2019; Phadke et al., 2021; Samory and Mitra, 2018) . Research has indicated a sizable and statistically significant overlap in users between the two subreddits (Massachs et al., 2020; Nithyanand et al., 2017) . However, these observations have been largely based on subreddit cosubscriber networks, and have paid comparatively less attention to large-scale linguistic patterns over time, a gap to which we hope to contribute.",
"cite_spans": [
{
"start": 775,
"end": 798,
"text": "(Massachs et al., 2020;",
"ref_id": "BIBREF24"
},
{
"start": 799,
"end": 820,
"text": "Ribeiro et al., 2020;",
"ref_id": "BIBREF38"
},
{
"start": 821,
"end": 836,
"text": "Shepherd, 2020)",
"ref_id": "BIBREF41"
},
{
"start": 887,
"end": 907,
"text": "(Klein et al., 2019;",
"ref_id": "BIBREF19"
},
{
"start": 908,
"end": 928,
"text": "Phadke et al., 2021;",
"ref_id": "BIBREF34"
},
{
"start": 929,
"end": 952,
"text": "Samory and Mitra, 2018)",
"ref_id": "BIBREF40"
},
{
"start": 1062,
"end": 1085,
"text": "(Massachs et al., 2020;",
"ref_id": "BIBREF24"
},
{
"start": 1086,
"end": 1110,
"text": "Nithyanand et al., 2017)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WMD and interpretability",
"sec_num": "2.2"
},
{
"text": "Data was collected using the Pushshift Reddit dataset publicly available on Google BigQuery (Baumgartner et al., 2020 ). Because we are interested in change over time, we delineate two discontinuous periods of interest: t 0 , defined as the twelve months following the creation of r/The_Donald (July 11, 2015-July 11, 2016), and t 1 , the final twelve months available in the dataset (August 31, 2018-August 31, 2019). These two year-long snapshots, separated by roughly two years, offer a glimpse of the relationship between conspiratorial language and the discourse of self-identified Donald Trump supporters during the Trump presidency.",
"cite_spans": [
{
"start": 92,
"end": 117,
"text": "(Baumgartner et al., 2020",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WMD and interpretability",
"sec_num": "2.2"
},
{
"text": "To reduce computation and run-time, we sample 5,000 posts from each subreddit at t 0 and t 1 for a total of 20,000 posts. Random sampling was restricted to those posts at least 30 words long (to ensure that documents contain adequate lexical information), and having a positive score (2 or greater, as Reddit posts start with a score of 1) to ensure that the lexical content is generally representative of the community. Once sampled, the text is preprocessed using a standard pipeline for NLP applications. The specifics of this pipeline are described in Appendix A. The processed dataset contains 1,509,553 tokens, 596,596 tokens from r/The_Donald and 993,957 from r/conspiracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling and preprocessing",
"sec_num": "3.2"
},
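The filtering and sampling rules above can be sketched as follows. The record schema (dicts with 'text' and 'score' keys) and the function name are hypothetical, standing in for whatever shape the Pushshift/BigQuery export takes.

```python
import random

def sample_posts(posts, n=5000, min_words=30, min_score=2, seed=42):
    """Restrict to posts with at least `min_words` tokens and a score of
    `min_score` or greater, then draw a reproducible random sample.
    `posts` is assumed to be a list of dicts with 'text' and 'score' keys."""
    eligible = [p for p in posts
                if len(p["text"].split()) >= min_words
                and p["score"] >= min_score]
    rng = random.Random(seed)   # fixed seed keeps the sample reproducible
    return rng.sample(eligible, min(n, len(eligible)))
```

Fixing the seed matters here because the downstream WMD estimates are themselves computed on samples, and the paper's stated goal is reproducible sampling.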
{
"text": "After preprocessing, words were converted to vectors using a fine-tuned word2vec model introduced in (Mikolov et al., 2013) , originally pre-trained on Google News Vectors containing about 300 billion words. Fine-tuning details and hyperparameters are included in Appendix A. Due to the large number of unique words, and to increase the interpretability of the final outputs, words were clustered on the basis of their embedding vectors using t-SNE, an algorithm that is commonly used to reduce the dimensionality of word embeddings while still preserving the original structure of the higher-dimensional form (for uses with WMD, see: Huang et al., 2016; G\u00fclle et al., 2020) . In our case, we reduce the dimensionality of the original word vectors from 300 to two and use K-means on the reduced dimensions to generate 100 clusters of words according to their semantic similarity. Clustering words allows us to examine not only the changing usage of individual words across subreddits, but also of these higherlevel thematic groupings. The number of clusters was chosen heuristically after inspecting both elbow plots and silhouette scores for clusters in the range of 10 to 200. Our method is robust to using raw embeddings as well as other popular reduction techniques before clustering, such as UMAP (McInnes et al., 2020).",
"cite_spans": [
{
"start": 101,
"end": 123,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF26"
},
{
"start": 635,
"end": 654,
"text": "Huang et al., 2016;",
"ref_id": "BIBREF17"
},
{
"start": 655,
"end": 674,
"text": "G\u00fclle et al., 2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding and clustering",
"sec_num": "4.1"
},
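The clustering step can be sketched without any heavy dependencies. The paper uses t-SNE followed by K-means (for which library implementations such as scikit-learn's are the natural choice); the fragment below is a minimal Lloyd's-algorithm version of just the K-means step, assuming the word vectors have already been reduced to 2-D, with a deliberately naive initialization.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal Lloyd's-algorithm K-means over (already reduced) vectors.
    Naive initialization from the first k points; library versions use
    k-means++ seeding and multiple restarts instead."""
    centers = X[:k].astype(float).copy()
    for _ in range(iters):
        # Assign every vector to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centroids; keep the old centre if a cluster empties.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```

In the paper's setting, X would be the t-SNE-reduced embedding matrix and k = 100, chosen via elbow plots and silhouette scores.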
{
"text": "We now introduce WMDecompose, the core contribution of this paper, which provides the ability to examine the word-level distances required to move between two sets of documents, such as the self posts in r/conspiracy and r/The_Donald. The comparison of these documents happens, on the one hand, through retaining the word-level distances of moving between pairs of documents from each set and, on the other hand, by clustering words and aggregating their added distances by cluster.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WMDecompose",
"sec_num": "4.2"
},
{
"text": "More generally: Given two sets of documents, S a and S b , the matrix T i,j of flows between all the individual documents in both sets, and clusters for the input word vectors, WMDecompose returns the following information twice, once in terms of movement from the first set to the second and once in terms of movement from the second set to the first: A The aggregate word-level WMD or WMD w for each word w in a vocabulary V when moving all documents from one set of documents S a to another set S b . B The aggregate cluster-level or Cluster Mover's Distance (CMD c ) for each cluster c when moving all documents from S a to S b , with keywords for each c determined by the words with the highest WMD w within the cluster.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WMDecompose",
"sec_num": "4.2"
},
{
"text": "The WMD w gives us a good sense of the most important words separating the two sets. The CMD c allows us to organize this information by cluster, with interpretable cluster keywords that are dynamically ranked, depending on their importance for the particular case at hand. More formally, WMD w is calculated in the following manner. Let w a and w b be two words contained in S a and S b , respectively, and let WMD w a be the total distance accumulated by word w a when moving from S a to S b . Next, if n is the number of documents in S a that contain the word w a , m is the number of words in some document d b that w a is distributed over, t i,j is the vector of flows from w a in document d a i to each word w b j in d b , and c(w a , w b j ) is the cost in terms of cosine distance to move from each w a to each w b j , then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WMDecompose",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "WMD w a = n i=1 m j=1 t i,j c(w a , w b j )",
"eq_num": "(5)"
}
],
"section": "WMDecompose",
"sec_num": "4.2"
},
{
"text": "WMD w b is counted in the same manner, only this time using the flow and cost from words in S b to words in S a .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WMDecompose",
"sec_num": "4.2"
},
{
"text": "The CMD c a , i.e. the CMD c for movement from S a to S b , is then calculated simply by summing over the aggregated distances of all w a \u2208 c, i.e. all words w a belonging to cluster c. If there are p such words in S a for some cluster c, then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WMDecompose",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "CMD c a = p k=1 WMD w a \u2200w a \u2208 c a",
"eq_num": "(6)"
}
],
"section": "WMDecompose",
"sec_num": "4.2"
},
{
"text": "Again, CMD c b is counted in the same manner. Furthermore, the total WMD S a when moving all documents from S a to some pair in S b can consequently be described as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WMDecompose",
"sec_num": "4.2"
},
{
"text": "WMD S a = WMD w a = CMD c a (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WMDecompose",
"sec_num": "4.2"
},
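Given an optimal flow matrix for one document pair, Eqs. (5)-(7) amount to summing the element-wise product of flows and costs along different axes. A minimal sketch (function name and toy inputs ours; aggregating these values over all document pairs yields the set-level quantities):

```python
import numpy as np

def decompose(T, C, clusters):
    """Sketch of Eqs. (5)-(7) for a single document pair: given a flow
    matrix T, a cost matrix C and a cluster id per source word, return
    the per-word distances (WMD_w), the per-cluster sums (CMD_c) and the
    total, which equals both the word-level and cluster-level sums."""
    word_wmd = (T * C).sum(axis=1)           # Eq. (5): one value per source word
    cluster_wmd = {}
    for cid, w in zip(clusters, word_wmd):   # Eq. (6): sum words by cluster
        cluster_wmd[cid] = cluster_wmd.get(cid, 0.0) + w
    total = word_wmd.sum()                   # Eq. (7): both sums give the total
    return word_wmd, cluster_wmd, total
```

This is the decomposition step that efficiency-oriented WMD variants discard, and the one WMDecompose retains.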
{
"text": "In order to further accentuate the differences between the word-level distances WMD w a and WMD w b , we subtract the WMD w a for each word w a in the vocabulary of S a from the corresponding WMD w b (if it exists) in the other set and vice versa. This way results will not be cluttered by words for which the WMD w is very similar across sets and instead the differences between the two sets will be highlighted. Hence, the final WMD w a with \"difference\" or WMD d w a for word w a is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WMDecompose",
"sec_num": "4.2"
},
{
"text": "WMD d w a = WMD w a \u2212 WMD w b (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WMDecompose",
"sec_num": "4.2"
},
{
"text": "if there exists a w_b such that w_b = w_a. Otherwise, WMD^d_{w_a} is simply equal to WMD_{w_a}. For the rest of the paper, we will use WMD_{w_a} as shorthand for WMD^d_{w_a}, as the model with difference yields far more interpretable results and is the only one we consider in our analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WMDecompose",
"sec_num": "4.2"
},
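The signed difference of Eq. 8 amounts to a lookup over two word-cost tables. A minimal sketch, assuming word costs have already been aggregated into dictionaries (`difference_costs` is a hypothetical name, not part of the wmdecompose API):

```python
def difference_costs(costs_a, costs_b):
    """Apply Eq. 8: subtract the matching word's cost in the other set.

    costs_a, costs_b: dicts mapping word -> aggregated WMD_w in each set.
    Words absent from the other set keep their original cost (subtract 0).
    """
    diff_a = {w: c - costs_b.get(w, 0.0) for w, c in costs_a.items()}
    diff_b = {w: c - costs_a.get(w, 0.0) for w, c in costs_b.items()}
    return diff_a, diff_b
```

A word with similar cost in both sets is thus pushed toward zero, while words that distinguish one set rise to the top of the sorted output.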
{
"text": "Finally, the WMD_w values can be calculated using WMD with or without relaxed constraints. While the LC-RWMD itself cannot be used for this word-level decomposition, we will next suggest a way in which its speed can be leveraged to yield reproducible and conservative estimates of the WMD_w values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WMDecompose",
"sec_num": "4.2"
},
{
"text": "To run WMD on all 50 million pairwise combinations of documents in our corpus (5,000^2 at both t_0 and t_1) would unfortunately be computationally prohibitive in many instances. With larger document sets, even using R-WMD with quadratic time complexity could be too costly in terms of time and computational resources, while using the LC-RWMD with linear time complexity sacrifices word-level decomposability. One solution would be to take a random sample from both sets of documents. However, this is not always ideal, as we will demonstrate next.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gale-Shapley matching for WMD",
"sec_num": "4.3"
},
{
"text": "We first generate 50,000 random pairs of documents (each pair containing one post from each subreddit) at both t_0 and t_1, yielding a total of 100,000 random pairs. The distribution of distances is displayed in Figure 2. Although a t-test confirms high significance (t = 13.71), the measured difference in means is very small (0.548 at t_0, versus 0.545 at t_1), and it is difficult to determine to what meaningful extent this reflects increased similarity between user language in r/The_Donald and r/conspiracy over time. We will return to this point later, in Section 5.2.",
"cite_spans": [],
"ref_spans": [
{
"start": 213,
"end": 221,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Gale-Shapley matching for WMD",
"sec_num": "4.3"
},
{
"text": "Looking at Figure 2, it is clear that merely drawing a small random sample from both sets could produce unreliable results. Researchers working with very long documents and/or lacking computational resources might be limited to drawing small samples if running the full WMD model. When asking whether there is any evidence of document-level semantic alignment (i.e., a reduction in mean document pair distance over time) in our corpus in the first place, using a random sample to offset computational costs could therefore produce misleading results. We therefore introduce a second contribution of WMDecompose: the option of effective and consistent pairing of documents using the Gale-Shapley (GS) stable matching algorithm (Gale and Shapley, 1962) to reduce the computational burden incurred by analysis.",
"cite_spans": [
{
"start": 731,
"end": 755,
"text": "(Gale and Shapley, 1962)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 11,
"end": 19,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Gale-Shapley matching for WMD",
"sec_num": "4.3"
},
{
"text": "Originally introduced through the hypothetical problem of pairing colleges with applicants, the GS algorithm iteratively finds the optimal match between two sets of equal size, given the preferences of all members of the two sets. The optimal match is biased towards the party taking initiative, i.e. the \"suitor\" (Dubins and Freedman, 1981; Iwama and Miyazaki, 2008). To find the \"preferences\" of documents, we utilize LC-RWMD to first get an approximation of distances between all pairs from two sets of documents. Each document in a set then \"prefers\" the closest document in the other set. Given these preferences, we can use GS to find the pairs which minimize the distance between the two sets. We posit that GS ensures that our document pairs represent a conservative estimate of the distance between the two sets, making our method reproducible while also ensuring the robustness of results. By reducing computational costs, GS pairing is additionally aligned with mounting calls for NLP and other ML researchers to intentionally pursue algorithms which minimize the growing environmental toll of their technologies (Bender et al., 2021; Strubell et al., 2019).",
"cite_spans": [
{
"start": 314,
"end": 341,
"text": "(Dubins and Freedman, 1981;",
"ref_id": "BIBREF9"
},
{
"start": 342,
"end": 367,
"text": "Iwama and Miyazaki, 2008)",
"ref_id": "BIBREF18"
},
{
"start": 1123,
"end": 1144,
"text": "(Bender et al., 2021;",
"ref_id": "BIBREF3"
},
{
"start": 1145,
"end": 1167,
"text": "Strubell et al., 2019)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gale-Shapley matching for WMD",
"sec_num": "4.3"
},
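The matching procedure described above can be sketched as follows. This is a minimal, illustrative implementation of Gale-Shapley over a matrix of approximate (e.g. LC-RWMD) distances, where each document "prefers" closer documents in the other set; `gale_shapley` is a hypothetical name, and the result is the suitor-optimal stable matching, not a guaranteed global minimum.

```python
def gale_shapley(dist):
    """Stable matching of suitors (rows) to reviewers (columns).

    dist: n x n nested list of approximate distances, where dist[i][j]
    is the distance from suitor document i to reviewer document j.
    Returns a dict mapping each suitor index to its matched reviewer.
    """
    n = len(dist)
    # Each suitor proposes in order of increasing distance.
    prefs = [sorted(range(n), key=lambda j, i=i: dist[i][j]) for i in range(n)]
    next_choice = [0] * n      # index of the next reviewer each suitor will try
    engaged_to = [None] * n    # reviewer -> currently engaged suitor
    free = list(range(n))      # suitors without a match
    while free:
        i = free.pop()
        j = prefs[i][next_choice[i]]
        next_choice[i] += 1
        cur = engaged_to[j]
        if cur is None:
            engaged_to[j] = i
        elif dist[i][j] < dist[cur][j]:
            # Reviewer j switches to the closer suitor; cur becomes free again.
            engaged_to[j] = i
            free.append(cur)
        else:
            free.append(i)
    return {i: j for j, i in enumerate(engaged_to)}
```

On a toy 2x2 distance matrix such as `[[0.1, 0.9], [0.2, 0.3]]`, the algorithm pairs document 0 with its closest counterpart and resolves the conflict by sending document 1 to its second choice, which is the stable outcome.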
{
"text": "For our specific case study, we first calculate the LC-RWMD from r/conspiracy to r/The_Donald and vice versa. This computation on 50M total pairs requires less than an hour on a regular laptop. Next, these relaxed distances from each document i in r/conspiracy to all N documents in r/The_Donald are used as the preferences of document i vis-a-vis documents 1, ..., N. We are primarily interested in movement from r/conspiracy to r/The_Donald, hence the former is given the role of \"suitor\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gale-Shapley matching for WMD",
"sec_num": "4.3"
},
{
"text": "Then, the Gale-Shapley algorithm is executed to find the optimal match, i.e., the set of document pairs that is least costly to move between r/conspiracy and r/The_Donald. Effectively, this shows one optimal solution to moving the mass of words from the former to the latter, if each document in the first set were to be transferred only to the most similar document in the other set. All these steps are repeated for the documents at t_0 and t_1. To summarize, by matching pairs of documents which are more similar in semantic space, WMDecompose produces conservative measures of semantic distance between corpora. Thus, in addition to providing a non-random, reproducible means of reducing computing costs while retaining full WMD calculations, GS ensures the robustness of findings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gale-Shapley matching for WMD",
"sec_num": "4.3"
},
{
"text": "Once we have the optimal pairs between r/conspiracy and r/The_Donald, we can run full WMD for the two sets of pairs, i.e. the set at t_0 and the set at t_1. 4 We do this using wmdecompose, a Python package written specifically for this paper, with EMD executed under the hood using the PyEMD library (Pele and Werman, 2008, 2009). 5",
"cite_spans": [
{
"start": 300,
"end": 318,
"text": "Werman, 2008, 2009",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exploring lexical trends with WMDecompose",
"sec_num": "4.4"
},
{
"text": "After generating document pairs with the Gale-Shapley algorithm, it is worth asking whether mean document distances using GS pairs follow a similar pattern to those generated by random pairs (as in Figure 2). As it turns out, they do; both are normally distributed, with pair distances having a mean of 0.478 at t_0 (versus 0.548 with random pairing) and 0.469 at t_1 (versus 0.545 with random pairing). A t-test again shows significance (t = 8.05), but the absolute difference is small enough to be difficult to interpret at the document level. We can therefore conclude that, both when pairing documents on the basis of semantic proximity with GS and when pairing them randomly, some modest but statistically significant reduction in distance takes place from t_0 to t_1. Without further analysis, however, we cannot conclude much else.",
"cite_spans": [],
"ref_spans": [
{
"start": 200,
"end": 208,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Overall document distance",
"sec_num": "5.1"
},
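A comparison of mean pair distances like the one above can be reproduced with a Welch's t-statistic. A minimal numpy sketch, with hypothetical inputs standing in for the two samples of document-pair distances (the paper does not specify which t-test variant was used):

```python
import numpy as np

def welch_t(x, y):
    """Welch's t-statistic for the difference in mean pair distance
    between two independent samples (e.g. distances at t_0 vs. t_1).
    Positive values mean the first sample has the larger mean."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    vx, vy = x.var(ddof=1), y.var(ddof=1)  # unbiased sample variances
    return (x.mean() - y.mean()) / np.sqrt(vx / len(x) + vy / len(y))
```

With large samples (50,000 pairs per period, as here), even a tiny difference in means yields a large t, which is why the paper pairs the test with a qualitative look at the word level rather than relying on significance alone.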
{
"text": "As we have stated throughout, document-level distance measures tell one nothing about the nature and source of that distance. A researcher doing exploratory analysis of a new dataset might look at the relatively small, albeit significant, difference in mean document distance at t_0 and t_1 and conclude, absent any other contextual information, that further analysis is not warranted. However, distances between documents merely represent the distances between their aggregate words. As such, it is quite possible that relative document-level stability is masking a much greater degree of variation between t_0 and t_1 for specific words, and these words might merit further attention. Figure 3 displays the distribution of changes in word cost (i.e., the total distance contributed at t_1 minus the total distance contributed at t_0) for each unique word in the corpus. As we can see, despite the longitudinal stability of the corpora at the document level, a great deal of lexical and semantic change over time is nevertheless taking place. Though the majority of words show little change in this regard, many words which significantly distinguished the corpora at t_0 no longer do at t_1, and vice versa. Our suggestion is that these words might be qualitatively instructive, motivating a closer inspection of the lexical data produced by WMDecompose. We therefore turn to WMDecompose to examine the sorted list of words and clusters which most distinguish our corpora at each time period, displayed in Table 1.",
"cite_spans": [],
"ref_spans": [
{
"start": 687,
"end": 695,
"text": "Figure 3",
"ref_id": "FIGREF3"
},
{
"start": 1509,
"end": 1516,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Identifying distinguishing words with WMDecompose",
"sec_num": "5.2"
},
{
"text": "These top words conform nicely with expectations, with vocabulary directly associated with party and electoral politics characterizing r/The_Donald, and words associated with conspiracy theories characterizing their eponymous subreddit. Further, we see the clear effects of temporality and current events: words associated with the 2016 election cycle (bernie, hillary, cruz, delegate) disappear at t_1, while more general political terms (democrat, president, liberal) appear; for r/conspiracy, epstein and cia enter the top ten most distinctive words, while isis drops out.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying distinguishing words with WMDecompose",
"sec_num": "5.2"
},
{
"text": "We can also note that, though the top word distances are higher for r/The_Donald at each period, the difference greatly shrinks, reflecting both an attenuation of the ritualized, insider language associated with Trump's online following during the 2016 election cycle, as well as an increase in talk related to Trump on r/conspiracy. While these results are perhaps unsurprising, they demonstrate the greater richness of WMDecompose for the comparative analysis of corpora by introducing word-level data. Clustering words similarly allows for the discovery of pervasive thematic differences between corpora. Table 2 displays some example clusters, separated by subreddit and time period.",
"cite_spans": [],
"ref_spans": [
{
"start": 608,
"end": 615,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Identifying distinguishing words with WMDecompose",
"sec_num": "5.2"
},
{
"text": "While WMDecompose can be used to identify distinguishing keywords, it can also be used to uncover fine-grained semantic assimilation. If a word w strongly distinguishes corpus S_a from corpus S_b at t_0, but no longer at t_1, it could represent assimilation (S_b starts using w and related words more frequently), but it could equally represent S_a ceasing to use w (as is often the case with words related to current events). As such, the change in the distance contribution of a word over time must be considered alongside the change in word frequency. We might thus conceive of each word distance change as occupying a position in three-dimensional space, with axes corresponding to 1) change in aggregate word cost, 2) change in frequency in S_a, and 3) change in frequency in S_b. The same conceptualization can be applied to word clusters as well, wherein a cluster's values are simply the sum of its words' values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inductive discovery of thematic assimilation over time",
"sec_num": "5.3"
},
{
"text": "Given our interest in semantic convergence, cases in which word distance contributions go down because usage ceases are of little interest (though they might interest others). Rather, we are interested in words which contribute less distance while exhibiting comparable or increased use in both subreddits. (In Table 3, CC (%) = cost change from t_0 to t_1, represented as a percentage, and SWiC = similar words in cluster, i.e., words from the same cluster whose cost also decreased from t_0 to t_1 while remaining in use in both subreddits.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inductive discovery of thematic assimilation over time",
"sec_num": "5.3"
},
{
"text": "Examining words meeting these criteria, and excluding low-frequency outliers, we indeed see many conspiratorial words whose cumulative distance contribution shrank from t_0 to t_1. A selection of such words is presented in Table 3. These include terms such as hoax, fraudulent, brainwash, threat, jews, surveillance, alex_jones, propaganda, and reality, to name a few. We can then examine other words in these keywords' clusters to get a sense of related words following similar patterns of cost reduction and continued usage, as displayed in the table.",
"cite_spans": [],
"ref_spans": [
{
"start": 98,
"end": 106,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Inductive discovery of thematic assimilation over time",
"sec_num": "5.3"
},
{
"text": "Of course, such observations are of an exploratory nature, intended to demonstrate an inductive starting point for a more rigorous analysis, either within or outside the WMDecompose framework. We do not posit this as statistical proof of any process of assimilation. However, this general framework of comparative and longitudinal analysis allows for a much richer engagement with discursive change, due to the robust semantic relationships captured by WMDecompose, while still providing simple outputs, e.g., for significance testing. Though cursory, we hope this short vignette has demonstrated the interpretable potential of WMDecompose, and how it might aid social science and humanities researchers with qualitative thematic discovery in large corpora, while also providing a computable metric for quantitative models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inductive discovery of thematic assimilation over time",
"sec_num": "5.3"
},
{
"text": "We have introduced a novel iteration of WMD, one which departs from the WMD variations which, despite their high performance on a variety of tasks, do away with one of the key contributions of the original model: the inherent and decomposable relationship between document-level semantic distances and the lexical semantics that underlie them. In so doing, we heed a recent, but hopefully long-lived, call for ML and particularly NLP researchers to prioritize models which are interpretable, providing explanatory value at the level of social meaning. We hope that our Python package might aid future researchers similarly interested in interpretable computational approaches. Furthermore, through the Gale-Shapley algorithm, we propose an approach for combining interpretability with environmental sustainability. Down the line, this framework could also be expanded to support dynamic embeddings from models such as BERT, which due to their technical opacity have yet to be widely adopted in sociocultural analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Our vignette presented how one might use WMDecompose in a comparative and exploratory context. That is, we imagine this as somewhat of an inductive starting point (in this case, for an analysis of the relationship between Trumpist and conspiracy theorist online communities), revealing trends related to one's particular research question that can be pursued with more focused and rigorous analysis, be it qualitative, discourse analytic, or statistical. However, because of the rich, word-level quantitative measures WMDecompose provides, future research might attempt to employ it in, for example, causal models seeking to estimate the effect of a treatment condition on particular lexical features, time series analyses, and so on.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The raw text data from the Pushshift dataset was processed as follows, all conducted in Python.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Text preprocessing and word2vec fine-tuning details",
"sec_num": null
},
{
"text": "Posts were first lowercased, and URLs were removed via regular expressions; stopwords were removed and remaining words were lemmatized, both using spaCy (Honnibal et al., 2020); markdown text and other special characters were removed using custom regex functions; finally, words occurring fewer than 20 times in the corpus (i.e., less than once every 1,000 documents) were removed. Once processed, the text was embedded with a word2vec model that was fine-tuned on a corpus of 533K self posts from these subreddits as well as other conspiracy and conservative subreddits, with preprocessing following the same steps as above. Fine-tuning was done using the skipgram implementation of word2vec, with negative sampling, a context window of 10 tokens, over four epochs and with a learning rate of 0.01. The model was phrased in a similar manner to the recommendations in Mikolov et al. (2013) , such that frequently co-occurring bigrams and trigrams were encoded as single lexical entities (e.g., president_trump). Further, each document was represented using Tf-Idf and L1-normalization, in order to prevent frequent words from crowding out information from less common but more salient words.",
"cite_spans": [
{
"start": 868,
"end": 889,
"text": "Mikolov et al. (2013)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Text preprocessing and word2vec fine-tuning details",
"sec_num": null
},
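The final document-representation step above (Tf-Idf weighting with L1 normalization) can be sketched in pure Python. This is an illustrative sketch only: the smoothed idf variant shown is an assumption, and the exact formula used in the paper may differ.

```python
import math
from collections import Counter

def tfidf_l1(docs):
    """Represent each tokenized document as an L1-normalized Tf-Idf
    bag-of-words, so that each document's weights sum to 1 (an nBOW
    representation suitable for WMD). Uses a smoothed idf variant."""
    n = len(docs)
    # Document frequency: in how many documents does each word appear?
    df = Counter(w for doc in docs for w in set(doc))
    idf = {w: math.log((1 + n) / (1 + k)) + 1 for w, k in df.items()}
    reps = []
    for doc in docs:
        tf = Counter(doc)
        weights = {w: c * idf[w] for w, c in tf.items()}
        total = sum(weights.values())
        # L1 normalization: frequent but ubiquitous words are down-weighted,
        # and every document's mass sums to 1.
        reps.append({w: v / total for w, v in weights.items()})
    return reps
```

Because corpus-wide words receive a low idf, this weighting prevents frequent words from crowding out less common but more salient words, as described above.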
{
"text": "To further demonstrate the qualities of WMDecompose, we offer an additional case study using a set of reviews from Yelp. This dataset is intuitive and well-known, and should hence complement our main analysis, as it requires less domain knowledge for illustrating the basic qualities of WMDecompose. To facilitate comparison with earlier work on the WMD, we here rely on Euclidean distances, the metric used in the original WMD paper (Kusner et al., 2015), even though cosine distances are arguably preferable for semantic similarity tasks and are therefore used in the main analysis of the paper. For the analysis in this appendix, we look at trends that are highly predictable, in order to provide a \"sanity check\" to ensure that our model behaves as expected. Consequently, we looked at the WMD_w and CMD_c from positive to negative reviews and vice versa, to see whether the distance would, as expected, be composed mainly of polarized sentiment words, such as good and bad.",
"cite_spans": [
{
"start": 444,
"end": 465,
"text": "(Kusner et al., 2015)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "B Sanity Check with Yelp Reviews",
"sec_num": null
},
{
"text": "For this analysis, we use the latest version of the Yelp review dataset, accessed in late July 2021. 6 The data was filtered to only include reviews written after December 2017 in the cities of Atlanta and Portland, two geographically and demographically distinct cities. We further filtered the data to only include reviews for establishments labeled with the categories Restaurant or Health Medical. In this way, we aimed to show that category-specific trends would also emerge among positive and negative keywords, as well as to explore whether any trends specific to the two cities would appear in the results. Reviews were then further filtered so that only highly positive (5-star) or highly negative (1-star) reviews were included; these were labeled positive and negative, respectively. Reviews were sampled from this pool so that 2,000 reviews were selected from both Atlanta and Portland, 500 negative and 500 positive for both restaurants and health services, for a total of 4,000 reviews.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Sanity Check with Yelp Reviews",
"sec_num": null
},
{
"text": "For the purposes of this analysis, we fine-tuned the same pretrained word2vec model that was used for the main analysis, with the same hyperparameters, but using the filtered Yelp dataset. A series of preprocessing steps were also conducted, including removal of URLs and stopwords, and phrasing as proposed by Mikolov et al. (2013). 6 https://www.yelp.com/dataset/",
"cite_spans": [
{
"start": 309,
"end": 330,
"text": "Mikolov et al. (2013)",
"ref_id": "BIBREF26"
},
{
"start": 333,
"end": 334,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "B.1 Preprocessing and word vectors",
"sec_num": null
},
{
"text": "After preprocessing, clusters were detected in the word vectors using K-means. By inspecting silhouette scores and elbow plots, the number of clusters was chosen to be 100. Further, instead of performing the analysis on the full set of pairs, we first ran LC-RWMD to find the R-WMD between all pairs and then used the Gale-Shapley (GS) algorithm to find the set of optimal pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.2 Clustering and Gale-Shapley",
"sec_num": null
},
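The cluster-selection step can be sketched with scikit-learn. A minimal example, assuming the word vectors are available as a numpy array; `choose_k` is a hypothetical helper, and the analysis above also consulted elbow plots rather than silhouette scores alone.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def choose_k(vectors, candidate_ks, seed=0):
    """Cluster word vectors with K-means for several candidate values of k
    and report each clustering's silhouette score, as one input for
    choosing the number of clusters."""
    scores = {}
    for k in candidate_ks:
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(vectors)
        scores[k] = silhouette_score(vectors, labels)
    return scores
```

Higher silhouette scores indicate tighter, better-separated clusters; on two well-separated synthetic blobs, k = 2 scores markedly higher than larger k values that needlessly split the blobs.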
{
"text": "Next, the pairs from the GS matching were analysed using WMDecompose. The words that added the most distance (with difference) when moving from the positive set to the negative and vice versa can be seen in Table 4. As expected, the top set of WMD_w word-level distances for the positive documents is dominated by positive sentiment words, with great, best and amazing on top. However, other trends also appear, such as words specific to restaurants (delicious) or health services (massage). Interestingly enough, the word portland appears as a high cost when moving from the positive to the negative set, indicating that positive reviewers located in that city might be more likely to mention it by name.",
"cite_spans": [],
"ref_spans": [
{
"start": 207,
"end": 214,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "B.3 Results from WMDecompose",
"sec_num": null
},
{
"text": "On the negative side, the top WMD_w word-level distances include negative sentiment words such as rude, worst and terrible when moving from the negative to the positive reviews, but other patterns are also visible. Words about time, such as hour and minute, as well as the word money, contribute heavily to the semantic distance from negative to positive reviews, as do verbs often associated with verbal commands, such as told and said. Here, words related to the cities or categories of the reviews do not make the top 12 cut, although category-specific words are present when looking at the top 50 (including insurance, manager and doctor).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.3 Results from WMDecompose",
"sec_num": null
},
{
"text": "Switching focus to the CMD_c clusters displayed in Table 5, we see some of the top words from Table 4, now organized by cluster. While there is some overlap in clusters when moving from positive to negative and vice versa (clusters 51, 92, 56, 26, 78), the keywords that define the clusters are different, as they are determined using the top ten WMD_w values of each word in the cluster. In Table 5, we also see how category-specific words, such as those in cluster 47 related to spa and massage services, add a significant distance when moving from positive to negative reviews.",
"cite_spans": [
{
"start": 224,
"end": 253,
"text": "(clusters 51, 92, 56, 26, 78)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 51,
"end": 102,
"text": "Table 5, we see some of the top words from Table 4",
"ref_id": "TABREF5"
},
{
"start": 395,
"end": 403,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "B.3 Results from WMDecompose",
"sec_num": null
},
{
"text": "The use of the word \"distance\" here can be slightly confusing, as it is not quite the same as the \"distance\" in Word Mover's Distance. For the latter, the distance between two documents is composed of both the cosine distances between words and the \"mass\" of the nBOW representations of the documents being moved to one another.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Relaxed WMD can also be used to further speed up the calculations. While using R-WMD gives similar results to the full WMD, we focus here on the full WMD results, as this is the baseline we are looking to establish. 5 https://github.com/laszukdawid/PyEMD",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Manolis Sifalakis, Haralampos Pozidis, Vasileios Vasileiadis, Michail Vlachos, Cesar Berrospi, and Abdel Labbi",
"authors": [
{
"first": "Kubilay",
"middle": [],
"last": "Atasu",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Parnell",
"suffix": ""
},
{
"first": "Celestine",
"middle": [],
"last": "D\u00fcnner",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 IEEE International Conference on Big Data (Big Data)",
"volume": "",
"issue": "",
"pages": "889--896",
"other_ids": {
"DOI": [
"10.1109/BigData.2017.8258005"
]
},
"num": null,
"urls": [],
"raw_text": "Kubilay Atasu, Thomas Parnell, Celestine D\u00fcnner, Manolis Sifalakis, Haralampos Pozidis, Vasileios Vasileiadis, Michail Vlachos, Cesar Berrospi, and Abdel Labbi. 2017. Linear-complexity relaxed word Mover's distance with GPU acceleration. In 2017 IEEE International Conference on Big Data (Big Data), pages 889-896.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "President Trump and the \"Fringe",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Barkun",
"suffix": ""
}
],
"year": 2017,
"venue": "Terrorism and Political Violence",
"volume": "29",
"issue": "3",
"pages": "437--443",
"other_ids": {
"DOI": [
"10.1080/09546553.2017.1313649"
]
},
"num": null,
"urls": [],
"raw_text": "Michael Barkun. 2017. President Trump and the \"Fringe\". Terrorism and Political Violence, 29(3):437-443. Publisher: Routledge _eprint: https://doi.org/10.1080/09546553.2017.1313649.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?",
"authors": [
{
"first": "Emily",
"middle": [
"M"
],
"last": "Bender",
"suffix": ""
},
{
"first": "Timnit",
"middle": [],
"last": "Gebru",
"suffix": ""
},
{
"first": "Angelina",
"middle": [],
"last": "Mcmillan-Major",
"suffix": ""
},
{
"first": "Shmargaret",
"middle": [],
"last": "Shmitchell",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21",
"volume": "",
"issue": "",
"pages": "610--623",
"other_ids": {
"DOI": [
"10.1145/3442188.3445922"
]
},
"num": null,
"urls": [],
"raw_text": "Emily M. Bender, Timnit Gebru, Angelina McMillan- Major, and Shmargaret Shmitchell. 2021. On the Dangers of Stochastic Parrots: Can Language Mod- els Be Too Big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Trans- parency, FAccT '21, pages 610-623, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Study: Breitbart-led right-wing media ecosystem altered broader media agenda",
"authors": [
{
"first": "Yochai",
"middle": [],
"last": "Benkler",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Faris",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Ethan",
"middle": [],
"last": "Zuckerman",
"suffix": ""
}
],
"year": 2017,
"venue": "Columbia Journalism Review",
"volume": "3",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yochai Benkler, Robert Faris, Hal Roberts, and Ethan Zuckerman. 2017. Study: Breitbart-led right-wing media ecosystem altered broader media agenda. Columbia Journalism Review, 3(2).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Enriching Word Vectors with Subword Information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1607.04606"
]
},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching Word Vectors with Subword Information. arXiv:1607.04606 [cs].",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Gender, Populism, and the QAnon Conspiracy Movement",
"authors": [
{
"first": "Lorna",
"middle": [],
"last": "Bracewell",
"suffix": ""
}
],
"year": 2021,
"venue": "Frontiers in Sociology",
"volume": "5",
"issue": "",
"pages": "1--14",
"other_ids": {
"DOI": [
"10.3389/fsoc.2020.615727"
]
},
"num": null,
"urls": [],
"raw_text": "Lorna Bracewell. 2021. Gender, Populism, and the QAnon Conspiracy Movement. Frontiers in Sociol- ogy, 5:1-14.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A Survey of the State of Explainable AI for Natural Language Processing",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Danilevsky",
"suffix": ""
},
{
"first": "Ranit",
"middle": [],
"last": "Kun Qian",
"suffix": ""
},
{
"first": "Yannis",
"middle": [],
"last": "Aharonov",
"suffix": ""
},
{
"first": "Ban",
"middle": [],
"last": "Katsis",
"suffix": ""
},
{
"first": "Prithviraj",
"middle": [],
"last": "Kawas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "447--459",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marina Danilevsky, Kun Qian, Ranit Aharonov, Yannis Katsis, Ban Kawas, and Prithviraj Sen. 2020. A Sur- vey of the State of Explainable AI for Natural Lan- guage Processing. In Proceedings of the 1st Con- ference of the Asia-Pacific Chapter of the Associa- tion for Computational Linguistics and the 10th In- ternational Joint Conference on Natural Language Processing, pages 447-459, Suzhou, China. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Machiavelli and the Gale-Shapley Algorithm",
"authors": [
{
"first": "L",
"middle": [
"E"
],
"last": "Dubins",
"suffix": ""
},
{
"first": "D",
"middle": [
"A"
],
"last": "Freedman",
"suffix": ""
}
],
"year": 1981,
"venue": "",
"volume": "88",
"issue": "",
"pages": "485--494",
"other_ids": {
"DOI": [
"10.2307/2321753"
]
},
"num": null,
"urls": [],
"raw_text": "L. E. Dubins and D. A. Freedman. 1981. Machiavelli and the Gale-Shapley Algorithm. The American Mathematical Monthly, 88(7):485-494. Publisher: Mathematical Association of America.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Using the triangle inequality to accelerate k-means",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Elkan",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Twentieth International Conference on International Conference on Machine Learning, ICML'03",
"volume": "",
"issue": "",
"pages": "147--153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Elkan. 2003. Using the triangle inequality to accelerate k-means. In Proceedings of the Twenti- eth International Conference on International Con- ference on Machine Learning, ICML'03, pages 147- 153, Washington, DC, USA. AAAI Press.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The role of system identity threat in conspiracy theory endorsement",
"authors": [
{
"first": "Christopher",
"middle": [
"M"
],
"last": "Federico",
"suffix": ""
},
{
"first": "Allison",
"middle": [
"L"
],
"last": "Williams",
"suffix": ""
},
{
"first": "Joseph",
"middle": [
"A"
],
"last": "Vitriol",
"suffix": ""
}
],
"year": 2018,
"venue": "European Journal of Social Psychology",
"volume": "48",
"issue": "7",
"pages": "927--938",
"other_ids": {
"DOI": [
"10.1002/ejsp.2495"
]
},
"num": null,
"urls": [],
"raw_text": "Christopher M. Federico, Allison L. Williams, and Joseph A. Vitriol. 2018. The role of system identity threat in conspiracy theory endorsement. European Journal of Social Psychology, 48(7):927-938.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "College Admissions and the Stability of Marriage. The American Mathematical Monthly",
"authors": [
{
"first": "D",
"middle": [],
"last": "Gale",
"suffix": ""
},
{
"first": "L",
"middle": [
"S"
],
"last": "Shapley",
"suffix": ""
}
],
"year": 1962,
"venue": "",
"volume": "69",
"issue": "",
"pages": "9--15",
"other_ids": {
"DOI": [
"10.1080/00029890.1962.11989827"
]
},
"num": null,
"urls": [],
"raw_text": "D. Gale and L. S. Shapley. 1962. College Admissions and the Stability of Marriage. The American Mathe- matical Monthly, 69(1):9-15.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Topic Modeling on User Stories using Word Mover's Distance",
"authors": [
{
"first": "K",
"middle": [
"J"
],
"last": "G\u00fclle",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Ford",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Ebel",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Brokhausen",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Vogelsang",
"suffix": ""
}
],
"year": 2020,
"venue": "IEEE Seventh International Workshop on Artificial Intelligence for Requirements Engineering",
"volume": "",
"issue": "",
"pages": "52--60",
"other_ids": {
"DOI": [
"10.1109/AIRE51212.2020.00015"
]
},
"num": null,
"urls": [],
"raw_text": "K. J. G\u00fclle, N. Ford, P. Ebel, F. Brokhausen, and A. Vo- gelsang. 2020. Topic Modeling on User Stories us- ing Word Mover's Distance. In 2020 IEEE Seventh International Workshop on Artificial Intelligence for Requirements Engineering (AIRE), pages 52-60.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Conspiracies and Conspiracy Theories in the Age of Trump",
"authors": [
{
"first": "Daniel",
"middle": [
"C"
],
"last": "Hellinger",
"suffix": ""
}
],
"year": 2018,
"venue": "Google-Books-ID: U3hvDwAAQBAJ",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel C. Hellinger. 2018. Conspiracies and Conspir- acy Theories in the Age of Trump. Springer. Google- Books-ID: U3hvDwAAQBAJ.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "spaCy: Industrial-strength Natural Language Processing in Python",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Honnibal, Ines Montani, Sofie Van Lan- deghem, and Adriane Boyd. 2020. spaCy: Industrial-strength Natural Language Processing in Python.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Donald Trump and vaccination: The effect of political identity, conspiracist ideation and presidential tweets on vaccine hesitancy",
"authors": [
{
"first": "Matthew",
"middle": [
"J"
],
"last": "Hornsey",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Finlayson",
"suffix": ""
},
{
"first": "Gabrielle",
"middle": [],
"last": "Chatwood",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"T"
],
"last": "Begeny",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Experimental Social Psychology",
"volume": "88",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1016/j.jesp.2019.103947"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew J. Hornsey, Matthew Finlayson, Gabrielle Chatwood, and Christopher T. Begeny. 2020. Don- ald Trump and vaccination: The effect of politi- cal identity, conspiracist ideation and presidential tweets on vaccine hesitancy. Journal of Experimen- tal Social Psychology, 88:103947.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Supervised Word Mover's Distance",
"authors": [
{
"first": "Gao",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Chuan",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Matt",
"middle": [
"J"
],
"last": "Kusner",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Sha",
"suffix": ""
},
{
"first": "Kilian",
"middle": [
"Q"
],
"last": "Weinberger",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems",
"volume": "29",
"issue": "",
"pages": "4862--4870",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gao Huang, Chuan Guo, Matt J. Kusner, Yu Sun, Fei Sha, and Kilian Q. Weinberger. 2016. Supervised Word Mover's Distance. Advances in Neural Infor- mation Processing Systems, 29:4862-4870.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A Survey of the Stable Marriage Problem and Its Variants",
"authors": [
{
"first": "Kazuo",
"middle": [],
"last": "Iwama",
"suffix": ""
},
{
"first": "Shuichi",
"middle": [],
"last": "Miyazaki",
"suffix": ""
}
],
"year": 2008,
"venue": "International Conference on Informatics Education and Research for Knowledge-Circulating Society",
"volume": "",
"issue": "",
"pages": "131--136",
"other_ids": {
"DOI": [
"10.1109/ICKS.2008.7"
]
},
"num": null,
"urls": [],
"raw_text": "Kazuo Iwama and Shuichi Miyazaki. 2008. A Sur- vey of the Stable Marriage Problem and Its Variants. In International Conference on Informatics Educa- tion and Research for Knowledge-Circulating Soci- ety (icks 2008), pages 131-136.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Pathways to conspiracy: The social and linguistic precursors of involvement in Reddit's conspiracy theory forum",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clutton",
"suffix": ""
},
{
"first": "Adam",
"middle": [
"G"
],
"last": "Dunn",
"suffix": ""
}
],
"year": 2019,
"venue": "PLOS ONE",
"volume": "14",
"issue": "11",
"pages": "",
"other_ids": {
"DOI": [
"10.1371/journal.pone.0225098"
]
},
"num": null,
"urls": [],
"raw_text": "Colin Klein, Peter Clutton, and Adam G. Dunn. 2019. Pathways to conspiracy: The social and linguistic precursors of involvement in Reddit's conspiracy theory forum. PLOS ONE, 14(11):e0225098. Pub- lisher: Public Library of Science.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "From Word Embeddings To Document Distances",
"authors": [
{
"first": "Matt",
"middle": [
"J"
],
"last": "Kusner",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [
"I"
],
"last": "Kolkin",
"suffix": ""
},
{
"first": "Kilian",
"middle": [
"Q"
],
"last": "Weinberger",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 32nd International Conference on Machine Learning",
"volume": "37",
"issue": "",
"pages": "957--966",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt J Kusner, Yu Sun, Nicholas I Kolkin, and Kilian Q Weinberger. 2015. From Word Embeddings To Doc- ument Distances. Proceedings of the 32nd Inter- national Conference on Machine Learning, 37:957- 966.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Least squares quantization in PCM",
"authors": [
{
"first": "S",
"middle": [],
"last": "Lloyd",
"suffix": ""
}
],
"year": 1982,
"venue": "Conference Name: IEEE Transactions on Information Theory",
"volume": "28",
"issue": "",
"pages": "129--137",
"other_ids": {
"DOI": [
"10.1109/TIT.1982.1056489"
]
},
"num": null,
"urls": [],
"raw_text": "S. Lloyd. 1982. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129-137. Conference Name: IEEE Transac- tions on Information Theory.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Visualizing Data using t-SNE",
"authors": [
{
"first": "Laurens",
"middle": [],
"last": "Van Der Maaten",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of Machine Learning Research",
"volume": "9",
"issue": "86",
"pages": "2579--2605",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing Data using t-SNE. Journal of Machine Learning Research, 9(86):2579-2605.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Media Manipulation and Disinformation Online. Data & Society Research Institute",
"authors": [
{
"first": "Alice",
"middle": [],
"last": "Marwick",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alice Marwick and Rebecca Lewis. 2017. Media Ma- nipulation and Disinformation Online. Data & Soci- ety Research Institute, New York.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Roots of Trumpism: Homophily and Social Feedback in Donald Trump Support on Reddit",
"authors": [
{
"first": "Joan",
"middle": [],
"last": "Massachs",
"suffix": ""
},
{
"first": "Corrado",
"middle": [],
"last": "Monti",
"suffix": ""
},
{
"first": "Gianmarco",
"middle": [],
"last": "De Francisci Morales",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Bonchi",
"suffix": ""
}
],
"year": 2020,
"venue": "12th ACM Conference on Web Science, WebSci '20",
"volume": "",
"issue": "",
"pages": "49--58",
"other_ids": {
"DOI": [
"10.1145/3394231.3397894"
]
},
"num": null,
"urls": [],
"raw_text": "Joan Massachs, Corrado Monti, Gianmarco De Fran- cisci Morales, and Francesco Bonchi. 2020. Roots of Trumpism: Homophily and Social Feedback in Donald Trump Support on Reddit. In 12th ACM Conference on Web Science, WebSci '20, pages 49- 58, New York, NY, USA. Association for Comput- ing Machinery.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction",
"authors": [
{
"first": "Leland",
"middle": [],
"last": "Mcinnes",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Healy",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Melville",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.03426"
]
},
"num": null,
"urls": [],
"raw_text": "Leland McInnes, John Healy, and James Melville. 2020. UMAP: Uniform Manifold Approxima- tion and Projection for Dimension Reduction. arXiv:1802.03426 [cs, stat]. ArXiv: 1802.03426.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Efficient Estimation of Word Representations in Vector Space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Repre- sentations in Vector Space. arXiv:1301.3781 [cs].",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Conspiracy Endorsement as Motivated Reasoning: The Moderating Roles of Political Knowledge and Trust",
"authors": [
{
"first": "Joanne",
"middle": [
"M"
],
"last": "Miller",
"suffix": ""
},
{
"first": "Kyle",
"middle": [
"L"
],
"last": "Saunders",
"suffix": ""
},
{
"first": "Christina",
"middle": [
"E"
],
"last": "Farhart",
"suffix": ""
}
],
"year": 2016,
"venue": "American Journal of Political Science",
"volume": "60",
"issue": "4",
"pages": "824--844",
"other_ids": {
"DOI": [
"10.1111/ajps.12234"
]
},
"num": null,
"urls": [],
"raw_text": "Joanne M. Miller, Kyle L. Saunders, and Christina E. Farhart. 2016. Conspiracy Endorsement as Moti- vated Reasoning: The Moderating Roles of Political Knowledge and Trust. American Journal of Political Science, 60(4):824-844.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Modeling Interpersonal Linguistic Coordination in Conversations using Word Mover's Distance",
"authors": [
{
"first": "Md",
"middle": [],
"last": "Nasir",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [
"Nallan"
],
"last": "Chakravarthula",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Baucom",
"suffix": ""
},
{
"first": "David",
"middle": [
"C"
],
"last": "Atkins",
"suffix": ""
},
{
"first": "Panayiotis",
"middle": [],
"last": "Georgiou",
"suffix": ""
},
{
"first": "Shrikanth",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.06002"
]
},
"num": null,
"urls": [],
"raw_text": "Md Nasir, Sandeep Nallan Chakravarthula, Brian Bau- com, David C. Atkins, Panayiotis Georgiou, and Shrikanth Narayanan. 2019. Modeling Interper- sonal Linguistic Coordination in Conversations us- ing Word Mover's Distance. arXiv:1904.06002 [cs].",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Post-presumption argumentation and the post-truth world: on the conspiracy rhetoric of Donald Trump",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Neville-Shepard",
"suffix": ""
}
],
"year": 2019,
"venue": "Argumentation and Advocacy",
"volume": "55",
"issue": "3",
"pages": "175--193",
"other_ids": {
"DOI": [
"10.1080/10511431.2019.1603027"
]
},
"num": null,
"urls": [],
"raw_text": "Ryan Neville-Shepard. 2019. Post-presumption argumentation and the post-truth world: on the conspiracy rhetoric of Donald Trump. Argumentation and Advocacy, 55(3):175-193. Publisher: Routledge _eprint: https://doi.org/10.1080/10511431.2019.1603027.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Online Political Discourse in the Trump Era",
"authors": [
{
"first": "Rishab",
"middle": [],
"last": "Nithyanand",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Schaffner",
"suffix": ""
},
{
"first": "Phillipa",
"middle": [],
"last": "Gill",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.05303[cs].ArXiv:1711.05303"
]
},
"num": null,
"urls": [],
"raw_text": "Rishab Nithyanand, Brian Schaffner, and Phillipa Gill. 2017. Online Political Discourse in the Trump Era. arXiv:1711.05303 [cs]. ArXiv: 1711.05303.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A Linear Time Histogram Metric for Improved SIFT Matching",
"authors": [
{
"first": "Ofir",
"middle": [],
"last": "Pele",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Werman",
"suffix": ""
}
],
"year": 2008,
"venue": "Computer Vision -ECCV",
"volume": "",
"issue": "",
"pages": "495--508",
"other_ids": {
"DOI": [
"10.1007/978-3-540-88690-7_37"
]
},
"num": null,
"urls": [],
"raw_text": "Ofir Pele and Michael Werman. 2008. A Linear Time Histogram Metric for Improved SIFT Matching. In Computer Vision -ECCV 2008, Lecture Notes in Computer Science, pages 495-508, Berlin, Heidel- berg. Springer.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Fast and robust Earth Mover's Distances",
"authors": [
{
"first": "Ofir",
"middle": [],
"last": "Pele",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Werman",
"suffix": ""
}
],
"year": 2009,
"venue": "2009 IEEE 12th International Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "2380--7504",
"other_ids": {
"DOI": [
"10.1109/ICCV.2009.5459199"
]
},
"num": null,
"urls": [],
"raw_text": "Ofir Pele and Michael Werman. 2009. Fast and robust Earth Mover's Distances. In 2009 IEEE 12th In- ternational Conference on Computer Vision, pages 460-467. ISSN: 2380-7504.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Glove: Global Vectors for Word Representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global Vectors for Word Representation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "What Makes People Join Conspiracy Communities? Role of Social Factors in Conspiracy Engagement",
"authors": [
{
"first": "Shruti",
"middle": [],
"last": "Phadke",
"suffix": ""
},
{
"first": "Mattia",
"middle": [],
"last": "Samory",
"suffix": ""
},
{
"first": "Tanushree",
"middle": [],
"last": "Mitra",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the ACM on Human-Computer Interaction",
"volume": "4",
"issue": "CSCW3",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3432922"
]
},
"num": null,
"urls": [],
"raw_text": "Shruti Phadke, Mattia Samory, and Tanushree Mitra. 2021. What Makes People Join Conspiracy Com- munities? Role of Social Factors in Conspiracy Engagement. Proceedings of the ACM on Human- Computer Interaction, 4(CSCW3):223:1-223:30.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Deep Stories, Nostalgia Narratives, and Fake News: Storytelling in the Trump Era",
"authors": [
{
"first": "Francesca",
"middle": [],
"last": "Polletta",
"suffix": ""
},
{
"first": "Jessica",
"middle": [],
"last": "Callahan",
"suffix": ""
}
],
"year": 2019,
"venue": "Politics of Meaning/Meaning of Politics: Cultural Sociology of the 2016 U.S. Presidential Election",
"volume": "",
"issue": "",
"pages": "55--73",
"other_ids": {
"DOI": [
"10.1007/978-3-319-95945-0_4"
]
},
"num": null,
"urls": [],
"raw_text": "Francesca Polletta and Jessica Callahan. 2019. Deep Stories, Nostalgia Narratives, and Fake News: Sto- rytelling in the Trump Era. In Jason L. Mast and Jeffrey C. Alexander, editors, Politics of Mean- ing/Meaning of Politics: Cultural Sociology of the 2016 U.S. Presidential Election, Cultural Sociol- ogy, pages 55-73. Springer International Publishing, Cham.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Fast paraphrase extraction in Ancient Greek literature. it -Information Technology",
"authors": [
{
"first": "Marcus",
"middle": [],
"last": "P\u00f6ckelmann",
"suffix": ""
},
{
"first": "Janis",
"middle": [],
"last": "D\u00e4hne",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Molitor",
"suffix": ""
}
],
"year": 2020,
"venue": "Publisher: De Gruyter Oldenbourg Section: it -Information Technology",
"volume": "62",
"issue": "",
"pages": "75--89",
"other_ids": {
"DOI": [
"10.1515/itit-2019-0042"
]
},
"num": null,
"urls": [],
"raw_text": "Marcus P\u00f6ckelmann, Janis D\u00e4hne, J\u00f6rg Ritter, and Paul Molitor. 2020. Fast paraphrase extraction in An- cient Greek literature. it -Information Technology, 62(2):75-89. Publisher: De Gruyter Oldenbourg Section: it -Information Technology.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Emotion computing using Word Mover's Distance features based on Ren_cecps",
"authors": [
{
"first": "Fuji",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Ning",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "PLOS ONE",
"volume": "13",
"issue": "4",
"pages": "",
"other_ids": {
"DOI": [
"10.1371/journal.pone.0194136"
]
},
"num": null,
"urls": [],
"raw_text": "Fuji Ren and Ning Liu. 2018. Emotion computing using Word Mover's Distance features based on Ren_cecps. PLOS ONE, 13(4):e0194136. Pub- lisher: Public Library of Science.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Does Platform Migration Compromise Content Moderation?",
"authors": [
{
"first": "Manoel",
"middle": [
"Horta"
],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Shagun",
"middle": [],
"last": "Jhaver",
"suffix": ""
},
{
"first": "Savvas",
"middle": [],
"last": "Zannettou",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Blackburn",
"suffix": ""
},
{
"first": "Emiliano",
"middle": [],
"last": "De Cristofaro",
"suffix": ""
},
{
"first": "Gianluca",
"middle": [],
"last": "Stringhini",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "West",
"suffix": ""
}
],
"year": 2020,
"venue": "Evidence from r/The_donald and r/Incels",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.10397[cs].ArXiv:2010.10397"
]
},
"num": null,
"urls": [],
"raw_text": "Manoel Horta Ribeiro, Shagun Jhaver, Savvas Zannet- tou, Jeremy Blackburn, Emiliano De Cristofaro, Gi- anluca Stringhini, and Robert West. 2020. Does Platform Migration Compromise Content Modera- tion? Evidence from r/The_donald and r/Incels. arXiv:2010.10397 [cs]. ArXiv: 2010.10397.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "A metric for distributions with applications to image databases",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Rubner",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Tomasi",
"suffix": ""
},
{
"first": "L",
"middle": [
"J"
],
"last": "Guibas",
"suffix": ""
}
],
"year": 1998,
"venue": "Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271)",
"volume": "",
"issue": "",
"pages": "59--66",
"other_ids": {
"DOI": [
"10.1109/ICCV.1998.710701"
]
},
"num": null,
"urls": [],
"raw_text": "Y. Rubner, C. Tomasi, and L.J. Guibas. 1998. A metric for distributions with applications to image databases. In Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271), pages 59-66.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Conspiracies Online: User Discussions in a Conspiracy Community Following Dramatic Events",
"authors": [
{
"first": "Mattia",
"middle": [],
"last": "Samory",
"suffix": ""
},
{
"first": "Tanushree",
"middle": [],
"last": "Mitra",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the International AAAI Conference on Web and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mattia Samory and Tanushree Mitra. 2018. Conspira- cies Online: User Discussions in a Conspiracy Com- munity Following Dramatic Events. Proceedings of the International AAAI Conference on Web and So- cial Media, 12(1). Number: 1.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Gaming Reddit's Algorithm: r/the_donald, Amplification, and the Rhetoric of Sorting",
"authors": [
{
"first": "Ryan",
"middle": [
"P"
],
"last": "Shepherd",
"suffix": ""
}
],
"year": 2020,
"venue": "Computers and Composition",
"volume": "56",
"issue": "",
"pages": "1--14",
"other_ids": {
"DOI": [
"10.1016/j.compcom.2020.102572"
]
},
"num": null,
"urls": [],
"raw_text": "Ryan P. Shepherd. 2020. Gaming Reddit's Algorithm: r/the_donald, Amplification, and the Rhetoric of Sorting. Computers and Composition, 56:1-14.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "How populism and conservative media fuel conspiracy beliefs about COVID-19 and what it means for COVID-19 behaviors",
"authors": [
{
"first": "Dominik",
"middle": [
"A"
],
"last": "Stecula",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Pickup",
"suffix": ""
}
],
"year": 2021,
"venue": "Research & Politics",
"volume": "8",
"issue": "1",
"pages": "",
"other_ids": {
"DOI": [
"10.1177/2053168021993979"
]
},
"num": null,
"urls": [],
"raw_text": "Dominik A. Stecula and Mark Pickup. 2021. How populism and conservative media fuel conspir- acy beliefs about COVID-19 and what it means for COVID-19 behaviors. Research & Politics, 8(1):2053168021993979. Publisher: SAGE Publi- cations Ltd.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Concept Mover's Distance: measuring concept engagement via word embeddings in texts",
"authors": [
{
"first": "Dustin",
"middle": [
"S"
],
"last": "Stoltz",
"suffix": ""
},
{
"first": "Marshall",
"middle": [
"A"
],
"last": "Taylor",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of Computational Social Science",
"volume": "2",
"issue": "2",
"pages": "293--313",
"other_ids": {
"DOI": [
"10.1007/s42001-019-00048-6"
]
},
"num": null,
"urls": [],
"raw_text": "Dustin S. Stoltz and Marshall A. Taylor. 2019. Concept Mover's Distance: measuring concept engagement via word embeddings in texts. Journal of Computa- tional Social Science, 2(2):293-313.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Energy and Policy Considerations for Deep Learning in NLP",
"authors": [
{
"first": "Emma",
"middle": [],
"last": "Strubell",
"suffix": ""
},
{
"first": "Ananya",
"middle": [],
"last": "Ganesh",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.02243"
]
},
"num": null,
"urls": [],
"raw_text": "Emma Strubell, Ananya Ganesh, and Andrew McCal- lum. 2019. Energy and Policy Considerations for Deep Learning in NLP. arXiv:1906.02243 [cs].",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Concept Class Analysis: A Method for Identifying Cultural Schemas in Texts",
"authors": [
{
"first": "Marshall",
"middle": [
"A"
],
"last": "Taylor",
"suffix": ""
},
{
"first": "Dustin",
"middle": [
"S"
],
"last": "Stoltz",
"suffix": ""
}
],
"year": 2020,
"venue": "Sociological Science",
"volume": "7",
"issue": "",
"pages": "544--569",
"other_ids": {
"DOI": [
"10.15195/v7.a23"
]
},
"num": null,
"urls": [],
"raw_text": "Marshall A. Taylor and Dustin S. Stoltz. 2020. Concept Class Analysis: A Method for Identifying Cultural Schemas in Texts. Sociological Science, 7:544-569.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "An Efficient Shared-memory Parallel Sinkhorn-Knopp Algorithm to Compute the Word Mover's Distance",
"authors": [
{
"first": "Jesmin",
"middle": [
"Jahan"
],
"last": "Tithi",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Petrini",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.06727"
]
},
"num": null,
"urls": [],
"raw_text": "Jesmin Jahan Tithi and Fabrizio Petrini. 2021. An Efficient Shared-memory Parallel Sinkhorn-Knopp Algorithm to Compute the Word Mover's Distance. arXiv:2005.06727 [cs, stat]. ArXiv: 2005.06727.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Why do people believe COVID-19 conspiracy theories?",
"authors": [
{
"first": "Joseph",
"middle": [
"E"
],
"last": "Uscinski",
"suffix": ""
},
{
"first": "Adam",
"middle": [
"M"
],
"last": "Enders",
"suffix": ""
},
{
"first": "Casey",
"middle": [],
"last": "Klofstad",
"suffix": ""
},
{
"first": "Michelle",
"middle": [],
"last": "Seelig",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Funchion",
"suffix": ""
},
{
"first": "Caleb",
"middle": [],
"last": "Everett",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Wuchty",
"suffix": ""
},
{
"first": "Kamal",
"middle": [],
"last": "Premaratne",
"suffix": ""
},
{
"first": "Manohar",
"middle": [],
"last": "Murthi",
"suffix": ""
}
],
"year": 2020,
"venue": "Harvard Kennedy School Misinformation Review",
"volume": "",
"issue": "3",
"pages": "",
"other_ids": {
"DOI": [
"10.37016/mr-2020-015"
]
},
"num": null,
"urls": [],
"raw_text": "Joseph E. Uscinski, Adam M. Enders, Casey Klofs- tad, Michelle Seelig, John Funchion, Caleb Everett, Stefan Wuchty, Kamal Premaratne, and Manohar Murthi. 2020. Why do people believe COVID-19 conspiracy theories? Harvard Kennedy School Mis- information Review, 1(3).",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Attention is All you Need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. Advances in Neural Information Process- ing Systems, 30.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Speeding up Word Mover's Distance and Its Variants via Properties of Distances Between Embeddings",
"authors": [
{
"first": "Matheus",
"middle": [],
"last": "Werner",
"suffix": ""
},
{
"first": "Eduardo",
"middle": [],
"last": "Laber",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "2020",
"issue": "",
"pages": "2204--2211",
"other_ids": {
"DOI": [
"10.3233/FAIA200346"
]
},
"num": null,
"urls": [],
"raw_text": "Matheus Werner and Eduardo Laber. 2020. Speeding up Word Mover's Distance and Its Variants via Prop- erties of Distances Between Embeddings. ECAI 2020, pages 2204-2211. Publisher: IOS Press.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Topic mover's distance based document classification",
"authors": [
{
"first": "X",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 IEEE 17th International Conference on Communication Technology (ICCT)",
"volume": "",
"issue": "",
"pages": "1998--2002",
"other_ids": {
"DOI": [
"10.1109/ICCT.2017.8359979"
]
},
"num": null,
"urls": [],
"raw_text": "X. Wu and H. Li. 2017. Topic mover's distance based document classification. In 2017 IEEE 17th International Conference on Communication Technology (ICCT), pages 1998-2002. ISSN: 2576-7828.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Word Rotator's Distance",
"authors": [
{
"first": "Sho",
"middle": [],
"last": "Yokoi",
"suffix": ""
},
{
"first": "Ryo",
"middle": [],
"last": "Takahashi",
"suffix": ""
},
{
"first": "Reina",
"middle": [],
"last": "Akama",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "2944--2960",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.236"
]
},
"num": null,
"urls": [],
"raw_text": "Sho Yokoi, Ryo Takahashi, Reina Akama, Jun Suzuki, and Kentaro Inui. 2020. Word Rotator's Distance. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2944-2960, Online. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "An illustration of WMD in action by Kusner et al. (2015). (Top:) The components of the WMD metric between a query D 0 and two sentences D 1 , D 2 (with equal BOW distance). The arrows represent flow between two words and are labeled with their distance contribution. (Bottom:) The flow between two sentences D 3 and D 0 with different numbers of words. This mismatch causes the WMD to move words to multiple similar words.",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "corpus: conspiracy theories and American conservatism online Several researchers have identified conspiratorial thinking as a feature of the (post-)Trump era of American conservative politics",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "Distribution of WMD distances for random document pairs (one from each subreddit) at t 0 and t 1 .",
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"text": "The distribution of word distance changes over time, calculated as the aggregate distance each word contributed to t 1 document pairs minus the aggregate distance it added to t 0 document pairs. Despite the general stability of document-level distances, many individual words show considerable differences in distance contributed from t 0 to t 1 .",
"uris": null
},
"TABREF2": {
"html": null,
"content": "<table><tr><td>Word</td><td colspan=\"2\">CC (%) SWiC</td></tr><tr><td>hoax</td><td>-99.2</td><td>discredit, truth, bogus</td></tr><tr><td>fraudulent</td><td>-98.5</td><td>theft, conspire, scam</td></tr><tr><td>brainwash</td><td>-95.3</td><td>teacher, public_school</td></tr><tr><td>threat</td><td>-67.0</td><td>mitigate, deter</td></tr><tr><td>jews</td><td>-60.4</td><td>jewish, zionist</td></tr><tr><td colspan=\"2\">surveillance -49.6</td><td>nsa, agency, agent</td></tr><tr><td>alex_jones</td><td>-39.0</td><td>infowars, interview</td></tr><tr><td colspan=\"2\">propaganda -33.8</td><td>false_flag, agenda</td></tr><tr><td>reality</td><td>-23.9</td><td>veil, perceive</td></tr></table>",
"text": "A selection of clusters produced by WMDecompose. Clustering facilitates inductive thematic discovery by grouping semantically similar words.",
"num": null,
"type_str": "table"
},
"TABREF3": {
"html": null,
"content": "<table/>",
"text": "",
"num": null,
"type_str": "table"
},
"TABREF5": {
"html": null,
"content": "<table><tr><td>: Decomposed word-level WMD w distances for</td></tr><tr><td>moving from the full set of positive reviews (left) to</td></tr><tr><td>the full set of negative reviews and vice versa (right).</td></tr><tr><td>Only the 12 words w that contributed the most are</td></tr><tr><td>shown.</td></tr></table>",
"text": "",
"num": null,
"type_str": "table"
},
"TABREF7": {
"html": null,
"content": "<table/>",
"text": "The CMD c for the top 10 c when moving from the positive review set to the negative (top) and vice versa (bottom). The keywords and their order for each cluster are determined by the WMD w of each word.",
"num": null,
"type_str": "table"
}
}
}
}