{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:40:10.390478Z"
},
"title": "Topology of Word Embeddings: Singularities Reflect Polysemy",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Jakubowski",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Heinrich Heine University D\u00fcsseldorf",
"location": {}
},
"email": "jakubowskialexander@gmail.com"
},
{
"first": "Milica",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Heinrich Heine University",
"location": {
"settlement": "D\u00fcsseldorf"
}
},
"email": ""
},
{
"first": "Marcus",
"middle": [],
"last": "Zibrowius",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Heinrich Heine University",
"location": {
"settlement": "D\u00fcsseldorf"
}
},
"email": "marcus.zibrowius@cantab.net"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The manifold hypothesis suggests that word vectors live on a submanifold within their ambient vector space. We argue that we should, more accurately, expect them to live on a pinched manifold: a singular quotient of a manifold obtained by identifying some of its points. The identified, singular points correspond to polysemous words, i.e. words with multiple meanings. Our point of view suggests that monosemous and polysemous words can be distinguished based on the topology of their neighbourhoods. We present two kinds of empirical evidence to support this point of view: (1) We introduce a topological measure of polysemy based on persistent homology that correlates well with the actual number of meanings of a word. (2) We propose a simple, topologically motivated solution to the SemEval-2010 task on Word Sense Induction & Disambiguation that produces competitive results.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The manifold hypothesis suggests that word vectors live on a submanifold within their ambient vector space. We argue that we should, more accurately, expect them to live on a pinched manifold: a singular quotient of a manifold obtained by identifying some of its points. The identified, singular points correspond to polysemous words, i.e. words with multiple meanings. Our point of view suggests that monosemous and polysemous words can be distinguished based on the topology of their neighbourhoods. We present two kinds of empirical evidence to support this point of view: (1) We introduce a topological measure of polysemy based on persistent homology that correlates well with the actual number of meanings of a word. (2) We propose a simple, topologically motivated solution to the SemEval-2010 task on Word Sense Induction & Disambiguation that produces competitive results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Static word embeddings attempt to represent words by vectors in a high-dimensional vector space R n in such a way that words of similar meaning are represented by (cosine) similar vectors, and vice versa. According to the manifold hypothesis, we should expect these vectors to lie within a lowerdimensional word space W, a subspace of R n that resembles a manifold. To what extent this hypothesis is true in this and other contexts is the subject of ongoing research (Fefferman et al., 2016) . In this paper, we argue and demonstrate that for the word space W, polysemy is a principal obstruction to any strict interpretation of the manifold hypothesis.",
"cite_spans": [
{
"start": 467,
"end": 491,
"text": "(Fefferman et al., 2016)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "That polysemy presents a serious obstacle to the creation of adequate word vector representations This work is licensed under a Creative Commons Attribution 4.0 International Licence. Licence details: http: //creativecommons.org/licenses/by/4.0/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "is clear from the outset. Take, for example, a polysemous word like \"mole\". We would want the vectors representing \"birthmark\" and \"counterspy\" to be similar to the vector of \"mole\", but not similar to each other. This is impossible. In order for similarity of vectors to accurately encode similarity in meaning, we need vectors representing meanings, not words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Let us therefore hypothesize a space of meanings M that accurately represents all possible meanings and their similarities. Our argument is a simple topological observation based on the relationship between this space M and the word space W. For an idealised language, where there is a bijection between meanings and words, these two spaces would agree. For a natural language, however, multiple points of M get identified with a single point of W. This process corresponds to a topological construction that we refer to as pinching (see Figure 2 ). It is easy to see that a space resulting from pinching cannot be a manifold. Thus, even if the space of meanings M satisfies the manifold hypothesis perfectly, the pinched space W cannot satisfy the hypothesis near polysemous words. 1 Based on this intuition, and using tools from Topological Data Analysis, we introduce a measure for the polysemy of a word based on its vector embedding. Our experiments show that this topological polysemy (TPS) correlates well with the actual number of meanings that a word has. In addition, we present an approach to the SemEval-2010 task on Word Sense Induction & Disambiguation (task 14) . This approach is independent of TPS, but based on the same ideas. Despite its simplicity, it is almost on par with the best performing algorithm within the 1 It may appear that a similar complication arises from synonyms, multiple words with a single meaning. However, synonyms are irrelevant for our analysis; see the discussion at the end of Section 3.1. 2010 challenge, and outperforms far more complicated approaches. We see these experimental results as strong evidence that our interpretation of the word space W as a pinched manifold is more adequate than a more na\u00efve view of W as an actual manifold.",
"cite_spans": [
{
"start": 783,
"end": 784,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 538,
"end": 546,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u223c = \u223c = (a) (b) (c) \u223c = \u223c = (d) (e) (f) \u223c = (g) (h) (i) (j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A space, for us, is a topological space. Readers unfamiliar with the notion may simply think of metric spaces, or indeed of subspaces of euclidean space R n . Two such spaces are considered equivalent, or homeomorphic, if they can be deformed into each other. We will not make this precise here, but we hope that Figure 1 , in which homeomorphic spaces are connected by the symbol \" \u223c =\", gives a clear intuition. As flexible as this notion may appear, the deformations considered do keep certain properties of a space invariant. Crucially, homeomorphic spaces always have the same number of connected components and the same number of holes. Topo-; Figure 2 : The effect of pinching on the torus (example (g) from Figure 1 ): before (left) and after the identification of three marked points to a singular point (right) logists have developed a myriad of more subtle invariants that allow us to decide whether two spaces are homeomorphic. We refer to Hatcher (2002) for an introduction into this vast field.",
"cite_spans": [
{
"start": 813,
"end": 820,
"text": "(right)",
"ref_id": null
},
{
"start": 952,
"end": 966,
"text": "Hatcher (2002)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 313,
"end": 321,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 650,
"end": 658,
"text": "Figure 2",
"ref_id": null
},
{
"start": 715,
"end": 723,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Background 2.1 Topology",
"sec_num": "2"
},
{
"text": "Two kinds of spaces will be important for us: manifolds and pinched manifolds. A (topological) manifold is a space in which each point has a neighbourhood homeomorphic to an open ball of R d for some d (cf. Hatcher, 2002, \u00a7 3. 3). 2 We call d the local dimension of the manifold at that point. The spaces (a), (b) and (c) in Figure 1 are manifolds since each point has a neighbourhood homeomorphic to an open interval in R 1 , and so are the spaces (d), (e), (f). The spaces (g) and (h) are manifolds because each point has a neighbourhood homeomorphic to an open disk in R 2 . Space (i), on the other hand, is not a manifold, because the point of intersection has no neighbourhood homeomorphic to an open ball of any dimension, and neither is space (j), because the manifold condition is violated at all boundary points. The spaces \"without corners\", i.e. examples (a), (c), (d), (g) and (h) in Figure 1 are not only topological manifolds but even differentiable manifolds, but this distinction will be of no importance to us.",
"cite_spans": [
{
"start": 207,
"end": 226,
"text": "Hatcher, 2002, \u00a7 3.",
"ref_id": null
},
{
"start": 231,
"end": 232,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 325,
"end": 333,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 896,
"end": 904,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Background 2.1 Topology",
"sec_num": "2"
},
{
"text": "By a pinched manifold, we will mean a space obtained from a manifold by marking a finite number of points in different colours, and identifying (\"glueing together\") all points of the same colour, as illustrated in Figure 2 . In a pinched manifold, the neighbourhoods of most points still look like open balls, but the neighbourhoods of the identified points look like several balls glued together at their centres. We will call these identified points singular points.",
"cite_spans": [],
"ref_spans": [
{
"start": 214,
"end": 222,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Background 2.1 Topology",
"sec_num": "2"
},
{
"text": "Singular points can thus easily be distinguished from non-singular points by the topology of their neighbourhoods. More precisely, we can distin- guish them by counting the number of connected components of their punctured neighbourhoods: neighbourhoods of a point from which the point itself has been removed. The punctured neighbourhood of a non-singular point is a single punctured ball, and thus is connected, at least in dimensions d \u2265 2. The punctured neighbourhood of a singular point obtained by identifying k > 1 points, on the other hand, is a disjoint union of k punctured balls, and thus has several connected components. Thus, in dimensions d \u2265 2, a point is singular if and only if small punctured neighbourhoods of it have more than one connected component.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Topology",
"sec_num": "2"
},
{
"text": "Puncturing the neighbourhood, i.e. removing the centre, is crucial for this distinction. The unpunctured neighbourhoods of singular and non-singular points are not distinguishable by the usual topological invariants. (In technical terms, the neighbourhoods of both types of points are contractible.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Topology",
"sec_num": "2"
},
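The component-counting criterion above can be illustrated on synthetic data. The following sketch is our illustration, not code from the paper: it builds an annular punctured neighbourhood of the tangency point of two unit spheres, a stand-in for a singular point where two sheets were identified, and counts components by single-linkage clustering at a fixed scale. The grid resolution and the threshold eps are our own choices.

```python
import math

def annulus_on_sphere(sign, n_theta=100, radii=(0.30, 0.36, 0.42, 0.48, 0.54, 0.60)):
    """Grid of points on the unit sphere centred at (0, 0, sign), restricted
    to an annular punctured neighbourhood of the origin (the tangency point)."""
    pts = []
    for s in radii:
        z = sign * (1.0 - math.sqrt(1.0 - s * s))
        for k in range(n_theta):
            t = 2.0 * math.pi * k / n_theta
            pts.append((s * math.cos(t), s * math.sin(t), z))
    return pts

def n_components(points, eps):
    """Single-linkage at scale eps: join any two points closer than eps and
    count the resulting clusters (a proxy for connected components)."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) < eps:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(len(points))})

# A punctured neighbourhood of the singular point (two identified sheets)
# falls into two pieces; around a non-singular point it stays connected.
pinched = annulus_on_sphere(+1) + annulus_on_sphere(-1)
smooth = annulus_on_sphere(+1)
```

With eps = 0.08 each annulus is internally connected (adjacent grid points are at most about 0.073 apart) but the two sheets are at least about 0.092 apart, so the pinched neighbourhood yields two components and the smooth one yields one.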
{
"text": "Topological data analysis (TDA) is an instrument for extracting topological information from a point cloud, that is a finite set of vectors",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topological data analysis",
"sec_num": "2.2"
},
{
"text": "W 0 = {p 1 , p 2 , . . . , p N } \u2282 R n .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topological data analysis",
"sec_num": "2.2"
},
{
"text": "The point cloud itself is trivial from a topological point of view. The fundamental assumption of TDA is that the vectors of W 0 are not randomly distributed but instead are sampled from some underlying space W \u2282 R n which -unlike the point cloud itself -is topologically interesting. A human immediately recognises that the points in Figure 3 have been sampled from a circle. TDA provides algorithms that encode this intuition, and extend it to higher dimensions.",
"cite_spans": [],
"ref_spans": [
{
"start": 335,
"end": 343,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Topological data analysis",
"sec_num": "2.2"
},
{
"text": "One such algorithm is persistent homology. The basic idea is to replace W 0 with the union W r of all open balls of a certain radius r centred at the points of W 0 . As we vary this radius, we obtain a sequence of spaces, starting for r = 0 with the point cloud itself and ending at some high value of r with a space in which all balls are merged into a single big blob. We compute certain topological invariants, the so-called Betti numbers b i , for each space W r . The Betti number b i counts certain idimensional features of the space. For example, b 0 is the number of connected components and b 1 is the \"number of holes\"; both are equal to 1 for the two spaces in the lower half of Figure 3 . The radii at which different i-dimensional features appear and disappear can be summarized into a multiset and visualized as a two-dimensional persistence diagram D as in Figure 4 . Each dot in this diagram encodes the life span of a distinct feature: its horizontal coordinate is the smallest radius r at which the feature is present in W r , its vertical coordinate is the largest radius r at which it is present. Points that lie far off the diagonal correspond to features that persist across a wide range of values of r, and are hence likely to reflect features of the underlying space W. For technical reasons, every point on the diagonal is also included in the persistence diagram D with infinite multiplicity.",
"cite_spans": [],
"ref_spans": [
{
"start": 690,
"end": 698,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 872,
"end": 880,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Topological data analysis",
"sec_num": "2.2"
},
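Degree-zero persistence as described here can be sketched in a few lines: every component is born at r = 0, and two components merge when their balls touch, so the finite death radii are exactly the minimum-spanning-tree edge lengths (divided by two in the balls-of-radius-r picture; other conventions differ by this factor). This is our own minimal sketch, not the paper's implementation, which uses the GUDHI library.

```python
import math

def persistence_diagram_deg0(points):
    """Finite (birth, death) pairs of the degree-0 persistence diagram of a
    point cloud; the one component that never dies is omitted.  Kruskal's
    algorithm over all pairwise distances: each accepted spanning-tree edge
    of length d kills one component at radius d / 2, when the balls touch."""
    edges = sorted(
        (math.dist(p, q), i, j)
        for i, p in enumerate(points)
        for j, q in enumerate(points)
        if i < j
    )
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    diagram = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            diagram.append((0.0, d / 2.0))
    return diagram

# Three well-separated clusters on the line: two short-lived bars (the
# within-cluster merges) and two long-lived bars (the cluster merges).
diagram = persistence_diagram_deg0([(0.0,), (0.1,), (5.0,), (5.1,), (10.0,)])
```

For the five points above the sorted deaths are approximately 0.05, 0.05, 2.45, 2.45: the two dots far from the diagonal reflect that the underlying space has three components.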
{
"text": "The Wasserstein distance provides a notion of distance between two such persistence diagrams, and hence a measure of similarity between different point clouds and their underlying spaces. For two diagrams D andD it is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topological data analysis",
"sec_num": "2.2"
},
{
"text": "W (D,D) := inf \u03b7:D\u2192D x\u2208D x \u2212 \u03b7(x) \u221e",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topological data analysis",
"sec_num": "2.2"
},
{
"text": "where \u03b7 runs over all bijections between the two diagrams. As all points on the diagonal are included in both diagrams, such bijections always exist.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topological data analysis",
"sec_num": "2.2"
},
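For small diagrams, the infimum in this definition can be computed by brute force: pad each diagram with diagonal points (a point (b, d) can always be matched to its nearest diagonal point at sup-norm cost (d \u2212 b)/2) and minimize over all pairings. The sketch below is ours and is feasible only for a handful of points; practical implementations, such as the one in GUDHI, solve an optimal-matching problem instead.

```python
from itertools import permutations

def wasserstein(D1, D2):
    """Wasserstein distance between two small persistence diagrams by brute
    force over all matchings.  None stands for a point on the diagonal."""
    a = list(D1) + [None] * len(D2)
    b = list(D2) + [None] * len(D1)
    def cost(x, y):
        if x is None and y is None:
            return 0.0
        if x is None:                      # match y to the diagonal
            return (y[1] - y[0]) / 2.0
        if y is None:                      # match x to the diagonal
            return (x[1] - x[0]) / 2.0
        return max(abs(x[0] - y[0]), abs(x[1] - y[1]))  # sup-norm
    return min(sum(cost(x, y) for x, y in zip(a, perm))
               for perm in permutations(b))
```

For example, the distance between the one-bar diagrams [(0, 1)] and [(0, 1.2)] is 0.2 (match the bars to each other), while the distance between [(0, 1)] and the empty diagram is 0.5, the cost of pushing the bar to the diagonal. The latter quantity, the distance to the empty diagram, is the Wasserstein norm that appears later in the paper.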
{
"text": "The computation of persistent homology can be restricted to a range of degrees i. In this paper, we will concentrate on persistent homology in degree i = 0, which is essentially a systematic application of single-linkage clustering. Computations in higher dimensions quickly become very expensive. For an in-depth discussion of the concepts mentioned in this section we recommend (Edelsbrunner and Harer, 2010) .",
"cite_spans": [
{
"start": 380,
"end": 410,
"text": "(Edelsbrunner and Harer, 2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topological data analysis",
"sec_num": "2.2"
},
{
"text": "The distributional hypothesis states that \"the meaning of words lies in their use\" (Wittgenstein, 1953) . This provides the basis for distributional semantics, a data driven study of word meanings. Words are modelled as vectors in such a way that (cosine) similarity of vectors corresponds to similarity in the distributions of the corresponding words in natural language, and hence to semantic similarity. In the most na\u00efve approaches, the dimension of these vectors corresponds to the number of distinct words in the language. More sophisticated implementations in which word vectors are real-valued but of significantly smaller dimension are popularly known as word vector embeddings. They have proven important for various tasks of natural language processing (Collobert et al., 2011; Lubis et al., 2020) .",
"cite_spans": [
{
"start": 83,
"end": 103,
"text": "(Wittgenstein, 1953)",
"ref_id": "BIBREF30"
},
{
"start": 764,
"end": 788,
"text": "(Collobert et al., 2011;",
"ref_id": "BIBREF4"
},
{
"start": 789,
"end": 808,
"text": "Lubis et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word vector embeddings",
"sec_num": "2.3"
},
{
"text": "Early word vector embeddings were constructed in latent semantic analysis using singular value decomposition. Neural methods were introduced by Bengio et al. (2003) , and popularised by the algorithms word2vec (Mikolov et al., 2013a,b) and GloVe (Pennington et al., 2014) . Our method of choice in this paper is fastText (Bojanowski et al., 2017) , which can produce high-quality embeddings from relatively small corpora. All of these methods produce static embeddings: they assign to each word a single, context-independent vector.",
"cite_spans": [
{
"start": 144,
"end": 164,
"text": "Bengio et al. (2003)",
"ref_id": "BIBREF2"
},
{
"start": 210,
"end": 235,
"text": "(Mikolov et al., 2013a,b)",
"ref_id": null
},
{
"start": 246,
"end": 271,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF25"
},
{
"start": 321,
"end": 346,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word vector embeddings",
"sec_num": "2.3"
},
{
"text": "There is, of course, a lot of existing and ongoing research to overcome the difficulties inher-ent in adequately representing polysemous words. One way to address polysemy is to produce multiple, context-dependent embeddings for the same word. The deep learning approaches mentioned above are amenable to this by incorporating heuristics (Huang et al., 2012) or non-parametric clustering (Neelakantan et al., 2014) . More recently, transformer based models that exploit massive datasets have been used to produce contextualised word embeddings. Examples of these are CoVe (Mc-Cann et al., 2017) , ELMo (Peters et al., 2018) , and BERT (Devlin et al., 2019) and its variants ERNIE (Sun et al., 2019) and RoBERTa (Liu et al., 2019) . Alternative approaches address polysemy by training multi-lingual word embeddings on multi-lingual corpora (Dufter et al., 2018; Heyman et al., 2019) .",
"cite_spans": [
{
"start": 338,
"end": 358,
"text": "(Huang et al., 2012)",
"ref_id": "BIBREF14"
},
{
"start": 388,
"end": 414,
"text": "(Neelakantan et al., 2014)",
"ref_id": "BIBREF24"
},
{
"start": 572,
"end": 594,
"text": "(Mc-Cann et al., 2017)",
"ref_id": null
},
{
"start": 602,
"end": 623,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF26"
},
{
"start": 635,
"end": 656,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 680,
"end": 698,
"text": "(Sun et al., 2019)",
"ref_id": "BIBREF28"
},
{
"start": 711,
"end": 729,
"text": "(Liu et al., 2019)",
"ref_id": null
},
{
"start": 839,
"end": 860,
"text": "(Dufter et al., 2018;",
"ref_id": "BIBREF6"
},
{
"start": 861,
"end": 881,
"text": "Heyman et al., 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word vector embeddings",
"sec_num": "2.3"
},
{
"text": "As the problem of polysemy is, at least partially, resolved in all of these more advanced approaches, we would expect the phenomenon studied in this paper to be less pronounced in the embeddings they produce. We therefore concentrate exclusively on mono-lingual static embeddings. Our analysis will not require any data beyond such an embedding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word vector embeddings",
"sec_num": "2.3"
},
{
"text": "3 The topology of the word space",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word vector embeddings",
"sec_num": "2.3"
},
{
"text": "In order to explain the apparent efficiency of machine learning, the manifold hypothesis postulates that, in general, real world data tends to live on a small-dimensional submanifold of the vector space in which it is represented (Bengio et al., 2013; Fefferman et al., 2016) . For word vector embeddings, the ambient space R n typically has dimension n in the range 50 \u2264 n \u2264 300. The hypothesis states that word vectors in fact lie on, or are densely distributed around, a submanifold W \u2282 R n of much smaller dimension. What this hypothetical word space W might look like is an intriguing question. Work of Arora et al. (2018) suggests a dimension of W as low as five. It is easy to imagine even smaller subspaces of W, like a line segment connecting \"cold\", \"cool\", \"lukewarm\", \"warm\" and \"hot\", or a circle connecting \"north\", \"east\", \"south\", \"west\". But the global structure seems mysterious.",
"cite_spans": [
{
"start": 230,
"end": 251,
"text": "(Bengio et al., 2013;",
"ref_id": "BIBREF1"
},
{
"start": 252,
"end": 275,
"text": "Fefferman et al., 2016)",
"ref_id": "BIBREF10"
},
{
"start": 608,
"end": 627,
"text": "Arora et al. (2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The word space as a pinched manifold",
"sec_num": "3.1"
},
{
"text": "The manifold hypothesis has two parts: (1) that W is of small dimension, and (2) that W is a manifold. 3 It is the second statement that we would The best we can expect is that this might be true for some space of meanings M -a space that parametrizes all possible meanings that words of a given language may assume -from which W is obtained by identifying multiple meanings to a single word. This identification process is precisely the pinching construction discussed in Section 2.1. For example, we should expect the neighbourhood of the polysemous word \"mole\" in W to be obtained from the neighbourhoods of its different meanings in M, all glued together as in Figure 5 . Thus, even if we optimistically hypothesize the space of meanings M to be a manifold, the word space W cannot be: it is at best a pinched manifold. It is this hypothesis that we will pursue in the following. (If M has more complicated local structure, then a fortiori so does W.)",
"cite_spans": [
{
"start": 103,
"end": 104,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 665,
"end": 673,
"text": "Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "The word space as a pinched manifold",
"sec_num": "3.1"
},
{
"text": "The presence of synonyms in a language has no bearing on this analysis. To explain this, we need to temporarily distinguish carefully between a word w and its associated word vector v w . The word space W should more precisely be called space of word vectors, since this is the space in which the vectors v w live, not the words themselves. Under a given word vector embedding, synonyms w and w may get mapped to the same word vector v w = v w . However, this does not affect the relation of the of dimension extends in a straight-forward manner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The word space as a pinched manifold",
"sec_num": "3.1"
},
{
"text": "word vector embedding / / W Figure 6 : The relation of the space of meanings M, the space of word vectors W and the set of words of a language. Synonyms may get identified under a given word vector embedding, symbolized here by the horizontal map. Multiple meanings get identified to a single word vector under the vertical map space of meanings M to the space of word vectors W in any way. The situation is summarized in Figure 6 . 4 With this discussion out of the way, we will from now on again simplify our terminology by identifying words with their associated vectors, and refer to W as word space.",
"cite_spans": [
{
"start": 433,
"end": 434,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 28,
"end": 36,
"text": "Figure 6",
"ref_id": null
},
{
"start": 422,
"end": 430,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "M {words}",
"sec_num": null
},
{
"text": "As explained at the end of Section 2.1, we can distinguish a singular point of a pinched manifold from a non-singular point by counting the connected components of a small punctured neighbourhood of the point. What is more, the number of these components reflects the number of points that were glued together in the pinching process. Thus, according to our view of the word space W as a pinched quotient of the manifold of meanings M, the number of different meanings of a word should be reflected by the number of components of a punctured neighbourhood of the word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A topological measure of polysemy",
"sec_num": "3.2"
},
{
"text": "Of course, the relevant number of components is not directly visible from the discrete point cloud formed by the word vectors. Rather, the components can only be estimated by some form of clustering. In this section, we describe a measure of the number of components based on degree zero persistent homology, as introduced in Section 2.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A topological measure of polysemy",
"sec_num": "3.2"
},
{
"text": "Fix a word vector embedding, a target word w, and a neighbourhood size n. As already indicated, we will abuse language by identifying a word with its vector under the embedding in the following. The topological polysemy TPS n (w) of w with respect to our fixed word vector embedding and our chosen neighbourhood size n is the Wasserstein norm of a normalized punctured neighbourhood of w. That is, TPS n (w) is computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A topological measure of polysemy",
"sec_num": "3.2"
},
{
"text": "1. Normalize all word vectors v to have L 2 -norm v = 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A topological measure of polysemy",
"sec_num": "3.2"
},
{
"text": "2. Consider the punctured neighbourhood N n (w) consisting of the n closest neighbours of w, excluding w itself.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A topological measure of polysemy",
"sec_num": "3.2"
},
{
"text": "3. Pass to the normalized punctured neighbourhood N n (w) by translating w to lie at the origin and projecting all vectors to the unit sphere:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A topological measure of polysemy",
"sec_num": "3.2"
},
{
"text": "N n (w) := v \u2212 w v \u2212 w v \u2208 N n (w)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A topological measure of polysemy",
"sec_num": "3.2"
},
{
"text": "4. Compute the degree zero persistence diagram of N n (w).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A topological measure of polysemy",
"sec_num": "3.2"
},
{
"text": "5. TPS n (w) is the Wasserstein norm of this persistence diagram, i.e. the Wasserstein distance between the computed and the empty persistence diagram.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A topological measure of polysemy",
"sec_num": "3.2"
},
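The five steps can be strung together on a toy embedding. Everything below is our illustrative sketch, not the paper's code: the three-dimensional "embeddings", the helper names and the toy words are made up, degree-0 persistence is computed as single-linkage merge radii, and the constant factors (d/2 per merge, half the total persistence for the norm) are one possible convention. The paper itself uses fastText vectors and the GUDHI library.

```python
import math

def _normalize(v):
    r = math.sqrt(sum(x * x for x in v))
    return [x / r for x in v]

def _merge_radii(points):
    """Deaths of the finite degree-0 bars: single-linkage merge radii,
    i.e. half the minimum-spanning-tree edge lengths (one convention)."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    deaths = []
    for d, i, j in sorted((math.dist(p, q), i, j)
                          for i, p in enumerate(points)
                          for j, q in enumerate(points) if i < j):
        if find(i) != find(j):
            parent[find(i)] = find(j)
            deaths.append(d / 2.0)
    return deaths

def tps(embedding, word, n):
    # Step 1: normalize all word vectors to unit L2 norm.
    unit = {w: _normalize(v) for w, v in embedding.items()}
    w0 = unit[word]
    # Step 2: punctured neighbourhood, the n closest neighbours (word excluded).
    neigh = sorted((v for u, v in unit.items() if u != word),
                   key=lambda v: math.dist(v, w0))[:n]
    # Step 3: centre at the target word and project onto the unit sphere.
    cloud = [_normalize([x - y for x, y in zip(v, w0)]) for v in neigh]
    # Steps 4 + 5: degree-0 persistence diagram, then its Wasserstein norm,
    # which in degree 0 is just half the total persistence.
    return sum(d / 2.0 for d in _merge_radii(cloud))

def on_sphere(polar_deg, azim_deg):
    p, a = math.radians(polar_deg), math.radians(azim_deg)
    return [math.sin(p) * math.cos(a), math.sin(p) * math.sin(a), math.cos(p)]

# Toy word spaces on the unit sphere: the neighbours of the "polysemous" word
# split into two tight bundles, those of the "monosemous" word form one.
emb_poly = {"mole": on_sphere(0, 0)}
emb_poly.update({"animal%d" % k: on_sphere(10, 5 * k) for k in range(4)})
emb_poly.update({"spy%d" % k: on_sphere(10, 180 + 5 * k) for k in range(4)})
emb_mono = {"mouse": on_sphere(0, 0)}
emb_mono.update({"rodent%d" % k: on_sphere(10, 5 * k) for k in range(8)})
```

On this toy data tps(emb_poly, "mole", 8) exceeds tps(emb_mono, "mouse", 8): the single cross-bundle merge contributes one long bar, which is exactly the signal TPS is designed to pick up.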
{
"text": "Step 1 is included because word embeddings are trained only on cosine similarity; the length of each vector has no apparent meaning. The normalization allows us to compute directly with difference vectors between word vectors of high cosine similarity. The normalization in Step 3 is included because we have fixed the cardinality n of the neighbourhood, not its diameter. Without any normalization in this step, we would be measuring mostly the density of the word cloud around w. The normalization by projection onto the unit sphere may seem somewhat radical, but it is topologically motivated: the topological invariants we use cannot distinguish a punctured ball from its boundary sphere. (In technical terms, the punctured ball and its boundary are homotopy equivalent; cf. Hatcher (2002) , Chapter 0.)",
"cite_spans": [
{
"start": 779,
"end": 793,
"text": "Hatcher (2002)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The general normalization in",
"sec_num": null
},
{
"text": "We present two pieces of empirical evidence that support our view of the word space as a pinched manifold. The experiments in Sections 4.2 and 4.3 show that the topological polysemy defined above correlates with the actual number of meanings of a word. In Section 4.4, we describe a simple approach to the SemEval-2010 task on word sense induction based on our topological intuition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical evidence",
"sec_num": "4"
},
{
"text": "\"Don't forget the Tatun Mountains, which shelter the town. In the old days , Tanshui folk who cultivated farms on the slopes had to walk for an hour to get to their crops. These days you can take a local mini-bus.\" Figure 7 : An exemplary context of an instance of the target word \"cultivate\"",
"cite_spans": [],
"ref_spans": [
{
"start": 215,
"end": 223,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Empirical evidence",
"sec_num": "4"
},
{
"text": "All experiments are based on data provided with the SemEval-2010 task on Word Sense Induction & Disambiguation . The task is as follows: Assign a total of 8 915 instances, extracted from various sources including CNN and ABC, of 100 different polysemous target words (50 nouns and 50 verbs) to clusters based on their context, such that instances with different meanings get mapped to different clusters and instances with the same meaning get mapped to the same cluster. A context is simply a paragraph of text that the target word appears in. Figure 7 shows an exemplary context for an instance of the target word \"cultivate\". Note that labels are only provided for a test set; this is an unsupervised learning task. The training set provided comprises 65 M occurrences of 127 151 different words. We use this corpus to train our own vector representations using the python module for fastText (Bojanowski et al., 2017) . For the computation of the persistence diagrams and the Wasserstein distance we use the GUDHI library (The GUDHI Project, 2020).",
"cite_spans": [
{
"start": 896,
"end": 921,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 545,
"end": 553,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.1"
},
{
"text": "The SemEval data set includes a gold standard for the 100 target words. The number of clusters in this gold standard is equal to the number of true meanings of each word, as perceived by humans. Figure 8 shows our measure of polysemy TPS 50 (w) for the 100 target words w plotted against these cluster counts. Correlation coefficients between the gold standard and TPS n (w) for varying neighbourhood sizes n are displayed in the first column of Table 1 . We found the highest correlation for n = 50, equal to 0.424. Neighbourhoods consisting of just ten or less words are clearly too small to capture multiple meanings. On the other hand, for high values of n, the neighbourhoods become too large to adequately reflect the local structure of the word space around Table 1 : Correlations between TPS n (w) and the number of meanings of w according to the SemEval gold standard (Section 4.2), the number of WordNet synsets (Section 4.3), and the frequency of w in the SemEval training corpus. The last line indicates the number of words on which the correlation is computed. The gray entry is not statistically significant, but all other entries are (p-value < 10 \u22123 ) the target word. This is likewise unsurprising: recall from Section 2.1 that the manifold condition is a local condition around each point. Larger neighbourhoods of a point on a manifold can be arbitrarily complicated, and so can larger neighbourhoods of singular points on a pinched manifold.",
"cite_spans": [],
"ref_spans": [
{
"start": 195,
"end": 203,
"text": "Figure 8",
"ref_id": "FIGREF4"
},
{
"start": 446,
"end": 453,
"text": "Table 1",
"ref_id": null
},
{
"start": 765,
"end": 772,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Correlation of TPS with the SemEval gold standard",
"sec_num": "4.2"
},
{
"text": "The third column of Table 1 shows that TPS n (w) does not correlate with the frequency of the words in the SemEval corpus. This is important, as frequency itself correlates with polysemy. The absence of correlation between TPS n (w) and frequency strengthens our assertion that TPS n (w) indeed measures polysemy. Table 2 : Some examples of words with their corresponding cluster counts in the SemEval gold standard and WordNet, as well as their TPS-measure for n = 50.",
"cite_spans": [],
"ref_spans": [
{
"start": 20,
"end": 27,
"text": "Table 1",
"ref_id": null
},
{
"start": 314,
"end": 321,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Correlation of TPS with the SemEval gold standard",
"sec_num": "4.2"
},
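The two checks above (positive correlation with gold cluster counts, no correlation with corpus frequency) reduce to computing correlation coefficients over the 100 target words. A minimal sketch with synthetic stand-in data; the real inputs would be the actual TPS scores, gold cluster counts, and corpus frequencies, which are not reproduced here:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.RandomState(0)
# Hypothetical stand-ins for the 100 target words:
gold_counts = rng.randint(2, 9, size=100).astype(float)     # gold cluster counts
tps_scores = gold_counts + rng.randn(100)                   # TPS, noisy but related
frequencies = rng.lognormal(mean=8.0, sigma=1.0, size=100)  # corpus frequencies

# Correlation with the gold standard should be clearly positive,
# correlation with frequency should be near zero.
r_gold, p_gold = pearsonr(tps_scores, gold_counts)
r_freq, p_freq = pearsonr(tps_scores, frequencies)
print(f"corr(TPS, gold) = {r_gold:.3f}, corr(TPS, freq) = {r_freq:.3f}")
```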
{
"text": "The correlation with the gold standard is a good indication of the validity of our method, but it is based on just 100 samples. The number of meanings, as perceived by humans, of a much larger set of words can be extracted from WordNet (Fellbaum, 1998) , specifically the number of synsets associated with each word. Of course, we cannot expect the correlation between our topological polysemy and these numbers of synsets to be as high as for the SemEval gold standard. Firstly, we have trained our fastText vectors specifically on the SemEval training set, which does not capture the breadth of WordNet and does not comprise enough data to yield adequate embeddings for non-target words. Secondly, WordNet captures distinctions in meaning far more granular than one could hope to detect within the, say, 50 closest neighbours of a word. Nonetheless, plotting TPS 50 (w) against the number of synsets for all 62 049 words of the SemEval corpus that have a WordNet entry indicates a clear trend; see Figure 9 . Correlation coefficients for varying n are included in Table 1 .",
"cite_spans": [
{
"start": 236,
"end": 252,
"text": "(Fellbaum, 1998)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 1008,
"end": 1016,
"text": "Figure 9",
"ref_id": "FIGREF5"
},
{
"start": 1074,
"end": 1081,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Correlation of TPS with WordNet synsets",
"sec_num": "4.3"
},
{
"text": "Our hypothesis that the word space is a manifold pinched at polysemous words also suggests the following, direct approach to the SemEval-2010 task itself, which we call Overlap with Punctured Neighbourhood (OPN). Fix a neighbourhood size n. In a first step, we cluster punctured neighbourhoods of size n of the 100 target words using a common clustering algorithm like k-means or dbscan (Ester et al., 1996) . The different clusters of the neighbourhood cloud obtained in this way are taken to represent different meanings of the target word. In a second step, we assign a given instance of the target word to the cluster of the neighbourhood cloud that has the highest relative word overlap with the context of that instance.",
"cite_spans": [
{
"start": 387,
"end": 407,
"text": "(Ester et al., 1996)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The SemEval task",
"sec_num": "4.4"
},
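A minimal sketch of the two OPN steps, using k-means for the first step and reading "relative word overlap" as the overlap between the instance's context and a cluster, normalised by cluster size (the normalisation is our assumption). All names are illustrative:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def opn_sense_clusters(neighbour_words, neighbour_vecs, k):
    """Step 1: cluster the punctured neighbourhood of a target word;
    each cluster of neighbour words stands for one sense."""
    _, labels = kmeans2(neighbour_vecs, k, minit='++')
    return [{w for w, lab in zip(neighbour_words, labels) if lab == c}
            for c in range(k)]

def opn_assign(context_words, sense_clusters):
    """Step 2: assign an instance to the sense cluster with the highest
    relative word overlap with the instance's context."""
    ctx = set(context_words)
    overlaps = [len(ctx & cl) / len(cl) if cl else 0.0
                for cl in sense_clusters]
    return int(np.argmax(overlaps))
```

On two well-separated neighbourhood blobs, step 1 recovers two pure word clusters, and step 2 maps a context sharing words with one blob to that blob's cluster.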
{
"text": "For clustering with dbscan, we found that the best results are achieved with the parameter values Eps = 0.09 and MinPts = 2 and large neighbourhood sizes n. The k-means clustering algorithm requires the target number of clusters k as a parameter. We experimented both with fixed values of k and with a word-dependent value k(w), predicted using TPS as follows. Define the TPS-percentile %(w) of a target word w as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The SemEval task",
"sec_num": "4.4"
},
{
"text": "%(w) := \u2308 ( TPS 50 (w) \u2212 TPS min ) / ( TPS max \u2212 TPS min ) \u00b7 100 \u2309 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The SemEval task",
"sec_num": "4.4"
},
{
"text": "where TPS min and TPS max denote the minimum and maximum values that TPS 50 (\u2022) assumes on all target words, respectively, and where \u2308 \u00b7 \u2309 denotes rounding up to the next integer. Thus, the percentile is an integer between 0 and 100 that reflects how large TPS 50 (w) is in comparison to all other target words. The expected number of clusters k(w) is defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The SemEval task",
"sec_num": "4.4"
},
{
"text": "k(w) := 2 if %(w) \u2264 1 ; %(w) + 1 if 1 < %(w) < 100 ; 100 if %(w) = 100",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The SemEval task",
"sec_num": "4.4"
},
{
"text": "Thus, the predicted number of clusters k(w) varies between 2 and 100.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The SemEval task",
"sec_num": "4.4"
},
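The percentile and cluster-count definitions above can be transcribed directly (a straightforward sketch; the function names are ours):

```python
import math

def tps_percentile(tps_w, tps_min, tps_max):
    """%(w): the min-max-normalised TPS value, scaled to [0, 100]
    and rounded up to the next integer."""
    return math.ceil((tps_w - tps_min) / (tps_max - tps_min) * 100)

def predicted_k(percentile):
    """k(w): 2 if %(w) <= 1, %(w) + 1 if 1 < %(w) < 100, 100 if %(w) = 100."""
    if percentile <= 1:
        return 2
    if percentile >= 100:
        return 100
    return percentile + 1
```

The word with the minimal TPS value gets percentile 0 and hence k = 2; the word with the maximal value gets percentile 100 and hence k = 100.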
{
"text": "The performance is commonly measured by two scores, the F-score and the V-measure, which capture to what extent a clustering agrees with the gold standard clustering. Since both scores are important, we rank different approaches based on the product of these scores. This automatically discounts the performance of trivial approaches: MFS, which assigns each occurrence to the same cluster, and 1cl1inst, which assigns each occurrence to its own cluster. Table 3 shows the results for OPN with different clustering algorithms and different parameters. For comparison, the table moreover includes the best performing models of the SemEval task, as well as some other models published since. Our best performing set-up (OPN with dbscan, n = 5000) achieves the second best results, outperforming much more complex methods. Note that, unlike Arora et al. (2018) and Mu et al. (2017) , we do not use any additional data to train embeddings.",
"cite_spans": [
{
"start": 839,
"end": 858,
"text": "Arora et al. (2018)",
"ref_id": "BIBREF0"
},
{
"start": 863,
"end": 879,
"text": "Mu et al. (2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 455,
"end": 462,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "The SemEval task",
"sec_num": "4.4"
},
{
"text": "For OPN with k-means clustering, we found that k = 30 gives the best results among possible fixed values for k. As Table 3 shows, our method performs better with the TPS-informed variable value k(w) than with this fixed value. This provides further evidence for our claim that TPS is positively correlated with the true number of meanings. A comparison of the performance of OPN with dbscan and of OPN with k-means indicates that the neighbourhood size n to be considered for clustering needs to be an order of magnitude larger when we do not incorporate any information from TPS. Our interpretation is that TPS witnesses the disturbance that an additional meaning causes in a small neighbourhood of a word, even when no word related to that meaning is present in the neighbourhood.",
"cite_spans": [],
"ref_spans": [
{
"start": 115,
"end": 122,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "The SemEval task",
"sec_num": "4.4"
},
{
"text": "In Table 4 , we single out the three best performing and the three worst performing target words with our best performing model and give the associated scores as an illustration.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "The SemEval task",
"sec_num": "4.4"
},
{
"text": "In this work, we challenge the manifold hypothesis for static word vector embeddings and experimentally show that it is more accurate and helpful to view the space of word embeddings as a pinched manifold. We introduce a topological measure of polysemy that correlates well with the number of meanings of a word according to the gold standard of the SemEval-2010 task on Word Sense Induction & Disambiguation. We also produce a surprisingly simple, but topologically motivated solution to the task itself that achieves highly competitive results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We stress that our measure of polysemy, TPS, is computed solely on the topology of the point cloud consisting of the vectors of a fixed word vector embedding. Of course, any solution to the described SemEval task will also predict, in particular, the number of meanings of the target words. However, these predictions rely on access to the underlying corpus, or at least parts thereof. Similarly, the first step (clustering) of our solution to the SemEval task is performed directly on the word vectors, without recourse to any corpus. This is in sharp contrast with early clustering approaches to word sense disambiguation such as (Sch\u00fctze, 1998) (which of course had to rely on far less sophisticated word vector embeddings than are now available). Table 3 : Performance of different methods on task 14 of SemEval-2010 (V-measure, F-score, product): (Mu et al., 2016) k = 5: 0.145, 0.441, 0.0639; OPN with k-means, n = 500, k = k(w): 0.165, 0.356, 0.0588; KSU KDD (Elshamy et al., 2010) : 0.157, 0.369, 0.0579; OPN with k-means, n = 500, k = 30: 0.161, 0.352, 0.0567; (Arora et al., 2018) k = 5: 0.115, 0.464, 0.0533; (Mu et al., 2016) k = 2: 0.073, 0.571, 0.0417; OPN with dbscan, n = 500: 0.070, 0.571, 0.0400; (Arora et al., 2018) k = 2: 0.061, 0.586, 0.0357; 1cl1inst: 0.317, 0.090, 0.0285; MFS: 0.000, 0.634, 0.0000. According to our ranking by product of V-measure and F-score, the algorithms UoY and KSU KDD were the strongest contenders in the initial challenge. The algorithms MFS and 1cl1inst in the last two rows are trivial baseline algorithms. Table 4 : The performance of our best solution to SemEval on the three best performing vs. the three worst performing words, as evaluated according to the product of F-score and V-measure.",
"cite_spans": [
{
"start": 200,
"end": 217,
"text": "(Mu et al., 2016)",
"ref_id": "BIBREF22"
},
{
"start": 305,
"end": 327,
"text": "(Elshamy et al., 2010)",
"ref_id": "BIBREF8"
},
{
"start": 399,
"end": 419,
"text": "(Arora et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 445,
"end": 462,
"text": "(Mu et al., 2016)",
"ref_id": "BIBREF22"
},
{
"start": 531,
"end": 551,
"text": "(Arora et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 1553,
"end": 1567,
"text": "(Sch\u00fctze, 1998",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 628,
"end": 635,
"text": "Table 3",
"ref_id": null
},
{
"start": 933,
"end": 940,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "A number of avenues could be pursued to further improve the results presented here. To allow a fair comparison with other solutions to the SemEval task, we have used word vector embeddings trained on a fairly small corpus. We have used only degree zero persistent homology. Our method of taking the Wasserstein norm of a persistence diagram is rather crude. The elimination of noise from the embeddings could also improve the results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We conjecture that other NLP tasks that also rely, implicitly or explicitly, on the manifold hypothesis could similarly benefit from a more refined topological analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Manifolds are moreover required to be Hausdorff, a technical condition that all metric spaces satisfy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "It may not be evident what the correct notion of \"dimension\" should be for arbitrary subspaces. However, there are much larger classes of spaces than manifolds to which the notion",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "It is of course debatable whether the equation v_w = v_w' would really hold for any pair of synonyms w, w' in practice. It seems more likely that the vectors v_w and v_w' would simply lie very close together.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Claudius Zibrowius for Figures 1, 2 and 5 and Peter Arndt, Michael Heck and Carel van Niekerk for helpful discussions. The results of this publication are part of the project DYMO, which has received funding from the European Research Council under the grant agreement no. STG2018 804636. Computational resources were provided by Google Cloud.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Linear algebraic structure of word senses, with applications to polysemy",
"authors": [
{
"first": "Sanjeev",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Yuanzhi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yingyu",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Tengyu",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Andrej",
"middle": [],
"last": "Risteski",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "0",
"pages": "483--495",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2018. Linear algebraic struc- ture of word senses, with applications to polysemy. Transactions of the Association for Computational Linguistics, 6(0):483-495.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Representation learning: A review and new perspectives",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
}
],
"year": 2013,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "35",
"issue": "8",
"pages": "1798--1828",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, Aaron Courville, and Pascal Vincent. 2013. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Ana- lysis and Machine Intelligence, 35(8):1798-1828.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A neural probabilistic language model",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "R\u00e9jean",
"middle": [],
"last": "Ducharme",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Jauvin",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of machine learning research",
"volume": "3",
"issue": "",
"pages": "1137--1155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic lan- guage model. Journal of machine learning research, 3(Feb):1137-1155.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associ- ation for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of machine learning research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Mi- chael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of machine learning research, 12(Aug):2493-2537.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Embedding learning through multilingual concept induction",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Dufter",
"suffix": ""
},
{
"first": "Mengjie",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Schmitt",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Fraser",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1520--1530",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1141"
]
},
"num": null,
"urls": [],
"raw_text": "Philipp Dufter, Mengjie Zhao, Martin Schmitt, Alexan- der Fraser, and Hinrich Sch\u00fctze. 2018. Embedding learning through multilingual concept induction. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1520-1530, Melbourne, Aus- tralia. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Computational Topology: An Introduction",
"authors": [
{
"first": "Herbert",
"middle": [],
"last": "Edelsbrunner",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Harer",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1007/978-3-540-33259-6_7"
]
},
"num": null,
"urls": [],
"raw_text": "Herbert Edelsbrunner and John Harer. 2010. Computa- tional Topology: An Introduction. American Math- ematical Society.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Ksu kdd: Word sense induction by clustering in topic space",
"authors": [
{
"first": "Wesam",
"middle": [],
"last": "Elshamy",
"suffix": ""
},
{
"first": "Doina",
"middle": [],
"last": "Caragea",
"suffix": ""
},
{
"first": "William",
"middle": [
"H"
],
"last": "Hsu",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation, SemEval '10",
"volume": "",
"issue": "",
"pages": "367--370",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wesam Elshamy, Doina Caragea, and William H. Hsu. 2010. Ksu kdd: Word sense induction by clustering in topic space. In Proceedings of the 5th Interna- tional Workshop on Semantic Evaluation, SemEval '10, pages 367-370, USA. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A density-based algorithm for discovering clusters in large spatial databases with noise",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Ester",
"suffix": ""
},
{
"first": "Hans-Peter",
"middle": [],
"last": "Kriegel",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Sander",
"suffix": ""
},
{
"first": "Xiaowei",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the Second International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "226--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Ester, Hans-Peter Kriegel, J\u00f6rg Sander, and Xi- aowei. Xu. 1996. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the Second International Conference on Knowledge Discovery and Data Min- ing, pages 226-231. Institute for Computer Science, University of Munich.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Testing the manifold hypothesis",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Fefferman",
"suffix": ""
},
{
"first": "Sanjoy",
"middle": [],
"last": "Mitter",
"suffix": ""
},
{
"first": "Hariharan",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2016,
"venue": "J. Amer. Math. Soc",
"volume": "29",
"issue": "4",
"pages": "983--1049",
"other_ids": {
"DOI": [
"10.1090/jams/852"
]
},
"num": null,
"urls": [],
"raw_text": "Charles Fefferman, Sanjoy Mitter, and Hariharan Narayanan. 2016. Testing the manifold hypothesis. J. Amer. Math. Soc., 29(4):983-1049.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "WordNet: An Electronic Lexical Database",
"authors": [
{
"first": "Christiane",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press, Cambridge, MA. https://wordnet.princeton.edu/.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Algebraic topology",
"authors": [
{
"first": "Allen",
"middle": [],
"last": "Hatcher",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allen Hatcher. 2002. Algebraic topology. Cambridge University Press, Cambridge.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Learning unsupervised multilingual word embeddings with incremental multilingual hubs",
"authors": [
{
"first": "Geert",
"middle": [],
"last": "Heyman",
"suffix": ""
},
{
"first": "Bregt",
"middle": [],
"last": "Verreet",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1890--1902",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1188"
]
},
"num": null,
"urls": [],
"raw_text": "Geert Heyman, Bregt Verreet, Ivan Vuli\u0107, and Marie- Francine Moens. 2019. Learning unsupervised mul- tilingual word embeddings with incremental multi- lingual hubs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1890-1902, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Improving word representations via global context and multiple word prototypes",
"authors": [
{
"first": "Eric",
"middle": [
"H"
],
"last": "Huang",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric H. Huang, Richard Socher, Christopher D. Man- ning, and Andrew Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. In Association for Computational Lin- guistics (ACL), Jeju, Republic of Korea.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Uoy: Graphs of unambiguous vertices for word sense induction and disambiguation",
"authors": [
{
"first": "Ioannis",
"middle": [],
"last": "Korkontzelos",
"suffix": ""
},
{
"first": "Suresh",
"middle": [],
"last": "Manandhar",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation, SemEval '10",
"volume": "",
"issue": "",
"pages": "355--358",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ioannis Korkontzelos and Suresh Manandhar. 2010. Uoy: Graphs of unambiguous vertices for word sense induction and disambiguation. In Proceedings of the 5th International Workshop on Semantic Eval- uation, SemEval '10, page 355-358, USA. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Carel van Niekerk, and Milica Gasic",
"authors": [
{
"first": "Nurul",
"middle": [],
"last": "Lubis",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Heck",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "41",
"issue": "",
"pages": "28--44",
"other_ids": {
"DOI": [
"10.1609/aimag.v41i3.5322"
]
},
"num": null,
"urls": [],
"raw_text": "Nurul Lubis, Michael Heck, Carel van Niekerk, and Milica Gasic. 2020. Adaptable conversational ma- chines. AI Magazine, 41(3):28-44.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Semeval-2010 task 14: Word sense induction & disambiguation",
"authors": [
{
"first": "Suresh",
"middle": [],
"last": "Manandhar",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Ioannis",
"suffix": ""
},
{
"first": "Dmitriy",
"middle": [],
"last": "Klapaftis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dligach",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Sameer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pradhan",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation, SemEval '10",
"volume": "",
"issue": "",
"pages": "63--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suresh Manandhar, Ioannis P. Klapaftis, Dmitriy Dligach, and Sameer S. Pradhan. 2010. Semeval- 2010 task 14: Word sense induction & disambigu- ation. In Proceedings of the 5th International Work- shop on Semantic Evaluation, SemEval '10, pages 63-68, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learned in translation: Contextualized word vectors",
"authors": [
{
"first": "Bryan",
"middle": [],
"last": "Mccann",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "6294--6305",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Con- textualized word vectors. In Advances in Neural In- formation Processing Systems, pages 6294-6305.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word represent- ations in vector space.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Distributed Representations of Words and Phrases and their Compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Cor- rado, and Jeff Dean. 2013b. Distributed Representa- tions of Words and Phrases and their Compositional- ity. In Advances in Neural Information Processing Systems.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Geometry of polysemy",
"authors": [
{
"first": "Jiaqi",
"middle": [],
"last": "Mu",
"suffix": ""
},
{
"first": "Suma",
"middle": [],
"last": "Bhat",
"suffix": ""
},
{
"first": "Pramod",
"middle": [],
"last": "Viswanath",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiaqi Mu, Suma Bhat, and Pramod Viswanath. 2016. Geometry of polysemy.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Representing sentences as low-rank subspaces",
"authors": [
{
"first": "Jiaqi",
"middle": [],
"last": "Mu",
"suffix": ""
},
{
"first": "Suma",
"middle": [],
"last": "Bhat",
"suffix": ""
},
{
"first": "Pramod",
"middle": [],
"last": "Viswanath",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "629--634",
"other_ids": {
"DOI": [
"10.18653/v1/P17-2099"
]
},
"num": null,
"urls": [],
"raw_text": "Jiaqi Mu, Suma Bhat, and Pramod Viswanath. 2017. Representing sentences as low-rank subspaces. In Proceedings of the 55th Annual Meeting of the As- sociation for Computational Linguistics (Volume 2: Short Papers), pages 629-634, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Efficient nonparametric estimation of multiple embeddings per word in vector space",
"authors": [
{
"first": "Arvind",
"middle": [],
"last": "Neelakantan",
"suffix": ""
},
{
"first": "Jeevan",
"middle": [],
"last": "Shankar",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1059--1069",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1113"
]
},
"num": null,
"urls": [],
"raw_text": "Arvind Neelakantan, Jeevan Shankar, Alexandre Pas- sos, and Andrew McCallum. 2014. Efficient non- parametric estimation of multiple embeddings per word in vector space. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1059-1069, Doha, Qatar. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 confer- ence on empirical methods in natural language pro- cessing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proc. of NAACL.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Automatic word sense discrimination",
"authors": [
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1998,
"venue": "Computational Linguistics",
"volume": "24",
"issue": "1",
"pages": "97--123",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hinrich Sch\u00fctze. 1998. Automatic word sense discrim- ination. Computational Linguistics, 24(1):97-123.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "ERNIE: Enhanced representation through knowledge integration",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Shuohuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yukun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shikun",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Xuyi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Danxiang",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Hao Tian",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.09223"
]
},
"num": null,
"urls": [],
"raw_text": "Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. ERNIE: Enhanced repres- entation through knowledge integration. arXiv pre- print arXiv:1904.09223.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "The GUDHI Project. 2020. GUDHI User and Reference Manual",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "The GUDHI Project. 2020. GUDHI User and Ref- erence Manual, 3.2.0 edition. GUDHI Editorial Board.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Philosophische Untersuchungen",
"authors": [
{
"first": "Ludwig Josef Johann",
"middle": [],
"last": "Wittgenstein",
"suffix": ""
}
],
"year": 1953,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ludwig Josef Johann Wittgenstein. 1953. Philosophis- che Untersuchungen. \u00a743.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Some subspaces of R 3 : various deformations of an open line segment (a, b, c), deformations of a circle (d, e, f), a torus (g) and a deformation of the torus (h), two intersecting line segments (i), and a surface with a figure eight as boundary (j)",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "Points noisily sampled from the unit circle (top left) and the corresponding spaces W r for different radii r",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF2": {
"text": "An example of a persistence diagram, summarizing the persistent homology of some point cloud W 0 in degrees i = 0, 1, 2 and 3. Each dot encodes the life span of a distinct feature. Features of different degrees are displayed in different colours, as indicated in the lower right corner. For the computations in this paper, we will focus on the degree zero features, i.e. on connected components, indicated in red. As all of these are already present in the point cloud W 0 , they all have horizontal coordinate equal to zero. Their vertical coordinates are the radii at which different components merge",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF3": {
"text": "An idealized picture of the word space W near \"mole\": four regions of the meaning manifold are glued together to a single word like to challenge. We argue that, in the vicinity of polysemous words, W cannot possibly have the structure of a manifold, i.e. it cannot resemble an open ball of any dimension.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF4": {
"text": "The topological polysemy TPS 50 (w) plotted against the number of clusters in the SemEval gold standard, for the 100 SemEval target words w",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF5": {
"text": "The topological polysemy TPS 50 (w) plotted against the number of synsets in WordNet, for all 62 049 words w in the SemEval corpus that have a WordNet entry",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF0": {
"text": "TPS n vs. TPS n vs. TPS n vs.",
"content": "<table><tr><td/><td>GS</td><td colspan=\"2\">synsets frequency</td></tr><tr><td>10</td><td>-0.001</td><td>0.122</td><td>0.002</td></tr><tr><td>40</td><td>0.411</td><td>0.096</td><td>-0.003</td></tr><tr><td>50</td><td>0.424</td><td>0.085</td><td>-0.006</td></tr><tr><td>60</td><td>0.414</td><td>0.076</td><td>-0.008</td></tr><tr><td>100</td><td>0.333</td><td>0.055</td><td>-0.013</td></tr><tr><td>sample size</td><td>100</td><td>62 049</td><td>127 151</td></tr></table>",
"type_str": "table",
"num": null,
"html": null
}
}
}
}