| { |
| "paper_id": "2021", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T12:14:32.717432Z" |
| }, |
| "title": "Fine-tuning Distributional Semantic Models for Closely-Related Languages", |
| "authors": [ |
| { |
| "first": "Kushagra", |
| "middle": [], |
| "last": "Bhatia", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Macquarie Group Limited", |
| "institution": "", |
| "location": { |
| "settlement": "Gurugram" |
| } |
| }, |
| "email": "kushagra.bhatia@macquarie.com" |
| }, |
| { |
| "first": "Divyanshu", |
| "middle": [], |
| "last": "Aggarwal", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "divyanshuggrwl@gmail.com" |
| }, |
| { |
| "first": "Ashwini", |
| "middle": [], |
| "last": "Vaidya", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "avaidya@hss.iitd.ac.in" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "In this paper we compare the performance of three models: SGNS (skip-gram negative sampling) and augmented versions of SVD (singular value decomposition) and PPMI (Positive Pointwise Mutual Information) on a word similarity task. We particularly focus on the role of hyperparameter tuning for Hindi based on recommendations made in previous work (on English). Our results show that there are language specific preferences for these hyperparameters. We extend the best settings for Hindi to a set of related languages: Punjabi, Gujarati and Marathi with favourable results. We also find that a suitably tuned SVD model outperforms SGNS for most of our languages and is also more robust in a low-resource setting.", |
| "pdf_parse": { |
| "paper_id": "2021", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "In this paper we compare the performance of three models: SGNS (skip-gram negative sampling) and augmented versions of SVD (singular value decomposition) and PPMI (Positive Pointwise Mutual Information) on a word similarity task. We particularly focus on the role of hyperparameter tuning for Hindi based on recommendations made in previous work (on English). Our results show that there are language specific preferences for these hyperparameters. We extend the best settings for Hindi to a set of related languages: Punjabi, Gujarati and Marathi with favourable results. We also find that a suitably tuned SVD model outperforms SGNS for most of our languages and is also more robust in a low-resource setting.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "The development of word embedding models in NLP has led to improved performance on a range of lexical semantic tasks (Baroni et al., 2014) . The SGNS (skip-gram negative sampling) training method has been shown to outperform previously used count-based models such as PPMI (Positive Point based Mutual Information) and SVD (truncated Singular Value Decomposition).", |
| "cite_spans": [ |
| { |
| "start": 117, |
| "end": 138, |
| "text": "(Baroni et al., 2014)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we look at the task of word similarity in Hindi and related languages Punjabi, Gujarati and Marathi. Specifically, we experiment with hyperparameter tuning for SGNS, SVD and PPMI for Hindi and then ask whether the same hyperparameters can be extended and applied to typol related languages. We make use of the hyperparameters formulated in Levy et al. (2015) to tune all three models. We find that a suitably tuned SVD model outperforms SGNS. This result differs from Levy et al. (2015) which shows that fine-tuned SGNS and SVD models perform at par. We find that our Hindi SVD results are better than multilingual fast-Text (Grave et al., 2018) and the recently released IndicNLP Suite (Kakwani et al., 2020) .", |
| "cite_spans": [ |
| { |
| "start": 355, |
| "end": 373, |
| "text": "Levy et al. (2015)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 483, |
| "end": 501, |
| "text": "Levy et al. (2015)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 640, |
| "end": 660, |
| "text": "(Grave et al., 2018)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 702, |
| "end": 724, |
| "text": "(Kakwani et al., 2020)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We hypothesize that hyperparameters are sensitive to linguistic properties. If this is true, then we should find similar results in languages that are typologically related to Hindi. Our results suggest that these hyperparameter settings for Hindi can be extended to typologically-related languages. Indeed, the results show that adapting the hyperparameters from Hindi is more advantageous as compared to the default settings or the settings recommended for English in Levy et al. (2015) .", |
| "cite_spans": [ |
| { |
| "start": 470, |
| "end": 488, |
| "text": "Levy et al. (2015)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "While reasonably large resources and datasets exist for the major Indian languages, Joshi et al. (2020) have shown that more than half of the Indo-Aryan languages represented in Wikipedia can be classified as having 'poor resource availability'. Given these limitations, a model that is able to generate representations that are robust in the face of less data can be advantageous. In our experiments, we take successively smaller corpus slices from our Hindi, Marathi, Gujarati and Punjabi corpora in order to test our models' performance. We find that SVD is more robust compared to other models in a low-resource setting. Sahlgren and Lenci (2016) have investigated the effects of data size on distributional semantic models and their results on the robustness of SVD correspond with our findings for Hindi and related languages.", |
| "cite_spans": [ |
| { |
| "start": 84, |
| "end": 103, |
| "text": "Joshi et al. (2020)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 625, |
| "end": 650, |
| "text": "Sahlgren and Lenci (2016)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
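The corpus-slicing procedure described in this paragraph can be sketched in a few lines; the function name, fraction values and fixed seed below are illustrative choices, not taken from the paper's code:

```python
import random

def corpus_slices(sentences, fractions=(1.0, 0.5, 0.2, 0.1), seed=0):
    """Randomly sample successively smaller slices of a sentence list."""
    rng = random.Random(seed)  # fixed seed so the slices are reproducible
    slices = {}
    for frac in fractions:
        k = max(1, int(len(sentences) * frac))
        slices[frac] = rng.sample(sentences, k)  # sample without replacement
    return slices

# Example: four slices of a toy 100-sentence corpus.
corpus = [f"sentence {i}" for i in range(100)]
slices = corpus_slices(corpus)
print({f: len(s) for f, s in slices.items()})  # {1.0: 100, 0.5: 50, 0.2: 20, 0.1: 10}
```

Training the same model on each slice then isolates the effect of corpus size on performance.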
| { |
| "text": "In this paper, we first describe the creation of a new word similarity dataset for Hindi, which addresses some of the limitations of existing evaluation datasets. We then discuss the hyperparameter settings suggested in Levy et al. (2015) and describe our results for Hindi and related languages on the entire corpus as well as on smaller corpus sizes. We conclude with a summary of our findings.", |
| "cite_spans": [ |
| { |
| "start": 220, |
| "end": 238, |
| "text": "Levy et al. (2015)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In order to evaluate our distributed semantic models, we carry out an intrinsic evaluation of the models based on word similarity. For the languages in our study, the currently available word similarity datasets include translated versions of English WordSim-353 (WS-353) (Akhtar et al., 2017).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Similarity Dataset", |
| "sec_num": "2" |
| }, |
| { |
| "text": "However, we note that WordSim-353 (Finkelstein et al., 2001) has been criticized for conflating association and similarity in its word-pair annotation guidelines (Hill et al., 2015) . As a consequence, WS-353 measures association rather than similarity. In addition to this, we observe that the Hindi version of WS-353 consists of numerous transliterations from English. In order to develop a more robust evaluation dataset for our word experiments, we created Hin-RG63, the Hindi version for English RG-65 (Rubenstein and Goodenough, 1965) . This dataset carefully dissociates similarity and relatedness in its annotation guidelines and has been used as a benchmark for SemEval tasks (Camacho-Collados et al., 2015 ). While we were unable to create RG-65 translations for the other languages included in this paper, we plan to extend the work done for Hindi to more languages in the future.", |
| "cite_spans": [ |
| { |
| "start": 22, |
| "end": 60, |
| "text": "WordSim-353 (Finkelstein et al., 2001)", |
| "ref_id": null |
| }, |
| { |
| "start": 162, |
| "end": 181, |
| "text": "(Hill et al., 2015)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 507, |
| "end": 540, |
| "text": "(Rubenstein and Goodenough, 1965)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 685, |
| "end": 715, |
| "text": "(Camacho-Collados et al., 2015", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Similarity Dataset", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Two native speakers of Hindi, who were also bilinguals provided the translation of words in English RG-65 to Hindi. A third translator moderated any disagreements. It is noteworthy that two of the word pairs from English RG-65 did not have any suitable distinct translation and hence were not included in the final dataset. 16 Hindi native speakers were presented with the similarity scoring guidelines given in Jurgens et al. (2014) . The annotators were presented with a practice session consisting of sample word pairs before rating the actual Hindi word pairs in the dataset. Next, the annotators were asked to score each pair on a scale of 0 to 4. To present more flexibility, scoring with a step of 0.5 was permitted.", |
| "cite_spans": [ |
| { |
| "start": 412, |
| "end": 433, |
| "text": "Jurgens et al. (2014)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Similarity Dataset", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We computed inter-annotator agreement using pairwise correlation between individual annotators' ratings. Pearson and Spearman correlation coefficients are used to assess the linear correlation and monotonic relationship respectively. We report an average pairwise Pearson correlation of 0.814, and average pairwise Spearman correlation of 0.805 for Hin RG-63, our version of English RG-65. We make the dataset available for public use. 1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Similarity Dataset", |
| "sec_num": "2" |
| }, |
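The agreement computation described above amounts to averaging a rank (or linear) correlation over every pair of annotators. A minimal pure-Python sketch; the function names and toy ratings are ours, not the paper's:

```python
from itertools import combinations

def pearson(x, y):
    """Pearson correlation of two equal-length rating lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(x):
    """Average ranks; tied values share the mean of their positions."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # 1-based average rank of the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    return pearson(ranks(x), ranks(y))

def mean_pairwise(annotations, corr):
    """Average correlation over all annotator pairs."""
    pairs = list(combinations(annotations, 2))
    return sum(corr(a, b) for a, b in pairs) / len(pairs)

# Hypothetical ratings from three annotators over four word pairs.
ratings = [
    [4.0, 3.5, 1.0, 0.0],
    [4.0, 3.0, 1.5, 0.5],
    [3.5, 3.0, 0.5, 0.0],
]
print(round(mean_pairwise(ratings, spearman), 3))  # 1.0
```

Running the same helper with `pearson` instead of `spearman` gives the linear-agreement figure.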
| { |
| "text": "3 Comparing SVD, PPMI and SGNS Baroni et al. (2014) 's paper compared word embedding models or 'context-predicting models' like SGNS with 'context-counting' models like SVD on various lexical semantic benchmarks. Their results showed the superiority of context-predicting models. In a follow-up to this result, Levy et al. (2015) demonstrated that suitably augmented and tuned PMI (Pointwise Mutual Information) and SVD (Singular Value Decomposition) i.e contextcounting models can perform at par with word embedding models. In fact, insights from word embeddings can be used to augment count-based models, resulting in only very small differences in performance.", |
| "cite_spans": [ |
| { |
| "start": 31, |
| "end": 51, |
| "text": "Baroni et al. (2014)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 311, |
| "end": 329, |
| "text": "Levy et al. (2015)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Similarity Dataset", |
| "sec_num": "2" |
| }, |
| { |
| "text": "These system design changes formulated as a set of transferable hyperparameters in Levy et al. (2015) were applied either at the pre-processing or post-processing stage, modifying the word vectors generated from these methods. The following section expands upon these hyperparameters", |
| "cite_spans": [ |
| { |
| "start": 83, |
| "end": 101, |
| "text": "Levy et al. (2015)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Similarity Dataset", |
| "sec_num": "2" |
| }, |
| { |
| "text": "A major contribution of the Levy et al. 2015study was the formulation of hyperparameters for contextcounting models that are inspired by context predicting models. Such adaptations are feasible due to an overlap between the mathematical objectives of the two, which improve the performance of the traditional methods. The authors used a large English corpus with 1.5 billion tokens for their experiments and inferred that the differences between the two families of models are trivial.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hyperparameters", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "All three methods, viz. SGNS, SVD and PPMI output the word vector representation. Following the same nomenclature for hyperparameters as Levy et al. (2015) , we summarize the hyperparameters used in our experiments in Table 1 .", |
| "cite_spans": [ |
| { |
| "start": 137, |
| "end": 155, |
| "text": "Levy et al. (2015)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 218, |
| "end": 225, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Hyperparameters", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "For context-predicting models, cds is the smoothing factor to which the context count is raised in the unigram distribution for negative sampling. In Levy et al. (2015) cds has been adapted for PPMI and SVD. We examine values for cds= 0.75, otherwise the standard unigram sampling distribution is followed (cds = 1).", |
| "cite_spans": [ |
| { |
| "start": 150, |
| "end": 168, |
| "text": "Levy et al. (2015)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hyperparameters", |
| "sec_num": "3.1" |
| }, |
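Context distribution smoothing is simply an exponent applied to the raw context counts before normalization. A small illustrative sketch; the function name and toy counts are hypothetical:

```python
from collections import Counter

def smoothed_context_dist(context_counts, cds=0.75):
    """Unigram context distribution with context-distribution smoothing.

    Each raw count is raised to the power cds before normalizing;
    cds = 1 recovers the plain unigram distribution, while cds = 0.75
    dampens frequent contexts and boosts the probability of rare ones.
    """
    powered = {w: count ** cds for w, count in context_counts.items()}
    total = sum(powered.values())
    return {w: p / total for w, p in powered.items()}

counts = Counter({"the": 1000, "house": 50, "ornate": 2})
plain = smoothed_context_dist(counts, cds=1.0)
smooth = smoothed_context_dist(counts, cds=0.75)
# The rare context's sampling probability rises under smoothing.
print(plain["ornate"] < smooth["ornate"])  # True
```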
| { |
| "text": "The hyperparameter neg denotes the number of negative samples for context-predicting models. This translates to the amount that the PPMI matrix is shifted for context-counting (PPMI and SVD) models. Eigenvalue weighting (eig) represents the exponent to the eigenvalue matrix in the word vector representation equation, obtained after factorization of the PPMI matrix. The values eig = 0 and 0.5 lead to the symmetric versions of SVD, with the prior version completely removing the eigenvalue matrix from the representation. Pennington et al. (2014) introduce the concept of context vector addition (w + c) to the word vector output by the model. Following the same idea, we check whether such an addition at post processing is beneficial for SGNS and SVD methods. The window size (win) is the range in which the context words are chosen on both sides of the analyzed word.", |
| "cite_spans": [ |
| { |
| "start": 524, |
| "end": 548, |
| "text": "Pennington et al. (2014)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hyperparameters", |
| "sec_num": "3.1" |
| }, |
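A compact numpy sketch of how neg, cds and eig enter the count-based pipeline, using our own toy co-occurrence matrix; this mirrors the Levy et al. (2015) formulation as we understand it, not the paper's actual code:

```python
import numpy as np

def shifted_ppmi(counts, neg=15, cds=0.75):
    """PPMI matrix with context-distribution smoothing (cds) and a
    log(neg) shift, mirroring SGNS's negative-sampling hyperparameter."""
    total = counts.sum()
    p_w = counts.sum(axis=1, keepdims=True) / total
    c_pow = counts.sum(axis=0, keepdims=True) ** cds  # smoothed context counts
    p_c = c_pow / c_pow.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log((counts / total) / (p_w * p_c))
    pmi[~np.isfinite(pmi)] = 0.0          # zero co-occurrence -> 0
    return np.maximum(pmi - np.log(neg), 0.0)

def svd_embeddings(matrix, dim=2, eig=0.0):
    """Truncated SVD with eigenvalue weighting: eig = 0 drops the
    singular-value matrix entirely, eig = 0.5 is the symmetric
    weighting, and eig = 1 is the classical factorization."""
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    weights = s[:dim] ** eig
    w = u[:, :dim] * weights   # word vectors
    c = vt[:dim].T * weights   # context vectors
    return w, c                # w + c gives the combined representation

# Toy 3-word x 3-context co-occurrence counts (hypothetical).
counts = np.array([[10.0, 0.0, 3.0],
                   [0.0, 8.0, 1.0],
                   [2.0, 1.0, 6.0]])
m = shifted_ppmi(counts, neg=2)
w, c = svd_embeddings(m, dim=2, eig=0.0)
print(w.shape, c.shape)  # (3, 2) (3, 2)
```

A larger neg subtracts a larger constant before clipping at zero, sparsifying the matrix, just as more negative samples penalize chance co-occurrences in SGNS.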
| { |
| "text": "The model training was performed using publicly available monolingual corpora. For Hindi we used HindMonoCorp (Bojar et al., 2014) and for the other languages viz. Marathi, Gujarati and Punjabi, IndicCorp (Kakwani et al., 2020) is used. The text is pre-processed by removing punctuation, followed by normalization using the Indic NLP Library 2 . The statistics for each corpora is shown in Table 2 with the vocabulary size calculated after ignoring words appearing less than 100 times. We vary the size of corpus used for training the methods. For the experiments in a low-resource setting, corpus slices are created by randomly sampling a fraction of sentences from the entire corpus (4.3.2). Akhtar et al. (2017) . We further evaluate the Hindi models on our very own Hin-RG65. The average Spearman correlation between the vector cosine similarity and the human rating of the word-pairs is used to rank the word representations.", |
| "cite_spans": [ |
| { |
| "start": 110, |
| "end": 130, |
| "text": "(Bojar et al., 2014)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 205, |
| "end": 227, |
| "text": "(Kakwani et al., 2020)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 694, |
| "end": 714, |
| "text": "Akhtar et al. (2017)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 390, |
| "end": 397, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Datasets", |
| "sec_num": "4.1" |
| }, |
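The ranking evaluation described above correlates model cosine similarities with human similarity ratings. A small sketch with hypothetical vectors and word pairs; the simple rank transform below ignores ties, which is adequate for illustration:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity of two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def rank(values):
    # Simple rank transform (ignores ties; a full implementation
    # would average tied ranks).
    order = np.argsort(values)
    r = np.empty(len(values))
    r[order] = np.arange(1, len(values) + 1)
    return r

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    return float(np.corrcoef(rank(np.asarray(x)), rank(np.asarray(y)))[0, 1])

def evaluate(vectors, pairs, gold):
    """Spearman correlation between model cosine similarities and
    human similarity ratings over a list of word pairs."""
    sims = [cosine(vectors[w1], vectors[w2]) for w1, w2 in pairs]
    return spearman(sims, gold)

# Toy vectors whose cosine ordering matches the gold ordering exactly.
vectors = {
    "raja": np.array([1.0, 0.0]),
    "samrat": np.array([0.9, 0.1]),
    "kitab": np.array([0.0, 1.0]),
}
pairs = [("raja", "samrat"), ("raja", "kitab"), ("samrat", "kitab")]
gold = [4.0, 0.5, 1.0]
print(round(evaluate(vectors, pairs, gold), 6))  # 1.0
```

Only the ordering of the similarities matters for the score, which is why Spearman rather than Pearson is the standard metric here.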
| { |
| "text": "We study the impact of different hyperparameters on the performance, by evaluating the models trained on the complete HindMonoCorp with different configrations. A few pre-processing hyperparameters viz. deletion of rare words prior to creation of context window, dynamic context weighting and subsampling were only analyzed in the preliminary stage of experiments. These hyperparameters did not have much impact on the performance, and were not investigated further. We summarize the advantageous configurations of the hyperparameters shown in Table 1 , along with the observed differences from Levy et al. (2015)'s recommendations for English. Levy et al. (2015) advocated the use of cds = 0.75 for all 3 models: SGNS, PPMI and SVD. However, we do not observe a persistent trend for context distribution smoothing (cds). Although PPMI shows slight improvement with cds = 0.75, SVD and SGNS do not show any preferences.", |
| "cite_spans": [ |
| { |
| "start": 645, |
| "end": 663, |
| "text": "Levy et al. (2015)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 544, |
| "end": 551, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Hyperparameter tuning", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "For English it was observed that SVD performed better with a shorter window (win = 2), whereas SGNS did not show any preferences for win. In contrast, we observe a tendency of both the methods towards a larger window size (win = 5). Such a trend may be attributed to the difference in linguistic properties and morphology of the two languages. PPMI however performed best with win = 2. Levy et al. (2015) 's results for English, show that a value of neg as 5 or 15 was equally beneficial for SGNS. For our work on Hindi, it showed a clear preference for neg = 15. Any value below this was not beneficial. We think this may be due to the relatively higher vocabulary-to-token ratio of the Hindi training corpus (as compared to English). Similarly, in the case of the context vector w + c, Levy et al. (2015) are equivocal about its impact, but we found that addition of a context vector (w + c) always yielded an improvement in performance of SGNS.", |
| "cite_spans": [ |
| { |
| "start": 386, |
| "end": 404, |
| "text": "Levy et al. (2015)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 788, |
| "end": 806, |
| "text": "Levy et al. (2015)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hyperparameter tuning", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In concordance with Levy et al. 2015, we observe substantial gains for SVD when the eigenvalue matrix is removed from the word vector equation (i.e. eig = 0), over eig = 1 (the default value) or 0.5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hyperparameter tuning", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "After filtering out the top performing set of hyperparameters on the Hindi corpus, we validate whether our inferences hold on the other three inspected Indo-Aryan languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hyperparameter tuning", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "For all four languages, we trained a 500dimensional representation for all the models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training and Evaluation", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "We investigate the performance gains when training with optimal configuration. With this aim, we evaluate our techniques trained on the complete corpus. Finally, in order to analyse the trends for low-resource scenarios, we report the performance of the fine-tuned models when trained with varying corpus sizes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training and Evaluation", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "It should be noted that the hyperparameter search is independently carried out on the evaluation set for Hindi. This gives us an upper limit on the method's performance and highlights the importance of suitable hyperparameters. In a real setting however, we would require a dedicated development set for tuning the models. For Gujarati, Marathi and Punjabi, we are not tuning the hyper- Table 4 : Performance (Spearman correlation) of models trained on the complete monolingual corpus with hyperparameters as recommended in (Levy et al., 2015) parameters over the evaluation set and are simply adapting to the recommended configurations obtained from Hindi.", |
| "cite_spans": [ |
| { |
| "start": 524, |
| "end": 543, |
| "text": "(Levy et al., 2015)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 387, |
| "end": 394, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Training and Evaluation", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "We train the investigated distributional semantic methods on the complete corpora for each language. Table 3 and Table 4 report the performance of models trained with default hyperparameters and configuration recommended for English respectively. On comparing the performance with models trained on optimal hyperparameters for Hindi, we see a superior performance for each model across all the Indo-Aryan languages (Table 5 ). This result demonstrates that hyperparameter configurations can be adapted well for closely-related languages. We further compare our fine-tuned models with two pre-trained word embedding models, namely fastText (FT-WC), trained on Common Crawl and Wikipedia (Grave et al., 2018) and IndicFastText (I-FT) (Kakwani et al., 2020) . We note that for Hindi, IndicFastText is trained on a larger dataset than the previously released fastText. Table 5 shows the evaluation results for all five models. Out of the three studied models, SVD consistently shows superior performance over all four languages. On comparison with FastText and IndicFastText, our modified SVD either outperforms both or is on par. For Hindi, since the tuning is exhaustive we observe the three tuned models to achieve a higher score than the FastText models on both Hin-WS235 and Hin-RG63. Here, it should also be noted that the training corpus used to achieve this score was almost half the size of the one used by IndicFast-Text, further supporting our rationale of fine-tuning. Our SVD model with adapted hyperparameters for the other Indo-Aryan languages performs on par with the fastText models, even outperforms Indic-FastText for Punjabi. We also confirmed that the current dimensionality settings did not affect our results for SGNS, as experiments with lower dimensionalities of 200 and 300 showed negligible gains for SGNS.", |
| "cite_spans": [ |
| { |
| "start": 686, |
| "end": 706, |
| "text": "(Grave et al., 2018)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 732, |
| "end": 754, |
| "text": "(Kakwani et al., 2020)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 101, |
| "end": 108, |
| "text": "Table 3", |
| "ref_id": "TABREF5" |
| }, |
| { |
| "start": 113, |
| "end": 120, |
| "text": "Table 4", |
| "ref_id": null |
| }, |
| { |
| "start": 415, |
| "end": 423, |
| "text": "(Table 5", |
| "ref_id": "TABREF7" |
| }, |
| { |
| "start": 865, |
| "end": 872, |
| "text": "Table 5", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Full Corpus", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "We analyze the performance of all three models with varying data sizes for each language. We would like to experiment with other Indo-Aryan languages which are truly low-resource, but evaluation datasets only exist for a fraction of available languages. Hence, we decided to experiment with the same languages using different sizes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Low-resource setting", |
| "sec_num": "4.3.2" |
| }, |
| { |
| "text": "The best correlation score of each method after training on different slices of corpora is shown in Figure 1 . We infer from the graphs that SVD is quite robust with respect to data size and even with a fraction of data i.e. in case of low-resource scenario, does not show a considerable dip in performance. For Punjabi, there is a substantial gain in performance when using SVD across all data sizes. On the other hand, SGNS and SVD are almost similar for Gujarati, and SGNS may improve with more data.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 100, |
| "end": 108, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Low-resource setting", |
| "sec_num": "4.3.2" |
| }, |
| { |
| "text": "For Hindi, careful tuning of SVD and PPMI even on a fifth of the complete corpus attains a performance on par with FastText and IndicFastText.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Low-resource setting", |
| "sec_num": "4.3.2" |
| }, |
| { |
| "text": "Our work shows that careful hyperparameter tuning can go a long way in improving the performance of distributed semantic models. Interestingly, we find that the general recommendations in Levy et al. (2015) for hyperparameter settings work well only for English word representations. For Hindi, some of their recommendations hold, whereas others do not. Moreover, the hyperparameter settings for Hindi carry over well to related languages and there is performance improvement compared to the default or English-specific settings. This seems to suggest that language specific differences are playing a role with respect to hyperparameter settings.", |
| "cite_spans": [ |
| { |
| "start": 188, |
| "end": 206, |
| "text": "Levy et al. (2015)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Summary", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Perhaps the most interesting result is that modified SVD is more robust than SGNS, both across languages and data sizes. This contrasts with the idea that the differences between the architectures of particular distributional semantic models are trivial so long as they are trained in a similar fashion (Levy et al., 2015) .", |
| "cite_spans": [ |
| { |
| "start": 303, |
| "end": 322, |
| "text": "(Levy et al., 2015)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Summary", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We also note that the problem of building word embeddings for low-resource languages has been addressed using cross-lingual representations (Ruder et al., 2019) . These rely on alignments between low-resource and better-resourced languages. Techniques such as cognate detection have been used to improve these alignments (Sharoff, 2020) . Newer contextualized word embedding models can make these alignments internally, allowing for cross lingual transfer (Conneau et al., 2020) . In this paper we have chosen to focus on monolingual word embeddings and leave the exploration of cross-lingual representations for future work.", |
| "cite_spans": [ |
| { |
| "start": 140, |
| "end": 160, |
| "text": "(Ruder et al., 2019)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 321, |
| "end": 336, |
| "text": "(Sharoff, 2020)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 456, |
| "end": 478, |
| "text": "(Conneau et al., 2020)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Summary", |
| "sec_num": "5" |
| }, |
| { |
| "text": "https://github.com/ashwinivd/ similarity_hindi", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://pypi.org/project/ indic-nlp-library/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "The authors acknowledge the Summer Fellowship Research programme (SFRP-2020) of CEP, IIT Delhi, which enabled Kushagra Bhatia and Divyanshu Aggarwal to pursue research in IIT Delhi.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Unsupervised morphological expansion of small datasets for improving word embeddings", |
| "authors": [ |
| { |
| "first": "Arihant", |
| "middle": [], |
| "last": "Syed Sarfaraz Akhtar", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Gupta", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 18th International Conference on Computational Linguistics and Intelligent Text Processing (CICLING)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Syed Sarfaraz Akhtar, Arihant Gupta, Avijit Vaj- payee, Arjit Srivastava, and Manish Shrivastava. 2017. Unsupervised morphological expansion of small datasets for improving word embeddings. In Proceedings of the 18th International Conference on Computational Linguistics and Intelligent Text Pro- cessing (CICLING).", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Don't count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors", |
| "authors": [ |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| }, |
| { |
| "first": "Georgiana", |
| "middle": [], |
| "last": "Dinu", |
| "suffix": "" |
| }, |
| { |
| "first": "Germ\u00e1n", |
| "middle": [], |
| "last": "Kruszewski", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "238--247", |
| "other_ids": { |
| "DOI": [ |
| "10.3115/v1/P14-1023" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marco Baroni, Georgiana Dinu, and Germ\u00e1n Kruszewski. 2014. Don't count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 238-247, Baltimore, Maryland. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Hindencorp-hindi-english and hindionly corpus for machine translation", |
| "authors": [ |
| { |
| "first": "Ondrej", |
| "middle": [], |
| "last": "Bojar", |
| "suffix": "" |
| }, |
| { |
| "first": "Vojtech", |
| "middle": [], |
| "last": "Diatka", |
| "suffix": "" |
| }, |
| { |
| "first": "Pavel", |
| "middle": [], |
| "last": "Rychl\u1ef3", |
| "suffix": "" |
| }, |
| { |
| "first": "Pavel", |
| "middle": [], |
| "last": "Stran\u00e1k", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "LREC", |
| "volume": "", |
| "issue": "", |
| "pages": "3550--3555", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ondrej Bojar, Vojtech Diatka, Pavel Rychl\u1ef3, Pavel Stran\u00e1k, V\u00edt Suchomel, Ales Tamchyna, and Daniel Zeman. 2014. Hindencorp-hindi-english and hindi- only corpus for machine translation. In LREC, pages 3550-3555.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "A framework for the construction of monolingual and cross-lingual word similarity datasets", |
| "authors": [ |
| { |
| "first": "Jos\u00e9", |
| "middle": [], |
| "last": "Camacho-Collados", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohammad", |
| "middle": [], |
| "last": "Taher Pilehvar", |
| "suffix": "" |
| }, |
| { |
| "first": "Roberto", |
| "middle": [], |
| "last": "Navigli", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing", |
| "volume": "2", |
| "issue": "", |
| "pages": "1--7", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jos\u00e9 Camacho-Collados, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. A framework for the construction of monolingual and cross-lingual word similarity datasets. In Proceedings of the 53rd an- nual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 2: Short pa- pers), pages 1-7.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Emerging crosslingual structure in pretrained language models", |
| "authors": [ |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "Conneau", |
| "suffix": "" |
| }, |
| { |
| "first": "Shijie", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Haoran", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Veselin", |
| "middle": [], |
| "last": "Stoyanov", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "6022--6034", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/2020.acl-main.536" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexis Conneau, Shijie Wu, Haoran Li, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Emerging cross- lingual structure in pretrained language models. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 6022- 6034, Online. Association for Computational Lin- guistics.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Placing search in context: The concept revisited", |
| "authors": [ |
| { |
| "first": "Lev", |
| "middle": [], |
| "last": "Finkelstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Evgeniy", |
| "middle": [], |
| "last": "Gabrilovich", |
| "suffix": "" |
| }, |
| { |
| "first": "Yossi", |
| "middle": [], |
| "last": "Matias", |
| "suffix": "" |
| }, |
| { |
| "first": "Ehud", |
| "middle": [], |
| "last": "Rivlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Zach", |
| "middle": [], |
| "last": "Solan", |
| "suffix": "" |
| }, |
| { |
| "first": "Gadi", |
| "middle": [], |
| "last": "Wolfman", |
| "suffix": "" |
| }, |
| { |
| "first": "Eytan", |
| "middle": [], |
| "last": "Ruppin", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the 10th international conference on World Wide Web", |
| "volume": "", |
| "issue": "", |
| "pages": "406--414", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Ey- tan Ruppin. 2001. Placing search in context: The concept revisited. In Proceedings of the 10th inter- national conference on World Wide Web, pages 406- 414.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Learning word vectors for 157 languages", |
| "authors": [ |
| { |
| "first": "Edouard", |
| "middle": [], |
| "last": "Grave", |
| "suffix": "" |
| }, |
| { |
| "first": "Piotr", |
| "middle": [], |
| "last": "Bojanowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Prakhar", |
| "middle": [], |
| "last": "Gupta", |
| "suffix": "" |
| }, |
| { |
| "first": "Armand", |
| "middle": [], |
| "last": "Joulin", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Ar- mand Joulin, and Tomas Mikolov. 2018. Learning word vectors for 157 languages. In Proceedings of the International Conference on Language Re- sources and Evaluation (LREC 2018).", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Simlex-999: Evaluating semantic models with (genuine) similarity estimation", |
| "authors": [ |
| { |
| "first": "Felix", |
| "middle": [], |
| "last": "Hill", |
| "suffix": "" |
| }, |
| { |
| "first": "Roi", |
| "middle": [], |
| "last": "Reichart", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Computational Linguistics", |
| "volume": "41", |
| "issue": "4", |
| "pages": "665--695", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2015. Simlex-999: Evaluating semantic models with (gen- uine) similarity estimation. Computational Linguis- tics, 41(4):665-695.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "The state and fate of linguistic diversity and inclusion in the nlp world", |
| "authors": [ |
| { |
| "first": "Pratik", |
| "middle": [], |
| "last": "Joshi", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastin", |
| "middle": [], |
| "last": "Santy", |
| "suffix": "" |
| }, |
| { |
| "first": "Amar", |
| "middle": [], |
| "last": "Budhiraja", |
| "suffix": "" |
| }, |
| { |
| "first": "Kalika", |
| "middle": [], |
| "last": "Bali", |
| "suffix": "" |
| }, |
| { |
| "first": "Monojit", |
| "middle": [], |
| "last": "Choudhury", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the nlp world. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Semeval-2014 task 3: Crosslevel semantic similarity", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Jurgens", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohammad", |
| "middle": [], |
| "last": "Taher Pilehvar", |
| "suffix": "" |
| }, |
| { |
| "first": "Roberto", |
| "middle": [], |
| "last": "Navigli", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "SemEval@ COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "17--26", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Jurgens, Mohammad Taher Pilehvar, and Roberto Navigli. 2014. Semeval-2014 task 3: Cross- level semantic similarity. In SemEval@ COLING, pages 17-26.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages", |
| "authors": [ |
| { |
| "first": "Divyanshu", |
| "middle": [], |
| "last": "Kakwani", |
| "suffix": "" |
| }, |
| { |
| "first": "Anoop", |
| "middle": [], |
| "last": "Kunchukuttan", |
| "suffix": "" |
| }, |
| { |
| "first": "Satish", |
| "middle": [], |
| "last": "Golla", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [ |
| "C" |
| ], |
| "last": "Gokul", |
| "suffix": "" |
| }, |
| { |
| "first": "Avik", |
| "middle": [], |
| "last": "Bhattacharyya", |
| "suffix": "" |
| }, |
| { |
| "first": "Mitesh", |
| "middle": [ |
| "M" |
| ], |
| "last": "Khapra", |
| "suffix": "" |
| }, |
| { |
| "first": "Pratyush", |
| "middle": [], |
| "last": "Kumar", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Findings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Divyanshu Kakwani, Anoop Kunchukuttan, Satish Golla, Gokul N.C., Avik Bhattacharyya, Mitesh M. Khapra, and Pratyush Kumar. 2020. IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for In- dian Languages. In Findings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Improving distributional similarity with lessons learned from word embeddings", |
| "authors": [ |
| { |
| "first": "Omer", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Ido", |
| "middle": [], |
| "last": "Dagan", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "3", |
| "issue": "", |
| "pages": "211--225", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Im- proving distributional similarity with lessons learned from word embeddings. Transactions of the Associ- ation for Computational Linguistics, 3:211-225.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Glove: Global vectors for word representation", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Pennington", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher D", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "1532--1543", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Contextual correlates of synonymy", |
| "authors": [ |
| { |
| "first": "Herbert", |
| "middle": [], |
| "last": "Rubenstein", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [ |
| "B" |
| ], |
| "last": "Goodenough", |
| "suffix": "" |
| } |
| ], |
| "year": 1965, |
| "venue": "Communications of the ACM", |
| "volume": "8", |
| "issue": "10", |
| "pages": "627--633", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Herbert Rubenstein and John B Goodenough. 1965. Contextual correlates of synonymy. Communica- tions of the ACM, 8(10):627-633.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "A survey of cross-lingual word embedding models", |
| "authors": [ |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Ruder", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Vuli\u0107", |
| "suffix": "" |
| }, |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "S\u00f8gaard", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "J. Artif. Int. Res", |
| "volume": "65", |
| "issue": "1", |
| "pages": "569--630", |
| "other_ids": { |
| "DOI": [ |
| "10.1613/jair.1.11640" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sebastian Ruder, Ivan Vuli\u0107, and Anders S\u00f8gaard. 2019. A survey of cross-lingual word embedding models. J. Artif. Int. Res., 65(1):569-630.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "The effects of data size and frequency range on distributional semantic models", |
| "authors": [ |
| { |
| "first": "Magnus", |
| "middle": [], |
| "last": "Sahlgren", |
| "suffix": "" |
| }, |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Lenci", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "975--980", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D16-1099" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Magnus Sahlgren and Alessandro Lenci. 2016. The ef- fects of data size and frequency range on distribu- tional semantic models. In Proceedings of the 2016 Conference on Empirical Methods in Natural Lan- guage Processing, pages 975-980, Austin, Texas. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Finding next of kin: Crosslingual embedding spaces for related languages", |
| "authors": [ |
| { |
| "first": "Serge", |
| "middle": [], |
| "last": "Sharoff", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Natural Language Engineering", |
| "volume": "26", |
| "issue": "2", |
| "pages": "163--182", |
| "other_ids": { |
| "DOI": [ |
| "10.1017/S1351324919000354" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Serge Sharoff. 2020. Finding next of kin: Cross- lingual embedding spaces for related languages. Natural Language Engineering, 26(2):163-182.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "text": "Performance of PPMI, SVD and SGNS on WS-235 for Hindi,Gujarati, Punjabi and Marathi. Performance for each model is reported for varying sizes of the corpus.", |
| "type_str": "figure", |
| "uris": null |
| }, |
| "TABREF1": { |
| "content": "<table/>", |
| "html": null, |
| "num": null, |
| "text": "Hyperparameter search space studied in our work. \u2020 Default settings. The hyperparameter w + c is used for SVD and SGNS, not PPMI. eig is only used for SVD.", |
| "type_str": "table" |
| }, |
| "TABREF3": { |
| "content": "<table><tr><td>Values in million (M)</td></tr></table>", |
| "html": null, |
| "num": null, |
| "text": "Statistics of corpora used for model training.", |
| "type_str": "table" |
| }, |
| "TABREF5": { |
| "content": "<table><tr><td colspan=\"2\">Language PPMI SVD SGNS</td></tr><tr><td>WS-235</td><td/></tr><tr><td>Hindi</td><td>0.566 0.609 0.550</td></tr><tr><td>Gujarati</td><td>0.343 0.483 0.461</td></tr><tr><td>Punjabi</td><td>0.259 0.342 0.210</td></tr><tr><td>Marathi</td><td>0.361 0.421 0.369</td></tr><tr><td>Hin-RG63</td><td/></tr><tr><td>Hindi</td><td>0.703 0.704 0.548</td></tr></table>", |
| "html": null, |
| "num": null, |
| "text": "Performance (Spearman correlation) of models trained on the complete monolingual corpus with default hyperparameters.", |
| "type_str": "table" |
| }, |
| "TABREF7": { |
| "content": "<table><tr><td>: Performance (Spearman correlation) of mod-</td></tr><tr><td>els trained on the complete monolingual corpus with</td></tr><tr><td>optimal hyperparameters for Hindi. FT-WC is fastText</td></tr><tr><td>trained on Wikipedia and Common Crawl; I-FT is In-</td></tr><tr><td>dicFastText.</td></tr></table>", |
| "html": null, |
| "num": null, |
| "text": "", |
| "type_str": "table" |
| } |
| } |
| } |
| } |