{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:35:32.274366Z"
},
"title": "Annotation Efficient Language Identification from Weak Labels",
"authors": [
{
"first": "Shriphani",
"middle": [],
"last": "Palakodety",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ashiqur",
"middle": [
"R"
],
"last": "Khudabukhsh",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "India is home to several languages with more than 30m speakers. These languages exhibit significant presence on social media platforms. However, several of these widely-used languages are under-addressed by current Natural Language Processing (NLP) models and resources. User generated social media content in these languages is also typically authored in the Roman script as opposed to the traditional native script further contributing to resource scarcity. In this paper, we leverage a minimally supervised NLP technique to obtain weak language labels from a large-scale Indian social media corpus leading to a robust and annotation-efficient languageidentification technique spanning nine Romanized Indian languages. In fast-spreading pandemic situations such as the current COVID-19 situation, information processing objectives might be heavily tilted towards under-served languages in densely populated regions. We release our models to facilitate downstream analyses in these low-resource languages 1. Experiments across multiple social media corpora demonstrate the model's robustness and provide several interesting insights on Indian language usage patterns on social media. We release an annotated data set of 1,000 comments in ten Romanized languages as a social media evaluation benchmark 1 .",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "India is home to several languages with more than 30m speakers. These languages exhibit significant presence on social media platforms. However, several of these widely-used languages are under-addressed by current Natural Language Processing (NLP) models and resources. User generated social media content in these languages is also typically authored in the Roman script as opposed to the traditional native script further contributing to resource scarcity. In this paper, we leverage a minimally supervised NLP technique to obtain weak language labels from a large-scale Indian social media corpus leading to a robust and annotation-efficient languageidentification technique spanning nine Romanized Indian languages. In fast-spreading pandemic situations such as the current COVID-19 situation, information processing objectives might be heavily tilted towards under-served languages in densely populated regions. We release our models to facilitate downstream analyses in these low-resource languages 1. Experiments across multiple social media corpora demonstrate the model's robustness and provide several interesting insights on Indian language usage patterns on social media. We release an annotated data set of 1,000 comments in ten Romanized languages as a social media evaluation benchmark 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Much of the current NLP research focuses on a handful of world languages (e.g., English, French, Spanish etc.). They enjoy substantially larger computational linguistic resources as compared to their low-resource counterparts (e.g., Bengali, Odia etc.). However, in the midst of global-scale events like the ongoing COVID-19 pandemic, demand for linguistic resources might get recalibrated; information processing objectives might be heavily tilted towards under-served languages that are prevalent in many densely populated regions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we focus on language identification in noisy, social media settings -a basic and highly critical linguistic resource prerequisite for downstream analysis in a multilingual environment. Our solution extends support for nine major Indian languages (see , Table 1 ) spanning the native tongues of 85% of India's population (Census, 2011) . These under-resourced languages are heavily used in several densely populated travel hubs and on social media. User generated web content in these languages is typically authored in the Roman script as opposed to the traditional native script leading to scarcer linguistic resources (Virga and Khudanpur, 2003; Choudhury et al., 2010; Barman et al., 2014; Palakodety et al., 2020a) . Existing large-scale language identification tools prioritize the languages' native scripts (e.g., (FastText; Google)) over the Romanized variants. Our solution focuses on the these Romanized variants and is integrated with a widely used existing languageidentification system (FastText) supporting 355 languages. We release our open-source language identification system 1 . to facilitate Indian social media analysis.",
"cite_spans": [
{
"start": 335,
"end": 349,
"text": "(Census, 2011)",
"ref_id": null
},
{
"start": 635,
"end": 662,
"text": "(Virga and Khudanpur, 2003;",
"ref_id": "BIBREF37"
},
{
"start": 663,
"end": 686,
"text": "Choudhury et al., 2010;",
"ref_id": "BIBREF7"
},
{
"start": 687,
"end": 707,
"text": "Barman et al., 2014;",
"ref_id": "BIBREF0"
},
{
"start": 708,
"end": 733,
"text": "Palakodety et al., 2020a)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 266,
"end": 275,
"text": ", Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Annotator availability is a major concern that may constrain data acquisition efforts in low resource settings (Joshi et al., 2019) . Our proposed solution is extremely annotation efficient; it utilizes a recent result (Palakodety et al., 2020a) to automatically group a multilingual corpus into largely monolingual clusters that can be extracted with minimal supervision. Using a mere 260 annotated short documents (YouTube video comments), we assign weak labels to a data set of 2.8 million comments spanning the aforementioned languages. Our model performs favorably when compared against an existing commercial solution.",
"cite_spans": [
{
"start": 111,
"end": 131,
"text": "(Joshi et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 219,
"end": 245,
"text": "(Palakodety et al., 2020a)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While census data and surveys can provide useful information about linguistic diversity and spread, analyses of user-generated multi-lingual corpora can complement these surveys with additional useful insights of their own. We conduct a focused analysis to explore if the (estimated) distribution of web-usage of Hindi across different Indian states aligns with common knowledge. Our analysis indicates that Hindi's web-presence is considerably higher in a cluster of North Indian states referred to as the Hindi belt (Jaffrelot, 2000) as compared to the South Indian states. We further analyze similar research questions concerning the relative usage of the Roman script and the native script for Hindi. We finally conclude with a small exploratory study on our method's effectiveness in detecting languages with trace presence in multiple corpora and outline some of the possible utilities. Contributions: Our main contributions of the paper are the following:",
"cite_spans": [
{
"start": 518,
"end": 535,
"text": "(Jaffrelot, 2000)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Resource: We release an important linguistic resource to detect nine heavily-spoken Indic languages expressed in Roman script.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Method: We propose an annotation efficient method to construct this language identifier and demonstrate extensibility.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Linguistic: We conduct a web-scale analysis of Hindi usage shedding light on multilinguality, geographic spread, and usage patterns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Social: We outline how our tool can detect trace presence of other languages that can aid in constructing data sets for humanitarian challenges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to construct our language identification system, we would require a web-scale Indian social media data set that (i) has considerable presence of the nine languages we are interested in, and (ii) captures a representative fraction of the Indian web users. To achieve this two-fold goal, we consider a data set introduced in Palakodety et al. (2020b) to analyze the 2019 Indian General Election. The data set consists of comments on YouTube videos hosted by popular news outlets in India. Overall, the corpus consists of 6,182,868 comments on 130,067 videos by 1,518,077 users posted in a 100 day period leading up to the 2019 Indian General Election.",
"cite_spans": [
{
"start": 332,
"end": 357,
"text": "Palakodety et al. (2020b)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set: YouTube Video Comments",
"sec_num": "2"
},
{
"text": "Why YouTube? As of January 2020, YouTube is the second-most popular social media platform in the world drawing 2 billion active users (Statista, 2020) . YouTube is the most popular social media platform in India with 265 million monthly active users (225 million on mobile), accounting for 80% of the population with internet access (Hindustan-Times, 2019; YourStory, 2018) . YouTube video comments have been used as data sources to analyze recent important events (Palakodety et al., 2020a,c; Sarkar et al., 2020; Cinelli et al., 2020) . Why this data set? The data set considers two highly popular YouTube news channels for each of the 12 Indian states that contribute 20 or more seats in the lower house of the parliament. State boundaries in India were drawn along linguistic lines (Dewen, 2010) . The dominant regional language in the Hindi belt (Jaffrelot, 2000) is Hindi, and the other states feature a unique dominant language written in either the Latin alphabet (in informal settings) or a native script. All the nine languages we focused on (listed in Table 1 ), are the dominant language in one or more of these 12 states. The regional news networks considered provide coverage in the dominant regional language. Hence, the data set exhibits strong presence of all the nine regional languages we are interested in. In addition to these 24 regional news channels, the data set considers YouTube channels for 14 highly popular national news outlets (listed in the Appendix). Overall, this implies 38 YouTube channels (24 regional, 14 national) with an average subscriber count of 3,338,628.",
"cite_spans": [
{
"start": 134,
"end": 150,
"text": "(Statista, 2020)",
"ref_id": "BIBREF35"
},
{
"start": 333,
"end": 356,
"text": "(Hindustan-Times, 2019;",
"ref_id": null
},
{
"start": 357,
"end": 373,
"text": "YourStory, 2018)",
"ref_id": "BIBREF39"
},
{
"start": 465,
"end": 493,
"text": "(Palakodety et al., 2020a,c;",
"ref_id": null
},
{
"start": 494,
"end": 514,
"text": "Sarkar et al., 2020;",
"ref_id": "BIBREF34"
},
{
"start": 515,
"end": 536,
"text": "Cinelli et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 786,
"end": 799,
"text": "(Dewen, 2010)",
"ref_id": "BIBREF9"
},
{
"start": 851,
"end": 868,
"text": "(Jaffrelot, 2000)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 1063,
"end": 1070,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data Set: YouTube Video Comments",
"sec_num": "2"
},
{
"text": "Learning from weak labels: The role of unlabeled and weakly (or noisily) labeled data in supervised learning is a well-studied problem and has received sustained focus (Mitchell, 2004; Donmez et al., 2010) , and annotation efficiency in low-resource settings is a well-established requirement (Joshi et al., 2019) . Our work leverage\u015d L polyglot (Palakodety et al., 2020a) , a recentlyproposed method for noisy language identification that requires minimal supervision. We utilize it as a dependency to obtain weak labels and reduce annotation burden and construct a substantially more robust system. Language identification: While language identification of well-formed text is a nearly-solved problem, the difficulty in identifying language in a noisy social media setting is well-established (Bergsma et al., 2012; Lui and Baldwin, 2014; Jaech et al., 2016; Jauhiainen et al., 2019) . We see our work as a part of this continuing trend and as an important resource contribution to analyze Indian social media. Romanized Indian Languages: In the context of processing Indian languages expressed on the web, challenges posed by the use of Roman script instead of the native script have been reported in several recent studies in the context of code-mixed English-Bengali (Chanda et al., 2016) , and English-Hindi (Kumar et al., 2018) text. While addressing word level language identification, reported that 90% of posts in Indian languages on Facebook are expressed in Roman script. Prevalence of Romanized Hindi has also been previously reported in (Barman et al., 2014) . Our study takes previous findings one small step forward with a (noisy) geographical analysis of Hindi web usage. Bridging the resource gap:",
"cite_spans": [
{
"start": 168,
"end": 184,
"text": "(Mitchell, 2004;",
"ref_id": "BIBREF28"
},
{
"start": 185,
"end": 205,
"text": "Donmez et al., 2010)",
"ref_id": "BIBREF10"
},
{
"start": 293,
"end": 313,
"text": "(Joshi et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 346,
"end": 372,
"text": "(Palakodety et al., 2020a)",
"ref_id": "BIBREF30"
},
{
"start": 795,
"end": 817,
"text": "(Bergsma et al., 2012;",
"ref_id": "BIBREF3"
},
{
"start": 818,
"end": 840,
"text": "Lui and Baldwin, 2014;",
"ref_id": "BIBREF25"
},
{
"start": 841,
"end": 860,
"text": "Jaech et al., 2016;",
"ref_id": "BIBREF15"
},
{
"start": 861,
"end": 885,
"text": "Jauhiainen et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 1272,
"end": 1293,
"text": "(Chanda et al., 2016)",
"ref_id": "BIBREF6"
},
{
"start": 1314,
"end": 1334,
"text": "(Kumar et al., 2018)",
"ref_id": "BIBREF22"
},
{
"start": 1551,
"end": 1572,
"text": "(Barman et al., 2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "We also see our work as a part of the ongoing effort in bridging the resource gap between Indian languages and world languages (Vyas et al., 2014; Vijayakrishna and Sobha, 2008; Kunchukuttan et al., 2014; Mohanty et al., 2017; Joshi et al., 2020) .",
"cite_spans": [
{
"start": 127,
"end": 146,
"text": "(Vyas et al., 2014;",
"ref_id": "BIBREF38"
},
{
"start": 147,
"end": 177,
"text": "Vijayakrishna and Sobha, 2008;",
"ref_id": "BIBREF36"
},
{
"start": 178,
"end": 204,
"text": "Kunchukuttan et al., 2014;",
"ref_id": "BIBREF23"
},
{
"start": 205,
"end": 226,
"text": "Mohanty et al., 2017;",
"ref_id": "BIBREF29"
},
{
"start": 227,
"end": 246,
"text": "Joshi et al., 2020)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "In this section, we summarize a few key NLP models and results critical to our methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "4"
},
{
"text": "The Skip-gram model takes as input a word w \u2208 W (vocabulary), and predicts words w c \u2208 W that are likely to occur in the context of w. The training objective (predicting an input word's context) is parameterized by real-valued word representations or embeddings (Mikolov et al., 2013) . introduced sub-word extensions to the Skipgram model to learn robust word representations even in the presence of misspellings or spelling variations. Following (Palakodety et al., 2020a) , we normalize and average a document's constituent word embeddings to yield the document embedding. Monolingual cluster discovery: Palakodety et al. (2020a) introduced a minimal supervision language detection method using polyglot Skip-gram embeddings with sub-word information. These embeddings discover monolingual subsets (clusters) in a multilingual corpus which are subsequently retrieved using k-Means and a small sample percluster (10 documents) are annotated. We refer to this method asL polyglot and leverage it for constructing our data set with minimal annotation burden. For obvious reason, we do not compareL polyglot against our supervised solution that supports more than 300 languages. In Section 7.5, we demonstrate that our method detects languages with trace presence in a corpus (< 1%), a known limitation ofL polyglot (Palakodety et al., 2020a) .",
"cite_spans": [
{
"start": 262,
"end": 284,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF27"
},
{
"start": 448,
"end": 474,
"text": "(Palakodety et al., 2020a)",
"ref_id": "BIBREF30"
},
{
"start": 1315,
"end": 1341,
"text": "(Palakodety et al., 2020a)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-gram embeddings:",
"sec_num": null
},
{
"text": "Research question: How to construct an annotation-efficient language identification method supporting a wide array of Indian languages? Our method has two main components: (i) an annotation-efficient procedure to construct a substantial data set with weak labels, (ii) a supervised system trained on a data set comprising this corpus and an existing data set, D tatoeba (Tatoeba, 2020), a well-known annotated data set supporting 355 languages (Tatoeba, 2020). For the construction of this data set, the election corpus (Palakodety et al., 2020b) is stripped of all comments containing any non English character. This maintains the focus on Romanized Indian languages with the native variants sourced from D tatoeba .",
"cite_spans": [
{
"start": 520,
"end": 546,
"text": "(Palakodety et al., 2020b)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "5"
},
{
"text": "Algorithm 1 outlines the steps in obtaining weak labels for nine Indian languages from multiple multilingual corpora and combining them with D_tatoeba. Our training data set is denoted by D = {(d_i, L(d_i))}_{i=1}^{N_1} \u222a {(d_i, L\u0302(d_i))}_{i=1}^{N_2}, where d_i is a document, L(.) returns a label annotated by a human, and L\u0302(.) returns a weak label obtained from L\u0302_polyglot. Algorithm 1: F_weakLabel({D_1, . . . , D_n}). Initialization: D \u2190 D_tatoeba. foreach D_i \u2208 {D_1, . . . , D_n} do: run L\u0302_polyglot on D_i; obtain clusters C_1, . . . , C_K using k-means such that |C_1| \u2265 |C_2| \u2265 . . . \u2265 |C_K|; identify J \u2264 K dominant clusters; for (j = 1; j \u2264 J; j = j + 1) do: assign a language to C_j (denoted by L(C_j), supplied by the annotator); sample \u03b3|C_j| comments from C_j ranked by proximity to the cluster center, 0 < \u03b3 \u2264 1; add the sampled comments to D with weak label L\u0302(C_j); end; end. Output: return D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assigning Weak Labels",
"sec_num": "5.1"
},
{
"text": "D is initialized with an annotated corpus, D tatoeba (Tatoeba, 2020), i.e., N 1 is the total number of samples present in D tatoeba . Next, usingL polyglot , our method obtains weak labels for N 2 documents from the election corpus and adds to D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assigning Weak Labels",
"sec_num": "5.1"
},
{
"text": "Our method, F_weakLabel(.), takes an array of n multilingual corpora as input and runs L\u0302_polyglot on each of them to obtain K language clusters, C_1, . . . , C_K, such that |C_1| \u2265 |C_2| \u2265 . . . \u2265 |C_K|. The J largest of these clusters are selected and annotators assign a language L(C_j), 1 \u2264 j \u2264 J. For a given cluster C_j, we obtain a set of pairs (d, L\u0302(d)) where d \u2208 C_j and L\u0302(d) = L(C_j), i.e., the weak label of the document is the cluster's language label. For each of these J clusters, the top \u03b3 fraction is chosen for inclusion into D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assigning Weak Labels",
"sec_num": "5.1"
},
{
"text": "To summarize, for each multilingual corpus, L polyglot is used to obtain the top monolingual clusters, and a fraction of those are included with the cluster language label into the data set. Each cluster's language label is assigned by labeling 10 documents in the cluster and thus the vast majority of samples added to the data set is neither manually inspected nor labeled.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assigning Weak Labels",
"sec_num": "5.1"
},
{
"text": "Recall that, each of the regional news outlets we considered presents news in one of the dominant regional languages. We group all comments obtained from the news outlets of one particular state as one distinct corpus -i.e. each D i consists of comments posted in response to videos from a news outlet from a particular state. The choice of treating each individual state's corpus separately contributes further to annotation efficiency -know-ing that a corpus is sourced from a region where a certain language is dominant allows us to select the appropriate annotators and reduces the annotation cost per document. We considered comments obtained from the 14 national outlets as a separate corpus. This led to 13 multilingual corpora (12 regional and 1 national). Hence, in our experiments, n, denoting the total number of corpora in Algorithm 1, was set to 13.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assigning Weak Labels",
"sec_num": "5.1"
},
{
"text": "Parameter configuration: Our Algorithm has two configurable parameters: (1) j, the number of clusters selected per corpus for inclusion in the final data set, and (2) \u03b3, the fraction of documents per cluster chosen for inclusion. We set j to 2. The choice of j was guided by the intuition that English is widely spoken in India and each state would have at least one dominant regional language. Our choice of \u03b3 was guided by an in-depth analysis of L polyglot in the context of code switching (Khud-aBukhsh et al., 2020). The study revealed that documents closest to the cluster centers exhibit strong monolinguality and those farther from the centers can exhibit code-switching or may even be authored in languages with trace presence. In order to obtain high quality weak labels, we set \u03b3 to 0.75.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assigning Weak Labels",
"sec_num": "5.1"
},
{
"text": "Annotation Efficiency:L polyglot requires 10 annotated samples to assign a language label to a cluster (Palakodety et al., 2020a) . Our method requires 10jn annotated samples. Hence, our method required 10\u00d72\u00d713 = 260 annotated comments to construct a corpus of 2,793,375 comments supporting nine Indian languages. This is combined with the D tatoeba to yield D consisting of 11,042,839 documents. ",
"cite_spans": [
{
"start": 103,
"end": 129,
"text": "(Palakodety et al., 2020a)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Assigning Weak Labels",
"sec_num": "5.1"
},
{
"text": "Once D is obtained, we train a classifier that takes as input a document, and predicts the language label. We provide an end-to-end model operating directly on the text and producing a language label. The model utilizes a highly efficient text classification framework introduced in . The framework introduces a variety of optimizations and is capable of classifying billions of documents in minutes without compromising on accuracy (implementation details are presented in the Appendix). We refer to this model as F end-to-end . The model achieves comparable performance (test accuracy > 98%) against a held out set that is not seen during any of the training phases. 0 0 2 0 0 0 0 0 0 1 en 0 99 0 1 0 0 0 0 0 0 0 gu 0 0 92 4 0 0 4 0 0 0 0 hi 0 0 0 99 0 0 1 0 0 0 0 kn 1 1 2 0 86 5 0 0 0 3 2 True Label ml 0 0 0 0 1 95 1 0 2 1 0 mr 0 0 1 0 0 0 99 0 0 0 0 or 20 0 13 18 2 3 16 0 0 4 not support romanized Indic languages (achieves an overall accuracy 10% on our test set). We see our paper as a resource paper that makes a small step forward in addressing the lack of linguistic resources for Indian social media analysis. Fairness: We first emphasize that the main purpose of comparing against GoogleLangID is not to claim that our annotation-efficient solution is superior to GoogleLangID across the board.",
"cite_spans": [],
"ref_spans": [
{
"start": 669,
"end": 968,
"text": "0 0 2 0 0 0 0 0 0 1 en 0 99 0 1 0 0 0 0 0 0 0 gu 0 0 92 4 0 0 4 0 0 0 0 hi 0 0 0 99 0 0 1 0 0 0 0 kn 1 1 2 0 86 5 0 0 0 3 2 True Label ml 0 0 0 0 1 95 1 0 2 1 0 mr 0 0 1 0 0 0 99 0 0 0 0 or 20 0 13 18 2 3 16 0 0 4",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Learning with Weak Labels",
"sec_num": "5.2"
},
{
"text": "Our goal is rather to attract the research community's attention to our solution's effectiveness in this under-explored, specific domain of noisy social media texts generated in the Indian subcontinent. A fairer performance comparison between the two methods would require the methods to be trained on identical data sets and comparable computation budget. Due to these varying levels of resources, it is not possible to claim one method's superiority over the other. We are rather highlighting our method's (1) annotation efficiency and (2) ability to extend support for newer languages (e.g., at present GoogleLangID has limited support for Odia (or) and no support for Assamese as). Table 4 summarizes the performance comparison between our proposed classifier and the GoogleLangID baseline. Our results indicate that our method considerably outperforms the baseline which is a well-known commercial solution. A closer look at the individual performance of each language (confusion matrices presented in 2 and Table 3) reveals that across all languages, our method performs equal or better than GoogleLangID. Although the performance gap primarily stems from our method's overwhelmingly stronger performance in Odia (or), we perform considerably better than GoogleLangID even if we exclude Odia from the test data set.",
"cite_spans": [
{
"start": 648,
"end": 652,
"text": "(or)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 686,
"end": 693,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "6"
},
{
"text": "Our performance comparison highlights the following two points. First, we reiterate that our goal is not to claim that our method would perform better than GoogleLangID across the board, but rather, to demonstrate the value our method adds in processing noisy social media texts. It is possible that our method is more attuned to noisy short social media texts while GoogleLangID could be (possibly) trained on cleaner corpora which explains our method's stronger performance. This is further corroborated by the fact that even our mispredictions (barring one) remain confined to the regional languages while many of GoogleLangID's mispredictions are distributed across other languages (ol). Second, GoogleLangID's weak performance in detecting Odia highlights the gap in current solutions and shows how our method can effectively and efficiently address these issues 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "7"
},
{
"text": "Recall that D tatoeba contains a large set of languages (including the native script versions of the Indian languages considered in this paper). Test accuracy on a held-out set was 98.4%. Experiments reveal that introduction of the weakly labeled corpus does not impact performance on D tatoeba (identical test accuracy of 98.4%).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "7"
},
{
"text": "We constructed a new data set of comments on YouTube videos from an Assamese news channel (News18 Assam/Northeast) and used the same approach of usingL polyglot to obtain weak labels for Romanized Assamese. We do not show a direct comparison with GoogleLangID because GoogleLangID does not support Assamese (as). However, on an augmented test data set of 1,100 comments (100 Assamese comments with consensus labels from two annotators), we achieved a performance of 92% accuracy on identifying Assamese while retaining our previous performance on every other language. Assam has been a center for political debates and unrest in recent times (BBC). Our resource to detect Romanized Assamese can be a vital tool which to the best of our knowledge, does not exist. Details are presented in the Appendix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extensibility",
"sec_num": "7.1"
},
{
"text": "Our goal is to present an important Indian NLP resource that can perform well across multiple social media platforms. Hence, it is paramount that our system generalizes well both to in domain and out of domain instances. In the context of the task of language identification, domain adaption has received recent attention (Li et al., 2018) . In this section, we present an analysis on our system's out of domain performance. Data set of Hinglish tweets: We consider a data set of tweets introduced in Mathur et al. (2018) . The data set consists of 3,189 tweets written in English (en), Romanized Hindi (hi) and code-mixed English-Hindi (en-hi). We construct a randomly sampled data set of 100 tweets with equal proportion of Hindi and English tweets (consensus labels obtained from two annotators).",
"cite_spans": [
{
"start": 322,
"end": 339,
"text": "(Li et al., 2018)",
"ref_id": "BIBREF24"
},
{
"start": 501,
"end": 521,
"text": "Mathur et al. (2018)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain-robustness",
"sec_num": "7.2"
},
{
"text": "As shown in Table 5 , our system's out of domain performance was consistent with its in domain performance. We performed marginally better than GoogleLangID. We admit that a more robust test on multiple data sets comprising content from a larger set of Indian languages from other social media platforms would further validate our out of domain performance. However, our current experiment indicates that our system's success is not limited to YouTube comment texts, it can generalize to tweets as well.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Domain-robustness",
"sec_num": "7.2"
},
{
"text": "As demonstrated, our integrated setup covers the Romanized and native script variants of the most prevalent Indian languages. This enables us to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Usage Statistics",
"sec_num": "7.3"
},
{
"text": "Accuracy Language P R F1 F end-to-end 0.98 en 1.00 1.00 1.00 hi 0.96 0.96 0.96 GoogleLangID 0.95 en 1.00 1.00 1.00 hi 1.00 0.90 0.95 (Jaffrelot, 2000) are highlighted with blue.",
"cite_spans": [
{
"start": 133,
"end": 150,
"text": "(Jaffrelot, 2000)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": null
},
{
"text": "explore research questions on the usage patterns of these Indian languages. This part of our analysis is conducted on the entire election corpus containing comments written in all scripts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": null
},
{
"text": "Weak geo-labels: Recall that, all of the 24 regional news outlets we consider present news in the dominant language of their respective states. Hence, it is reasonable to assume that a considerable fraction of users consuming the regional news and participating in the comments section have some affiliation to the region (state). Thus, a comment posted in response to a regional news outlet's video can be assigned a weak/noisy geographic label -the state targeted by the news network. For instance, we assume that a comment posted in response to a Tamil news video clip, is likely to be authored by someone who either resides in or retains strong ties to Tamil Nadu. Combining these weak/noisy geographic labels with our language identification system, we can assess the geographic distribution of language use in India. Note that, these results are only approximate estimates -YouTube comments do not contain any geographic information. Further, it is also not possible to estimate a user's knowledge of other languages through our analysis. For example, if a user comments solely in Hindi, it is not possible to assess their fluency in English or Bengali.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": null
},
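The weak geo-labelling scheme above can be sketched in a few lines. This is an illustrative sketch, not the authors' code: the outlet-to-state mapping and the input format are assumptions.

```python
from collections import Counter, defaultdict

# Hypothetical outlet-to-state map; each of the paper's 24 regional
# outlets targets one state (the names here are illustrative).
OUTLET_STATE = {"TamilRegionalNews": "Tamil Nadu",
                "HindiRegionalNews": "Uttar Pradesh"}

def state_language_distribution(comments):
    """comments: iterable of (outlet, predicted_language) pairs.
    Each comment inherits a weak geographic label, the state the
    outlet targets, and counts toward that state's language mix."""
    per_state = defaultdict(Counter)
    for outlet, lang in comments:
        state = OUTLET_STATE.get(outlet)
        if state is not None:  # ignore outlets with no regional target
            per_state[state][lang] += 1
    # normalise raw counts into per-state fractions
    return {state: {lang: n / sum(counts.values())
                    for lang, n in counts.items()}
            for state, counts in per_state.items()}
```

Because the labels are noisy at the comment level, only the aggregated per-state fractions are meaningful, which is how the paper uses them.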
{
"text": "Geographic extent of Hindi usage: We label each comment with the language prediction from F end-to-end and a geographic label corresponding to the origin state of the news outlet. All comments posted in Romanized and Devanagari Hindi (denoted by hi and hi N , respectively) are retained and the resulting choropleth is visualized in Figure 1(b) . We also provide in Figure 1(a) , the region referred to as the Hindi belt (Jaffrelot, 2000) where Hindi is the first language of the bulk of the population. We observe a strong correlation of the estimated geographic extent of Hindi with the Hindi belt states.",
"cite_spans": [
{
"start": 421,
"end": 438,
"text": "(Jaffrelot, 2000)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 333,
"end": 344,
"text": "Figure 1(b)",
"ref_id": "FIGREF2"
},
{
"start": 366,
"end": 377,
"text": "Figure 1(a)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Hindi Web Usage",
"sec_num": "7.4"
},
{
"text": "In Table 6 , we list the state-wise estimates of Hindi usage in our corpus. Our findings are consistent with existing knowledge of Hindi's geographic spread. In the Hindi belt states, more than 80% of the comments were authored in Hindi. Consistent with census data (Census, 2011) and prior literature (Ramaswamy, 1997) , the fraction of Hindi comments discovered in the Tamil Nadu origin subset was minuscule. Romanized vs Devanagari: As shown in Table 6 , the ratio of hi and hi N usage reveals that a vast majority of internet users eschew the traditional Devanagari script and instead use Roman script. However, the ratio of Roman script to Devanagari script is substantially less lopsided in the Hindi belt states than in the other states. Our studies are consistent with . Estimating bilinguality: We conduct a userfocused study by computing language usage statistics on a per-user basis. We assume that a user is proficient in a language, L, if she posts two or more comments (in order to accommodate for some estimation error) in L. If a user is estimated to be proficient in two languages, then we label her as bilingual. Romanized and native script comments are both considered to be an equal demonstration of proficiency in a given language. We acknowledge that this is at best a noisy estimate.",
"cite_spans": [
{
"start": 302,
"end": 319,
"text": "(Ramaswamy, 1997)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 6",
"ref_id": "TABREF8"
},
{
"start": 448,
"end": 455,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Hindi Web Usage",
"sec_num": "7.4"
},
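The per-user bilinguality estimate described above reduces to simple counting. A minimal sketch, assuming an illustrative script-merge table and input format (the label names are ours, not the paper's):

```python
from collections import Counter, defaultdict

# Romanized and native-script comments count toward the same language,
# so native-script labels are merged into their Romanized counterparts
# (an assumed merge table; hi_N denotes Devanagari Hindi).
SCRIPT_MERGE = {"hi_N": "hi", "bn_N": "bn"}

def bilingual_users(comments):
    """comments: iterable of (user_id, language_label) pairs.
    A user is deemed proficient in a language after two or more
    comments in it; proficiency in two languages marks her bilingual."""
    per_user = defaultdict(Counter)
    for user, lang in comments:
        per_user[user][SCRIPT_MERGE.get(lang, lang)] += 1
    proficient = {user: {lang for lang, n in counts.items() if n >= 2}
                  for user, counts in per_user.items()}
    return {user for user, langs in proficient.items() if len(langs) >= 2}
```

The two-comment threshold is the paper's hedge against single misclassified comments inflating the bilingual count.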
{
"text": "Out of 159,993 total users, 41,776 users (26.1%) were marked as bilinguals using F end-to-end . According to the 2011 census (Census, 2011), 26% of the Indian population are bilinguals. Hence, surprisingly, our noisy estimate was reasonably close to the ground truth. We observe that over 70% of the discovered bilinguals in our corpus used Hindi-English. A detailed plot is presented in the Appendix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hindi Web Usage",
"sec_num": "7.4"
},
{
"text": "We conclude this section with an analysis on (1) to what extent we addressL polyglot 's inability to detect trace languages, and (2) why it could be worth addressing. Our definition of trace language is corpus-specific. We consider a language L to be a trace language in a corpus D if fewer than 1% shows the patterns of Hindi usage. We extend the results for Andhra Pradesh to Telangana, and Bihar to Jharkhand because the same news networks cater to both states. The base maps used for this plot are sourced from the Government of India. The authors are aware that these maps include disputed territories. These maps do not constitute judgments on existing disputes. of the documents in D are authored in L. In this section, we focus on the following three corpora one of which (D COVID ) we introduce here: 1. D hope : 2.04 million YouTube comments relevant to the 2019 India-Pakistan conflict (Palakodety et al., 2020a) . 2. D help : 263k YouTube comments relevant to the Rohingya refugee crisis (Palakodety et al., 2020c) . 3. D COVID : 777,748 comments from 5,301 videos from two highly popular Indian news channels (NDTV and Zee News) posted between 30 th January, 2020 3 and 10 th April, 2020.",
"cite_spans": [
{
"start": 897,
"end": 923,
"text": "(Palakodety et al., 2020a)",
"ref_id": "BIBREF30"
},
{
"start": 1000,
"end": 1026,
"text": "(Palakodety et al., 2020c)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Trace Language Detection",
"sec_num": "7.5"
},
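The corpus-specific trace-language definition above is a one-line filter over predicted labels. A minimal sketch, assuming per-document language predictions are already available:

```python
from collections import Counter

def trace_languages(predicted_labels, threshold=0.01):
    """A language is a trace language in a corpus if fewer than 1%
    (the paper's threshold) of the corpus's documents are authored
    in it. predicted_labels: one language label per document."""
    counts = Counter(predicted_labels)
    total = sum(counts.values())
    return {lang for lang, n in counts.items() if n / total < threshold}
```

Languages this filter surfaces are exactly the ones a corpus-level polyglot embedding misses, which motivates the supervised classifier.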
{
"text": "As reported in Palakodety et al. (2020a) ,L polyglot discovered three clusters in D hope : (1) en, (2) hi and (3) hi N ; no other languages were detected. However, in our experiments, we found presence of multiple trace languages. For instance, overall, our method F end-to-end found 3,373 Telugu (te) and 205 Bengali (bn) comments in D hope . Human annotation of randomly sampled 100 comments in each of the two languages revealed a precision of 100% and 97% for te and bn, respectively. Similarly, we conducted a search for bn and te comments in D help , and found 1,251 and 146 comments, respectively. Human annotation on randomly sampled 100 comments from each of the two languages yielded precision of 99% for both bn and te. In Table 7 , we list example peace-seeking, hostility-diffusing comments (hope speech) and comments indicating support for the disenfranchised Rohingyas (help speech).",
"cite_spans": [
{
"start": 15,
"end": 40,
"text": "Palakodety et al. (2020a)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 734,
"end": 741,
"text": "Table 7",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Trace Language Detection",
"sec_num": "7.5"
},
{
"text": "Finally, when F end-to-end is run on D COVID , we discover comments in several languages requesting assistance during the nationwide lockdown (BBC, 2020). Our method reveals the presence of vulnerable individuals who express themselves in lowresource languages. We hope our tool can open the gates for research in this humanitarian domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trace Language Detection",
"sec_num": "7.5"
},
{
"text": "In this paper, we present a language identification tool with a focus on nine major Romanized Indian languages. Despite the widespread use of Romanization on social media, NLP resources and tools often focus more on the native scripts. Our tool integrates with an existing large-scale corpus and holds promise in being a valuable resource for Indian social media analysis. Our pipeline leverages a recent NLP algorithm and obtains weak labels for a large number of samples substantially reducing the annotation cost. Finally, we conduct studies on the geographic extent, bilinguality, and Romanization of Hindi and observe that these align with existing studies and surveys.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "All annotations are performed by two native speakers of each of the languages we considered. All labels are consensus labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "9.1"
},
{
"text": "Our data set was crawled using publicly available YouTube API on the YouTube channel of CNN News18 Assam/Northeast. Overall, we obtained 66,923 comments from 7,170 videos of which weak labels (4,337 English and 21,411 Assamese) were obtained usingL polyglot . We augmented our previous training set with these obtained (weak labels) comments. The detailed performance is presented in Table 9 . The classification framework we use , contains a variety of optimizations focused on text classification -an architecture that enables parameter sharing, and efficient techniques to include token n-grams. The inference phase is able to process and label over 10 million documents in under five minutes (wall clock time).",
"cite_spans": [],
"ref_spans": [
{
"start": 384,
"end": 391,
"text": "Table 9",
"ref_id": "TABREF13"
}
],
"eq_spans": [],
"section": "Detailed Performance with Assamese",
"sec_num": "9.2"
},
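The parameter sharing and token n-gram handling mentioned above follow the fastText recipe of hashing n-grams into a fixed number of embedding buckets. An illustrative sketch, not the authors' implementation; the bucket count and function names are assumptions:

```python
import zlib

# Every token and token bigram is hashed into a fixed number of
# buckets, so the embedding table stays bounded and all n-grams share
# it regardless of vocabulary size (an assumed bucket count).
HASH_BUCKETS = 2_000_000

def ngram_bucket_ids(tokens, n=2, buckets=HASH_BUCKETS):
    """Return shared-embedding bucket ids for tokens and token n-grams.
    crc32 is used here for determinism; fastText uses its own hash."""
    ids = [zlib.crc32(tok.encode("utf-8")) % buckets for tok in tokens]
    for i in range(len(tokens) - n + 1):
        gram = " ".join(tokens[i:i + n])
        ids.append(zlib.crc32(gram.encode("utf-8")) % buckets)
    return ids
```

At inference time the bucket embeddings are averaged and passed through a single linear layer, which is why the framework can label millions of documents in minutes.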
{
"text": "100 comments are randomly sampled for each of the 10 languages (bn, en, gu, hi, kn, ml, mr, or, ta, te) . The average number of tokens in the comments is 22.6 \u00b1 18.7. A language-wise breakdown is presented in Table 10 . IndiaTV, NDTV India, Republic World, The Times of India, Zee News, Aaj Tak, ABP NEWS, CNN-News18, News18 India, NDTV, TIMES NOW, India Today, The Economic Times, Hindustan Times ",
"cite_spans": [
{
"start": 63,
"end": 103,
"text": "(bn, en, gu, hi, kn, ml, mr, or, ta, te)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 209,
"end": 217,
"text": "Table 10",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Test data set details",
"sec_num": "9.4"
},
{
"text": "Resources are available at: https://www.cs.cmu. edu/\u02dcakhudabu/IndicLanguage.html.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Odia is listed as one of the supported languages by GoogleLangID, it is unclear if this tool supports Romanized Odia. Assamese is not supported by GoogleLangID.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "First COVID-19 positive case was reported in India on this day.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The YouTube channels considered in Palakodety et al. (2020b) are listed in Table 8 and 11. ",
"cite_spans": [
{
"start": 35,
"end": 60,
"text": "Palakodety et al. (2020b)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [
{
"start": 75,
"end": 90,
"text": "Table 8 and 11.",
"ref_id": null
}
],
"eq_spans": [],
"section": "List of YouTube channels",
"sec_num": "9.6"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Code mixing: A challenge for language identification in the language of social media",
"authors": [
{
"first": "Utsab",
"middle": [],
"last": "Barman",
"suffix": ""
},
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Joachim",
"middle": [],
"last": "Wagner",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Foster",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the first workshop on computational approaches to code switching",
"volume": "",
"issue": "",
"pages": "13--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Utsab Barman, Amitava Das, Joachim Wagner, and Jen- nifer Foster. 2014. Code mixing: A challenge for language identification in the language of social me- dia. In Proceedings of the first workshop on compu- tational approaches to code switching, pages 13-23.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Why has india's assam erupted over an 'antimuslim' law? Online; accessed 12",
"authors": [],
"year": 2020,
"venue": "BBC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "BBC. Why has india's assam erupted over an 'anti- muslim' law? Online; accessed 12-May-2020.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Coronavirus: India's pandemic lockdown turns into a human tragedy. Online",
"authors": [],
"year": 2020,
"venue": "",
"volume": "3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "BBC. 2020. Coronavirus: India's pandemic lockdown turns into a human tragedy. Online; accessed 3- June-2020.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Language identification for creating language-specific twitter collections",
"authors": [
{
"first": "Shane",
"middle": [],
"last": "Bergsma",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Mcnamee",
"suffix": ""
},
{
"first": "Mossaab",
"middle": [],
"last": "Bagdouri",
"suffix": ""
},
{
"first": "Clayton",
"middle": [],
"last": "Fink",
"suffix": ""
},
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the second workshop on language in social media",
"volume": "",
"issue": "",
"pages": "65--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shane Bergsma, Paul McNamee, Mossaab Bagdouri, Clayton Fink, and Theresa Wilson. 2012. Language identification for creating language-specific twitter collections. In Proceedings of the second workshop on language in social media, pages 65-74. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Census. 2011. 2011 census data. Online; accessed 3",
"authors": [],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Census. 2011. 2011 census data. Online; accessed 3- June-2020.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Unraveling the English-Bengali code-mixing phenomenon",
"authors": [
{
"first": "Arunavha",
"middle": [],
"last": "Chanda",
"suffix": ""
},
{
"first": "Dipankar",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Chandan",
"middle": [],
"last": "Mazumdar",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Second Workshop on Computational Approaches to Code Switching",
"volume": "",
"issue": "",
"pages": "80--89",
"other_ids": {
"DOI": [
"10.18653/v1/W16-5810"
]
},
"num": null,
"urls": [],
"raw_text": "Arunavha Chanda, Dipankar Das, and Chandan Mazumdar. 2016. Unraveling the English-Bengali code-mixing phenomenon. In Proceedings of the Second Workshop on Computational Approaches to Code Switching, pages 80-89, Austin, Texas. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Resource creation for training and testing of transliteration systems for indian languages",
"authors": [
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
},
{
"first": "Kalika",
"middle": [],
"last": "Bali",
"suffix": ""
},
{
"first": "Tirthankar",
"middle": [],
"last": "Dasgupta",
"suffix": ""
},
{
"first": "Anupam",
"middle": [],
"last": "Basu",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Monojit Choudhury, Kalika Bali, Tirthankar Dasgupta, and Anupam Basu. 2010. Resource creation for training and testing of transliteration systems for in- dian languages. LREC.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The covid-19 social media infodemic",
"authors": [
{
"first": "Matteo",
"middle": [],
"last": "Cinelli",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Quattrociocchi",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Galeazzi",
"suffix": ""
},
{
"first": "Carlo",
"middle": [
"Michele"
],
"last": "Valensise",
"suffix": ""
},
{
"first": "Emanuele",
"middle": [],
"last": "Brugnoli",
"suffix": ""
},
{
"first": "Ana",
"middle": [
"Lucia"
],
"last": "Schmidt",
"suffix": ""
},
{
"first": "Paola",
"middle": [],
"last": "Zola",
"suffix": ""
},
{
"first": "Fabiana",
"middle": [],
"last": "Zollo",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Scala",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matteo Cinelli, Walter Quattrociocchi, Alessandro Galeazzi, Carlo Michele Valensise, Emanuele Brug- noli, Ana Lucia Schmidt, Paola Zola, Fabiana Zollo, and Antonio Scala. 2020. The covid-19 social media infodemic.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A study on the two waves of statesreorganization in india",
"authors": [
{
"first": "Ma",
"middle": [],
"last": "Dewen",
"suffix": ""
}
],
"year": 2010,
"venue": "South Asian Studies Quarterly",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ma Dewen. 2010. A study on the two waves of states- reorganization in india [j]. South Asian Studies Quarterly, 1.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A probabilistic framework to learn from multiple annotators with time-varying accuracy",
"authors": [
{
"first": "Pinar",
"middle": [],
"last": "Donmez",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Schneider",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 SIAM international conference on data mining",
"volume": "",
"issue": "",
"pages": "826--837",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pinar Donmez, Jaime Carbonell, and Jeff Schneider. 2010. A probabilistic framework to learn from mul- tiple annotators with time-varying accuracy. In Pro- ceedings of the 2010 SIAM international conference on data mining, pages 826-837. SIAM.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Online; accessed 3",
"authors": [
{
"first": "",
"middle": [],
"last": "Fasttext",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fasttextlangid",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "FastText. FastTextLangID. [Online; accessed 3-June- 2020].",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "ye word kis lang ka hai bhai?\" testing the limits of word level language identification",
"authors": [
{
"first": "Spandana",
"middle": [],
"last": "Gella",
"suffix": ""
},
{
"first": "Kalika",
"middle": [],
"last": "Bali",
"suffix": ""
},
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 11th International Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "368--377",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Spandana Gella, Kalika Bali, and Monojit Choudhury. 2014. \"ye word kis lang ka hai bhai?\" testing the limits of word level language identification. In Pro- ceedings of the 11th International Conference on Natural Language Processing, pages 368-377.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Online; accessed 3",
"authors": [
{
"first": "",
"middle": [],
"last": "Google",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Googlelangid",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Google. GoogleLangID. [Online; accessed 3-June- 2020].",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Youtube now has 265 million users in india. Online",
"authors": [
{
"first": "",
"middle": [],
"last": "Hindustantimes",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "HindustanTimes. 2019. Youtube now has 265 million users in india. Online; accessed 3-June-2020.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Hierarchical character-word models for language identification",
"authors": [
{
"first": "Aaron",
"middle": [],
"last": "Jaech",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Mulcaire",
"suffix": ""
},
{
"first": "Shobhit",
"middle": [],
"last": "Hathi",
"suffix": ""
},
{
"first": "Mari",
"middle": [],
"last": "Ostendorf",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of The Fourth International Workshop on Natural Language Processing for Social Media",
"volume": "",
"issue": "",
"pages": "84--93",
"other_ids": {
"DOI": [
"10.18653/v1/W16-6212"
]
},
"num": null,
"urls": [],
"raw_text": "Aaron Jaech, George Mulcaire, Shobhit Hathi, Mari Ostendorf, and Noah A. Smith. 2016. Hierarchi- cal character-word models for language identifica- tion. In Proceedings of The Fourth International Workshop on Natural Language Processing for So- cial Media, pages 84-93, Austin, TX, USA. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The rise of the other backward classes in the hindi belt",
"authors": [
{
"first": "Christophe",
"middle": [],
"last": "Jaffrelot",
"suffix": ""
}
],
"year": 2000,
"venue": "The Journal of Asian Studies",
"volume": "59",
"issue": "1",
"pages": "86--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christophe Jaffrelot. 2000. The rise of the other back- ward classes in the hindi belt. The Journal of Asian Studies, 59(1):86-108.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Automatic language identification in texts: A survey",
"authors": [
{
"first": "Tommi",
"middle": [],
"last": "Sakari Jauhiainen",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Lui",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Krister",
"middle": [],
"last": "Lind\u00e9n",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of Artificial Intelligence Research",
"volume": "65",
"issue": "",
"pages": "675--782",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tommi Sakari Jauhiainen, Marco Lui, Marcos Zampieri, Timothy Baldwin, and Krister Lind\u00e9n. 2019. Automatic language identification in texts: A survey. Journal of Artificial Intelligence Research, 65:675-782.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Monojit Choudhury, and Kalika Bali",
"authors": [
{
"first": "Pratik",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Christain",
"middle": [],
"last": "Barnes",
"suffix": ""
},
{
"first": "Sebastin",
"middle": [],
"last": "Santy",
"suffix": ""
},
{
"first": "Simran",
"middle": [],
"last": "Khanuja",
"suffix": ""
},
{
"first": "Sanket",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Anirudh",
"middle": [],
"last": "Srinivasan",
"suffix": ""
},
{
"first": "Satwik",
"middle": [],
"last": "Bhattamishra",
"suffix": ""
},
{
"first": "Sunayana",
"middle": [],
"last": "Sitaram",
"suffix": ""
}
],
"year": 2019,
"venue": "Unsung challenges of building and deploying language technologies for low resource language communities",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1912.03457"
]
},
"num": null,
"urls": [],
"raw_text": "Pratik Joshi, Christain Barnes, Sebastin Santy, Simran Khanuja, Sanket Shah, Anirudh Srinivasan, Satwik Bhattamishra, Sunayana Sitaram, Monojit Choud- hury, and Kalika Bali. 2019. Unsung challenges of building and deploying language technologies for low resource language communities. arXiv preprint arXiv:1912.03457.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The state and fate of linguistic diversity and inclusion in the nlp world",
"authors": [
{
"first": "Pratik",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Sebastin",
"middle": [],
"last": "Santy",
"suffix": ""
},
{
"first": "Amar",
"middle": [],
"last": "Budhiraja",
"suffix": ""
},
{
"first": "Kalika",
"middle": [],
"last": "Bali",
"suffix": ""
},
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.09095"
]
},
"num": null,
"urls": [],
"raw_text": "Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the nlp world. arXiv preprint arXiv:2004.09095.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Bag of tricks for efficient text classification",
"authors": [
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th EACL",
"volume": "2",
"issue": "",
"pages": "427--431",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th EACL: Vol- ume 2, Short Papers, pages 427-431.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Harnessing code switching to transcend the linguistic barrier",
"authors": [
{
"first": "R",
"middle": [],
"last": "Ashiqur",
"suffix": ""
},
{
"first": "Shriphani",
"middle": [],
"last": "Khudabukhsh",
"suffix": ""
},
{
"first": "Jaime",
"middle": [
"G"
],
"last": "Palakodety",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Carbonell",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence",
"volume": "2020",
"issue": "",
"pages": "4366--4374",
"other_ids": {
"DOI": [
"10.24963/ijcai.2020/602"
]
},
"num": null,
"urls": [],
"raw_text": "Ashiqur R. KhudaBukhsh, Shriphani Palakodety, and Jaime G. Carbonell. 2020. Harnessing code switch- ing to transcend the linguistic barrier. In Proceed- ings of the Twenty-Ninth International Joint Confer- ence on Artificial Intelligence, IJCAI 2020, pages 4366-4374. ijcai.org.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Consonantvowel sequences as subword units for code-mixed languages",
"authors": [
{
"first": "Upendra",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Vishal",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Santhoshini",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2018,
"venue": "Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Upendra Kumar, Vishal Singh, Chris Andrew, San- thoshini Reddy, and Amitava Das. 2018. Consonant- vowel sequences as subword units for code-mixed languages. In Thirty-Second AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Sata-anuvadak: Tackling multiway translation of indian languages",
"authors": [
{
"first": "Anoop",
"middle": [],
"last": "Kunchukuttan",
"suffix": ""
},
{
"first": "Abhijit",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "Rajen",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2014,
"venue": "pan",
"volume": "54",
"issue": "",
"pages": "4--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anoop Kunchukuttan, Abhijit Mishra, Rajen Chatter- jee, Ritesh Shah, and Pushpak Bhattacharyya. 2014. Sata-anuvadak: Tackling multiway translation of in- dian languages. pan, 841(54,570):4-135.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "What's in a domain? learning domain-robust text representations using adversarial training",
"authors": [
{
"first": "Yitong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "474--479",
"other_ids": {
"DOI": [
"10.18653/v1/N18-2076"
]
},
"num": null,
"urls": [],
"raw_text": "Yitong Li, Timothy Baldwin, and Trevor Cohn. 2018. What's in a domain? learning domain-robust text representations using adversarial training. In Pro- ceedings of the 2018 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 2 (Short Papers), pages 474-479, New Orleans, Louisiana. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Accurate language identification of twitter messages",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Lui",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 5th workshop on language analysis for social media (LASM)",
"volume": "",
"issue": "",
"pages": "17--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Lui and Timothy Baldwin. 2014. Accurate lan- guage identification of twitter messages. In Proceed- ings of the 5th workshop on language analysis for social media (LASM), pages 17-25.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Did you offend me? classification of offensive tweets in Hinglish language",
"authors": [
{
"first": "Puneet",
"middle": [],
"last": "Mathur",
"suffix": ""
},
{
"first": "Ramit",
"middle": [],
"last": "Sawhney",
"suffix": ""
},
{
"first": "Meghna",
"middle": [],
"last": "Ayyar",
"suffix": ""
},
{
"first": "Rajiv",
"middle": [],
"last": "Shah",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Abusive Language Online (ALW2)",
"volume": "",
"issue": "",
"pages": "138--148",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5118"
]
},
"num": null,
"urls": [],
"raw_text": "Puneet Mathur, Ramit Sawhney, Meghna Ayyar, and Rajiv Shah. 2018. Did you offend me? classification of offensive tweets in Hinglish language. In Pro- ceedings of the 2nd Workshop on Abusive Language Online (ALW2), pages 138-148, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "1st International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In 1st International Conference on Learning Representations.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "The role of unlabeled data in supervised learning",
"authors": [
{
"first": "Tom",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2004,
"venue": "Language, Knowledge, and Representation",
"volume": "",
"issue": "",
"pages": "103--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom M Mitchell. 2004. The role of unlabeled data in supervised learning. In Language, Knowledge, and Representation, pages 103-111. Springer.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Building a sentiwordnet for odia",
"authors": [
{
"first": "Gaurav",
"middle": [],
"last": "Mohanty",
"suffix": ""
},
{
"first": "Abishek",
"middle": [],
"last": "Kannan",
"suffix": ""
},
{
"first": "Radhika",
"middle": [],
"last": "Mamidi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "143--148",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gaurav Mohanty, Abishek Kannan, and Radhika Mamidi. 2017. Building a sentiwordnet for odia. In Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 143-148.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Hope speech detection: A computational analysis of the voice of peace",
"authors": [
{
"first": "Shriphani",
"middle": [],
"last": "Palakodety",
"suffix": ""
},
{
"first": "Ashiqur",
"middle": [
"R"
],
"last": "Khudabukhsh",
"suffix": ""
},
{
"first": "Jaime",
"middle": [
"G"
],
"last": "Carbonell",
"suffix": ""
}
],
"year": 2020,
"venue": "ECAI 2020 -24th European Conference on Artificial Intelligence",
"volume": "325",
"issue": "",
"pages": "1881--1889",
"other_ids": {
"DOI": [
"10.3233/FAIA200305"
]
},
"num": null,
"urls": [],
"raw_text": "Shriphani Palakodety, Ashiqur R. KhudaBukhsh, and Jaime G. Carbonell. 2020a. Hope speech detection: A computational analysis of the voice of peace. In ECAI 2020 - 24th European Conference on Artificial Intelligence, volume 325 of Frontiers in Artificial Intelligence and Applications, pages 1881-1889. IOS Press.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Mining insights from large-scale corpora using fine-tuned language models",
"authors": [
{
"first": "Shriphani",
"middle": [],
"last": "Palakodety",
"suffix": ""
},
{
"first": "Ashiqur",
"middle": [
"R"
],
"last": "Khudabukhsh",
"suffix": ""
},
{
"first": "Jaime",
"middle": [
"G"
],
"last": "Carbonell",
"suffix": ""
}
],
"year": 2020,
"venue": "ECAI 2020 -24th European Conference on Artificial Intelligence",
"volume": "325",
"issue": "",
"pages": "1890--1897",
"other_ids": {
"DOI": [
"10.3233/FAIA200306"
]
},
"num": null,
"urls": [],
"raw_text": "Shriphani Palakodety, Ashiqur R. KhudaBukhsh, and Jaime G. Carbonell. 2020b. Mining insights from large-scale corpora using fine-tuned language models. In ECAI 2020 - 24th European Conference on Artificial Intelligence, volume 325 of Frontiers in Artificial Intelligence and Applications, pages 1890-1897. IOS Press.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Voice for the voiceless: Active sampling to detect comments supporting the rohingyas",
"authors": [
{
"first": "Shriphani",
"middle": [],
"last": "Palakodety",
"suffix": ""
},
{
"first": "Ashiqur",
"middle": [
"R"
],
"last": "Khudabukhsh",
"suffix": ""
},
{
"first": "Jaime",
"middle": [
"G"
],
"last": "Carbonell",
"suffix": ""
}
],
"year": 2020,
"venue": "The Thirty-Fourth AAAI Conference on Artificial Intelligence",
"volume": "2020",
"issue": "",
"pages": "454--462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shriphani Palakodety, Ashiqur R. KhudaBukhsh, and Jaime G. Carbonell. 2020c. Voice for the voiceless: Active sampling to detect comments supporting the rohingyas. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, pages 454-462.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Passions of the tongue: Language devotion in Tamil India, 1891-1970",
"authors": [
{
"first": "Sumathi",
"middle": [],
"last": "Ramaswamy",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "29",
"issue": "",
"pages": "1891--1970",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sumathi Ramaswamy. 1997. Passions of the tongue: Language devotion in Tamil India, 1891-1970, volume 29. Univ of California Press.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Social media attributions in the context of water crisis",
"authors": [
{
"first": "Rupak",
"middle": [],
"last": "Sarkar",
"suffix": ""
},
{
"first": "Sayantan",
"middle": [],
"last": "Mahinder",
"suffix": ""
},
{
"first": "Hirak",
"middle": [],
"last": "Sarkar",
"suffix": ""
},
{
"first": "Ashiqur",
"middle": [
"R"
],
"last": "Khudabukhsh",
"suffix": ""
}
],
"year": 2020,
"venue": "Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rupak Sarkar, Sayantan Mahinder, Hirak Sarkar, and Ashiqur R. KhudaBukhsh. 2020. Social media attributions in the context of water crisis. In Empirical Methods in Natural Language Processing (EMNLP), 2020, page to appear.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Most popular social networks worldwide as of January 2020, ranked by number of active users",
"authors": [
{
"first": "",
"middle": [],
"last": "Statista",
"suffix": ""
}
],
"year": 2020,
"venue": "Online; accessed 3-June-2020",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Statista. 2020. Most popular social networks worldwide as of January 2020, ranked by number of active users. Online; accessed 3-June-2020.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Domain focused named entity recognizer for Tamil using conditional random fields",
"authors": [
{
"first": "R",
"middle": [],
"last": "Vijayakrishna",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Sobha",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the IJCNLP-08 Workshop on Named Entity Recognition for South and South East Asian Languages",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R Vijayakrishna and L Sobha. 2008. Domain focused named entity recognizer for Tamil using conditional random fields. In Proceedings of the IJCNLP-08 Workshop on Named Entity Recognition for South and South East Asian Languages.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Transliteration of proper names in cross-lingual information retrieval",
"authors": [
{
"first": "Paola",
"middle": [],
"last": "Virga",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the ACL 2003 workshop on Multilingual and mixed-language named entity recognition",
"volume": "15",
"issue": "",
"pages": "57--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paola Virga and Sanjeev Khudanpur. 2003. Transliteration of proper names in cross-lingual information retrieval. In Proceedings of the ACL 2003 workshop on Multilingual and mixed-language named entity recognition - Volume 15, pages 57-64. Association for Computational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "POS tagging of English-Hindi code-mixed social media content",
"authors": [
{
"first": "Yogarshi",
"middle": [],
"last": "Vyas",
"suffix": ""
},
{
"first": "Spandana",
"middle": [],
"last": "Gella",
"suffix": ""
},
{
"first": "Jatin",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Kalika",
"middle": [],
"last": "Bali",
"suffix": ""
},
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "974--979",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1105"
]
},
"num": null,
"urls": [],
"raw_text": "Yogarshi Vyas, Spandana Gella, Jatin Sharma, Kalika Bali, and Monojit Choudhury. 2014. POS tagging of English-Hindi code-mixed social media content. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 974-979, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "YouTube monthly user base touches 265 million in India, reaches 80 pc of internet population",
"authors": [
{
"first": "Yourstory",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2018,
"venue": "Online; accessed 3-June-2020",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "YourStory. 2018. YouTube monthly user base touches 265 million in India, reaches 80 pc of internet population. Online; accessed 3-June-2020.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Confusion matrix header: predicted labels bn, en, gu, hi, kn, ml, mr, or, ta, te, ol (first row: bn 100).",
"uris": null,
"num": null
},
"FIGREF2": {
"type_str": "figure",
"text": "Choropleths of Hindi usage patterns in India. (a) shows the geographic region identified as the Hindi belt. (b)",
"uris": null,
"num": null
},
"FIGREF4": {
"type_str": "figure",
"text": "The top language pairs used by bilinguals in our corpus. Hindi and English feature prominently in all the language pairs.",
"uris": null,
"num": null
},
"TABREF1": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": ""
},
"TABREF2": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Confusion matrix of performance evaluation of F end-to-end on 1000 annotated comments. For a given language, better or equal performance than the baseline is highlighted with blue; ol denotes other languages."
},
"TABREF3": {
"content": "<table><tr><td>Predicted</td></tr></table>",
"html": null,
"type_str": "table",
"num": null,
"text": "These documents are randomly sampled from the output of F weakLabel and are never seen during our supervised training phase with weak labels. The average number of tokens in the comments is 22.6 \u00b1 18.7. Please see Appendix for detailed statistics of our test data set.",
},
"TABREF4": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Confusion matrix of performance evaluation of GoogleLandID on 1000 annotated comments. For a given language, better or equal performance than F end-to-end is highlighted with blue; ol denotes other languages."
},
"TABREF6": {
"content": "<table><tr><td>: Performance comparison. Since it is unclear</td></tr><tr><td>if GoogleLangID supports Odia (or), the left-most</td></tr><tr><td>column presents performance excluding Odia from the</td></tr><tr><td>test set.</td></tr></table>",
"html": null,
"type_str": "table",
"num": null,
"text": ""
},
"TABREF7": {
"content": "<table><tr><td>State</td><td>hi</td><td>hiN</td><td>hi \u2229 hiN</td></tr><tr><td colspan=\"4\">Andhra Pradesh 1.66% 0.05% 1.71%</td></tr><tr><td>Bihar</td><td colspan=\"3\">67.97% 14.29% 82.26%</td></tr><tr><td>Gujarat</td><td colspan=\"3\">24.23% 3.41% 27.64%</td></tr><tr><td>Karnataka</td><td colspan=\"3\">1.85% 0.02% 1.87%</td></tr><tr><td>Kerala</td><td colspan=\"2\">0.48% 0.02%</td><td>0.5%</td></tr><tr><td colspan=\"4\">Madhya Pradesh 76.21% 10.39% 86.60%</td></tr><tr><td>Maharashtra</td><td colspan=\"3\">7.46% 4.18% 11.64%</td></tr><tr><td>Odisha</td><td colspan=\"3\">9.20% 0.02% 9.22%</td></tr><tr><td>Rajasthan</td><td colspan=\"3\">58.48% 29.90% 88.38%</td></tr><tr><td>Tamil Nadu</td><td colspan=\"3\">0.22% 0.01% 0.23%</td></tr><tr><td>Uttar Pradesh</td><td colspan=\"3\">63.56% 22.23% 85.79%</td></tr><tr><td>West Bengal</td><td colspan=\"3\">4.72% 0.18% 4.90%</td></tr></table>",
"html": null,
"type_str": "table",
"num": null,
"text": "Performance comparison on the tweet data set. Best metric is highlighted in bold for each language. P: precision, R: recall."
},
"TABREF8": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Presence of Hindi. hi is Romanized Hindi. hi N is Devanagari Hindi. Hindi belt states"
},
"TABREF10": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Random sample of comments in trace language detected by our system."
},
"TABREF12": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Regional channels."
},
"TABREF13": {
"content": "<table><tr><td>State</td></tr></table>",
"html": null,
"type_str": "table",
"num": null,
"text": "Confusion matrix of performance evaluation of F end-to-end on 1,100 annotated comments; ol denotes other languages. 9.5 Language pairs used by bilinguals: Figure 2 summarizes the relative distribution of language pairs in our bilingualism estimation experiment. Results show that Hindi-English bilingualism is the most dominant one. Comment lengths: as 17.03 \u00b1 19.95, bn 25.19 \u00b1 18.33, en 31.52 \u00b1 27.04, gu 23.82 \u00b1 20.91, hi 24.72 \u00b1 15.33, kn 17.85 \u00b1 10.57, ml 19.91 \u00b1 11.92, mr 25.73 \u00b1 21.88, or 13.34 \u00b1 11.72, ta 21.98 \u00b1 14.59, te 22.05 \u00b1 20.95"
},
"TABREF14": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Statistics of test data set."
},
"TABREF15": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": ""
}
}
}
}