diff --git "a/citations.jsonl" "b/citations.jsonl"
deleted file mode 100644
--- "a/citations.jsonl"
+++ /dev/null
@@ -1,7877 +0,0 @@
-{"year":"2016","title":"A Case Study of Complex Graph Analysis in Distributed Memory: Implementation and Optimization","authors":["GM Slota, S Rajamanickam, K Madduri"],"snippet":"... Focusing on one of the largest publicly-available hyperlink graphs (the 2012 Web Data Commons graph1, which was in- turn extracted from the open Common Crawl web corpus2), we develop parallel ... 1http://webdatacommons.org/hyperlinkgraph/ 2http://commoncrawl.org ...","url":["http://www.personal.psu.edu/users/g/m/gms5016/pub/Dist-IPDPS16.pdf"]}
-{"year":"2016","title":"A Convolutional Encoder Model for Neural Machine Translation","authors":["J Gehring, M Auli, D Grangier, YN Dauphin - arXiv preprint arXiv:1611.02344, 2016"],"snippet":"... WMT'15 English-German. We use all available parallel training data, namely Europarl v7, Common Crawl and News Commentary v10 and apply the standard Moses tokenization to obtain 3.9M sentence pairs (Koehn et al., 2007). We report results on newstest2015. ...","url":["https://arxiv.org/pdf/1611.02344"]}
-{"year":"2016","title":"A Deep Fusion Model for Domain Adaptation in Phrase-based MT","authors":["N Durrani, S Joty, A Abdelali, H Sajjad"],"snippet":"... test-13 993 18K 17K test-13 1169 26K 28K Table 1: Statistics of the English-German and Arabic-English training corpora in terms of Sentences and Tokens (represented in millions). ep = Europarl, cc = Common Crawl, un = United Nations ...","url":["https://www.aclweb.org/anthology/C/C16/C16-1299.pdf"]}
-{"year":"2016","title":"A Large DataBase of Hypernymy Relations Extracted from the Web","authors":["J Seitner, C Bizer, K Eckert, S Faralli, R Meusel… - … of the 10th edition of the …, 2016"],"snippet":"... 3http://webdatacommons.org/framework/ 4http://commoncrawl.org ... The corpus is provided by the Common Crawl Foundation on AWS S3 as free download.6 The extraction of 
isadb/) and can be used to repeat the tuple extraction for different or newer Common Crawl releases. ...","url":["http://webdatacommons.org/isadb/lrec2016.pdf"]} -{"year":"2016","title":"A Maturity Model for Public Administration as Open Translation Data Providers","authors":["N Bel, ML Forcada, A Gómez-Pérez - arXiv preprint arXiv:1607.01990, 2016"],"snippet":"... There are techniques to mitigate the need of large quantities of parallel text, but most often at the expense of resulting translation quality. As a reference of the magnitude we can take as a standard corpus the Common Crawl corpus (Smith et al. ...","url":["http://arxiv.org/pdf/1607.01990"]} -{"year":"2016","title":"A Neural Architecture Mimicking Humans End-to-End for Natural Language Inference","authors":["B Paria, KM Annervaz, A Dukkipati, A Chatterjee… - arXiv preprint arXiv: …, 2016"],"snippet":"... We used batch normalization [Ioffe and Szegedy, 2015] while training. The various model parameters used are mentioned in Table I. We experimented with both GloVe vectors trained1 on Common Crawl dataset as well as Word2Vec vector trained2 on Google news dataset. ...","url":["https://arxiv.org/pdf/1611.04741"]} -{"year":"2016","title":"A practical guide to big data research in psychology.","authors":["EE Chen, SP Wojcik - Psychological Methods, 2016"],"snippet":"... as well as general collections, such as Amazon Web Services' Public Data Sets repository (AWS, nd, http://aws.amazon.com/public-data-sets/) which includes the 1000 Genomes Project, with full genomic sequences for 1,700 individuals, and the Common Crawl Corpus, with ...","url":["http://psycnet.apa.org/journals/met/21/4/458/"]} -{"year":"2016","title":"A semantic based Web page classification strategy using multi-layered domain ontology","authors":["AI Saleh, MF Al Rahmawy, AE Abulwafa - World Wide Web, 2016"],"snippet":"Page 1. A semantic based Web page classification strategy using multi-layered domain ontology Ahmed I. Saleh1 & Mohammed F. 
Al Rahmawy2 & Arwa E. Abulwafa1 Received: 3 February 2016 /Revised: 13 August 2016 /Accepted ...","url":["http://link.springer.com/article/10.1007/s11280-016-0415-z"]} -{"year":"2016","title":"A Story of Discrimination and Unfairness","authors":["A Caliskan-Islam, J Bryson, A Narayanan"],"snippet":"... power has led to high quality language models such as word2vec [7] and GloVe [8]. These language models, which consist of up to half a million unique words, are trained on billions of documents from sources such as Wikipedia, CommonCrawl, GoogleNews, and Twitter. ...","url":["https://www.securityweek2016.tu-darmstadt.de/fileadmin/user_upload/Group_securityweek2016/pets2016/9_a_story.pdf"]} -{"year":"2016","title":"A Way out of the Odyssey: Analyzing and Combining Recent Insights for LSTMs","authors":["S Longpre, S Pradhan, C Xiong, R Socher - arXiv preprint arXiv:1611.05104, 2016"],"snippet":"... All models in this paper used publicly available 300 dimensional word vectors, pre-trained using Glove on 840 million tokens of Common Crawl Data (Pennington et al., 2014), and both the word vectors and the subsequent weight matrices were trained using Adam with a ...","url":["https://arxiv.org/pdf/1611.05104"]} -{"year":"2016","title":"A Web Application to Search a Large Repository of Taxonomic Relations from the Web","authors":["S Faralli, C Bizer, K Eckert, R Meusel, SP Ponzetto"],"snippet":"... 1 https://commoncrawl.org 2 http://webdatacommons.org/framework/ 3 https://www.mongodb. com ... of the two noun phrases involved in the isa relations into pre-modifiers, head and post-modifiers [6], as well as the frequency of occurrence of the relation in the Common Crawl...","url":["http://ceur-ws.org/Vol-1690/paper58.pdf"]} -{"year":"2016","title":"Abu-MaTran at WMT 2016 Translation Task: Deep Learning, Morphological Segmentation and Tuning on Character Sequences","authors":["VM Sánchez-Cartagena, A Toral - Proceedings of the First Conference on Machine …, 2016"],"snippet":"... 
362 Page 2. Corpus Sentences (k) Words (M) Europarl v8 2 121 39.5 Common Crawl 113 995 2 416.7 News Crawl 2014–15 6 741 83.1 Table 1: Finnish monolingual data, after preprocessing, used to train the LMs of our SMT submission. ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2322.pdf"]} -{"year":"2016","title":"Action Classification via Concepts and Attributes","authors":["A Rosenfeld, S Ullman - arXiv preprint arXiv:1605.07824, 2016"],"snippet":"... To assign GloVe [19] vectors to object names or attributes, we use the pre-trained model on the Common-Crawl (42B) corpus, which contains a vocabulary of 1.9M words. We break up phrases into their words and assign to them their mean GloVe vector. ...","url":["http://arxiv.org/pdf/1605.07824"]} -{"year":"2016","title":"Active Content-Based Crowdsourcing Task Selection","authors":["P Bansal, C Eickhoff, T Hofmann"],"snippet":"... Pennington et. al. [28] showed distributed text representations to capture more semantic information when the models are trained on Wikipedia text, as opposed to other large corpora such as the Common Crawl. This is attributed ...","url":["https://www.researchgate.net/profile/Piyush_Bansal4/publication/305442609_Active_Content-Based_Crowdsourcing_Task_Selection/links/578f416d08ae81b44671ad85.pdf"]} -{"year":"2016","title":"Adverse Drug Reaction Classification With Deep Neural Networks","authors":["T Huynh, Y He, A Willis, S Rüger"],"snippet":"... 4http://commoncrawl.org/ 5Source code is available at https://github.com/trunghlt/ AdverseDrugReaction 879 Page 4. max pooling feedforward layer convolutional layer (a) Convolutional Neural Network (CNN) (b) Recurrent Convolutional Neural Network (RCNN) ...","url":["http://www.aclweb.org/anthology/C/C16/C16-1084.pdf"]} -{"year":"2016","title":"All Your Data Are Belong to us. European Perspectives on Privacy Issues in 'Free'Online Machine Translation Services","authors":["P Kamocki, J O'Regan, M Stauch - Privacy and Identity Management. 
Time for a …, 2016"],"snippet":"... http://​www.​cnet.​com/​news/​google-translate-now-serves-200-million-people-daily/​. Accessed 23 Oct 2014. Smith, JR, Saint-Amand, H., Plamada, M., Koehn, P., Callison-Burch, C., Lopez, A.: Dirt cheap web-scale parallel text from the Common Crawl...","url":["http://link.springer.com/chapter/10.1007/978-3-319-41763-9_18"]} -{"year":"2016","title":"An Analysis of Real-World XML Queries","authors":["P Hlísta, I Holubová - OTM Confederated International Conferences\" On the …, 2016"],"snippet":"... crawler. Or, there is another option – Common Crawl [1], an open repository of web crawled data that is universally accessible and analyzable, containing petabytes of data collected over the last 7 years. ... 3.1 Common Crawl. We ...","url":["http://link.springer.com/chapter/10.1007/978-3-319-48472-3_36"]} -{"year":"2016","title":"An Attentive Neural Architecture for Fine-grained Entity Type Classification","authors":["S Shimaoka, P Stenetorp, K Inui, S Riedel - arXiv preprint arXiv:1604.05525, 2016"],"snippet":"... appearing in the training set. Specifically, we used the freely available 300 dimensional cased word embeddings trained on 840 billion to- kens from the Common Crawl supplied by Pennington et al. (2014). As embeddings ...","url":["http://arxiv.org/pdf/1604.05525"]} -{"year":"2016","title":"Analysing Structured Scholarly Data Embedded in Web Pages","authors":["P Sahoo, U Gadiraju, R Yu, S Saha, S Dietze"],"snippet":"... the following section. 2.2 Methodology and Dataset For our investigation, we use the Web Data Commons (WDC) dataset, being the largest available corpus of markup, extracted from the Common Crawl. Of the crawled web ...","url":["http://cs.unibo.it/save-sd/2016/papers/pdf/sahoo-savesd2016.pdf"]} -{"year":"2016","title":"ArabicWeb16: A New Crawl for Today's Arabic Web","authors":["R Suwaileh, M Kutlu, N Fathima, T Elsayed, M Lease"],"snippet":"... English content dominates the crawl [12]. 
While Common Crawl could be mined to identify and ex- tract a useful Arabic subset akin to ArClueWeb09, this would address only recency, not coverage. To address the above concerns ...","url":["http://www.ischool.utexas.edu/~ml/papers/sigir16-arabicweb.pdf"]} -{"year":"2016","title":"Ask Your Neurons: A Deep Learning Approach to Visual Question Answering","authors":["M Malinowski, M Rohrbach, M Fritz - arXiv preprint arXiv:1605.02697, 2016"],"snippet":"Page 1. Noname manuscript Ask Your Neurons: A Deep Learning Approach to Visual Question Answering Mateusz Malinowski · Marcus Rohrbach · Mario Fritz Abstract We address a question answering task on realworld images that is set up as a Visual Turing Test. ...","url":["http://arxiv.org/pdf/1605.02697"]} -{"year":"2016","title":"Automated Generation of Multilingual Clusters for the Evaluation of Distributed Representations","authors":["P Blair, Y Merhav, J Barry - arXiv preprint arXiv:1611.01547, 2016"],"snippet":"... (2013a), the 840-billion token Common Crawl corpus-trained GloVe model released by Pennington et al. (2014), and the English, Spanish, German, Japanese, and Chinese MultiCCA vectors5 from Ammar et al. ... Outliers OOV GloVe Common Crawl 75.53 38.57 5 6.33 5.70 ...","url":["https://arxiv.org/pdf/1611.01547"]} -{"year":"2016","title":"Automated Haiku Generation based on Word Vector Models","authors":["AF Aji"],"snippet":"... and Page 28. 16 Chapter 3. Design Common Crawl data. Those data also come with various vector dimension size from 50-D to 300-D. Those pre-trained word vectors are used directly for this project as they take considerably ...","url":["http://project-archive.inf.ed.ac.uk/msc/20150275/msc_proj.pdf"]} -{"year":"2016","title":"Automatic Construction of Morphologically Motivated Translation Models for Highly Inflected, Low-Resource Languages","authors":["J Hewitt, M Post, D Yarowsky - AMTA 2016, Vol., 2016"],"snippet":"... 
sentences of Europarl (Koehn, 2005), SETIMES3 (Tyers and Alperen, 2010), extracted from OPUS (Tiedemann, 2009), or Common Crawl (Bojar et al ... Turkish, we train models on 29000 sentences of biblical data with 1000 and 20000 sentences of CommonCrawl and SETIMES ...","url":["https://www.researchgate.net/profile/John_Ortega3/publication/309765044_Fuzzy-match_repair_using_black-box_machine_translation_systems_what_can_be_expected/links/5822496f08ae7ea5be6af317.pdf#page=183"]} -{"year":"2016","title":"B1A3D2 LUC@ WMT 2016: a Bilingual1 Document2 Alignment3 Platform Based on Lucene","authors":["L Jakubina, P Langlais"],"snippet":"... 2013. Dirt cheap web-scale parallel text from the common crawl. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1374–1383. Jakob Uszkoreit, Jay M. Ponte, Ashok C. Popat, and Moshe Dubiner. 2010. ...","url":["http://www-etud.iro.umontreal.ca/~jakubinl/publication/badluc_jaklan_wmt16_stbad.pdf"]} -{"year":"2016","title":"Big Data Facilitation and Management","authors":["J Fagerli"],"snippet":"Page 1. Faculty of Science and Technology Department of Computer Science Big Data Facilitation and Management A requirements analysis and initial evaluation of a big biological data processing service — Jarl Fagerli INF ...","url":["http://bdps.cs.uit.no/papers/capstone-jarl.pdf"]} -{"year":"2016","title":"Bootstrap, Review, Decode: Using Out-of-Domain Textual Data to Improve Image Captioning","authors":["W Chen, A Lucchi, T Hofmann - arXiv preprint arXiv:1611.05321, 2016"],"snippet":"... We report the performance of our model and competing methods in terms of six standard metrics used for image captioning as described in [4]. During the bootstrap learning phase, we use both the 20082010 News-CommonCrawl and Europarl corpus 2 as out- of-domain ...","url":["https://arxiv.org/pdf/1611.05321"]} -{"year":"2016","title":"bot. 
zen@ EVALITA 2016-A minimally-deep learning PoS-tagger (trained for Italian Tweets)","authors":["EW Stemle"],"snippet":"... The data was only distributed to the task participants. 4.1.4 C4Corpus (w2v) c4corpus8 is a full documents Italian Web corpus that has been extracted from CommonCrawl, the largest publicly available general Web crawl to date. ...","url":["http://ceur-ws.org/Vol-1749/paper_020.pdf"]} -{"year":"2016","title":"Building mutually beneficial relationships between question retrieval and answer ranking to improve performance of community question answering","authors":["M Lan, G Wu, C Xiao, Y Wu, J Wu - Neural Networks (IJCNN), 2016 International Joint …, 2016"],"snippet":"... The first is the 300-dimensional version of word2vec [23] vectors, which is trained on part of Google News dataset (about 100 billion words). The second is 300-dimensional Glove vectors [24] which is trained on 840 billion tokens of Common Crawl data. ...","url":["http://ieeexplore.ieee.org/abstract/document/7727286/"]} -{"year":"2016","title":"C4Corpus: Multilingual Web-size corpus with free license","authors":["I Habernal, O Zayed, I Gurevych"],"snippet":"... documents. Our project is entitled C4Corpus, an abbreviation of Creative Commons from Common Crawl Corpus and is hosted under the DKPro umbrella4 at https:// github.com/dkpro/dkpro-c4corpus under ASL 2.0 license. ...","url":["https://www.ukp.tu-darmstadt.de/fileadmin/user_upload/Group_UKP/publikationen/2016/lrec2016-c4corpus-camera-ready.pdf"]} -{"year":"2016","title":"Capturing Pragmatic Knowledge in Article Usage Prediction using LSTMs","authors":["J Kabbara, Y Feng, JCK Cheung"],"snippet":"... GloVe: The embedding is initialized by the global vectors Pennington et al. (2014) that are trained on the Common Crawl corpus (840 billion tokens). Both word2vec and GloVe word embeddings consist of 300 dimensions. 
...","url":["https://www.aclweb.org/anthology/C/C16/C16-1247.pdf"]} -{"year":"2016","title":"Character-level and Multi-channel Convolutional Neural Networks for Large-scale Authorship Attribution","authors":["S Ruder, P Ghaffari, JG Breslin - arXiv preprint arXiv:1609.06686, 2016"],"snippet":"... σ: Standard deviation of document number. d: Median document size (tokens). All word embedding channels are initialized with 300-dimensional GloVe vectors (Pennington et al., 2014) trained on 840B tokens of the Common Crawl corpus11. ...","url":["http://arxiv.org/pdf/1609.06686"]} -{"year":"2016","title":"Citation Classification for Behavioral Analysis of a Scientific Field","authors":["D Jurgens, S Kumar, R Hoover, D McFarland… - arXiv preprint arXiv: …, 2016"],"snippet":"... The classifier is implemented using SciKit (Pedregosa et al., 2011) and syntactic processing was done using CoreNLP (Manning et al., 2014). Selectional preferences used pretrained 300-dimensional vectors from the 840B token Common Crawl (Pennington et al., 2014). ...","url":["http://arxiv.org/pdf/1609.00435"]} -{"year":"2016","title":"CNRC at SemEval-2016 Task 1: Experiments in crosslingual semantic textual similarity","authors":["C Lo, C Goutte, M Simard - Proceedings of SemEval, 2016"],"snippet":"... The system was 3We use the glm function in R. 669 Page 3. trained using standard resources – Europarl, Common Crawl (CC) and News & Commentary (NC) – totaling approximately 110M words in each language. Phrase ...","url":["http://anthology.aclweb.org/S/S16/S16-1102.pdf"]} -{"year":"2016","title":"Commonsense Knowledge Base Completion","authors":["X Li, A Taheri, L Tu, K Gimpel"],"snippet":"... We use the GloVe (Pennington et al., 2014) embeddings trained on 840 billion tokens of Common Crawl web text and the PARAGRAM-SimLex embeddings of Wieting et al. (2015), which were tuned to have strong performance on the SimLex-999 task (Hill et al., 2015). 
...","url":["http://ttic.uchicago.edu/~kgimpel/papers/li+etal.acl16.pdf"]} -{"year":"2016","title":"Comparing Topic Coverage in Breadth-First and Depth-First Crawls Using Anchor Texts","authors":["AP de Vries - Research and Advanced Technology for Digital …, 2016","T Samar, MC Traub, J van Ossenbruggen, AP de Vries - International Conference on …, 2016"],"snippet":"... nl domain, with the goal to crawl websites as completes as possible. The second crawl was collected by the Common Crawl foundation using a breadth-first strategy on the entire Web, this strategy focuses on discovering as many links as possible. ...","url":["http://books.google.de/books?hl=en&lr=lang_en&id=VmTUDAAAQBAJ&oi=fnd&pg=PA133&dq=%22common+crawl%22&ots=STVgD4vke3&sig=Gr5Q94wWtvFSfT_EYf1cQGP-Mrg","http://link.springer.com/chapter/10.1007/978-3-319-43997-6_11"]} -{"year":"2016","title":"COMPARISON OF DISTRIBUTIONAL SEMANTIC MODELS FOR RECOGNIZING TEXTUAL ENTAILMENT.","authors":["Y WIBISONO, DWIH WIDYANTORO… - Journal of Theoretical & …, 2016"],"snippet":"... To our knowledge, this paper is the first study of various DSM on RTE. We found that DSM improves entailment accuracy, with the best DSM is GloVe trained with 42 billion tokens taken from Common Crawl corpus. ... Glove_42B Common Crawl 42 billion tokens ...","url":["http://search.ebscohost.com/login.aspx?direct=true&profile=ehost&scope=site&authtype=crawler&jrnl=19928645&AN=120026939&h=neaFgJXHcv5SjyzIFWIJp046Uq5Cr3qfiPCmXc4DYTEi9kN6SN9YQqm1CUdjmDg%2BwZzzXWI6ftJLniJiB6Go1g%3D%3D&crl=c"]} -{"year":"2016","title":"ConceptNet 5.5: An Open Multilingual Graph of General Knowledge","authors":["R Speer, J Chin, C Havasi - arXiv preprint arXiv:1612.03975, 2016"],"snippet":"... 2013), and the GloVe 1.2 embeddings trained on 840 billion words of the Common Crawl (Pennington, Socher, and Manning 2014). These matrices are downloadable, and we will be using them both as a point of comparison and as inputs to an ensemble. 
...","url":["https://arxiv.org/pdf/1612.03975"]} -{"year":"2016","title":"Content Selection through Paraphrase Detection: Capturing different Semantic Realisations of the Same Idea","authors":["E Lloret, C Gardent - WebNLG 2016, 2016"],"snippet":"... either sentences or pred-arg structures, GLoVe pre-trained WE vectors (Pennington et al., 2014) were used, specifically the ones derived from Wikipedia 2014+ Gi- gaword 5 corpora, containing around 6 billion to- kens; and the ones derived from a Common Crawl, with 840 ...","url":["https://webnlg2016.sciencesconf.org/data/pages/book.pdf#page=33"]} -{"year":"2016","title":"Corporate Smart Content Evaluation","authors":["R Schäfermeier, AA Todor, A La Fleur, A Hasan… - 2016"],"snippet":"Page 1. Fraunhofer FOKUS FRAUNHOFER INSTITUTE FOR OPEN COMMUNICATION SYSTEMS FOKUS STUDY – CORPORATE SMART CONTENT EVALUATION Page 2. Page 3. STUDY – CORPORATE SMART CONTENT EVALUATION ...","url":["http://www.diss.fu-berlin.de/docs/servlets/MCRFileNodeServlet/FUDOCS_derivate_000000006523/CSCStudie2016.pdf"]} -{"year":"2016","title":"Crawl and crowd to bring machine translation to under-resourced languages","authors":["A Toral, M Esplá-Gomis, F Klubička, N Ljubešić… - Language Resources and …"],"snippet":"... Wikipedia. The CommonCrawl project 5 should be mentioned here as it allows researchers to traverse a frequently updated crawl of the whole web in search of specific data, and therefore bypass the data collection process. ...","url":["http://link.springer.com/article/10.1007/s10579-016-9363-6"]} -{"year":"2016","title":"Cross Site Product Page Classification with Supervised Machine Learning","authors":["J HUSS"],"snippet":"... An other data set used often is Common Crawl [1], which is a possible source that contain product specification pages. The data of Common Crawl is not complete with HTML-source code and it was collected in 2013, which creates many dead links. 
...","url":["http://www.nada.kth.se/~ann/exjobb/jakob_huss.pdf"]} -{"year":"2016","title":"CSA++: Fast Pattern Search for Large Alphabets","authors":["S Gog, A Moffat, M Petri - arXiv preprint arXiv:1605.05404, 2016"],"snippet":"... The latter were extracted from a sentence-parsed prefix of the German and Spanish sections of the CommonCrawl5. The four 200 ... translation process described by Shareghi et al., corresponding to 40,000 sentences randomly selected from the German part of Common Crawl...","url":["http://arxiv.org/pdf/1605.05404"]} -{"year":"2016","title":"CUNI-LMU Submissions in WMT2016: Chimera Constrained and Beaten","authors":["A Tamchyna, R Sudarikov, O Bojar, A Fraser - Proceedings of the First Conference on …, 2016"],"snippet":"... tag). Our input is factored and contains the form, lemma, morphological tag, 1http://commoncrawl.org/ 387 Page 4. lemma ... 2015. The second LM only uses 4-grams but additionally contains the full Common Crawl corpus. We ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2325.pdf"]} -{"year":"2016","title":"D6. 3: Improved Corpus-based Approaches","authors":["CP Escartin, LS Torres, CO UoW, AZ UMA, S Pal - 2016"],"snippet":"... Based on this system and the data retrieved from Common Crawl, several websites were identified as possible candidates for crawling. ... 8http://commoncrawl.org/ 9For a description of this tool, see Section 3.1.2 in this Deliverable. 6 Page 9. ...","url":["http://expert-itn.eu/sites/default/files/outputs/expert_d6.3_20160921_improved_corpus-based_approaches.pdf"]} -{"year":"2016","title":"Data Selection for IT Texts using Paragraph Vector","authors":["MS Duma, W Menzel - Proceedings of the First Conference on Machine …, 2016"],"snippet":"... models/doc2vec.html 3http://commoncrawl.org/ 4https://github.com/melix/jlangdetect 5-gram LMs using the SRILM toolkit (Stolcke, 2002) with Kneser-Ney discounting (Kneser and Ney, 1995) on the target side of the Commoncrawl and IT corpora. 
...","url":["http://www.aclweb.org/anthology/W/W16/W16-2331.pdf"]} -{"year":"2016","title":"David W. Embley, Mukkai S. Krishnamoorthy, George Nagy &","authors":["S Seth"],"snippet":"... tabulated data on the web even before Big Data became a byword [1]. Assuming “that an average table contains on average 50 facts it is possible to extract more than 600 billion facts taking into account only the 12 billion sample tables found in the Common Crawl” [2]. Tables ...","url":["https://www.ecse.rpi.edu/~nagy/PDF_chrono/2016_Converting%20Web%20Tables,IJDAR,%2010.1007_s10032-016-0259-1.pdf"]} -{"year":"2016","title":"Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translatin","authors":["J Zhou, Y Cao, X Wang, P Li, W Xu - arXiv preprint arXiv:1606.04199, 2016"],"snippet":"... 4.1 Data sets For both tasks, we use the full WMT'14 parallel corpus as our training data. The detailed data sets are listed below: • English-to-French: Europarl v7, Common Crawl, UN, News Commentary, Gigaword • English-to-German: Europarl v7, Common 7M for each organization /(Organization) …","url":["https://pdfs.semanticscholar.org/presentation/08d7/0e3f0b27b03a5f99f22bfeebeafa47c9bbb7.pdf"]} -{"year":"2018","title":"On the Compressed Sensing Properties of Word Embeddings","authors":["M Khodak - 2018"],"snippet":"… word2vec embeddings trained on Google News and GloVe vectors trained on Common Crawl were obtained from public repositories [20, 23] while Amazon and Wikipedia embeddings were trained for 100 iterations using …","url":["ftp://ftp.cs.princeton.edu/techreports/2018/008.pdf"]} -{"year":"2018","title":"On the Design and Tuning of Machine Learning Models for Language Toxicity Classification in Online Platforms","authors":["M Rybinski, W Miller, J Del Ser, MN Bilbao… - International Symposium on …, 2018"],"snippet":"… For (B) we have used vectors pre-trained on Common Crawl corpus with GloVe algorithm [9]. 
A more complete description of the text representations involved in our experiments is given below … For pre-trained embeddings …","url":["https://link.springer.com/chapter/10.1007/978-3-319-99626-4_29"]} -{"year":"2018","title":"Ontology Augmentation Through Matching with Web Tables","authors":["O Lehmberg, O Hassanzadeh"],"snippet":"… 3 http://commoncrawl.org … The used PageRank values are obtained from the publicly available Common Crawl WWW Ranking.4 For each partition of columns, we use the maximum PageRank of all source web pages and …","url":["http://disi.unitn.it/~pavel/om2018/papers/om2018_LTpaper4.pdf"]} -{"year":"2018","title":"Ontology Driven Extraction of Research Processes","authors":["V Pertsas, P Constantopoulos, I Androutsopoulos"],"snippet":"… Our experiments with other general-purpose, publicly available embeddings, such as those trained on the Common Crawl corpus using GloVe9, or those trained on Wikipedia articles with word2vec, showed inferior performance …","url":["http://www2.aueb.gr/users/ion/docs/iswc2018.pdf"]} -{"year":"2018","title":"Open Bibliometrics and Undiscovered Public Knowledge","authors":["D Stuart - Online Information Review, 2018"],"snippet":"… Whether altmetrics is really any more open than traditional citation analysis is a matter of debate, although services such as Common Crawl (http://commoncrawl.org), an open repository of web crawl data, provides …","url":["https://www.emeraldinsight.com/doi/abs/10.1108/OIR-07-2017-0209"]} -{"year":"2018","title":"OpenSeq2Seq: extensible toolkit for distributed and mixed precision training of sequence-to-sequence models","authors":["O Kuchaiev, B Ginsburg, I Gitman, V Lavrukhin, C Case… - arXiv preprint arXiv …, 2018"],"snippet":"… training). 
In our experiments, we used WMT 2016 English→German data set obtained by combining the Europarlv7, News Commentary v10, and Common Crawl corpora and resulting in roughly 4.5 million sentence pairs …","url":["https://arxiv.org/pdf/1805.10387"]} -{"year":"2018","title":"Optimizing Automatic Evaluation of Machine Translation with the ListMLE Approach","authors":["M Li, M Wang - ACM Transactions on Asian and Low-Resource …, 2018"],"snippet":"… Bilingual parallel data comprising Europarl v7, Common Crawl corpus, and News Commentary v10, released by the WMT'2015 Machine Translation Shared Task [39] were employed to train the bidirectional lexical translation probability …","url":["https://dl.acm.org/citation.cfm?id=3226045"]} -{"year":"2018","title":"Out-of-Distribution Detection using Multiple Semantic Label Representations","authors":["G Shalev, Y Adi, J Keshet - arXiv preprint arXiv:1808.06664, 2018"],"snippet":"… respectively. The third and forth representations were based on GloVe [36], where the third one was trained using both Wikipedia corpus and Gigawords [34] dataset, the fourth was trained using Common Crawl dataset. The …","url":["https://arxiv.org/pdf/1808.06664"]} -{"year":"2018","title":"Pangloss: Fast Entity Linking in Noisy Text Environments","authors":["M Conover, M Hayes, S Blackburn, P Skomoroch… - arXiv preprint arXiv …, 2018"],"snippet":"… For each surface form Pangloss calls a RocksDB key-value store to retrieve candidate entries (represented by circles) based on associations between hyperlink anchor text and Wikipedia URLs in Wikipedia and Common Crawl (Section 3.5) … 3.3.4 Common Crawl …","url":["https://arxiv.org/pdf/1807.06036"]} -{"year":"2018","title":"ParaNMT-50M: Pushing the Limits of Paraphrastic Sentence Embeddings with Millions of Machine Translations","authors":["J Wieting, K Gimpel - Proceedings of the 56th Annual Meeting of the …, 2018"],"snippet":"… Dataset Avg. Length Avg. IDF Avg. Para. Score Vocab. 
Entropy Parse Entropy Total Size Common Crawl 24.0±34.7 7.7±1.1 0.83±0.16 7.2 3.5 0.16M … (2017). Its training data includes four sources: Common Crawl, CzEng 1.6 (Bojar …","url":["http://www.aclweb.org/anthology/P18-1042"]} -{"year":"2018","title":"Periodizing Web Archiving: Biographical, Event-Based, National and Autobiographical Traditions","authors":["R Rogers"],"snippet":"Page 1. Periodizing Web Archiving: Biographical, Event-Based, National and Autobiographical Traditions Richard Rogers INTRODUCTION: HISTORIOGRAPHIES BUILT INTO WEB ARCHIVES The purpose of this chapter is …","url":["https://www.researchgate.net/profile/Richard_Rogers13/publication/327403018_Periodizing_Web_Archiving_Biographical_Event-Based_National_and_Autobiographical_Traditions/links/5b8d511e299bf114b7eeea4e/Periodizing-Web-Archiving-Biographical-Event-Based-National-and-Autobiographical-Traditions.pdf"]} -{"year":"2018","title":"Phrase-Level Metaphor Identification using Distributed Representations of Word Meaning","authors":["O Zayed, JP McCrae, P Buitelaar - NAACL HLT 2018, 2018"],"snippet":"… 10. –GloVe Common Crawl5: We used a pretrained model on the Common Crawl dataset containing 840 billion tokens of web data (about 2 million words). The vectors are 300dimensional using 100 training iteration. For …","url":["http://www.cl.cam.ac.uk/~es407/papers/Fig-Lang2018-proceedings.pdf#page=93"]} -{"year":"2018","title":"Phrase-level Self-Attention Networks for Universal Sentence Encoding","authors":["W Wu, H Wang, T Liu, S Ma - Proceedings of the 2018 Conference on Empirical …, 2018"],"snippet":"… and sentence textual similarity. 3.1 Model Configuration 300-dimensional GloVe (Pennington et al., 2014) word embeddings (Common Crawl, uncased) are used to represent words. Following Parikh et al. 
(2016), out-of-vocabulary …","url":["http://www.aclweb.org/anthology/D18-1408"]} -{"year":"2018","title":"Platypus–A Multilingual Question Answering Platform for Wikidata","authors":["TP Tanon, MD de Assunçao, E Caron, FM Suchanek"],"snippet":"… The template analyzer is implemented using RasaNLU [34]. We used the Glove [35] word vectors trained on Common Crawl provided by Spacy and the RasaNLU entity extractor based on the CRFsuite library [36]. Our system can be accessed in three ways …","url":["https://2018.eswc-conferences.org/wp-content/uploads/2018/02/ESWC2018_paper_130.pdf"]} -{"year":"2018","title":"Pointer-CNN for Visual Question Answering","authors":["J Svidt, JS Jepsen - 2018"],"snippet":"Page 1. Pointer-CNN for Visual Question Answering Jakob Svidt Aalborg University jsvidt13@student.aau.dk Jens Søholm Jepsen Aalborg University jjepse12@student. aau June 14, 2018 Abstract Visual Question Answering …","url":["https://projekter.aau.dk/projekter/files/281551620/SvidtJepsen.pdf"]} -{"year":"2018","title":"Pooling Word Vector Representations Across Models","authors":["AC Graesser, V Rus - … Linguistics and Intelligent Text Processing: 18th …"],"snippet":"… The model was trained on non-zero elements in a global word co-occurrence matrix. We used the pre-trained model GloVe-42B which was trained on 42 billion words of Common Crawl corpus and it contains about 1.9 million unique tokens …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=dDNyDwAAQBAJ&oi=fnd&pg=PA17&dq=commoncrawl&ots=qmDDIrUYu6&sig=KN6CvMWGvERnEsXe5NqWJlRnqnM"]} -{"year":"2018","title":"Predicting and Generating Discussion Inspiring","authors":["YJWJ Park"],"snippet":"… response time before being presented to the human. • An LSTM model [2] [4] using pretrained 300 dimensional GloVe word embeddings [5] on Common Crawl to embed the comments. 
A self-attention layer was added on top …","url":["http://web.stanford.edu/class/cs224n/reports/6879446.pdf"]} -{"year":"2018","title":"Predicting Company Ratings through Glassdoor","authors":["TE Whittle"],"snippet":"… This dataset has 300-dimnesional vectors for 3 million words and phrases. The GloVe pre-trained embeddings come from a model trained on 42 billion tokens encountered by Common Crawl, a program designed to crawl the web and extract text …","url":["http://web.stanford.edu/class/cs224n/reports/6880837.pdf"]} -{"year":"2018","title":"Predictive Embeddings for Hate Speech Detection on Twitter","authors":["R Kshirsagar, T Cukuvac, K McKeown, S McGregor - arXiv preprint arXiv:1809.10644, 2018"],"snippet":"… 5 Experimental Setup We tokenize the data using Spacy (HonnibalandJohnson, 2015). We use 300 Dimensional Glove Common Crawl Embeddings (840B Token) (Pennington et al., 2014) and fine tune them for the task. We …","url":["https://arxiv.org/pdf/1809.10644"]} -{"year":"2018","title":"Preferred Answer Selection in Stack Overflow: Better Text Representations... and Metadata, Metadata, Metadata","authors":["X Xu, A Bennett, D Hoogeveen, JH Lau, T Baldwin"],"snippet":"… Word embeddings were set to 150 di- mensions. The co-occurrence weighting function's maximum value xmax was kept at the default of 10. For SEMEVAL, we used pretrained Common Crawl cased embeddings with 840G tokens …","url":["http://noisy-text.github.io/2018/pdf/W-NUT201819.pdf"]} -{"year":"2018","title":"Preserved Structure Across Vector Space Representations","authors":["A Amatuni, E He, E Bergelson - arXiv preprint arXiv:1802.00840, 2018"],"snippet":"… We use the set of vectors pretrained by the GloVe authors on the Common Crawl corpus with 42 billion tokens, resulting in 300 dimensional vectors for 1.9 million unique words1. 
Such vectors have shown promise in modeling early semantic networks (Amatuni & Bergelson …","url":["https://arxiv.org/pdf/1802.00840"]} -{"year":"2018","title":"Probabilistic German Morphosyntax","authors":["R Schäfer"],"snippet":"Page 1. Probabilistic German Morphosyntax HABILITATIONSSCHRIFT zur Erlangung der Lehrbefähigung für das Fach Germanistische und Allgemeine Sprachwissenschaft vorgelegt der Philosophischen Fakultät II der …","url":["http://rolandschaefer.net/wp-content/uploads/RolandSchaefer_2018_ProbabilisticGermanMorphosyntax_Habil_DRAFT.pdf"]} -{"year":"2018","title":"PROMT Systems for WMT 2018 Shared Translation Task","authors":["A Molchanov - Proceedings of the Third Conference on Machine …, 2018"],"snippet":"… The CommonCrawl and (especially) ParaCrawl corpora were heavily filtered and normalized using the PROMT tools and algorithms (including language recognition, removal of meaningless sentences, in-house tools for parallel …","url":["http://www.aclweb.org/anthology/W18-6420"]} -{"year":"2018","title":"Pseudo Descriptions for Meta-Data Retrieval","authors":["T Gollub, E Genc, N Lipka, B Stein - Proceedings of the 2018 ACM SIGIR International …, 2018"],"snippet":"… As reference collections, the TREC collections themselves as well as Wikipedia and the CommonCrawl are used … 2Not considered were lists, disambiguations, and short (#words < 250) articles. 3http://commoncrawl …","url":["https://dl.acm.org/citation.cfm?id=3234957"]} -{"year":"2018","title":"QED: A fact verification system for the FEVER shared task","authors":["J Luken, N Jiang, MC de Marneffe - Proceedings of the First Workshop on Fact …, 2018"],"snippet":"… described below. 4.2 Embedding We used GloVe word embeddings (Pennington et al., 2014) with 300 dimensions pre-trained us- ing CommonCrawl to get a vector representation of the evidence sentence. 
We also experimented …","url":["http://www.aclweb.org/anthology/W18-5526"]} -{"year":"2018","title":"Quantifying macroeconomic expectations in stock markets using Google Trends","authors":["J Bock - arXiv preprint arXiv:1805.00268, 2018"],"snippet":"… trends.google.com/trends/, March 5, 2018. 3 Common Crawl (42B tokens) GloVe word embeddings, retrieved from Stanford University, https://nlp.stanford.edu/projects/ glove/, March 4, 2018. 4 GloVe word embeddings are vector …","url":["https://arxiv.org/pdf/1805.00268"]} -{"year":"2018","title":"Quantitative Web History Methods","authors":["A Cocciolo - The SAGE Handbook of Web History, 2018"]} -{"year":"2018","title":"Quantum-like Generalization of Complex Word Embedding: a lightweight approach for textual classification","authors":["H Liu"],"snippet":"… (B) - crawl-300d-2M-vec ‡ 2 Million 600 Billion (C) - GloVe.Common Crawl.840B.300d † 2.2 Million 840 Billion Table 1. The pre-trained word embedding models selected for this experiment, where †= GloVe algorithm embeddings, ‡= Fasttext algorithm embeddings …","url":["http://ceur-ws.org/Vol-2191/paper19.pdf"]} -{"year":"2018","title":"Quester: A Speech-Based Question Answering Support System for Oral Presentations","authors":["R Asadi, H Trinh, HJ Fell, TW Bickmore - … of the 23rd International Conference on …, 2018"],"snippet":"… ( , ) = √ 2 =0 (2) ( ) is the word vector representation of keyword k. 
We used a pre-trained GloVe [14] vector representation with 1.9 million uncased words and vectors with 300 elements, trained using 42 billion tokens of web data from Common Crawl …","url":["http://relationalagents.com/publications/IUI18.pdf"]} -{"year":"2018","title":"Question Answering on SQuAD Dataset","authors":["ZDJ Dong, J Geng"],"snippet":"… We use the 300-dimensional case-insensitive Common Crawl GloVe word embeddings [7], and do not retrain the embeddings during training … We believe this is mainly because the Common Crawl version has a much larger …","url":["http://web.stanford.edu/class/cs224n/reports/6878267.pdf"]} -{"year":"2018","title":"Question Answering System with Question Type Modelling","authors":["K Ponomareva"],"snippet":"… and bug fixing. It might be also useful to consider additional training data sets with more variety of question types present and using a larger pretrained word vectors set, such as CommonCrawl.840B.300d. Acknowledgments I …","url":["http://web.stanford.edu/class/cs224n/reports/6904810.pdf"]} -{"year":"2018","title":"Ranking Documents by Answer-Passage Quality","authors":["E Yulianti, RC Chen, F Scholer, WB Croft, M Sanderson - 2018"],"snippet":"… We use the same set of word embeddings learned from the Y!A data (as with EmbYA), but the effectiveness is roughly comparable to a pre-trained model learned on the Common Crawl data [9]. For all methods tested in …","url":["http://rueycheng.com/paper/answer-passages.pdf"]} -{"year":"2018","title":"Reaching Human-level Performance in Automatic Grammatical Error Correction: An Empirical Study","authors":["T Ge, F Wei, M Zhou - arXiv preprint arXiv:1807.01270, 2018"],"snippet":"Page 1. 
Microsoft Research Technical Report REACHING HUMAN-LEVEL PERFORMANCE IN AUTOMATIC GRAMMATICAL ERROR CORRECTION: AN EMPIRICAL STUDY Tao Ge, Furu Wei, Ming Zhou Natural Language …","url":["https://arxiv.org/pdf/1807.01270"]} -{"year":"2018","title":"Reasoning with Sarcasm by Reading In-between","authors":["Y Tay, LA Tuan, SC Hui, J Su - arXiv preprint arXiv:1805.02856, 2018"],"snippet":"… We use the GloVe model trained on 2B Tweets for the Tweets and Reddit dataset. The Glove model trained on Common Crawl is used for the Debates corpus. The size of the word em- beddings is fixed at d = 100 and are fine-tuned during training …","url":["https://arxiv.org/pdf/1805.02856"]} -{"year":"2018","title":"Recommendation System Developer","authors":["E Nandini, J Neal, T Olson, C Prater-Lee"],"snippet":"… This matrix is mapped to a vector space; GloVe uses a least squares regression model to minimize the dot product between word vectors. spaCy trains GloVe on Common Crawl corpus by default, which offers free web page data …","url":["https://taylorlolson.com/docs/recommendation-system-developer.pdf"]} -{"year":"2018","title":"Refining Word Embeddings Using Intensity Scores for Sentiment Analysis","authors":["LC Yu, J Wang, KR Lai, X Zhang"],"snippet":"Page 1. 2329-9290 (c) 2017 IEEE. Personal use is permitted, but republication/ redistribution requires IEEE permission. See http://www.ieee.org/ publications_standards/publications/rights/index.html for more information. 
This …","url":["https://www.researchgate.net/profile/Jin_Wang115/publication/322143064_Refining_Word_Embeddings_Using_Intensity_Scores_for_Sentiment_Analysis/links/5a4d9ad50f7e9b8284c4e442/Refining-Word-Embeddings-Using-Intensity-Scores-for-Sentiment-Analysis.pdf"]} -{"year":"2018","title":"Regularized Training Objective for Continued Training for Domain Adaptation in Neural Machine Translation","authors":["H Khayrallah, B Thompson, K Duh, P Koehn - Proceedings of the 2nd Workshop on …, 2018"],"snippet":"… large, out-of-domain corpus we utilize bi- text from WMT2017 (Bojar et al., 2017),4 which contains data from several sources: Europarl parliamentary proceedings (Koehn, 2005),5 News Commentary (political and economic …","url":["http://www.aclweb.org/anthology/W18-2705"]} -{"year":"2018","title":"Report on the Third Quality Translation Shared Task","authors":["P Williams, P Koehn, O Bojar, T Kocmi, L Specia… - 2018"],"snippet":"Page 1. This document is part of the Research and Innovation Action “Quality Translation 21 (QT21)”. This project has received funding from the European Union's Horizon 2020 program for ICT under grant agreement no. 645452 …","url":["http://www.qt21.eu/wp-content/uploads/2018/08/QT21-D4.3.pdf"]} -{"year":"2018","title":"Representativeness of Latent Dirichlet Allocation Topics Estimated from Data Samples with Application to Common Crawl","authors":["Y Du, A Herzog, A Luckow, R Nerella, C Gropp, A Apon"],"snippet":"Abstract—Common Crawl is a massive multi-petabyte dataset hosted by Amazon. It contains archived HTML web page data from 2008 to date. Common Crawl has been widely used for text mining purposes. 
Using data extracted from Common Crawl has several advantages","url":["https://www.researchgate.net/profile/Yuheng_Du/publication/322512712_Representativeness_of_latent_dirichlet_allocation_topics_estimated_from_data_samples_with_application_to_common_crawl/links/5a67b4980f7e9b76ea8f086e/Representativeness-of-latent-dirichlet-allocation-topics-estimated-from-data-samples-with-application-to-common-crawl.pdf"]} -{"year":"2018","title":"Reproducible Web Corpora: Interactive Archiving with Automatic Quality Assessment","authors":["J KIESEL, F KNEIST, M ALSHOMARY, B STEIN… - 2018"],"snippet":"… since disappeared. The Common Crawl [36] is also missing many of such resources. Other … the web. As a population of web pages to draw a sample from, we resort to the recent billion-page Common Crawl 2017-04 [36]. From …","url":["https://webis.de/downloads/publications/papers/stein_2018q.pdf"]} -{"year":"2018","title":"Research Data Management","authors":["S Kühne"],"snippet":"… the web as graph data - 886 m. nodes (07/2018) - 5.4 bn. edges (07/2018) THE COMMON CRAWL ARCHIVE, http://commoncrawl.org Some research questions - prevalence of Web advertising - etymologies of words …","url":["https://www.ral.uni-leipzig.de/fileadmin/user_upload/dokumente/Schleyer/Kuehne_Introduction.pdf"]} -{"year":"2018","title":"Research Frontiers in Information Retrieval Report from the Third Strategic Workshop on Information Retrieval in Lorne (SWIRL 2018)","authors":["JS Culpepper, F Diaz, MD Smucker"],"snippet":"Page 1. WORKSHOP REPORT Research Frontiers in Information Retrieval Report from the Third Strategic Workshop on Information Retrieval in Lorne (SWIRL 2018) Editors J. Shane Culpepper, Fernando Diaz, and Mark D. 
Smucker …","url":["http://www.damianospina.com/wp-content/uploads/2018/04/swirl3-report.pdf"]} -{"year":"2018","title":"RI-Match: Integrating Both Representations and Interactions for Deep Semantic Matching","authors":["L Chen, Y Lan, L Pang, J Guo, J Xu, X Cheng - Asia Information Retrieval Symposium, 2018"],"snippet":"… First, we introduce our experimental settings, including parameter setting, and evaluation metrics. Parameter Settings. We initialize word embeddings in the word embedding layer with 300-dimensional Glove word vectors pre-trained in the 840B Common Crawl corpus …","url":["https://link.springer.com/chapter/10.1007/978-3-030-03520-4_9"]} -{"year":"2018","title":"Rigging Research Results by Manipulating Top Websites Rankings","authors":["VL Pochat, T Van Goethem, W Joosen - arXiv preprint arXiv:1806.01156, 2018"],"snippet":"Page 1. Rigging Research Results by Manipulating Top Websites Rankings Victor Le Pochat, Tom Van Goethem and Wouter Joosen imec-DistriNet, KU Leuven 3001 Leuven, Belgium Email: firstname.lastname@cs.kuleuven.be …","url":["https://arxiv.org/pdf/1806.01156"]} -{"year":"2018","title":"Risk Analysis of Information-Leakage Through Interest Packets in NDN","authors":["D Kondo, T Silverston, H Tode, T Asami, O Perrin"],"snippet":"… We collected URLs from the data repository provided by Common Crawl and we evaluate the performances of our per-packet filters … All the URLs in our data set provided by Common Crawl did not necessarily have part as defined in RFC 1808 …","url":["http://ieeexplore.ieee.org/iel7/8106907/8116300/08116403.pdf"]} -{"year":"2018","title":"Robots Learn Social Skills: End-to-End Learning of Co-Speech Gesture Generation for Humanoid Robots","authors":["Y Yoon, WR Ko, M Jang, J Lee, J Kim, G Lee - arXiv preprint arXiv:1810.12541, 2018"],"snippet":"… In the space of word embedding, words of similar meaning have similar representations, so understanding natural language is easier. 
We used the pretrained word embedding model GloVe, trained on the Common Crawl corpus [21] …","url":["https://arxiv.org/pdf/1810.12541"]} -{"year":"2018","title":"S3BD: Secure Semantic Search over Encrypted Big Data in the Cloud","authors":["J Woodworth, MA Salehi - arXiv preprint arXiv:1809.07927, 2018"],"snippet":"Page 1. CONCURRENCY AND COMPUTATION: PRACTICE AND EXPERIENCE Concurrency Computat.: Pract. Exper. 0000; 00:1–22 Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/cpe …","url":["https://arxiv.org/pdf/1809.07927"]} -{"year":"2018","title":"Sandpiper: Scaling Probabilistic Inferencing to Large Scale Graphical Models","authors":["A Ulanov, M Marwah, M Kim, R Dathathri, C Zubieta…"],"snippet":"… It is the hyperlinked graph obtained from a web crawl conducted by Common Crawl in August 2012 [32] … We used a real, large-scale web graph for this use case. We derived it from the web graph available at Web Data Commons [30], based on the Common Crawl data [32] …","url":["http://marwah.org/publications/papers/bigdata2017.pdf"]} -{"year":"2018","title":"Scheduled Multi-Task Learning: From Syntax to Translation","authors":["E Kiperwasser, M Ballesteros - arXiv preprint arXiv:1804.08915, 2018"],"snippet":"Page 1. Scheduled Multi-Task Learning: From Syntax to Translation Eliyahu Kiperwasser∗ Computer Science Department Bar-Ilan University Ramat-Gan, Israel elikip@gmail.com Miguel Ballesteros IBM Research 1101 …","url":["https://arxiv.org/pdf/1804.08915"]} -{"year":"2018","title":"SDC: structured data collection by yourself","authors":["T Ohshima, M Toyama - Proceedings of the 8th International Conference on …, 2018"],"snippet":"… PVLDB 1, 1 (2008), 538--549. http://www.vldb.org/pvldb/171453916.pdf. 2. Common Crawl [nd]. Common Crawl. http://commoncrawl.org/. ([nd]). 3. DataHub - Frictionless Data [nd]. DataHub - Frictionless Data. http://datahub.io/. 
([nd]) …","url":["https://dl.acm.org/citation.cfm?id=3200849"]} -{"year":"2018","title":"Searching Arguments in German with ArgumenText","authors":["C Stahlhut"],"snippet":"… for arguments on a topic such as “nuclear energy”, it first retrieves relevant documents via Elasticsearch from a large collection of documents, such as Common Crawl2. In the … 1A demonstrator is publicly available at …","url":["http://desires.dei.unipd.it/papers/paper20.pdf"]} -{"year":"2018","title":"Searching for the X-Factor: Exploring Corpus Subjectivity for Word Embeddings","authors":["M Tkachenko, CC Chia, H Lauw - Proceedings of the 56th Annual Meeting of the …, 2018"],"snippet":"… embeddings. The previous preoccupation centers around corpus size, ie, a larger corpus is perceived to be richer in statistical information. For instance, popular corpora include Wikipedia, Common Crawl, and Google News. We …","url":["http://www.aclweb.org/anthology/P18-1112"]} -{"year":"2018","title":"SEGBOT: A Generic Neural Text Segmentation Model with Pointer Network","authors":["J Li, A Sun, S Joty - IJCAI. Under Review, 2018"],"snippet":"Page 1. SEGBOT: A Generic Neural Text Segmentation Model with Pointer Network Jing Li, Aixin Sun and Shafiq Joty School of Computer Science and Engineering, Nanyang Technological University, Singapore …","url":["https://www.researchgate.net/profile/Aixin_Sun/publication/325168535_SEGBOT_A_Generic_Neural_Text_Segmentation_Model_with_Pointer_Network/links/5afb9f710f7e9b3b0bf2a964/SEGBOT-A-Generic-Neural-Text-Segmentation-Model-with-Pointer-Network.pdf"]} -{"year":"2018","title":"Semantic Term\" Blurring\" and Stochastic\" Barcoding\" for Improved Unsupervised Text Classification","authors":["RF Martorano III - arXiv preprint arXiv:1811.02456, 2018"],"snippet":"… trained on millions of documents from Google News. In the case of later models, they have trained on common crawl, a dataset of billions of web pages. 
The high level intuition with these models, is that terms used in similar contexts, likely have similar semantics …","url":["https://arxiv.org/pdf/1811.02456"]} -{"year":"2018","title":"Semi-Supervised Neural System for Tagging, Parsing and Lemmatization","authors":["P Rybak, A Wróblewska - CoNLL 2018, 2018"],"snippet":"… For Uyghur language only 3M words are available. The provided data sets come either from Wikipedia or Commom Crawl. Where it is possible we choose the sentences from Common Crawl, due to longer (on average) sentence sizes …","url":["http://universaldependencies.org/conll18/proceedings/K18-2.pdf#page=53"]} -{"year":"2018","title":"Sentence Classification for Investment Rules Detection","authors":["Y Mansar, S Ferradans - Proceedings of the First Workshop on Economics and …, 2018"],"snippet":"… embedding. This is justified by the fact that some words used in prospectuses are uncommon in the general use of language and thus are not included in available word vectors pre-trained on Wikipedia or common crawl alone …","url":["http://www.aclweb.org/anthology/W18-3106"]} -{"year":"2018","title":"Sentence Encoding with Tree-constrained Relation Networks","authors":["L Yu, CM d'Autume, C Dyer, P Blunsom, L Kong… - arXiv preprint arXiv …, 2018"],"snippet":"Page 1. SENTENCE ENCODING WITH TREE-CONSTRAINED RE- LATION NETWORKS Lei Yu Cyprien de Masson d'Autume Chris Dyer Phil Blunsom Lingpeng Kong Wang Ling DeepMind {leiyu, cyprien, cdyer, pblunsom, lingpenk, lingwang}@google.com ABSTRACT …","url":["https://arxiv.org/pdf/1811.10475"]} -{"year":"2018","title":"Sentence Modeling via Multiple Word Embeddings and Multi-level Comparison for Semantic Textual Similarity","authors":["HN Tien, MN Le, Y Tomohiro, I Tatsuya - arXiv preprint arXiv:1805.07882, 2018"],"snippet":"… The em- bedding representations in fastText are 300dimensional vectors. 
• GloVe is a 300-dimensional word embedding model learned on aggregated global wordword co-occurrence statistics from Common Crawl (840 billion tokens) …","url":["https://arxiv.org/pdf/1805.07882"]} -{"year":"2018","title":"Sentence Selection and Weighting for Neural Machine Translation Domain Adaptation","authors":["R Wang, M Utiyama, A Finch, L Liu, K Chen, E Sumita - IEEE/ACM Transactions on …, 2018"],"snippet":"Page 1. 2329-9290 (c) 2018 IEEE. Personal use is permitted, but republication/ redistribution requires IEEE permission. See http://www.ieee.org/ publications_standards/publications/rights/index.html for more information. This …","url":["https://ieeexplore.ieee.org/abstract/document/8360031/"]} -{"year":"2018","title":"Sentence Similarity Learning Method based on Attention Hybrid Model","authors":["Y Wang, X Di, J Li, H Yang, L Bi - Journal of Physics: Conference Series, 2018"],"snippet":"… with our method. 4.3. Experimental Setup We initialize word representation in the word embedding layer with the 300-dimensional GloVe word vectors pre-trained from the Common Crawl Corpus [18]. Embeddings for words …","url":["http://iopscience.iop.org/article/10.1088/1742-6596/1069/1/012119/pdf"]} -{"year":"2018","title":"Sentence Simplification with Memory-Augmented Neural Networks","authors":["T Vu, B Hu, T Munkhdalai, H Yu - arXiv preprint arXiv:1804.07445, 2018"],"snippet":"… We used the same hyperparameters across all datasets. Word embeddings were initialized either randomly or with Glove vectors (Pennington et al., 2014) pre-trained on Common Crawl data (840B tokens), and fine-tuned during training …","url":["https://arxiv.org/pdf/1804.07445"]} -{"year":"2018","title":"SentEval: An Evaluation Toolkit for Universal Sentence Representations","authors":["A Conneau, D Kiela - arXiv preprint arXiv:1803.05449, 2018"],"snippet":"… Continuous bag-of-words embeddings (average of word vectors). 
We consider the most commonly used pretrained word vectors available, namely the fastText (Mikolov et al., 2017) and the GloVe (Pennington et al., 2014) vectors trained on CommonCrawl …","url":["https://arxiv.org/pdf/1803.05449"]} -{"year":"2018","title":"Sentiment Bias in Predictive Text Recommendations Results in Biased Writing","authors":["KC Arnold, K Chauncey, KZ Gajos"],"snippet":"… lending, or law enforcement—if the data sets used to train the algorithms are bi- ased [2]. Such biased data sets are more common than initially suspected: Recent work demonstrated that two popular text corpora, the …","url":["https://www.eecs.harvard.edu/~kgajos/papers/2018/arnold18sentiment.pdf"]} -{"year":"2018","title":"Sentiment Expression Boundaries in Sentiment Polarity Classification","authors":["R Kaljahi, J Foster - Proceedings of the 9th Workshop on Computational …, 2018"],"snippet":"… The input layer for these systems is the concatenation of an embedding layer, which uses pre-trained GloVe (Pennington et al., 2014) word embeddings 2 (1.9M vocabulary Common Crawl), concatenated with a one-hot vector …","url":["http://www.aclweb.org/anthology/W18-6222"]} -{"year":"2018","title":"Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples","authors":["M Cheng, J Yi, H Zhang, PY Chen, CJ Hsieh - arXiv preprint arXiv:1803.01128, 2018"],"snippet":"… one is trained from scratch. For the machine translation task, we train our model using 453k pairs from the Europal corpus of German-English WMT 157, common crawl and news-commentary. We use the hyper-parameters suggested …","url":["https://arxiv.org/pdf/1803.01128"]} -{"year":"2018","title":"Sequence-to-sequence Models for Cache Transition Systems","authors":["X Peng, L Song, D Gildea, G Satta"],"snippet":"… Hidden state sizes for both encoder and decoder are set to 100. 
The word embeddings are initialized from Glove pretrained word embeddings (Pennington et al., 2014) on Common Crawl, and are not updated during training …","url":["https://www.cs.rochester.edu/u/gildea/pubs/peng-acl18.pdf"]} -{"year":"2018","title":"Shortcutting Label Propagation for Distributed Connected Components","authors":["S Stergiou, D Rughwani, K Tsioutsiouliklis - … Conference on Web Search and Data …, 2018"],"snippet":"… Note: OCR errors may be found in this Reference List extracted from the full text article. ACM has opted to expose the complete List rather than only correct and linked references. 1. 2016. Common Crawl. (2016). http://commoncrawl.org/. 2. 2016. Twitter Graph. (2016) …","url":["https://dl.acm.org/citation.cfm?id=3159696"]} -{"year":"2018","title":"Simple Algorithms For Sentiment Analysis On Sentiment Rich, Data Poor Domains.","authors":["PK Sarma, W Sethares - Proceedings of the 27th International Conference on …, 2018"],"snippet":"… Furthermore, in some of these domains, representing words from off-the-shelf word embeddings such as ones obtained from training word2vec, GloVe on Wikipedia or common-crawl may not be efficient. This is because the …","url":["http://www.aclweb.org/anthology/C18-1290"]} -{"year":"2018","title":"SLIND: Identifying Stable Links in Online Social Networks","authors":["J Zhang, L Tan, X Tao, X Zheng, Y Luo, JCW Lin - International Conference on …, 2018"],"snippet":"… The dataset chosen for this study, as well as for the demo, was crawled from Facebook and obtained from the repositories of the Common Crawl (August 2016) 1 . It is de-anonymized to reveal the following relational …","url":["https://link.springer.com/chapter/10.1007/978-3-319-91458-9_54"]} -{"year":"2018","title":"Smart Focused Web Crawler for Hidden Web","authors":["S Kaur, G Geetha - Information and Communication Technology for …, 2019"],"snippet":"… The number of partitions will depend on the number of URLs in site database. 
Tel-8 and common crawl datasets will be used. MapReduce function will be called, and the input will split into 64 MB plus copies of this on other clusters …","url":["https://link.springer.com/chapter/10.1007/978-981-13-0586-3_42"]} -{"year":"2018","title":"SocialLink: exploiting graph embeddings to link DBpedia entities to Twitter profiles","authors":["Y Nechaev, F Corcoglioniti, C Giuliano - Progress in Artificial Intelligence, 2018"],"snippet":"Page 1. Progress in Artificial Intelligence https://doi.org/10.1007/s13748-018-0160-x REGULAR PAPER SocialLink: exploiting graph embeddings to link DBpedia entities to Twitter profiles Yaroslav Nechaev1,2 · Francesco Corcoglioniti1 · Claudio Giuliano1 …","url":["https://link.springer.com/article/10.1007/s13748-018-0160-x"]} -{"year":"2018","title":"Software Requirements Classification Using Word Embeddings and Convolutional Neural Networks","authors":["VL Fong - 2018"],"snippet":"Page 1. SOFTWARE REQUIREMENTS CLASSIFICATION USING WORD EMBEDDINGS AND CONVOLUTIONAL NEURAL NETWORKS A Thesis presented to the Faculty of California Polytechnic State University, San Luis Obispo In Partial Fulfillment …","url":["https://digitalcommons.calpoly.edu/cgi/viewcontent.cgi?article=3249&context=theses"]} -{"year":"2018","title":"SOLVENT: A Mixed Initiative System for Finding Analogies between Research Papers","authors":["J CHAN, JC CHANG, TOM HOPE, D SHAHAF… - 2018"],"snippet":"… We originally used pre-trained GloVe [35] vectors trained on the Common Crawl dataset 5. However, baseline performance was very poor … Finally, we used an initial prototype GloVe model (with Common Crawl) to suggest new matches we might have missed …","url":["http://joelchan.me/assets/pdf/2018-cscw-schema-highlighter.pdf"]} -{"year":"2018","title":"Speech-Based Real-Time Presentation Tracking Using Semantic Matching","authors":["R Asadi - 2017"],"snippet":"Speech-Based Real-Time Presentation Tracking Using Semantic Matching. Abstract. 
Oral presentations are an essential yet challenging aspect of academic and professional life. To date, many commercial and research products …","url":["http://search.proquest.com/openview/a91c85d71f1e130a00bee5b6f95d90e3/1?pq-origsite=gscholar&cbl=18750&diss=y"]} -{"year":"2018","title":"Stop Illegal Comments: A Multi-Task Deep Learning Approach","authors":["A Elnaggar, B Waltl, I Glaser, J Landthaler… - arXiv preprint arXiv …, 2018"],"snippet":"… 12] and our Glove). The Fast Text is 2 million word vectors trained on Common Crawl with dimension 300, while Glove is 2.2 million word vectors trained on Common Crawl with dimension 300. Furthermore, we trained a custom …","url":["https://arxiv.org/pdf/1810.06665"]} -{"year":"2018","title":"Studio Ousia's Quiz Bowl Question Answering System at NIPS HCQA 2017","authors":["I Yamada"],"snippet":"… Moreover, we use the GloVe word embeddings [10] trained on the 840 billion Common Crawl corpus to initialize the word representations. We randomly select 10% questions from the dataset as a validation set and use the remaining questions to train the model …","url":["http://www.cs.umd.edu/~miyyer/data/Ikuya.pdf"]} -{"year":"2018","title":"Studio Ousia's Quiz Bowl Question Answering System","authors":["I Yamada, R Tamaki, H Shindo, Y Takefuji"],"snippet":"… 1,000. We use filter window sizes of 2, 3, 4, and 5, and 1,000 feature maps for each filter. We use the GloVe word embeddings [12] trained on the 840 billion Common Crawl corpus to initialize the word representations. 
As in …","url":["https://www.researchgate.net/profile/Yoshiyasu_Takefuji/publication/323535360_Studio_Ousia%27s_Quiz_Bowl_Question_Answering_System/links/5a9a6cde45851586a2aa0ade/Studio-Ousias-Quiz-Bowl-Question-Answering-System.pdf"]} -{"year":"2018","title":"Studying the Difference Between Natural and Programming Language Corpora","authors":["C Casalnuovo, K Sagae, P Devanbu - arXiv preprint arXiv:1806.02437, 2018"],"snippet":"… The German and Spanish corpora were selected from a sample of files from the unlabeled datasets from the ConLL 2017 Shared Task (Ginter et al, 2017), which consist of web text obtained from CommonCrawl.8 Like the 1 billion …","url":["https://arxiv.org/pdf/1806.02437"]} -{"year":"2018","title":"Style Transfer Through Back-Translation","authors":["S Prabhumoye, Y Tsvetkov, R Salakhutdinov, AW Black - arXiv preprint arXiv …, 2018"],"snippet":"… We used data from Workshop in Statistical Machine Translation 2015 (WMT15) (Bojar et al., 2015) to train our translation models. We used the French– English data from the Europarl v7 corpus, the news commentary …","url":["https://arxiv.org/pdf/1804.09000"]} -{"year":"2018","title":"SumeCzech: Large Czech News-Based Summarization Dataset","authors":["M Straka, N Mediankin, T Kocmi, Z Žabokrtský… - Proceedings of the Eleventh …, 2018"],"snippet":"… The raw data for the dataset was collected from the Common Crawl project2 using the Common Crawl API. Initially, five Czech news websites were selected to create the dataset: novinky.cz, lidovky.cz, denik.cz, idnes.cz, and ihned.cz …","url":["http://www.aclweb.org/anthology/L18-1551"]} -{"year":"2018","title":"Supplementary Material for “Improved Fusion of Visual and Language Representations by Dense Symmetric Co-Attention for Visual Question Answering”","authors":["DK Nguyen, T Okatani"],"snippet":"… All the questions were tokenized using Python Natural Language Toolkit (nltk) [2]. 
We used the vocabulary provided by the CommonCrawl-840B Glove model for English word vectors [11], and set out-of-vocabulary words to unk …","url":["http://openaccess.thecvf.com/content_cvpr_2018/Supplemental/3586-supp.pdf"]} -{"year":"2018","title":"Survey of Simple Neural Networks in Semantic Textual Similarity Analysis","authors":["DS Prijatelj, J Ventura, J Kalita"],"snippet":"… vectors. This specific set of word vectors have 300 dimensions and were pre-trained on 840 billion tokens taken from Common Crawl3. Different pretrained word vectors may be used in-place of this specific pretrained set …","url":["http://cs.uccs.edu/~jkalita/work/reu/REU2017/11Prijatelj.pdf"]} -{"year":"2018","title":"SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference","authors":["R Zellers, Y Bisk, R Schwartz, Y Choi - arXiv preprint arXiv:1808.05326, 2018"],"snippet":"… We consider three different types of word representations: 300d GloVe vectors from Common Crawl (Pennington et al., 2014), 300d Numberbatch vectors retrofitted using ConceptNet relations (Speer et al., 2017), and …","url":["https://arxiv.org/pdf/1808.05326"]} -{"year":"2018","title":"Systems and methods for improved user interface","authors":["Z Wei, T Nguyen, I Chan, KM Liou, H Wang, H Lu - US Patent App. 15/621,647, 2018"],"snippet":"Aspects of the present disclosure relate to systems and methods for a voice-centric virtual or soft keyboard (or keypad). 
Unlike other keyboards, embodiments of the present disclosure prioritize the voice keyboard, meanwhile providing users with a quick and uniform navigation to …","url":["https://patents.google.com/patent/US20180011688A1/en"]} -{"year":"2018","title":"T2S: An Encoder-Decoder Model for Topic-Based Natural Language Generation","authors":["W Ou, C Chen, J Ren - International Conference on Applications of Natural …, 2018"],"snippet":"… We initialize our word embeddings with publicly available 300-dimensional Glove vectors [13], which is trained on 840 billion tokens of Common Crawl data 2 . Words that do not exist in the pretrained Glove vectors are replaced by “” token …","url":["https://link.springer.com/chapter/10.1007/978-3-319-91947-8_15"]} -{"year":"2018","title":"TabVec: Table Vectors for Classification of Web Tables","authors":["M Ghasemi-Gol, P Szekely - arXiv preprint arXiv:1802.06290, 2018"],"snippet":"… They evaluated their system on the common crawl dataset, and reported signi cant improvement compared to previous feature based methods … Three of these datasets are from unusual domains, and one is a sample from Common Crawl …","url":["https://arxiv.org/pdf/1802.06290"]} -{"year":"2018","title":"TCS Research at SemEval-2018 Task 1: Learning Robust Representations using Multi-Attention Architecture","authors":["H Meisheri, L Dey - Proceedings of The 12th International Workshop on …, 2018"],"snippet":"… corpus which results in parallel attention mechanism - one set from the twitter space and another from a common crawl corpus … 2014) trained over common crawl corpus with 300 dimension vector, Character1 level embeddings trained …","url":["http://www.aclweb.org/anthology/S18-1043"]} -{"year":"2018","title":"Temporal Modular Networks for Retrieving Complex Compositional Activities in Videos","authors":["L Fei-Fei, JC Niebles"],"snippet":"… 4.2. The Stanford Parser [27] is used to obtain the initial parse trees for the compositional structure. 
For word vectors as part of the base module input, we use the 300-dimensional GloVe [36] vectors pretrained on Common Crawl (42 billion tokens) …","url":["http://svl.stanford.edu/assets/papers/liu2018eccv.pdf"]} -{"year":"2018","title":"Ten Years of WebTables","authors":["M Cafarella, A Halevy, H Lee, J Madhavan, C Yu… - Proceedings of the VLDB …, 2018"],"snippet":"… Several researchers produced web tables from the public Common Crawl [1, 24, 15], thereby making them available to a broad audience outside the large Web companies. Wang, et al. [36] improved extraction quality by leveraging curated knowledge bases …","url":["http://www.vldb.org/pvldb/vol11/p2140-cafarella.pdf"]} -{"year":"2018","title":"Text Embeddings for Retrieval From a Large Knowledge Base","authors":["T Cakaloglu, C Szegedy, X Xu - arXiv preprint arXiv:1810.10176, 2018"],"snippet":"… We specially utilized the ”glove-840B-300d” pre-trained word vectors where it was trained on using the common crawl within 840B tokens, 2.2M vocab, cased, 300d vectors. We created the GloVe representation of our corpus …","url":["https://arxiv.org/pdf/1810.10176"]} -{"year":"2018","title":"Text-based Sentiment Analysis and Music Emotion Recognition","authors":["E Çano - 2018"],"snippet":"… 38 3.4 Confusion matrix of lexicon-generated song labels . . . . . 42 5.1 Listofwordembeddingcorpora . . . . . 65 5.2 Google News compared with Common Crawl . . . . . 69 5.3 Propertiesofself_w2vmodels . . . . . 70 …","url":["https://www.researchgate.net/profile/Erion_Cano/publication/325651523_Text-based_Sentiment_Analysis_and_Music_Emotion_Recognition/links/5b1a8d640f7e9b68b429cdae/Text-based-Sentiment-Analysis-and-Music-Emotion-Recognition.pdf"]} -{"year":"2018","title":"Text-Driven Head Motion Synthesis Using Neural Networks","authors":["BTS Bojlén"],"snippet":"… We also compared embeddings trained on Common Crawl (a large collection of websites), and on Wikipedia and the Gigaword corpus of news articles.
The model trained was a baseline RNN with the architecture specified in Table 4.2 …","url":["https://btao.org/static/dissertation.pdf"]} -{"year":"2018","title":"TEXTBUGGER: Generating Adversarial Text Against Real-world Applications","authors":["J Li, S Ji, T Du, B Li, T Wang"],"snippet":"… This is because we observe that the stop-words also have impact on the prediction results. In particular, our experiments utilize the 300-dimension GloVe embeddings7 trained on 840 billion tokens of Common Crawl. Words …","url":["https://nesa.zju.edu.cn/download/TEXTBUGGER%20Generating%20Adversarial%20Text%20Against%20Real-world%20Applications.pdf"]} -{"year":"2018","title":"The ADAPT System Description for the IWSLT 2018 Basque to English Translation Task","authors":["A Poncelas, A Way, K Sarasola - International Workshop on Spoken Language …, 2018"],"snippet":"… pair (see Table 4)[12]. In particular, we use the CommonCrawl, Europarl V7, NewsCommentary V12 and UN datasets for training, 5 the NewsTest 2008-2012 corpora for validation and NewsTest 2013 for testing. We did not use …","url":["https://workshop2018.iwslt.org/downloads/Proceedings_IWSLT_2018.pdf#page=91"]} -{"year":"2018","title":"The AFRL WMT18 Systems: Ensembling, Continuation and Combination","authors":["J Gwinnup, T Anderson, G Erdmann, K Young - … of the Third Conference on Machine …, 2018"],"snippet":"… We took the Russian and English monolingual CommonCrawl (Smith et al., 2013) data provided by the organizers and applied tokenization and BPE with our common, joint model … 2013. Dirt cheap web-scale parallel text from the common crawl …","url":["http://www.aclweb.org/anthology/W18-6411"]} -{"year":"2018","title":"The Geometry of Culture: Analyzing Meaning through Word Embeddings","authors":["AC Kozlowski, M Taddy, JA Evans - arXiv preprint arXiv:1803.09288, 2018"],"snippet":"Page 1. The Geometry of Culture: Analyzing Meaning through Word Embeddings Austin C. Kozlowski​1 Matt Taddy​2,3 James A. 
Evans​1 1 ​University of Chicago, Department of Sociology 2 ​University of Chicago, Booth School of Business 3 ​Amazon …","url":["https://arxiv.org/pdf/1803.09288"]} -{"year":"2018","title":"The Importance of Subword Embeddings in Sentence Pair Modeling","authors":["W Lan, W Xu"],"snippet":"… GloVe word vectors (Pennington et al.), trained on 27 billion words from Twitter (vocabulary size of 1.2 milion words) for social media datasets, and 300-dimensional GloVe vectors, trained on 840 billion words (vocabulary …","url":["https://pdfs.semanticscholar.org/c99f/e106e7d1cc62f7cb73ea6fc745b8679e4d2f.pdf"]} -{"year":"2018","title":"The Knowledge and Language Gap in Medical Information Seeking","authors":["L Soldaini - 2018"],"snippet":"The Knowledge and Language Gap in Medical Information Seeking. Abstract. Interest in medical information retrieval has risen significantly in the last few years. The Internet has become a primary source for consumers looking …","url":["http://search.proquest.com/openview/e669cd1478b33d52fa4cc71e8393c639/1?pq-origsite=gscholar&cbl=18750&diss=y"]} -{"year":"2018","title":"The MLLP-UPV German-English Machine Translation System for WMT18","authors":["J Iranzo-Sánchez, P Baquero-Arnal, GVG Dıaz-Munıo…"],"snippet":"… 422 Page 2. Table 1: Size by corpus of the WMT18 parallel dataset Corpus Sentences (M) News Commentary v13 0.3 Rapid (press releases) 1.3 Common Crawl 1.9 Europarl v7 2.4 ParaCrawl 36.4 WMT18 total 42.3 the rest of the WMT corpora …","url":["http://www.statmt.org/wmt18/pdf/WMT041.pdf"]} -{"year":"2018","title":"The Natural Language Decathlon: Multitask Learning as Question Answering","authors":["B McCann, NS Keskar, C Xiong, R Socher - arXiv preprint arXiv:1806.08730, 2018"],"snippet":"Page 1. 
The Natural Language Decathlon: Multitask Learning as Question Answering Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, Richard Socher Salesforce Research {bmccann,nkeskar,cxiong,rsocher}@salesforce.com Abstract …","url":["https://arxiv.org/pdf/1806.08730"]} -{"year":"2018","title":"The RWTH Aachen Machine Translation Systems for IWSLT 2017","authors":["P Bahar, J Rosendahl, N Rossenbach, H Ney"],"snippet":"… The majority of removed sentence pairs are part of the Common Crawl (300k sentences ie 14% of Common Crawl) and the OpenSubtitles corpora (1000k sentences ie 8% of OpenSubtitles) … Common Crawl Europarl UN News Comment OpenSub QED TED Wiki Total …","url":["https://www-i6.informatik.rwth-aachen.de/publications/download/1061/BaharParniaRosendahlJanRossenbachNickNeyHermann--TheRWTHAachenMachineTranslationSystemsforIWSLT2017--2017.pdf"]} -{"year":"2018","title":"The RWTH Aachen University Filtering System for the WMT 2018 Parallel Corpus Filtering Task","authors":["N Rossenbach, J Rosendahl, Y Kim, M Graça… - Proceedings of the Third …, 2018"],"snippet":"… We train IBM1 models for both directions (s2t and t2s) using the bilingual data from the WMT 2018 German↔English task namely the Europarl, CommonCrawl, NewsCommentary and Rapid corpus. 4.3 Neural Network Language Model …","url":["http://www.aclweb.org/anthology/W18-6487"]} -{"year":"2018","title":"The RWTH Aachen University Supervised Machine Translation Systems for WMT 2018","authors":["J Schamper, J Rosendahl, P Bahar, Y Kim, A Nix… - Proceedings of the Third …, 2018"],"snippet":"… 1.4% BLEU. 
The Transformer model was trained using the standard parallel WMT 2018 data sets (namely Europarl, CommonCrawl, NewsCommentary and Rapid, in total 5.9M sentence pairs) as well as the 4.2M sen3http://www …","url":["http://www.aclweb.org/anthology/W18-6426"]} -{"year":"2018","title":"The Speechmatics Parallel Corpus Filtering System for WMT18","authors":["T Ash, R Francis, W Williams - Proceedings of the Third Conference on Machine …, 2018"],"snippet":"… This data comprises the data for the WMT 2018 news translation task data for German-English without the Paracrawl parallel corpus. This data is approximately 130M words, drawn from Europarl, Common Crawl, News …","url":["http://www.aclweb.org/anthology/W18-6472"]} -{"year":"2018","title":"The study of keyword search in open source search engines and digital forensics tools with respect to the needs of cyber crime investigations","authors":["J Hansen - 2017"],"snippet":"Page 1. The study of keyword search in open source search engines and digital forensics tools with respect to the needs of cyber crime investigations Joachim Hansen Master in Information Security Supervisor: Katrin Franke …","url":["https://brage.bibsys.no/xmlui/bitstream/handle/11250/2479196/18187_FULLTEXT.pdf?sequence=1"]} -{"year":"2018","title":"The University of Cambridge's Machine Translation Systems for WMT18","authors":["F Stahlberg, A de Gispert, B Byrne - arXiv preprint arXiv:1808.09465, 2018"],"snippet":"… Page 3. 
Corpus Over-sampling #Sentences Common Crawl 2x 4.43M Europarl v7 2x 3.76M News Commentary v13 2x 0.57M Rapid 2016 2x 2.27M ParaCrawl 1x 11.16M Synthetic (news-2017) 1x 20.00M Total 42.19M Table …","url":["https://arxiv.org/pdf/1808.09465"]} -{"year":"2018","title":"The University of Edinburgh's Submissions to the WMT18 News Translation Task","authors":["B Haddow, N Bogoychev, D Emelin, U Germann… - Proceedings of the Third …, 2018"],"snippet":"… closely Corpus % Back translations1 50% CommonCrawl 5% Europarl 15% News-commentary 10% ParaCrawl 10% Rapid 10% Table 1: Blend of data for training the DE↔EN en- semble models (40M sentence pairs total) …","url":["http://www.aclweb.org/anthology/W18-6412"]} -{"year":"2018","title":"The USTC-NEL Speech Translation system at IWSLT 2018","authors":["D Liu, J Liu, W Guo, S Xiong, Z Ma, R Song, C Wu… - arXiv preprint arXiv …, 2018"],"snippet":"… Page 2. Table 2: text training data. Corpus raw filtered commoncrawl 2.39M 1.80M rapid 1.32M 1.00M europal 1.92M 1.81M commentary 0.284M 0.233M paracrawl 36.35M 12.35M opensubtitles 22.51M 14.24M WIT3(in domain) 0.209M 0.207M …","url":["https://arxiv.org/pdf/1812.02455"]} -{"year":"2018","title":"Topic coherence analysis for the classification of Alzheimer's disease}}","authors":["A Pompili, A Abad, DM de Matos, IP Martins - Proc. IberSPEECH 2018, 2018"],"snippet":"… regularities among sentences. To this purpose, we rely on a pre-trained model of word vector representations containing 2 million word vectors, in 300 dimensions, trained with fastText on Common Crawl [27]. 
In the process …","url":["https://www.isca-speech.org/archive/IberSPEECH_2018/pdfs/IberS18_O5-1_Pompili.pdf"]} -{"year":"2018","title":"Topic Modeling for Analyzing Open-Ended Survey Responses","authors":["AS Pietsch, S Lessmann"],"snippet":"… Word2Vec [32] and the second one on Common Crawl web data via Global Vectors for Word Representation (GloVe) [33] … The set is trained on 42 billion tokens of Common Crawl web data and contains 300-dimensional vectors …","url":["https://www.wiwi.hu-berlin.de/de/forschung/irtg/results/discussion-papers/discussion-papers-2017-1/irtg1792dp2018-054.pdf"]} -{"year":"2018","title":"Toward better reasoning from natural language","authors":["A Purtee - 2018"],"snippet":"Page 1. Toward Better Reasoning from Natural Language by Adam Lee Purtee Submitted in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy Supervised by Professor Lenhart Schubert and …","url":["https://urresearch.rochester.edu/fileDownloadForInstitutionalItem.action?itemId=34810&itemFileId=186239"]} -{"year":"2018","title":"Towards Automated Factchecking: Developing an Annotation Schema and Benchmark for Consistent Automated Claim Detection","authors":["L Konstantinovskiy, O Price, M Babakar, A Zubiaga - arXiv preprint arXiv:1809.08193, 2018"],"snippet":"… The method provided by InferSent involves words being converted to their common crawl GloVe implementations before being passed through a bidirectional long-short-term memory (BiLSTM) network (Hochreiter & Schmidhuber, 1997) …","url":["https://arxiv.org/pdf/1809.08193"]} -{"year":"2018","title":"Towards Knowledge Graph Construction from Entity Co-occurrence","authors":["N Heist"],"snippet":"… patterns. 2 https://www.mturk.com/ 3 Pages starting with List of in http://downloads.
dbpedia.org/2016-10/corei18n/en/labels en.ttl.bz2 4 http://commoncrawl.org/ 5 http://webdatacommons.org/structureddata/#results-2017-1 Page 7 …","url":["https://people.kmi.open.ac.uk/francesco/wp-content/uploads/2018/11/EKAWDC2018_3.pdf"]} -{"year":"2018","title":"Towards Linear Time Neural Machine Translation with Capsule Networks","authors":["M Wang, J Xie, Z Tan, J Su - arXiv preprint arXiv:1811.00287, 2018"],"snippet":"… French translation are presented in Table 1. We compare CAPSNMT with various other systems including the winning system in WMT'14 (Buck et al., 2014), a phrase-based system whose language models were trained on …","url":["https://arxiv.org/pdf/1811.00287"]} -{"year":"2018","title":"Towards Personalized Learning using Counterfactual Inference for Randomized Controlled Trials","authors":["S Zhao - 2018"],"snippet":"Page 1. Towards Personalized Learning using Counterfactual Inference for Randomized Controlled Trials by Siyuan Zhao A Dissertation Submitted to the Faculty of the WORCESTER POLYTECHNIC INSTITUTE …","url":["https://web.wpi.edu/Pubs/ETD/Available/etd-042618-010745/unrestricted/szhao.pdf"]} -{"year":"2018","title":"Towards Two-Dimensional Sequence to Sequence Model in Neural Machine Translation","authors":["P Bahar, C Brix, H Ney - arXiv preprint arXiv:1810.03975, 2018"],"snippet":"… 5 Experiments We have done the experiments on the WMT 2017 German→English and English→German news tasks consisting of 4.6M training samples collected from the well-known data sets Europarl-v7, News-Commentary-v10 and Common-Crawl …","url":["https://arxiv.org/pdf/1810.03975"]} -{"year":"2018","title":"Training a Neural Network in a Low-Resource Setting on Automatically Annotated Noisy Data","authors":["MA Hedderich, D Klakow - arXiv preprint arXiv:1807.00745, 2018"],"snippet":"… Page 4. tries other than Britain until the scientific” where ”Britain” is the target word with label y = LOC. Sentence boundaries are padded. 
We encode the words using the 300-dimensional GloVe vectors trained on cased text …","url":["https://arxiv.org/pdf/1807.00745"]} -{"year":"2018","title":"Training Tips for the Transformer Model","authors":["M Popel, O Bojar - arXiv preprint arXiv:1804.00247, 2018"],"snippet":"… commoncrawl 161 k 3.3 M 2.9 M … Most of our training data comes from the CzEng parallel treebank, version 1.7 (57M sentence pairs), and the rest (1M sentence pairs) comes from three smaller sources (Europarl, News …","url":["https://arxiv.org/pdf/1804.00247"]} -{"year":"2018","title":"Transfer Learning from LDA to BiLSTM-CNN for Offensive Language Detection in Twitter","authors":["G Wiedemann, E Ruppert, R Jindal, C Biemann - Austrian Academy of Sciences …, 2018"],"snippet":"… classification labels per task. have not been seen during training the embedding model. We use a model pre-trained with German language data from Wikipedia and Common Crawl provided by Mikolov et al.(2018). First, we unify all …","url":["https://www.oeaw.ac.at/fileadmin/subsites/academiaecorpora/PDF/GermEval2018_Proceedings.pdf#page=91"]} -{"year":"2018","title":"Transferred Embeddings for Igbo Similarity, Analogy and Diacritic Restoration Tasks","authors":["IEMHI Onyenwe, C Enemuo - COLING 2018, 2018"],"snippet":"… org news dataset. • igWkSbwd from same as igWkNews but with subword information. • igWkCrl from fastText Common Crawl dataset Table 1 shows the vocabulary lengths (vocabs), and the dimensions (vectors) of each of the models used in our experiments …","url":["http://www.aclweb.org/anthology/W18-40#page=40"]} -{"year":"2018","title":"Transferred Embeddings for Igbo Similarity, Analogy, and Diacritic Restoration Tasks","authors":["I Ezeani, I Onyenwe, M Hepple - Proceedings of the Third Workshop on Semantic …, 2018"],"snippet":"… igWkSbwd from same as igWkNews but with subword information. 
• igWkCrl from fastText Common Crawl dataset Table 1 shows the vocabulary lengths (vocabs), and the dimensions (vectors) of each of the models used in our experiments. 3 Model Evaluation …","url":["http://www.aclweb.org/anthology/W18-4004"]} -{"year":"2018","title":"Translation of Biomedical Documents with Focus on Spanish-English","authors":["MS Duma, W Menzel - Proceedings of the Third Conference on Machine …, 2018"],"snippet":"… 2http://commoncrawl.org/ 3https://paracrawl.eu/index.html … Track / Corpora EN-ES EN-PT EN-RO Commoncrawl 1.8M - - Paracrawl - 2.1M 2.4M Wikipedia 1.6M 1.6M - EMEA 678K 1.08M 994K Scielo-gma 2016 166K 613K - Table 1: Corpora used for DSTF 3.2 Tools …","url":["http://www.aclweb.org/anthology/W18-6444"]} -{"year":"2018","title":"Two-Step Multi-factor Attention Neural Network for Answer Selection","authors":["P Zhang, Y Hou, Z Su, Y Su - Pacific Rim International Conference on Artificial …, 2018"],"snippet":"… 3.3 Experimental Settings. We initialize word embeddings with 300-dimensional GloVe word vectors pre-trained from the 840B Common Crawl corpus [20]. For out of vocabulary (OOV) words, their embeddings are initialized randomly …","url":["https://link.springer.com/chapter/10.1007/978-3-319-97304-3_50"]} -{"year":"2018","title":"UBC-NLP at IEST 2018: Learning Implicit Emotion With an Ensemble of Language Models","authors":["H Alhuzali, M Elaraby, M Abdul-Mageed"],"snippet":"… morphology like English. Additionally, fastText partially solves issues with out-of-vocabulary words since it exploits character sequences. FastText is trained on the Common Crawl dataset, consisting of 600B tokens. 
For this and the …","url":["https://mageed.sites.olt.ubc.ca/files/2018/09/emnlp18_IEST_WASSA_2018.pdf"]} -{"year":"2018","title":"Uncovering divergent linguistic information in word embeddings with lessons for intrinsic and extrinsic evaluation","authors":["M Artetxe, G Labaka, I Lopez-Gazpio, E Agirre - arXiv preprint arXiv:1809.02094, 2018"],"snippet":"… analogies. We use the largest pre-trained model published by the authors9, which was trained on 840 billion words of the Common Crawl corpus and contains 300dimensional vectors for 2.2 million words. Fasttext (Bojanowski …","url":["https://arxiv.org/pdf/1809.02094"]} -{"year":"2018","title":"Understanding Back-Translation at Scale","authors":["S Edunov, M Ott, M Auli, D Grangier - arXiv preprint arXiv:1808.09381, 2018"],"snippet":"… It also allows us to estimate the value of BT data for domain adaptation since the newscrawl corpus (BT-news) is pure news whereas the bitext is a mixture of eu- roparl and commoncrawl with only a small newscommentary portion …","url":["https://arxiv.org/pdf/1808.09381"]} -{"year":"2018","title":"Understanding Search Queries in Natural Language","authors":["Z Neverilová, M Kvaššay - RASLAN 2018 Recent Advances in Slavonic Natural …, 2018"],"snippet":"… stop-words are removed. Tokens are mapped to 300-dimensional word embeddings using publicly available vocabulary of FastText [2] vectors trained on CommonCrawl dataset. Missing words are ignored. Vectors are then …","url":["http://nlp.fi.muni.cz/raslan/raslan18.pdf#page=93"]} -{"year":"2018","title":"Unsupervised Disambiguation of Abstract Syntax","authors":["O KALLDAL, M LUDVIGSSON"],"snippet":"Page 1. 
NoPConj : PConj AAnter : Ant TTAnt : Temp whoever_NP : NP NoVoc : Voc PPos : Pol is_right_VP : VP PhrUtt : Phr TPast : Tense PredVP : Cl UseCl : S UttS : Utt Unsupervised Disambiguation of Abstract Syntax A Language …","url":["http://publications.lib.chalmers.se/records/fulltext/255307/255307.pdf"]} -{"year":"2018","title":"Unsupervised Domain Adaptation by Adversarial Learning for Robust Speech Recognition","authors":["P Denisov, NT Vu, MF Font - arXiv preprint arXiv:1807.11284, 2018"],"snippet":"… Summary of the used corpora is given in Tab. 1. In addition to that, 197 millions words of Italian Deduplicated CommonCrawl Text are used to build Italian language model. Italian dictionary ILE with pronunciations for 588k words is used as a lexicon. 3.2 Baseline …","url":["https://arxiv.org/pdf/1807.11284"]} -{"year":"2018","title":"Unsupervised Mining of Analogical Frames by Constraint Satisfaction","authors":["L De Vine, S Geva, P Bruza - Australasian Language Technology Association …"],"snippet":"… 2 a3, 3 Figure 4: Determining an analogy completion from a larger frame We conducted experiments with embeddings constructed by ourselves as well as with publicly accessible embeddings from the fastText web site2 trained …","url":["http://alta2018.alta.asn.au/alta2018-draft-proceedings.pdf#page=44"]} -{"year":"2018","title":"Unsupervised Neural Machine Translation Initialized by Unsupervised Statistical Machine Translation","authors":["B Marie, A Fujita - arXiv preprint arXiv:1810.12703, 2018"],"snippet":"… These methods usually exploit existing accurate translation models and have shown to be useful especially when targeting 1See for instance the Common Crawl project: http:// commoncrawl.org/ low-resource language pairs and domains …","url":["https://arxiv.org/pdf/1810.12703"]} -{"year":"2018","title":"Unsupervised Post-processing of Word Vectors via Conceptor Negation","authors":["T Liu, L Ungar, J Sedoc - arXiv preprint arXiv:1811.11001, 2018"],"snippet":"… We use the 
publicly available pre-trained Google News Word2Vec (Mikolov et al. 2013)5 and Common Crawl GloVe6 (Pennington, Socher, and Manning 2014) to perform lexical-level experiments. For CN, we fix α = 2 for …","url":["https://arxiv.org/pdf/1811.11001"]} -{"year":"2018","title":"Unsupervised semantic frame induction using triclustering","authors":["D Ustalov, A Panchenko, A Kutuzov, C Biemann… - arXiv preprint arXiv …, 2018"],"snippet":"… In our evaluation, we use triple frequencies from the DepCC dataset (Panchenko et al., 2018) , which is a dependency-parsed version of the Common Crawl corpus, and the standard 300-dimensional word embeddings …","url":["https://arxiv.org/pdf/1805.04715"]} -{"year":"2018","title":"Unsupervised Sense-Aware Hypernymy Extraction","authors":["D Ustalov, A Panchenko, C Biemann, SP Ponzetto - arXiv preprint arXiv:1809.06223, 2018"],"snippet":"… Recent approaches to hypernym extraction went into three directions: (1) unsupervised methods based on such huge corpora as CommonCrawl1 to ensure extraction coverage using Hearst (1992) patterns (Seitner et al …","url":["https://arxiv.org/pdf/1809.06223"]} -{"year":"2018","title":"User-Centric Ontology Population","authors":["K Clarkson, AL Gentile, D Gruhl, P Ristoski, J Terdiman…"],"snippet":"… Ristoski et al. [29] use standard word embeddings and graph embeddings to align instances extracted from the Common Crawl4 to the DBpedia ontology. The use of deep learning models has also been explored for this task. Dong et al …","url":["https://2018.eswc-conferences.org/wp-content/uploads/2018/02/ESWC2018_paper_10.pdf"]} -{"year":"2018","title":"Using a Stacked Residual LSTM Model for Sentiment Intensity Prediction","authors":["J Wang, B Peng, X Zhang - Neurocomputing, 2018"],"snippet":"… To enhance performance of LSTM layers, we also introduce a bi-directional strategy [34]. 
The word embeddings used in this experiment was respectively pre-trained on Common Crawl 840B 2 (English) and wiki dumps 3 (Chinese) by GloVe [55] …","url":["https://www.sciencedirect.com/science/article/pii/S0925231218311226"]} -{"year":"2018","title":"Using context to identify the language of face-saving","authors":["N Naderi, G Hirst"],"snippet":"… For all our Neural Network models, we initialized our word representations using the publicly available GloVe pre-trained word embeddings (Pennington et al., 2014)8 (300-dimensional vectors trained on Common Crawl data) …","url":["ftp://ftp.db.toronto.edu/public_html/cs/ftp/public_html/pub/gh/Naderi+Hirst-ArgMining-2018.pdf"]} -{"year":"2018","title":"Using Deep Learning For Title-Based Semantic Subject Indexing To Reach Competitive Performance to Full-Text","authors":["F Mai, L Galke, A Scherp - arXiv preprint arXiv:1801.06717, 2018"],"snippet":"… We adopt the preprocessing and tokenization scheme of Galke et. al [5]. For the LSTM and CNN, we use 300-dimensional pretrained word embeddings obtained from training GloVe [28] on Common Crawl with 840 billion tokens7. Out-of-vocabulary words are discarded …","url":["https://arxiv.org/pdf/1801.06717"]} -{"year":"2018","title":"Using Machine Learning to Detect Malicious Websites","authors":["R Elsaleh - 2018"],"snippet":"… Benign Data Benign data was obtained from the Common Crawl6. The Common Crawl is a massive, continuously updated collection of crawled websites available for download … PhishTank 36,485 65,000 VirusTotal 5,036 0 6 http://commoncrawl.org/ Page 21. 
12 …","url":["http://search.proquest.com/openview/da712bc2891c9bddbdc64e287a72dcc1/1?pq-origsite=gscholar&cbl=18750&diss=y"]} -{"year":"2018","title":"Using Monolingual Data in Neural Machine Translation: a Systematic Study","authors":["F Burlot, F Yvon - Proceedings of the Third Conference on Machine …, 2018"],"snippet":"… For German, we use samples from News-Commentary-11, Rapid, Common-Crawl (WMT 2017) and Multi- UN (see table 1). Bilingual BPE units (Sennrich et al., 2016b) are learned with 50k merge operations, yielding …","url":["http://www.aclweb.org/anthology/W18-6315"]} -{"year":"2018","title":"Using Wikipedia Edits in Low Resource Grammatical Error Correction","authors":["A Boyd"],"snippet":"… 2.4 Language Model For reranking, we train a language model on the first one billion lines (~12 billion tokens) of the deduplicated German Common Crawl corpus (Buck et al., 2014). 3 Method … 2014. N-gram counts and language models from the Common Crawl …","url":["http://noisy-text.github.io/2018/pdf/W-NUT201811.pdf"]} -{"year":"2018","title":"Using Word Embeddings for Information Retrieval: How Collection and Term Normalization Choices Affect Performance","authors":["D Roy, D Ganguly, S Bhatia, S Bedathur, M Mitra - 2018"],"snippet":"… In future, we plan to solidify these observations to offer general best practices for a range of different neural IR methods (eg DRRM[7]) as well as experiment using large datasets (eg Common Crawl). REFERENCES [1] Qingyao Ai …","url":["http://sumitbhatia.net/papers/cikm18.pdf"]} -{"year":"2018","title":"Utilizing Neural Networks and Linguistic Metadata for Early Detection of Depression Indications in Text Sequences","authors":["M Trotzek, S Koitka, CM Friedrich - arXiv preprint arXiv:1804.07000, 2018"],"snippet":"Page 1. SUBMITTED FOR PUBLICATION TO THE IEEE, 2018 1 Utilizing Neural Networks and Linguistic Metadata for Early Detection of Depression Indications in Text Sequences Marcel Trotzek, Sven Koitka, and Christoph M. 
Friedrich, Member, IEEE …","url":["https://arxiv.org/pdf/1804.07000"]} -{"year":"2018","title":"UWB at SemEval-2018 Task 10: Capturing Discriminative Attributes from Word Distributions","authors":["T Brychcín, T Hercig, J Steinberger, M Konkol - … of The 12th International Workshop on …, 2018"],"snippet":"… SS-GloVe 6B, Wikipedia + Gigaword 5 n = 300 62.0% 62.5% SS-GloVe 42B, Common Crawl n = 300 62.6% 62.7% SS-GloVe 840B, Common Crawl n = 300 62.1% 62.6 … SS-LDA 1-5B, Wikipedia n = 200 60.5% 63.1 …","url":["http://www.aclweb.org/anthology/S18-1153"]} -{"year":"2018","title":"Vecsigrafo: Corpus-based Word-Concept Embeddings","authors":["R Denaux, JM Gomez-Perez"],"snippet":"… To compare our embeddings to those trained on a very large corpus, we use pre-calculated GloVe embeddings that were trained on CommonCrawl7. Besides the text corpora, the tested embeddings con …","url":["http://semantic-web-journal.net/system/files/swj1864.pdf"]} -{"year":"2018","title":"Visual and affective grounding in language and mind","authors":["S De Deyne, DJ Navarro, G Collell, A Perfors"],"snippet":"… We also included an extremely large corpus consisting of 840 billion words from the Common Crawl project.5 As before, the language vectors were combined in a multimodal visual or affective model and the correlations were optimized by fitting values of β …","url":["https://compcogscisydney.org/publications/DeDeyneNCP_grounding.pdf"]} -{"year":"2018","title":"Visual Concept Selection with Textual Knowledge for Understanding Activities of Daily Living and Life Moment Retrieval","authors":["TH Tang12, MH Fu, HH Huang, KT Chen, HH Chen13"],"snippet":"… Page 8. GloVe [9] trained on Common Crawl with 840B tokens and ConceptNet Numberbatch [8]. 
The comparison in percentage dissimilarity [1] is shown in Table 1, where (G) and (N) denote GloVe and ConceptNet Numberbatch word vectors, respectively …","url":["http://ceur-ws.org/Vol-2125/paper_124.pdf"]} -{"year":"2018","title":"Visual Question Answering using Explicit Visual Attention","authors":["V Lioutas, N Passalis, A Tefas - Circuits and Systems (ISCAS), 2018 IEEE …, 2018"],"snippet":"… For extracting textual representations we used pre-trained GloVe embedding vectors (Common Crawl (42B tokens), 300d) [1]. Note that the GloVe embeddings were used only for initialization and then they were optimized during the training …","url":["https://ieeexplore.ieee.org/abstract/document/8351158/"]} -{"year":"2018","title":"Visual Relationship Detection Based on Guided Proposals and Semantic Knowledge Distillation","authors":["F Plesse, A Ginsca, B Delezoide, F Prêteux - arXiv preprint arXiv:1805.10802, 2018"],"snippet":"… iterations. The word embeddings used by the semantic knowledge introduced in Section 2.1 were obtained from the publicly available Glove model [19] trained on the Common Crawl corpus, consisting of 42B tokens. 4.2. Results …","url":["https://arxiv.org/pdf/1805.10802"]} -{"year":"2018","title":"Von Mises-Fisher Loss for Training Sequence to Sequence Models with Continuous Outputs","authors":["S Kumar, Y Tsvetkov - arXiv preprint arXiv:1812.04616, 2018"],"snippet":"… target word embeddings for English and French on corpora constructed using WMT'16 (Bojar et al., 2016) monolingual datasets containing data from Europarl, News Commentary, News Crawl from 2007 to 2015 and News …","url":["https://arxiv.org/pdf/1812.04616"]} -{"year":"2018","title":"WBI at CLEF eHealth 2018 Task 1: Language-independent ICD-10 coding using multi-lingual embeddings and recurrent neural networks","authors":["J Ševa, M Sänger, U Leser - 2018"],"snippet":"… Each token is represented using pre-trained fastText5 word embeddings [4]. 
We utilize fastText embedding models for French, Italian and Hungarian trained on Common Crawl and Wikipedia articles6. Independently from …","url":["http://ceur-ws.org/Vol-2125/paper_118.pdf"]} -{"year":"2018","title":"Weaver: Deep Co-Encoding of Questions and Documents for Machine Reading","authors":["M Raison, PE Mazaré, R Das, A Bordes - arXiv preprint arXiv:1804.10490, 2018"],"snippet":"… Unless otherwise noted, we use 300dimensional FastText word embeddings trained on Common Crawl (Mikolov et al., 2017) and keep them fixed during training. Out-of-vocabulary words are represented with a fixed randomly initialized vector …","url":["https://arxiv.org/pdf/1804.10490"]} -{"year":"2018","title":"Web archives and Knowledge organisation","authors":["NO Finnemann, D Phil"],"snippet":"… Internet Archive, established in 1996, and Common Crawl (commoncrawl.org) established in 2007.12 Since 2006 the Internet Archive also provide a subscriptionbased archive service, Archive-it (archive-it.org) allowing anybody …","url":["https://curis.ku.dk/ws/files/189392223/Web_Archives_Manuscript.pdf"]} -{"year":"2018","title":"What can we learn from Semantic Tagging?","authors":["M Abdou, A Kulmizev, V Ravishankar, L Abzianidze… - arXiv preprint arXiv …, 2018"],"snippet":"… sets of experiments: we optimized using Adam with a learning rate of 0.00005; we weight the auxiliary semantic tagging loss with λ = 0.1; the pre-trained word embeddings we use are GloVe embeddings of dimension 300 trained …","url":["https://arxiv.org/pdf/1808.09716"]} -{"year":"2018","title":"What's Cached is Prologue: Reviewing Recent Web Archives Research Towards Supporting Scholarly Use","authors":["E Maemura"],"snippet":"… Internet Archive. Samar et al. (2016) analyze coverage of trending topics for the Netherlands in 2014 by comparing the National Library of the Netherlands' web archive to the Common Crawl dataset. Milligan et al. 
(2016) use …","url":["https://tspace.library.utoronto.ca/bitstream/1807/89426/1/Maemura%20AM2018%20Paper-Postprint.pdf"]} -{"year":"2018","title":"When data permutations are pathological: the case of neural natural language inference","authors":["N Schluter, D Varab - Proceedings of the 2018 Conference on Empirical …, 2018"],"snippet":"… via an LSTM. Other hyperparameters. We use 300 dimensional GloVe embeddings trained on the Common Crawl 840B tokens dataset (Pennington et al., 2014), which remain fixed during training. Out of vocabulary (OOV …","url":["http://www.aclweb.org/anthology/D18-1534"]} -{"year":"2018","title":"Who gets held accountable when a facial recognition algorithm fails?","authors":["E Broad - IQ: The RIM Quarterly, 2018"],"snippet":"… In that experiment, the machine learning tool was trained on what's called a “common crawl” corpus: a list of 840 billion words in material published on the Web. Training AI on historical data can freeze our society in its current setting, or even turn it back …","url":["https://search.informit.com.au/documentSummary;dn=965944620566147;res=IELBus"]} -{"year":"2018","title":"WikiAtomicEdits: A Multilingual Corpus of Wikipedia Edits for Modeling Language and Discourse Supplementary Material","authors":["M Faruqui, E Pavlick, I Tenney, D Das"],"snippet":"… 3.3 Experimental Setting We use FastText (Mikolov et al., 2018; Grave et al., 2018)3 word vectors of length 300, originally trained on more than 600 billion word to- kens each from Common Crawl corpus for each language …","url":["http://anthology.aclweb.org/attachments/D/D18/D18-1028.Attachment.pdf"]} -{"year":"2018","title":"Wikipedia Text Reuse: Within and Without","authors":["M Alshomary, M Völske, T Licht, H Wachsmuth, B Stein… - arXiv preprint arXiv …, 2018"],"snippet":"… Abstract. 
We study text reuse related to Wikipedia at scale by compiling the first corpus of text reuse cases within Wikipedia as well as without (ie, reuse of Wikipedia text in a sample of the Common Crawl) … 3 …","url":["https://arxiv.org/pdf/1812.09221"]} -{"year":"2018","title":"WikiRef: Wikilinks as a route to recommending appropriate references for scientific Wikipedia pages","authors":["A Jana, P Kanojiya, A Mukherjee, P Goyal - arXiv preprint arXiv:1806.04092, 2018"],"snippet":"… proposed by Conneau et al. (2017). Note that, for applying this architecture we use the GloVe vectors trained on Common Crawl data (840B tokens)7as seeds for representing words in a document. We name these variants of …","url":["https://arxiv.org/pdf/1806.04092"]} -{"year":"2018","title":"Will it Blend? Blending Weak and Strong Labeled Data in a Neural Network for Argumentation Mining","authors":["E Shnarch, C Alzate, L Dankin, M Gleize, Y Hou… - Proceedings of the 56th …, 2018"],"snippet":"… maximum global norm of 1.0. Words are represented using the 300 dimensional GloVe embeddings learned on 840B Common Crawl tokens and are left untouched during training (Pennington et al., 2014). We note that even …","url":["http://www.aclweb.org/anthology/P18-2095"]} -{"year":"2018","title":"Word embedding for French natural language in healthcare: a comparative study","authors":["E DYNOMANT, R LELONG, B DAHAMNA…"],"snippet":"… [30] compared the three word embedding methods but the three models were trained on different datasets (Word2Vec on news data, while FastText and GloVe trained on more definitional data, Wikipedia and Common Crawl respectively) …","url":["https://preprints.jmir.org/preprint/download/12310/pdf"]} -{"year":"2018","title":"Word embeddings for monolingual and cross-lingual domain-specific information retrieval","authors":["C Wigder - 2018"],"snippet":"Page 1. 
Word embeddings for monolingual and cross-lingual domain-specific information retrieval CHAYA WIGDER Master in Computer Science Date: June 4, 2018 Supervisor: Johan Boye Examiner: Viggo Kann Swedish title …","url":["http://www.nada.kth.se/~ann/exjobb/chaya_wigder.pdf"]} -{"year":"2018","title":"Word Emotion Induction for Multiple Languages as a Deep Multi-Task Learning Problem","authors":["S Buechel, U Hahn"],"snippet":"… experiments, we rely on the following widely used, publicly available embedding models trained on very large corpora (summarized in Table 3): the SGNS model trained on the Google News corpus2 (GOOGLE), the …","url":["https://www.researchgate.net/profile/Sven_Buechel/publication/325019685_Word_Emotion_Induction_for_Multiple_Languages_as_a_Deep_Multi-Task_Learning_Problem/links/5af1b275aca272bf425628a9/Word-Emotion-Induction-for-Multiple-Languages-as-a-Deep-Multi-Task-Learning-Problem.pdf"]} -{"year":"2018","title":"Word2Bits-Quantized Word Vectors","authors":["M Lam - arXiv preprint arXiv:1803.05651, 2018"],"snippet":"… complete picture of the relative performance of the two. We would also like to train quantized word vectors on much larger corpuses of data such as Common Crawl or Google News. Another task is to validate that overfitting occurs …","url":["https://arxiv.org/pdf/1803.05651"]} -{"year":"2018","title":"XNLI: Evaluating Cross-lingual Sentence Representations","authors":["A Conneau, G Lample, R Rinott, A Williams… - arXiv preprint arXiv …, 2018"],"snippet":"… on the word translation task. In this paper, we pretrain our embeddings using the common-crawl word embeddings (Grave et al., 2018) aligned with the MUSE library of Conneau et al. (2018b). 4.2.2 Universal Multilingual Sentence …","url":["https://arxiv.org/pdf/1809.05053"]} -{"year":"2018","title":"YouTube AV 50K: an Annotated Corpus for Comments in Autonomous Vehicles","authors":["T Li, L Lin, M Choi, K Fu, S Gong, J Wang - arXiv preprint arXiv:1807.11227, 2018"],"snippet":"Page 1. 
YouTube AV 50K: an Annotated Corpus for Comments in Autonomous Vehicles Tao Li Department of Computer Science Purdue University West Lafayette, IN 47907 Email: taoli@purdue.edu Kaiming Fu Weldon School …","url":["https://arxiv.org/pdf/1807.11227"]} -{"year":"2018","title":"Zero-Shot Object Detection by Hybrid Region Embedding","authors":["B Demirel, RG Cinbis, N Ikizler-Cinbis - arXiv preprint arXiv:1805.06157, 2018"],"snippet":"… 4.2 Class Embeddings For the Fashion-ZSD dataset, we generate 300-dimensional GloVe word embedding vectors [31] for each class name, using Common Crawl Data1. For the class names that contain multiple words, we take the average of the word vectors …","url":["https://arxiv.org/pdf/1805.06157"]} -{"year":"2018","title":"Zewen at SemEval-2018 Task 1: An Ensemble Model for Affect Prediction in Tweets","authors":["Z Chi, H Huang, J Chen, H Wu, R Wei - … of The 12th International Workshop on …, 2018"],"snippet":"… GloVe (Pennington et al., 2014) trained by Common Crawl … the same model hyperparameters which are listed in Table 1 and Table 2. Also, the four methods use the same word em- beddings, which is a pre-trained …","url":["http://www.aclweb.org/anthology/S18-1046"]} -{"year":"2019","title":"% 0 Conference Proceedings% TA Large DataBase of Hypernymy Relations Extracted from the Web.% A Seitner, Julian% A Bizer, Christian% A Eckert, Kai","authors":["A Ponzetto, S Paolo"],"snippet":"… for many word understanding applications. We present a publicly available database containing more than 400 million hypernymy relations we extracted from the CommonCrawl web corpus. 
We describe the infrastructure we …","url":["https://www.aclweb.org/anthology/papers/L/L16/L16-1056.endf"]} -{"year":"2019","title":"A 6-month Analysis of Factors Impacting Web Browsing Quality for QoE Prediction","authors":["A Saverimoutou, B Mathieu, S Vaton - Computer Networks, 2019"],"snippet":"… Fourth Party [15] instruments the Mozilla-Firefox browser and Web Xray [16] is a PhantomJS based tool for measuring HTTP traffic. XRay [17] and AdFisher [18] run automated personalization detection experiments and …","url":["https://www.sciencedirect.com/science/article/pii/S1389128619307546"]} -{"year":"2019","title":"A Bilingual Adversarial Autoencoder for Unsupervised Bilingual Lexicon Induction","authors":["X Bai, H Cao, K Chen, T Zhao - IEEE/ACM Transactions on Audio, Speech, and …, 2019"],"snippet":"… [29]. This dataset consists of gold dictionaries and 300-dimensional CBOW5 embeddings trained on WacKy crawling corpora (English, Italian, German), Common Crawl (Finish) and WMT News Crawl (Spanish). We report the results 2We set k = 10 …","url":["https://ieeexplore.ieee.org/abstract/document/8754809/"]} -{"year":"2019","title":"A Combined Approach to Automatic Taxonomy Extraction","authors":["S Pecar, M Simko"],"snippet":"… [10] presented publicly available database of hypernym relations called WebIsA. This database was created using Hearst-like patterns on CommonCrawl web corpus. They extracted more than 400 million hypernymy relations …","url":["https://ieeexplore.ieee.org/iel7/8859030/8864801/08864911.pdf"]} -{"year":"2019","title":"A Common Semantic Space for Monolingual and Cross-Lingual Meta-Embeddings","authors":["I García Ferrero - 2019","I García, R Agerri, G Rigau - arXiv preprint arXiv:2001.06381, 2020"],"snippet":"… From GloVe (GV) [34], the Common Crawl vectors (600 billion words) … The WS353 dataset is di- vided in two subsets [1]. 
In this section all the meta-embeddings have been mapped to the vector space of the English FastText (Common Crawl, 600B tokens) …","url":["https://addi.ehu.eus/bitstream/handle/10810/36183/MAL-Iker_Garcia.pdf?sequence=1&isAllowed=y","https://arxiv.org/pdf/2001.06381"]} -{"year":"2019","title":"A COMPARATIVE STUDY ON END-TO-END SPEECH TO TEXT TRANSLATION","authors":["P Bahar, T Bieschke, H Ney"],"snippet":"… We select our checkpoints based on the dev set. For the MT training, we use the TED, OpenSubtitles2018, Europarl, ParaCrawl, CommonCrawl, News Commentary, and Rapid corpora resulting in 32M sentence pairs after filtering noisy samples …","url":["https://www-i6.informatik.rwth-aachen.de/publications/download/1121/BaharParniaBieschkeTobiasNeyHermann--Acomparativestudyonend-to-endspeechtotexttranslation--2019.pdf"]} -{"year":"2019","title":"A Comparison of Context-sensitive Models for Lexical Substitution","authors":["AG Soler, A Cocos, M Apidianaki, C Callison-Burch"],"snippet":"… to two context-insensitive baselines that solely rely on the target-to-substitute similarity of standard, pre-trained word embeddings: 300-dimensional GloVe vectors (Pennington et al., 2014)5 and 300-dimensional FastText vectors …","url":["http://www.cis.upenn.edu/~ccb/publications/comparison-of-context-sensitive-models-for-lexical-substitution.pdf"]} -{"year":"2019","title":"A Comparison of Neural Document Classification Models","authors":["M Nitsche, S Halbritter"],"snippet":"… Bojanowski et al. (2016). It is available for 294 languages. An updated version, trained on Common Crawl in addition to Wikipedia and with adapted parameters is available for 157 languages (Grave et al., 2018). 
These are the …","url":["https://users.informatik.haw-hamburg.de/~ubicomp/projekte/master2019-proj/nitsche-halbritter2.pdf"]} -{"year":"2019","title":"A Comparison on Fine-grained Pre-trained Embeddings for the WMT19Chinese-English News Translation Task","authors":["Z Li, L Specia - Proceedings of the Fourth Conference on Machine …, 2019"],"snippet":"… In addition, the Common Crawl Corpus from WMT is used as monolingual data to pre-train the embeddings … We trained the embeddings on the Common Crawl Corpus provided by WMT19 and fine-tuned them on the task data when training the RNN …","url":["https://www.aclweb.org/anthology/W19-5324"]} -{"year":"2019","title":"A Contextualized Word Representation Approach for Irony Detection","authors":["L Garcıa, D Moctezuma, V Muniz - Proceedings of the Iberian Languages Evaluation …, 2019"],"snippet":"… 2.2 Word Embeddings We use the ELMo pre-trained word embeddings provided by [4], which were trained with a corpus of 20 million-words randomly sampled from the raw text released by the CoNLL 2018 shared …","url":["http://ceur-ws.org/Vol-2421/IroSvA_paper_5.pdf"]} -{"year":"2019","title":"A Dataset for Content Error Detection in Web Archives","authors":["J Kiesel, F Hubricht, B Stein, M Potthast"],"snippet":"… The Webis Web Archive 2017 [9] contains 10,000 web pages sampled from the Common Crawl [11] in a way which ensured that both well-known and less … January 2017 Common Crawl Archive, 2017. http: //commoncrawl.org …","url":["https://webis.de/downloads/publications/papers/stein_2019e.pdf"]} -{"year":"2019","title":"A Deep Dive into Supervised Extractive and Abstractive Summarization from Text","authors":["M Dey, D Das - Data Visualization and Knowledge Engineering, 2020"],"snippet":"… 7.1. As mentioned in the algorithm, estimated frequency p(w) of every words have been found from datasets (enwiki, poliblogs, commoncrawl, text8) [1]. The parameter “a” for our task is fixed at \\(3 * 10^{-3}\\). 
The vector of all …","url":["https://link.springer.com/chapter/10.1007/978-3-030-25797-2_5"]} -{"year":"2019","title":"A Deep Learning Approach for Identification of Confusion in Unstructured Crowdsourced Annotations","authors":["R Gardner, M Varma, C Zhu"],"snippet":"… improvements in model performance. We also compared the VQR binary classification model based on GLoVe embeddings with a model based on FastText embeddings trained on Common Crawl (3; 14). Since the FastText …","url":["https://pdfs.semanticscholar.org/ad70/7fc8b36a8f3daf8742cf92fcf099de434cec.pdf"]} -{"year":"2019","title":"A Dynamic Evolutionary Framework for Timeline Generation based on Distributed Representations","authors":["D Liang, G Wang, J Nie - arXiv preprint arXiv:1905.05550, 2019"],"snippet":"… To learn the distributed representations, we use pre-trained word vectors2, trained on Common Crawl and Wikipedia by fastText tookit [2]. Furthermore, each v(q) is only embedded by the name of the topic q as a experimental control …","url":["https://arxiv.org/pdf/1905.05550"]} -{"year":"2019","title":"A Framework to Estimate the Nutritional Value of Food in Real Time Using Deep Learning Techniques","authors":["R Yunus, O Arif, H Afzal, MF Amjad, H Abbas… - IEEE Access, 2019"],"snippet":"… First is Common-Crawl, which is an archive hosted on an Amazon S3 bucket … is more relevant as it return pages corresponding to precise labels while the text from Common-Crawl is more generic. The raw text data obtained …","url":["https://ieeexplore.ieee.org/iel7/6287639/8600701/08590712.pdf"]} -{"year":"2019","title":"A Language Invariant Neural Method for TimeML Event Detection","authors":["S Prabhu, P Goel, A Debnath, M Shrivastava"],"snippet":"… The CNN uses 40 filters with a window size of 3. For our contextual word embeddings, we use fastText embeddings for English (Bojanowski et al., 2017) which are pretrained on commonCrawl and the Wikipedia corpus. 
FastText …","url":["https://www.researchgate.net/profile/Pranav_Goel/publication/337387464_A_Language_Invariant_Neural_Method_for_TimeML_Event_Detection/links/5dd4c5ec299bf11ec8629470/A-Language-Invariant-Neural-Method-for-TimeML-Event-Detection.pdf"]} -{"year":"2019","title":"A Massive Collection of Cross-Lingual Web-Document Pairs","authors":["A El-Kishky, V Chaudhary, F Guzman, P Koehn - arXiv preprint arXiv:1911.06154, 2019"],"snippet":"… Other works (Smith et al., 2013) have mined Common Crawl for bitexts for machine … and restrictions, we mined 54 million aligned documents across 12 Common Crawl snapshots … 2: NMT performance on comparable directions …","url":["https://arxiv.org/pdf/1911.06154"]} -{"year":"2019","title":"A Multi-Task Approach for Disentangling Syntax and Semantics in Sentence Representations","authors":["S Wiseman, K Gimpel, Q Tang, M Chen"],"snippet":"04/02/19 - We propose a generative model for a sentence that uses two latent variables, with one intended to represent the syntax of the sent...","url":["https://deepai.org/publication/a-multi-task-approach-for-disentangling-syntax-and-semantics-in-sentence-representations"]} -{"year":"2019","title":"A New Corpus for Low-Resourced Sindhi Language with Word Embeddings","authors":["W Ali, J Kumar, J Lu, Z Xu - arXiv preprint arXiv:1911.12579, 2019"],"snippet":"… 3Available online at https://rdrr.io/cran/wordspace/man/WordSim353.html 4We denote Sindhi word representations as (SdfastText) recently revealed by fastText, available at (https://fasttext.cc/docs/en/crawl-vectors.html) trained …","url":["https://arxiv.org/pdf/1911.12579"]} -{"year":"2019","title":"A New Hybrid Ensemble Feature Selection Framework for Machine Learning-based Phishing Detection System","authors":["KL Chiew, CL Tan, KS Wong, KSC Yong, WK Tiong - Information Sciences, 2019"],"snippet":"… June 2017. 
Specifically, we selected 5000 phishing webpages based on URLs from PhishTank 2 and OpenPhish 3 , and another 5000 legitimate webpages based on URLs from Alexa 4 and the Common Crawl 5 archive. The …","url":["https://www.sciencedirect.com/science/article/pii/S0020025519300763"]} -{"year":"2019","title":"A Question-Entailment Approach to Question Answering","authors":["AB Abacha, D Demner-Fushman - arXiv preprint arXiv:1901.08079, 2019"],"snippet":"… We use the pretrained common crawl version with 840B tokens and 300d vectors, which are not updated during training. 3.3 Logistic Regression Classifier In this feature-based approach, we use Logistic Regression to …","url":["https://arxiv.org/pdf/1901.08079"]} -{"year":"2019","title":"A Recurrent Deep Neural Network Model to measure Sentence Complexity for the Italian","authors":["D Schicchi"],"snippet":"… The authors have used FastText [3], a library for efficient learning of word representations and sentence classification, trained on Common Crawl [21] and Wikipedia to create a pre-trained word vector representation for …","url":["http://ceur-ws.org/Vol-2418/paper10.pdf"]} -{"year":"2019","title":"A Robust Abstractive System for Cross-Lingual Summarization","authors":["J Ouyang, B Song, K McKeown"],"snippet":"… about 23k sentences for Somali and Swahili and 51k for Tagalog); noisy, web-crawled parallel data (So- mali only, about 354k sentences); and synthetic, backtranslated parallel data created from monolingual sources including …","url":["http://www.cs.columbia.edu/~ouyangj/OuyangSongMcKeown2019.pdf"]} -{"year":"2019","title":"A Study of Neural Networks Models applied to Natural Language Inference","authors":["VG Noronha, JCP da Silva"],"snippet":"… word vectors of different size, in order to check whether the word space dimension plays an important role on the final results: – A 100d version trained on the Wikipedia 2014 + Gigaword 5 corpus, with 6B tokens; – The 300d 
…","url":["https://www.researchgate.net/profile/Joao_Silva45/publication/320711838_A_Study_of_Neural_Networks_Models_applied_to_Natural_Language_Inference/links/5b23a863458515270fcff1e1/A-Study-of-Neural-Networks-Models-applied-to-Natural-Language-Inference.pdf"]} -{"year":"2019","title":"A Survey of URL-based Phishing Detection","authors":["ES Aung, CT Zan, H YAMANA"],"snippet":"… 2017 [40] 98.76 98.60 98.93 98.76 99.91 Common Crawl PhishTank 1M 1M Balanced Path length, URL entropy, length ratio, '@' and '-' count, punctuation count, TLDs count, IP address, suspicious words count …","url":["https://db-event.jpn.org/deim2019/post/papers/201.pdf"]} -{"year":"2019","title":"A Survey on Document-level Machine Translation: Methods and Evaluation","authors":["S Maruf, F Saleh, G Haffari - arXiv preprint arXiv:1912.08494, 2019"],"snippet":"Page 1. A Survey on Document-level Machine Translation: Methods and Evaluation Sameen Maruf, Fahimeh Saleh, and Gholamreza Haffari Faculty of Information Technology, Monash University, Clayton VIC, Australia {firstname.lastname}@monash.edu …","url":["https://arxiv.org/pdf/1912.08494"]} -{"year":"2019","title":"A System to Monitor Cyberbullying based on Message Classification and Social Network Analysis","authors":["S Menini, G Moretti, M Corazza, E Cabrio, S Tonelli… - Proceedings of the Third …, 2019"],"snippet":"… timestep. We use English Fasttext embeddings1 trained on Common Crawl with a size of 300. Concerning hy- perparameters, our model uses no dropout and no batch normalization on the outputs of the hidden layer. 
Instead …","url":["https://www.aclweb.org/anthology/W19-3511"]} -{"year":"2019","title":"A Systematic Comparison Between SMT and NMT on Translating User-Generated Content","authors":["P Lohar, M Popovic, H Afli, A Way"],"snippet":"… Morever, as the Europarl corpus is a fix-domain and did not work well for our experiments, we plan to utilise other types of mix-domain parallel resource such as common crawl corpus9 … WASSA '12 (2012) 52–60 9 http://www.statmt …","url":["http://www.computing.dcu.ie/~away/PUBS/2019/A_Systematic_Comparison_Between_SMT_and_NMT_on_Translating_User_Generated_Content.pdf"]} -{"year":"2019","title":"A thesis submitted in conformity with the requirements for the degree of Doctor of Philosophy Graduate Department of Computer Science","authors":["N Naderi - 2019"],"snippet":"Page 1. COMPUTATIONAL ANALYSIS OF ARGUMENTS AND PERSUASIVE STRATEGIES IN POLITICAL DISCOURSE by Nona Naderi A thesis submitted in conformity with the requirements for the degree of Doctor of Philosophy …","url":["ftp://ftp.db.toronto.edu/public_html/public_html/pub/gh/Naderi-PhD-thesis-2019.pdf"]} -{"year":"2019","title":"A Vector Worth a Thousand Counts","authors":["DS Hain, R Jurowetzkiφ, T Buchmannψ, P Wolfψ"],"snippet":"… task convolutional neural network model trained on OntoNotes, with GloVe vectors (685k unique vectors with 300 dimensions) trained on Common Crawl. Given a patent abstract, spaCy predicts the meaning of each term in the document …","url":["https://pdfs.semanticscholar.org/69ba/b264607119c7928a9a25ea82823d9c346350.pdf"]} -{"year":"2019","title":"Abstract Text Summarization: A Low Resource Challenge","authors":["S Parida, P Motlicek - 2019"],"snippet":"… We propose an iterative data augmentation approach which uses synthetic data along with the real summarization data for the German language. 
To generate synthetic data, the Common Crawl (German) dataset is exploited, which covers different domains …","url":["https://infoscience.epfl.ch/record/270135"]} -{"year":"2019","title":"AC-Net: Assessing the Consistency of Description and Permission in Android Apps","authors":["Y Feng, L Chen, A Zheng, C Gao, Z Zheng - IEEE Access, 2019"],"snippet":"… Compared to the somewhat popular embeddings such as GloVe [22] (400 thousand word vectors trained on Wikipedia) and crawl [23] (2 million word vectors trained on Common Crawl), ours fully retains domain-specific characteristics of statements …","url":["https://ieeexplore.ieee.org/iel7/6287639/6514899/08694776.pdf"]} -{"year":"2019","title":"Acquiring Knowledge from Pre-trained Model to Neural Machine Translation","authors":["R Weng, H Yu, S Huang, S Cheng, W Luo - arXiv preprint arXiv:1912.01774, 2019"],"snippet":"… Following Song et al. (2019) , on the English and German, we use the monolingual data from WMT News Crawl. We select 50M sentence from year 2007 to 2017 for English and German respectively. Then, we choose 50M sentence from Common Crawl for Chinese …","url":["https://arxiv.org/pdf/1912.01774"]} -{"year":"2019","title":"Adapting Transformer-XL Techniques to QANet Architecture for SQuAD 2.0 Challenge","authors":["L Zhang"],"snippet":"… set data. • glove.840B.300dglove.840B.300d.txt: Pretrained GloVevectors. Theseare300dimensional embeddings trained on the CommonCrawl 840B corpus. • {word,char}_emb.json: Word and character embeddings. Only the …","url":["https://pdfs.semanticscholar.org/0760/248f62e5a313fb088cce37495ce79c9ba8a1.pdf"]} -{"year":"2019","title":"Adaptive Cross-Modal Few-Shot Learning","authors":["C Xing, N Rostamzadeh, BN Oreshkin, PO Pinheiro - arXiv preprint arXiv:1902.07104, 2019"],"snippet":"… category labels. GloVe is an unsupervised approach based on wordword co-occurrence statistics from large text corpora. We use the Common Crawl version trained on 840B tokens. 
The embeddings are of dimension 300. When …","url":["https://arxiv.org/pdf/1902.07104"]} -{"year":"2019","title":"Advanced Deep learning Methods and Applications in Open-domain Question Answering","authors":["MT Nguyễn - 2019"],"snippet":"Page 1. VIETNAM NATIONAL UNIVERSITY, HANOI UNIVERSITY OF ENGINEERING AND TECHNOLOGY Nguyen Minh Trang ADVANCED DEEP LEARNING METHODS AND APPLICATIONS IN OPEN-DOMAIN QUESTION ANSWERING MASTER THESIS …","url":["http://lib.uet.vnu.edu.vn/bitstream/123456789/1021/1/2.ToanVanLuanVan.pdf"]} -{"year":"2019","title":"Adversarial NLI: A New Benchmark for Natural Language Understanding","authors":["Y Nie, A Williams, E Dinan, M Bansal, J Weston… - arXiv preprint arXiv …, 2019"],"snippet":"… transfer. In addition to contexts from Wikipedia for Round 3, we also included contexts from the following domains: News (ex- tracted from Common Crawl), fiction (extracted from Mostafazadeh et al. 2016, Story Cloze, and Hill et al …","url":["https://arxiv.org/pdf/1910.14599"]} -{"year":"2019","title":"Adverse drug event detection from electronic health records using hierarchical recurrent neural networks with dual-level embedding","authors":["S Wunnava, X Qin, T Kakar, C Sen, EA Rundensteiner… - Drug Safety, 2019"],"snippet":"… compared the results from DLADE, which uses domainand task-specific MADE1.0 word embedding trained using wiki, and Pittsburgh EHR and PubMed articles (1,352,550 word vectors) [10, 21], with two systems that use …","url":["https://link.springer.com/article/10.1007/s40264-018-0765-9"]} -{"year":"2019","title":"AELA-DLSTMs: Attention-Enabled and Location-Aware Double LSTMs for Aspect-level Sentiment Classification","authors":["K Shuang, X Ren, Q Yang, R Li, J Loo - Neurocomputing, 2018"],"snippet":"Skip to main content …","url":["https://www.sciencedirect.com/science/article/pii/S0925231218315054"]} -{"year":"2019","title":"Aiding Intra-Text Representations with Visual Context for Multimodal Named Entity Recognition","authors":["O 
Arshad, I Gallo, S Nawaz, A Calefati - arXiv preprint arXiv:1904.01356, 2019"],"snippet":"… Page 5. B. Word embeddings We used 300D fasttext crawl embeddings. It contains 2 million word vectors trained with subword information on Common Crawl (600B tokens). However, we do not apply fine-tuning on these embeddings during the training stage …","url":["https://arxiv.org/pdf/1904.01356"]} -{"year":"2019","title":"Algorithmic Bias and the Biases of the Bias Catchers","authors":["D Rozado - arXiv preprint arXiv:1905.11985, 2019"],"snippet":"… This work systematically analyzed 3 popular word embedding methods: Word2vec (Skipgram) (4), Glove (9) and FastText (10), externally pretrained on a wide array of corpora such as Google News, Wikipedia, Twitter and Common Crawl …","url":["https://arxiv.org/pdf/1905.11985"]} -{"year":"2019","title":"All-in-One: Emotion, Sentiment and Intensity Prediction using a Multi-task Ensemble Framework","authors":["S Akhtar, D Ghosal, A Ekbal, P Bhattacharyya… - IEEE Transactions on …, 2019"],"snippet":"… IEEE TRANSACTIONS ON AFFECTIVE COMPUTING. 4 A. Deep Learning Models We employ the architecture of Figure 1a to train and tune all the deep learning models using pre-trained GloVe (common crawl 840 billion) word embeddings [32] …","url":["https://ieeexplore.ieee.org/abstract/document/8756111/"]} -{"year":"2019","title":"AMR-to-Text Generation with Cache Transition Systems","authors":["L Jin, D Gildea - arXiv preprint arXiv:1912.01682, 2019"],"snippet":"… 6.2 Setup The word embeddings are initialized with 300-dimensional GloVe embeddings (Penningtonetal., 2014) from the Common Crawl and are fixed during training. 
This embedding vocabulary consists of both English tokens and AMR concept labels …","url":["https://arxiv.org/pdf/1912.01682"]} -{"year":"2019","title":"An effective approach to candidate retrieval for cross-language plagiarism detection: A fusion of conceptual and keyword-based schemes","authors":["M Roostaee, MH Sadreddini, SM Fakhrahmad - Information Processing & …, 2020"],"snippet":"Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0306457318310148"]} -{"year":"2019","title":"An Efficient Framework for Processing and Analyzing Unstructured Text to Discover Delivery Delay and Optimization of Route Planning in Realtime","authors":["M Alshaer - 2019"],"snippet":"Page 1. HAL Id: tel-02310852 https://tel.archives-ouvertes.fr/tel-02310852 Submitted on 10 Oct 2019 HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not …","url":["https://tel.archives-ouvertes.fr/tel-02310852/document"]} -{"year":"2019","title":"An Empirical Evaluation of Text Representation Schemes on Multilingual Social Web to Filter the Textual Aggression","authors":["S Modha, P Majumder - arXiv preprint arXiv:1904.08770, 2019"],"snippet":"… Glove pre-trained model available with different embed size and trained on common crawl, Twitter. We have use Glove pre-trained model with vocabulary size 2.2 million and trained on common crawl. fastText pretrained models are available in 157 language …","url":["https://arxiv.org/pdf/1904.08770"]} -{"year":"2019","title":"An Empirical study on Pre-trained Embeddings and Language Models for Bot Detection","authors":["A Garcia-Silva, C Berrio, JM Gómez-Pérez - Proceedings of the 4th Workshop on …, 2019"],"snippet":"… scratch. 
We use pre-trained embeddings learned from Twitter it- self, urban dictionary definitions to accommodate the informal vocabulary often used in the social network, and common crawl as a general source of information …","url":["https://www.aclweb.org/anthology/W19-4317"]} -{"year":"2019","title":"An Ensemble Method for Producing Word Representations for the Greek Language","authors":["M Lioudakis, S Outsios, M Vazirgiannis - arXiv preprint arXiv:1912.04965, 2019"],"snippet":"… In addition, we show that CBOS outperforms the CBOW and Skip-gram models when they are trained on the same data. The future work of this research could include training of our newly proposed model with the Common Crawl dataset for the Greek language …","url":["https://arxiv.org/pdf/1912.04965"]} -{"year":"2019","title":"An Exploration of Sarcasm Detection Using Deep Learning","authors":["E SAVINI - 2019"],"snippet":"… not appear in the training data (”out-of-vocabulary” words). It also supports 157 different languages. In our research we use 300-dimensional word vectors pre-trained on Common Crawl3 (600B tokens). 4.3 ELMo ELMo (Embeddings …","url":["https://webthesis.biblio.polito.it/12440/1/tesi.pdf"]} -{"year":"2019","title":"An Extended CLEF eHealth Test Collection for Cross-Lingual Information Retrieval in the Medical Domain","authors":["S Saleh, P Pecina - European Conference on Information Retrieval, 2019"],"snippet":"… However, an additional assessment was performed [15]. 
CLEF eHealth 2018 Consumer Health Search Task released a document collection created using CommonCrawl platform [8] containing more than five million documents from more than thousand websites …","url":["https://link.springer.com/chapter/10.1007/978-3-030-15719-7_24"]} -{"year":"2019","title":"An integrated neural decoder of linguistic and experiential meaning","authors":["AJ Anderson, JR Binder, L Fernandino, CJ Humphries… - Journal of Neuroscience, 2019"],"snippet":"… occurrence 210 matrix (vocabulary size is 2.2million words and co-occurrences were measured across 840 billion 211 tokens from Common Crawl https://commoncrawl.org). GloVe in particular was used because it yielded 212 state-of …","url":["https://www.jneurosci.org/content/early/2019/09/27/JNEUROSCI.2575-18.2019.abstract"]} -{"year":"2019","title":"An Open-Domain System for Retrieval and Visualization of Comparative Arguments from Text","authors":["M Schildwächter"],"snippet":"… parts of the Page 23. 3.1. Retrieval of Sentences 19 CommonCrawl: Full text search index Retrieval of Sentences: Elasticsearch API Sentence Classification: Keyword or ML approach Sentence Ranking: Ordering of sentences","url":["https://www.inf.uni-hamburg.de/en/inst/ab/lt/teaching/theses/completed-theses/2018-ma-schildwaechter-cam.pdf"]} -{"year":"2019","title":"Analysing Coreference in Transformer Outputs","authors":["ELKC Espana-Bonet, J van Genabith","ELKC Espana-Bonet, J van Genabith - DiscoMT 2019, 2019"],"snippet":"… Recent NMT systems that include context deal with both phenomena, coreference and coherence, but usually context is limited to the previous sen- # lines S1, S3 S2 Common Crawl 2394878 x1 x4 Europarl 1775445 …","url":["https://www.aclweb.org/anthology/D19-65.pdf#page=11","https://www.cs.upc.edu/~cristinae/CV/docs/coref_DiscoMT.pdf"]} -{"year":"2019","title":"Analysing Representations of Memory Impairment in a Clinical Notes Classification Model","authors":["M Ormerod, J Martínez-del-Rincón, N Robertson… - 
Proceedings of the 18th …, 2019"],"snippet":"… In this study we use 300-dimensional fastText word embeddings (Bojanowski et al., 2017) which were pretrained on the Common Crawl dataset using the skipgram schema (Mikolov et al., 2013), which in- volves predicting a target word based on nearby words …","url":["https://www.aclweb.org/anthology/W19-5005"]} -{"year":"2019","title":"Analysis of Joint Multilingual Sentence Representations and Semantic K-Nearest Neighbor Graphs","authors":["H Schwenk, D Kiela, M Douze - Proceedings of the AAAI Conference on Artificial …, 2019"],"snippet":"… All the statistics in this section are calculated on ten million sentences of Common Crawl data. Please remember that we apply romanization for the languages which use a Cyrillic script (Greek, Bosnian, Bulgarian, Macedonian, Serbian and Russian) …","url":["https://www.aaai.org/ojs/index.php/AAAI/article/download/4677/4555"]} -{"year":"2019","title":"Analysis of Positional Encodings for Neural Machine Translation","authors":["J Rosendahl, VAK Tran, W Wang, H Ney"],"snippet":"… We train our models on the data from the De→En and the Zh→En news translation task of WMT 2019. For the De→En task we train on CommonCrawl, Europarl, NewsCommentary and Rapid summing up …","url":["https://zenodo.eu/record/3525024/files/IWSLT2019_paper_21.pdf"]} -{"year":"2019","title":"Annotating and Recognising Visually Descriptive Language","authors":["T Alrashid, J Wang, R Gaizauskas - … on Interoperable Semantic Annotation (ISA-15), 2019"],"snippet":"… Stop words were only removed for the word embedding representation. We did not remove stop words for tf-idf because the approach 5https://spacy. io/. We use the model en vectors web lg. 6http://commoncrawl. org/the-data/ 7http://www. scikit-learn. 
org/ 27 Page 34 …","url":["https://sigsem.uvt.nl/isa15/ISA-15_proceedings.pdf#page=28"]} -{"year":"2019","title":"Answering Comparative Questions: Better than Ten-Blue-Links?","authors":["M Schildwächter, A Bondarenko, J Zenker, M Hagen… - arXiv preprint arXiv …, 2019"],"snippet":"… CommonCrawl: Full text search index … Clicking on a result sentence reveals its Common Crawl context—by default the ±3 sentences around it, with the … underlying corpus of our CAM system and the keyword-based search …","url":["https://arxiv.org/pdf/1901.05041"]} -{"year":"2019","title":"Architecture for semantic search over encrypted data in the cloud","authors":["J Woodworth, MA Salehi - US Patent App. 16/168,919, 2019"],"snippet":"… The dataset has a total size of 357 MB and is made up of 6,942 text files. To evaluate S3C under large scale datasets, a second dataset, the Common Crawl Corpus from AWS (a web crawl composed of over five billion web pages) was used …","url":["https://patentimages.storage.googleapis.com/f7/5f/a4/c986d736bd81ab/US20190121873A1.pdf"]} -{"year":"2019","title":"Are we consistently biased? Multidimensional analysis of biases in distributional word vectors","authors":["A Lauscher, G Glavaš - arXiv preprint arXiv:1904.11783, 2019"],"snippet":"… 4 we compare the biases of em- beddings trained with the same model (GLOVE) but on different corpora: Common Crawl (ie, noisy … Table 4: WEAT bias effects for GLOVE embeddings trained on different corpora: Wikipedia …","url":["https://arxiv.org/pdf/1904.11783"]} -{"year":"2019","title":"Are We Safe Yet? The Limitations of Distributional Features for Fake News Detection","authors":["T Schuster, R Schuster, DJ Shah, R Barzilay - arXiv preprint arXiv:1908.09805, 2019"],"snippet":"… 2019). The generator is trained with a LM objective on a large news corpus from Common Crawl dumps. 
The fake news detector is a simple linear classifier on top of the last hidden state of Grover's LM on the examined article …","url":["https://arxiv.org/pdf/1908.09805"]} -{"year":"2019","title":"Argument Generation with Retrieval, Planning, and Realization","authors":["X Hua, Z Hu, L Wang - arXiv preprint arXiv:1906.03717, 2019"],"snippet":"… Wachsmuth et al., 2017b, 2018b). Recent work by Stab et al. (2018) in- dexes all web documents collected in Common Crawl, which inevitably incorporates noisy, lowquality content. Besides, existing work treats individual …","url":["https://arxiv.org/pdf/1906.03717"]} -{"year":"2019","title":"Argument Search: Assessing Argument Relevance","authors":["M Potthast, L Gienapp, F Euchner, N Heilenkötter… - 2019"],"snippet":"… online debating portals. Thereafter, ArgumenText [19], which retrieves argumentative sentences from the Common Crawl, and “multi-perspective answers” in the US version of Bing3 have been published. Another loosely related …","url":["https://webis.de/downloads/publications/papers/stein_2019j.pdf"]} -{"year":"2019","title":"Articles Classification in Myanmar Language","authors":["MS Phyu, KT Nwet - 2019 International Conference on Advanced …, 2019"],"snippet":"… Fortunately, Grave et al. [3] recently released pretrained vectors for 246 languages trained on Wikipedia and common crawl … Data source Method Number of word vectors Dimension Wikipedia fastText (skip-gram) 91,497 …","url":["https://ieeexplore.ieee.org/abstract/document/8920927/"]} -{"year":"2019","title":"Artificial Intelligence: An Overview","authors":["P Grogono"],"snippet":"Page 1. 
Chapter 1 Artificial Intelligence: An Overview Peter Grogono Department of Computer Science and Software Engineering Concordia University Montréal, Québec The smart devices that we have become so familiar with …","url":["https://www.worldscientific.com/doi/abs/10.1142/9789811203527_0001"]} -{"year":"2019","title":"Aspect Detection using Word and Char Embeddings with (Bi) LSTM and CRF","authors":["Ł Augustyniak, T Kajdanowicz, P Kazienko - 2019 IEEE Second International …, 2019"],"snippet":"… Glove 840B - Global Vectors for Word Representation proposed by Stanford NLP Group, trained based on Common Crawl. fastText - Distributed Word Representation proposed by Facebook, trained on Common Crawl as well …","url":["https://ieeexplore.ieee.org/abstract/document/8791735/"]} -{"year":"2019","title":"Aspect-Based Sentiment Analysis Using Deep Neural Networks and Transfer Learning","authors":["S Dugar"],"snippet":"Page 1. DEPARTMENT OF INFORMATICS TECHNISCHE UNIVERSITÄT MÜNCHEN Master's Thesis in Informatics Aspect-Based Sentiment Analysis Using Deep Neural Networks and Transfer Learning Sumit Dugar Page 2. DEPARTMENT OF INFORMATICS …","url":["https://www.social.in.tum.de/fileadmin/w00bwc/www/Gerhard_Hagerer/thesis-sumit-dugar.pdf"]} -{"year":"2019","title":"Assessing Social and Intersectional Biases in Contextualized Word Representations","authors":["YC Tan, LE Celis - arXiv preprint arXiv:1911.01485, 2019"],"snippet":"Page 1. Assessing Social and Intersectional Biases in Contextualized Word Representations Yi Chern Tan, L. Elisa Celis Yale University {yichern.tan, elisa.celis}@yale.edu Abstract Social bias in machine learning has drawn …","url":["https://arxiv.org/pdf/1911.01485"]} -{"year":"2019","title":"Assessing the Impact of Contextual Embeddings for Portuguese Named Entity Recognition","authors":["J Santos, B Consoli, C dos Santos, J Terra, S Collonini… - 2019 8th Brazilian …, 2019"],"snippet":"… 2019. [19] C. Buck, K. Heafield, and B. 
van Ooyen, “N-gram counts and language models from the common crawl,” in Proceedings of the Language Resources and Evaluation Conference, Reykjavik, Iceland, May 2014. [20] N …","url":["https://ieeexplore.ieee.org/abstract/document/8923652/"]} -{"year":"2019","title":"Assessing the Lexico-Semantic Relational Knowledge Captured by Word and Concept Embeddings","authors":["R Denaux, JM Gomez-Perez - arXiv preprint arXiv:1909.11042, 2019"],"snippet":"… three different corpora, which we chose to study whether relation prediction capacity varies depending on the corpus size: the English United Nations corpus[Ziemski et al., 2016] (517M tokens), the En- glish Wikipedia (just under …","url":["https://arxiv.org/pdf/1909.11042"]} -{"year":"2019","title":"Assessment of text coherence using an ontology‐based relatedness measurement method","authors":["G Giray, MO Ünalır - Expert Systems"],"snippet":"Abstract This paper proposes a novel method for assessing text coherence. Central to this approach is an ontology‐based representation of text, which captures the level of relatedness between conse...","url":["https://onlinelibrary.wiley.com/doi/abs/10.1111/exsy.12505"]} -{"year":"2019","title":"AStylometric INVESTIGATION OF CHARACTER VOICES IN LITERARY FICTION","authors":["K Vishnubhotla - 2019"],"snippet":"Page 1. ASTYLOMETRIC INVESTIGATION OF CHARACTER VOICES IN LITERARY FICTION by Krishnapriya Vishnubhotla A thesis submitted in conformity with the requirements for the degree of Master of Science …","url":["ftp://ftp.db.toronto.edu/public_html/cs/ftp/dist/gh/Vishnubhotla-MSc-thesis-2019.pdf"]} -{"year":"2019","title":"Asymmetry Sensitive Architecture for Neural Text Matching","authors":["T Belkacem, JG Moreno, T Dkaki, M Boughanem - European Conference on …, 2019"],"snippet":"… We adopted a cross-validation with \\(80\\%\\) to train, \\(10\\%\\) to test and \\(10\\%\\) to validate the different models. 
We used a public pre-trained 300-dimensional word vectors of GloVe 3 , which are trained in a Common crawl dataset …","url":["https://link.springer.com/chapter/10.1007/978-3-030-15719-7_8"]} -{"year":"2019","title":"Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures","authors":["PJ Ortiz Suárez, B Sagot, L Romary - 2019"],"snippet":"… 7http://commoncrawl.org/about/ 8http://microformats.org/wiki/ rel-nofollow 9https://www.robotstxt.org … In order to download, extract, filter, clean and classify Common Crawl we base ourselves on … Each of these processes first …","url":["https://ids-pub.bsz-bw.de/files/9021/Suarez_Sagot_Romary_Asynchronous_Pipeline_for_Processing_Huge_Corpora_2019.pdf"]} -{"year":"2019","title":"At the Lower End of Language—Exploring the Vulgar and Obscene Side of German","authors":["E Eder, U Krieg-Holz, U Hahn - Proceedings of the Third Workshop on Abusive …, 2019"],"snippet":"… 123 FASTTEXT (Grave et al., 2018) word embeddings, the latter being based on COMMON CRAWL and WIKIPEDIA … et al., 2017) trained on German tweets (TWITTER) and, finally, FASTTEXT word embeddings (Grave et al., 2018) …","url":["https://www.aclweb.org/anthology/W19-3513"]} -{"year":"2019","title":"Attending the Emotions to Detect Online Abusive Language","authors":["NS Samghabadi, A Hatami, M Shafaei, S Kar, T Solorio - arXiv preprint arXiv …, 2019"],"snippet":"… Then, we extract 5For ask.fm data, we use 300-dimensional Common Crawl Glove pre-trained embeddings, since it works better than the Twitter embedding. 
the DeepMoji vector for each sentence and calculate the average vector per post …","url":["https://arxiv.org/pdf/1909.03100"]} -{"year":"2019","title":"Attention Guided Graph Convolutional Networks for Relation Extraction","authors":["Z Guo, Y Zhang, W Lu - arXiv preprint arXiv:1906.07510, 2019"],"snippet":"… 5https://nlp.stanford.edu/projects/ tacred/ 6We use the 300-dimensional Glove word vectors trained on the Common Crawl corpus https://nlp. stanford.edu/projects/glove/ 7The results are produced by the open implementation of Zhang et al. (2018). Page 6. Model …","url":["https://arxiv.org/pdf/1906.07510"]} -{"year":"2019","title":"Attenuating Bias in Word Vectors","authors":["S Dev, J Phillips - arXiv preprint arXiv:1901.07656, 2019"],"snippet":"Page 1. Attenuating Bias in Word Vectors Sunipa Dev Jeff Phillips University of Utah University of Utah Abstract Word vector representations are well developed tools for various NLP and Machine Learning tasks and are known …","url":["https://arxiv.org/pdf/1901.07656"]} -{"year":"2019","title":"Attribute Sentiment Scoring With Online Text Reviews: Accounting for Language Structure and Attribute Self-Selection","authors":["I Chakraborty, M Kim, K Sudhir - 2019"],"snippet":"Page 1. Attribute Sentiment Scoring with Online Text Reviews: Accounting for Language Structure and Attribute Self-Selection Ishita Chakraborty, Minkyung Kim, K. 
Sudhir Yale School of Management March 2019 We thank the …","url":["http://sics.haas.berkeley.edu/pdf_2019/paper_cks.pdf"]} -{"year":"2019","title":"Augmenting Neural Machine Translation through Round-Trip Training Approach","authors":["B Ahmadnia, BJ Dorr"],"snippet":"… For the high-resource scenario (En- Es) we utilize the English-Spanish bilingual corpora from WMT'18² [29] which contains 10M sentence pairs extracting from Europarl, News-Commentary, UN and Common Crawl collections …","url":["https://www.researchgate.net/profile/Benyamin_Ahmadnia/publication/336485784_Augmenting_Neural_Machine_Translation_through_Round-Trip_Training_Approach/links/5da2b06d92851c6b4bd100ab/Augmenting-Neural-Machine-Translation-through-Round-Trip-Training-Approach.pdf"]} -{"year":"2019","title":"AutoEncoder Guided Bootstrapping of Semantic Lexicon","authors":["C Hu, M Nakano, M Okumura - Pacific Rim International Conference on Artificial …, 2019"],"snippet":"… The top-5 best candidate instances were then added to the expanded seed list for the next iteration. 
For the inputs of the AutoEncoder model, Glove embeddings (300 dimensions) trained on Common Crawl were used (Pennington et al …","url":["https://link.springer.com/chapter/10.1007/978-3-030-29894-4_17"]} -{"year":"2019","title":"Automated Dictionary Creation for Analyzing Text: An Illustration from Stereotype Content","authors":["G Nicolas, X Bai, ST Fiske - 2019"],"snippet":"… The Glove word embeddings used here were trained using around 840 billion words from the common crawl (a very large database of web text), and it has word-vectors with 300 dimensions for 2.2 million words (available …","url":["https://psyarxiv.com/afm8k/download?format=pdf"]} -{"year":"2019","title":"Automated extraction of attributes from natural language attribute-based access control (ABAC) Policies","authors":["M Alohaly, H Takabi, E Blanco - Cybersecurity, 2019"],"snippet":"The National Institute of Standards and Technology (NIST) has identified natural language policies as the preferred expression of policy and implicitly called for an automated translation of ABAC...","url":["https://link.springer.com/article/10.1186/s42400-018-0019-2"]} -{"year":"2019","title":"Automated Grading of Short Text Answers: Preliminary Results in a Course of Health Informatics","authors":["G De Gasperis, S Menini, S Tonelli, P Vittorini - International Conference on Web …, 2019"],"snippet":"… vectors representing both words and sub-words. To generate these embeddings we start from the pre-computed Italian language model 3 , trained on Common Crawl and Wikipedia. The latter, in particular, is suitable for our …","url":["https://link.springer.com/chapter/10.1007/978-3-030-35758-0_18"]} -{"year":"2019","title":"Automated lifelog moment retrieval based on image segmentation and similarity scores","authors":["S Taubert, S Kahl, D Kowerko, M Eibl - CLEF2019 Working Notes. CEUR Workshop …, 2019"],"snippet":"… Page 8. 4 Resources We only used resources which were open source. 
Our word vectors were pretrained GloVe vectors from Common Crawl which had 300 dimensions and a vocabulary of 2.2 million tokens [27]. Furthermore …","url":["http://ceur-ws.org/Vol-2380/paper_83.pdf"]} -{"year":"2019","title":"Automated organ-level classification of free-text pathology reports to support a radiology follow-up tracking engine","authors":["JM Steinkamp, CM Chambers, D Lalevic, HM Zafar… - Radiology: Artificial …, 2019"],"snippet":"… GloVe word embeddings pretrained on the Common Crawl dataset of web pages were used as input to both networks; no performance benefit was observed from continuing to train word embeddings during the experiments …","url":["https://pubs.rsna.org/doi/abs/10.1148/ryai.2019180052"]} -{"year":"2019","title":"Automatic Knowledge Extraction to build Semantic Web of Things Applications","authors":["M Noura, A Gyrard, S Heil, M Gaedke - IEEE Internet of Things Journal, 2019"],"snippet":"… The naming process used additional hypernym information derived from WebIsALOD12 the Linked Open Data version of the WebIsA Database, a database containing 11.7 million hypernymy re- lations extracted from the CommonCrawl web corpus …","url":["http://knoesis.wright.edu/sites/default/files/IEEE_IoT_Journal_2019_Concept_Extraction_Paper_Extended.pdf"]} -{"year":"2019","title":"Automatic stance detection on political discourse in Twitter","authors":["E Zotova - 2019"],"snippet":"Page 1. Automatic Stance Detection on Political Discourse in Twitter Author: Elena Zotova Advisors: Rodrigo Agerri and German Rigau Hizkuntzaren Azterketa eta Prozesamendua Language Analysis and Processing Master's Thesis …","url":["https://addi.ehu.es/bitstream/handle/10810/36184/MAL-Elena_Zotova.pdf?sequence=1&isAllowed=y"]} -{"year":"2019","title":"Automatic Text Difficulty Estimation Using Embeddings and Neural Networks","authors":["A Filighera, T Steuer, C Rensing - European Conference on Technology Enhanced …, 2019"],"snippet":"… Next, the resulting tokens are embedded. 
The following pre-trained embedding models were used in our experiments: the word2vec [14], the uncased Common Crawl GloVe [15], the original ELMo [16], the uncased …","url":["https://link.springer.com/chapter/10.1007/978-3-030-29736-7_25"]} -{"year":"2019","title":"Automatic Text Summarization of News Articles in Serbian Language","authors":["D Kosmajac, V Kešelj - 2019 18th International Symposium INFOTEH …, 2019"],"snippet":"… 1). A. Auxiliary word2vec generated from Bosnian Wikidumps For input representation we used pre-trained glove word embeddings for Bosnian language2. They were trained on Common Crawl and Wikipedia using fastText [20] …","url":["https://ieeexplore.ieee.org/abstract/document/8717655/"]} -{"year":"2019","title":"Automating Analysis and Feedback to Improve Mathematics' Teachers' Classroom Discourse","authors":["A Suresh, T Sumner, J Jacobs, B Foland, W Ward - Paper submitted to the ninth …, 2019"],"snippet":"… GloVe or Global vectors for word representation is an unsupervised learning algorithm trained on aggregated word-word co-occurrence statistics from a corpus. In our model, we use the vectors trained on Common Crawl with 840 billion tokens and 300 dimensions …","url":["https://www.researchgate.net/profile/Jennifer_Jacobs8/publication/332233671_Automating_Analysis_and_Feedback_to_Improve_Mathematics%27_Teachers%27_Classroom_Discourse/links/5ca7c7394585157bd32535fc/Automating-Analysis-and-Feedback-to-Improve-Mathematics-Teachers-Classroom-Discourse.pdf"]} -{"year":"2019","title":"Automating the Fact-Checking Task: Challenges and Directions","authors":["DNE da Silva - 2019"],"snippet":"Page 1. Automating the Fact-Checking Task: Challenges and Directions Dissertation zur Erlangung des Doktorgrades (Dr. rer. nat.) 
der Mathematisch-Naturwissenschaftlichen Fakultät der Rheinischen Friedrich-Wilhelms-Universität Bonn …","url":["http://hss.ulb.uni-bonn.de/2019/5500/5500.pdf"]} -{"year":"2019","title":"Backlink Analyser using Apache Spark","authors":["M Zeeshan, S Asim, A Nadeem Anwar - 2019"],"snippet":"… Google Page Rank is assigned by Google based on different website factors (Design, Visitors, and Quality of content). Common Crawl [1] is an open repository for web crawl data … We will use subset of Common Crawl dataset good enough to demonstrate our system …","url":["http://dspace.cuilahore.edu.pk/xmlui/bitstream/handle/123456789/1438/SE29_Backlink%20Analyzer%20using%20Apache%20Spark.pdf?sequence=1&isAllowed=y"]} -{"year":"2019","title":"BEING PROFILED: COGITAS ERGO SUM: 10 Years of Profiling the European Citizen","authors":["I Baraliuc, E Bayamlioglu, M Hildebrandt, L Janssens - 2019"],"snippet":""} -{"year":"2019","title":"Beyond Bag-of-Concepts: Vectors of Locally Aggregated Concepts","authors":["M Grootendorst, J Vanschoren"],"snippet":"… Word2Vec pre-trained embeddings were trained on the Google News data set and contain vectors for 3 million English words.1 GloVe pre-trained embeddings were trained on the Common Crawl data set and contain vectors for 1.9 million English words.2 Pre-trained …","url":["https://ecmlpkdd2019.org/downloads/paper/489.pdf"]} -{"year":"2019","title":"Bidirectional Text Compression in External Memory","authors":["P Dinklage, J Ellert, J Fischer, D Köppl, M Penschuck - arXiv preprint arXiv …, 2019"],"snippet":"Page 1. Bidirectional Text Compression in External Memory Patrick Dinklage Technische Universität Dortmund, Department of Computer Science patrick.dinklage@ tu-dortmund.de Jonas Ellert Technische Universität Dortmund …","url":["https://arxiv.org/pdf/1907.03235"]} -{"year":"2019","title":"Big Bidirectional Insertion Representations for Documents","authors":["L Li, W Chan - arXiv preprint arXiv:1910.13034, 2019"],"snippet":"… Eu- Page 3. 
Figure 1: Big Bidirectional Insertion Representations for Documents roparl, Rapid, News-Commentary) and parallel sentence-level data (WikiTitles, Common Crawl, Paracrawl). The test set is newstest2019. The …","url":["https://arxiv.org/pdf/1910.13034"]} -{"year":"2019","title":"Big BiRD: A Large, Fine-Grained, Bigram Relatedness Dataset for Examining Semantic Composition","authors":["S Asaadi, SM Mohammad, S Kiritchenko"],"snippet":"… Word representations: We use GloVe word embeddings pre-trained on 840B-token CommonCrawl corpus16 and fastText word embeddings pre-trained on Common Crawl and Wikipedia using CBOW.17 For the …","url":["http://saifmohammad.com/WebDocs/BiRD-NAACL2019.pdf"]} -{"year":"2019","title":"Big Data Competence Center ScaDS Dresden/Leipzig: Overview and selected research activities","authors":["E Rahm, WE Nagel, E Peukert, R Jäkel, F Gärtner… - Datenbank-Spektrum"],"snippet":"… devise multiple research activities. The most promising direction resulted in the publication of the “Dresden WebTable Corpus” (DWTC) 2 [9] based on the freely available web crawl “CommonCrawl”. The DWTC corpus consists …","url":["https://link.springer.com/article/10.1007/s13222-018-00303-6"]} -{"year":"2019","title":"Boosting Implicit Discourse Relation Recognition with Connective-based Word Embeddings","authors":["C Wu, J Su, Y Chen, X Shi - Neurocomputing, 2019"],"snippet":"Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0925231219312196"]} -{"year":"2019","title":"Bornholmsk Natural Language Processing: Resources and Tools","authors":["L Derczynski, ITU Copenhagen, AS Kjeldsen - Proceedings of the Nordic Conference …, 2019"],"snippet":"… of compensating for the high data sparsity. Embeddings are induced with 300 dimensions, in order to be compatible with the public Common Crawl-based FastText embeddings. 
Having induced these embeddings for Bornholmsk …","url":["http://www.derczynski.com/papers/bornholmsk.pdf"]} -{"year":"2019","title":"BTC-2019: The 2019 Billion Triple Challenge Dataset","authors":["JM Herrera, A Hogan, T Käfer - International Semantic Web Conference, 2019"],"snippet":"… Meusel et al. [43] have published the WebDataCommons, extracting RDFa, Microdata and Microformats from the massive Common Crawl dataset; the result is a collection of 17,241,313,916 RDF triples, which, to the best of …","url":["https://link.springer.com/chapter/10.1007/978-3-030-30796-7_11"]} -{"year":"2019","title":"Building and using parallel text for translation","authors":["M Simard - The Routledge Handbook of Translation and …, 2019"]} -{"year":"2019","title":"Building Knowledge Base through Deep Learning Relation Extraction and Wikidata","authors":["P Subasic, H Yin, X Lin"],"snippet":"… Very large high-quality training data set is then generated automatically by matching Common Crawl data with relation keywords extracted from knowledge database … We solve this problem by matching Common Crawl …","url":["http://ceur-ws.org/Vol-2350/paper5.pdf"]} -{"year":"2019","title":"BUILDING TYPE CLASSIFICATION FROM SOCIAL MEDIA TEXTS VIA GEO-SPATIAL TEXT MINING","authors":["M Häberle, M Werner, XX Zhu"],"snippet":"… We applied the ReLU activation function [18] after each hidden layer and the softmax function after the output layer. The neural network has been trained for 100 epochs and a batch size of 64 samples. 2http …","url":["https://elib.dlr.de/127637/1/preprint.pdf"]} -{"year":"2019","title":"Building Unbiased Comment Toxicity Classification Model with Natural Language Processing","authors":["LQ Huang, MJ Yu"],"snippet":"… For this project, we investigated word embeddings GloVe300D (11) and fastText300D (12), where both are pretrained on Common Crawl. 
We also implemented character level embedding layer introduced in the paper (13) …","url":["http://cs229.stanford.edu/proj2019spr/report/79.pdf"]} -{"year":"2019","title":"Building Unbiased Comment Toxicity Classification Model","authors":["LQ Huang, MJ Yu"],"snippet":"… combined them together to mitigate the potential biases in any embeddings. We experimented with GloVe 300D trained on Common Crawl [3] and fastText 300D trained on Common Crawl [2]. We also implemented …","url":["http://cs229.stanford.edu/proj2019spr/poster/79.pdf"]} -{"year":"2019","title":"CamemBERT: a Tasty French Language Model","authors":["L Martin, B Muller, PJO Suárez, Y Dupont, L Romary… - arXiv preprint arXiv …, 2019"],"snippet":"… Later (Grave et al., 2018) trained fastText word embeddings for 157 languages using Common Crawl and showed that using crawled data significantly increased the performance of the embeddings relatively to those trained only on Wikipedia …","url":["https://arxiv.org/pdf/1911.03894"]} -{"year":"2019","title":"Can Character Embeddings Improve Cause-of-Death Classification for Verbal Autopsy Narratives?","authors":["Z Yan, S Jeblee, G Hirst"],"snippet":"… Page 3. Figure 1: Embedding concatenation model architecture. d1 is the dimensionality of the word embedding (100), and d2 is the dimensionality of the character em- bedding (24). rived from GloVe vectors (Pennington et al., 2014) trained on Common Crawl …","url":["ftp://ftp.db.toronto.edu/public_html/cs/ftp/public_html/pub/gh/Yan-etal-2019.pdf"]} -{"year":"2019","title":"Capturing and measuring thematic relatedness","authors":["M Kacmajor, JD Kelleher - Language Resources and Evaluation, 2019"],"snippet":"Page 1. ORIGINAL PAPER Capturing and measuring thematic relatedness Magdalena Kacmajor1 • John D. 
Kelleher2 © The Author(s) 2019 Abstract In this paper we explain the difference between two aspects of semantic …","url":["https://link.springer.com/article/10.1007/s10579-019-09452-w"]} -{"year":"2019","title":"Capturing Discriminative Attributes Using Convolution Neural Network Over ConceptNet Numberbatch Embedding","authors":["V Vinayan, MA Kumar, KP Soman - … Research in Electronics, Computer Science and …, 2019"],"snippet":"… network. There the model is represented over the various dimensions of GloVe embedding, of those the 300 dimensions (trained over a common crawl corpus of size 840B) embedding performed the best for the task. Building …","url":["https://link.springer.com/chapter/10.1007/978-981-13-5802-9_69"]} -{"year":"2019","title":"Cardiff University at SemEval-2019 Task 4: Linguistic Features for Hyperpartisan News Detection","authors":["C Pérez-Almendros, LE Anke, S Schockaert - … of the 13th International Workshop on …, 2019"],"snippet":"… GloVe vectors (Pennington et al., 2014) for all the words occurring in them. To this end, we used the un- cased Common Crawl pretrained GloVe embeddings, with 300 dimensions and a vocabulary of 1.9 million words. The …","url":["https://www.aclweb.org/anthology/S19-2158"]} -{"year":"2019","title":"CatchPhish: detection of phishing websites by inspecting URLs","authors":["RS Rao, T Vaishnavi, AR Pais - Journal of Ambient Intelligence and Humanized …, 2019"],"snippet":"… 4.1 Dataset We have collected the dataset from three different sources. 
Legitimate sites are collected from common-crawl and Alexa database whereas phishing sites are collected from PhishTank … D2: Legitimate sites from common-crawl and phishing sites from PhishTank …","url":["https://link.springer.com/article/10.1007/s12652-019-01311-4"]} -{"year":"2019","title":"Categorising AWS Common Crawl Dataset using MapReduce","authors":["A Chiniah, A Chummun, Z Burkutally - 2019 Conference on Next Generation …, 2019"],"snippet":"Keeping track of websites connected to the Web is an impossible task given the amplitude and fluctuation of new sites being created and those going offline. In this paper we took the task to create a directory by categorising the websites using …","url":["https://ieeexplore.ieee.org/abstract/document/8883665/"]} -{"year":"2019","title":"Categorizing Comparative Sentences","authors":["A Panchenko, A Bondarenko, M Franzek, M Hagen…"],"snippet":"… containing both items from a web-scale corpus. Our sentence source is the publicly available in- dex of the DepCC (Panchenko et al., 2018), an index of more then 14 billion dependency-parsed English sentences from the Common Crawl filtered for duplicates …","url":["https://www.inf.uni-hamburg.de/en/inst/ab/lt/publications/2019-panchenkoetal-argminingws-compsent.pdf"]} -{"year":"2019","title":"Categorizing Emails Using Machine Learning with Textual Features","authors":["F Rudzicz, K Malikov - Advances in Artificial Intelligence: 32nd Canadian …","H Zhang, J Rangrej, S Rais, M Hillmer, F Rudzicz… - Canadian Conference on …, 2019"],"snippet":"… Lastly, domain-specific email inboxes such as DDM-Support will contain highly specific subject-matter terminology such as organization acronyms or hospital names, which are generally not present in large text corpora such as common crawl 
…","url":["http://books.google.de/books?hl=en&lr=lang_en&id=7GiZDwAAQBAJ&oi=fnd&pg=PA3&dq=commoncrawl&ots=pxG3RFx1gg&sig=XpsK2sgWrDKsYko4OCiNHIo5Meg","https://link.springer.com/chapter/10.1007/978-3-030-18305-9_1"]} -{"year":"2019","title":"CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB","authors":["H Schwenk, G Wenzek, S Edunov, E Grave, A Joulin - arXiv preprint arXiv …, 2019"],"snippet":"… We are using ten snapshots of a curated common crawl corpus (Wenzek et al., 2019), totaling 32.7 billion unique sentences … we use the same underlying mining approach based on LASER and scale to a much larger …","url":["https://arxiv.org/pdf/1911.04944"]} -{"year":"2019","title":"CCMT 2019 Machine Translation Evaluation Report","authors":["M Yang, X Hu, H Xiong, J Wang, Y Jiaermuhamaiti… - China Conference on …, 2019","Y Jiaermuhamaiti, Z He, W Luo, S Huang - … , China, September 27–29, 2019, Revised …, 2019"],"snippet":"… (2) English and Chinese monolingual Corpus (Europarl v7/v8, News Commentary, Common Crawl, News Crawl, News Discussions, etc.); LDC for English and Gigaword for Chinese (LDC2011T07, LDC2009T13, LDC2007T07, LDC2009T27) …","url":["https://link.springer.com/chapter/10.1007/978-981-15-1721-1_11","https://link.springer.com/content/pdf/10.1007/978-981-15-1721-1.pdf#page=117"]} -{"year":"2019","title":"CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data","authors":["G Wenzek, MA Lachaux, A Conneau, V Chaudhary… - arXiv preprint arXiv …, 2019"],"snippet":"… (2019) used a large scale dataset based on Common Crawl to train text … Russian, Chinese and Urdu (∆ for average) for BERT-BASE models trained either on Wikipedia or CommonCrawl … We preprocess Common Crawl by …","url":["https://arxiv.org/pdf/1911.00359"]} -{"year":"2019","title":"Characterizing the impact of geometric properties of word embeddings on task performance","authors":["B Whitaker, D Newman-Griffis, A Haldar… - arXiv preprint arXiv …, 2019"],"snippet":"… 1 
3M 300-d GoogleNews vectors from https:// code.google.com/archive/p/ word2vec/ 2 2M 300-d 840B Common Crawl vectors from https: //nlp.stanford.edu/projects/glove/ 3 1M 300-d WikiNews vectors with subword …","url":["https://arxiv.org/pdf/1904.04866"]} -{"year":"2019","title":"CITIZENS IN DATA LAND","authors":["AP DE VRIES"],"snippet":"… And Indie music'. 3 https://github.com/webis-de/wasp/. 4 Consider a new service provided by The Common Crawl Foundation, http://commoncrawl org/, or, alternatively, a new community service provided via public libraries …","url":["https://www.jstor.org/stable/pdf/j.ctvhrd092.19.pdf"]} -{"year":"2019","title":"CLaC at clpsych 2019: Fusion of neural features and predicted class probabilities for suicide risk assessment based on online posts","authors":["E Mohammadi, H Amini, L Kosseim - Proceedings of the Sixth Workshop on …, 2019"],"snippet":"… 2.1 Word Embeddings As shown in Figure 1, GloVe (Pennington et al., 2014) and ELMo (Peters et al., 2018) have been used as pretrained word embeddings. The 300d GloVe word embedder has been pretrained on 840B tokens of web data from Common Crawl …","url":["https://www.aclweb.org/anthology/W19-3004"]} -{"year":"2019","title":"Classification Approaches to Identify Informative Tweets","authors":["P Aggarwal - Proceedings of the Student Research Workshop …, 2019"],"snippet":"… After these preprocessing steps, we represent each posting by a dense embedding, created by the mean of the individual words embeddings. We use the pretrained embeddings provided by (Mikolov et al., 2018) …","url":["https://www.researchgate.net/profile/Piush_Aggarwal/publication/335243720_Classification_approaches_to_identify_Informative_Tweets/links/5d5af768a6fdcc55e8198141/Classification-approaches-to-identify-Informative-Tweets.pdf"]} -{"year":"2019","title":"Classification of Anti-phishing Solutions","authors":["S Chanti, T Chithralekha - SN Computer Science, 2020"],"snippet":"… WestPac. PhishTank. PIRT report. Legitimate. 
–. Google whitelist. Manual. Alexa, Yahoo. Web crawler. World Wide Web. WestPac. Common crawl Google search. Data set size. Phishing. 203 Archives. 200 websites. 600 …","url":["https://link.springer.com/article/10.1007/s42979-019-0011-2"]} -{"year":"2019","title":"Classification of the Answers of the OMT","authors":["F Meyer, C Biemann"],"snippet":"Page 1. Universität Hamburg Fachbereich Informatik Fakultät für Mathematik, Informatik und Naturwissenschaften Classification of the Answers of the OMT Bachelor-Thesis Mensch-Computer-Interaktion Arbeitsbereich …","url":["https://www.inf.uni-hamburg.de/en/inst/ab/lt/teaching/theses/completed-theses/2019-ba-meyer-omt.pdf"]} -{"year":"2019","title":"Classification of virtual patent marking web-pages using machine learning techniques","authors":["A Calvo Ibañez - 2018"],"snippet":"… Having found the best model, this was then used to predict Common Crawl instances (nonseen observations). Figure 6.1 shows a schema of the first approach. Figure 6.1 – First Approach Framework Adapted from https://www.analyticsvidhya.com …","url":["https://upcommons.upc.edu/bitstream/handle/2117/127682/133741.pdf"]} -{"year":"2019","title":"Classification of Web History Tools Through Web Analysis","authors":["JRG Evangelista, DD de Oliveira Gatto, RJ Sassi - International Conference on …, 2019"],"snippet":"… 02. Archive.fo. http://archive.fo. 03. CashedPages. http://www.cachedpages.com. 04. CachedView. http://cachedview.com. 05. Common Crawl http://commoncrawl.org. 06. Screenshots.com. http://www.screenshots.com …","url":["https://link.springer.com/chapter/10.1007/978-3-030-22351-9_18"]} -{"year":"2019","title":"Classifying Pastebin Content Through the Generation of PasteCC Labeled Dataset","authors":["A Riesco, E Fidalgo, MW Al-Nabki, F Jáñez-Martino… - International Conference on …, 2019"],"snippet":"… Panchenko et al.
[17] took English text from Common Crawl and constructed a large web-scale corpus using text classification … Panchenko, A., Ruppert, E., Faralli, S., Ponzetto, SP, Biemann, C.: Building a web-scale dependency-parsed corpus from commoncrawl …","url":["https://link.springer.com/chapter/10.1007/978-3-030-29859-3_39"]} -{"year":"2019","title":"Classifying Websites Using Word Vectors and Other Techniques: An Application of Zipf's Law","authors":["A Robles - 2019"],"snippet":"Page 1. CLASSIFYING WEBSITES USING WORD VECTORS AND OTHER TECHNIQUES: AN APPLICATION OF ZIPF'S LAW A THESIS Presented to the Department of Mathematics and Statistics California State University, Long Beach In Partial Fulfillment …","url":["http://search.proquest.com/openview/7a6b97a2b1aa841198182ea77c38efb6/1?pq-origsite=gscholar&cbl=18750&diss=y"]} -{"year":"2019","title":"Clinical Data Extraction and Normalization of Cyrillic Electronic Health Records Via Deep-Learning Natural Language Processing","authors":["B Zhao - JCO Clinical Cancer Informatics, 2019"],"snippet":"… The medical text in the hematic was modified from the actual patient record for the purposes of illustration. (B) Word embeddings used were fastText embeddings pretrained on Wikipedia and Common Crawl data for English (EN) and Bulgarian (BG) …","url":["https://ascopubs.org/doi/pdfdirect/10.1200/CCI.19.00057"]} -{"year":"2019","title":"Cloze-driven Pretraining of Self-attention Networks","authors":["A Baevski, S Edunov, Y Liu, L Zettlemoyer, M Auli - arXiv preprint arXiv:1903.07785, 2019"],"snippet":"… We pretrain on individual examples as they oc- cur in the training corpora (§5.1). 
For News Crawl this is individual sentences while on Wikipedia, Bookcorpus, and Common Crawl examples are paragraph length … Common Crawl …","url":["https://arxiv.org/pdf/1903.07785"]} -{"year":"2019","title":"ClustCrypt: Privacy-Preserving Clustering of Unstructured Big Data in the Cloud","authors":["SM Zobaed, S Ahmad, R Gottumukkala, MA Salehi"],"snippet":"Page 1. ClustCrypt: Privacy-Preserving Clustering of Unstructured Big Data in the Cloud SM Zobaed∗, Sahan Ahmad∗, Raju Gottumukkala†, and Mohsen Amini Salehi∗ ∗ School of Computing & Informatics †Informatics Research …","url":["https://www.researchgate.net/profile/Mohsen_Salehi2/publication/333561272_ClustCrypt_Privacy-Preserving_Clustering_of_Unstructured_Big_Data_in_the_Cloud/links/5cfaa9f0299bf13a38457fe9/ClustCrypt-Privacy-Preserving-Clustering-of-Unstructured-Big-Data-in-the-Cloud.pdf"]} -{"year":"2019","title":"CluWords: Exploiting Semantic Word Clustering Representation for Enhanced Topic Modeling","authors":["F Viegas, S Canuto, C Gomes, W Luiz, T Rosa, S Ribas… - Proceedings of the Twelfth …, 2019"],"snippet":"… In this section we compare the proposed CluWords with three pre-trained word em- beddings spaces: (i) Word2Vec trained with GoogleNews [21]; (ii) FastText trained with WikiNews [22] and (iii) Fasttext trained on Common Crawl [22] …","url":["https://dl.acm.org/citation.cfm?id=3291032"]} -{"year":"2019","title":"CoFiF: A Corpus of Financial Reports in French Language","authors":["T Daudert, S Ahmadi - The First Workshop on Financial Technology and …, 2019"],"snippet":"… com/CoFiF/Corpus [Merity et al., 2016], or CommonCrawl 2. Considering the domain of business and economics, especially for English, corpora such as the Wall Street Journal (WSJ) Corpus [Paul and Baker, 1992], the …","url":["https://www.aclweb.org/anthology/W19-55#page=31"]} -{"year":"2019","title":"Cognition and the Structure of Bias","authors":["GM Johnson - 2019"],"snippet":"Page 1. 
UCLA UCLA Electronic Theses and Dissertations Title Cognition and the Structure of Bias Permalink https://escholarship.org/uc/item/7hf582vz Author Johnson, Gabbrielle Michelle Publication Date 2019 Peer reviewed|Thesis/dissertation eScholarship.org …","url":["https://cloudfront.escholarship.org/dist/prd/content/qt7hf582vz/qt7hf582vz.pdf"]} -{"year":"2019","title":"CogniVal: A Framework for Cognitive Word Embedding Evaluation","authors":["N Hollenstein, A de la Torre, N Langer, C Zhang - arXiv preprint arXiv:1909.09001, 2019"],"snippet":"… et al., 2018). We evaluate the embeddings with and without subword information trained on 16 billion to- kens of Wikipedia sentences as well as the ones trained on 600 billion tokens of Common Crawl. • ELMo models both …","url":["https://arxiv.org/pdf/1909.09001"]} -{"year":"2019","title":"Collaborative Attention Network with Word and N-Gram Sequences Modeling for Sentiment Classification","authors":["J Bao, L Zhang, B Han - International Conference on Artificial Neural Networks, 2019"],"snippet":"… words in text. In our experiments, we adopt global vectors for word representation (GloVe) [17] in the embedding layer of our model, which consists of 840 billion 300-dimension tokens trained on Common Crawl. We don't use …","url":["https://link.springer.com/chapter/10.1007/978-3-030-30490-4_8"]} -{"year":"2019","title":"Combating Fake News with Adversarial Domain Adaptation and Neural Models","authors":["B Xu - 2019"],"snippet":"Page 1. 
Combating Fake News with Adversarial Domain Adaptation and Neural Models by Brian Xu BS, Massachusetts Institute of Technology (2018) Submitted to the Department of Electrical Engineering and Computer Science …","url":["https://groups.csail.mit.edu/sls/publications/2019/BrianXu_MEng-Thesis.pdf"]} -{"year":"2019","title":"Combining and learning word embedding with WordNet for semantic relatedness and similarity measurement","authors":["YY Lee, H Ke, TY Yen, HH Huang, HH Chen - Journal of the Association for …, 2019"],"snippet":"… ahttps://nlp.stanford.edu/projects/glove/. bhttp://dumps.wikimedia.org/enwiki/20140102/. chttps://catalog.ldc.upenn.edu/LDC2011T07. dThe Common Crawl corpus contains raw web page data, extracted metadata and text extractions. http://commoncrawl.org …","url":["https://onlinelibrary.wiley.com/doi/abs/10.1002/asi.24289"]} -{"year":"2019","title":"Comparison of Machine Learning Approaches for Industry Classification Based on Textual Descriptions of Companies","authors":["A Tagarev, N Tulechki, S Boytcheva"],"snippet":"… GloVe vector embeddings are used. Specifically the 300dimensional GloVe vectors trained on the large Common Crawl corpus of 840 billion tokens with a vocabulary of 2.2 million words. While there is no additional training …","url":["https://acl-bg.org/proceedings/2019/RANLP%202019/pdf/RANLP134.pdf"]} -{"year":"2019","title":"Complex Security Policy? A Longitudinal Analysis of Deployed Content Security Policies","authors":["S Roth, T Barron, S Calzavara, N Nikiforakis, B Stock"],"snippet":"… server response. To determine this IA-specific influence, we chose a second archive service to corroborate the IA's data. In particular, Common Crawl (CC) [10] has been collecting snapshots of popular sites since 2013. 
For each …","url":["https://swag.cispa.saarland/papers/roth2020csp.pdf"]} -{"year":"2019","title":"Comprehensive Analysis of Aspect Term Extraction Methods using Various Text Embeddings","authors":["Ł Augustyniak, T Kajdanowicz, P Kazienko - arXiv preprint arXiv:1909.04917, 2019"],"snippet":"… 1. word2vec - protoplast model of any neural word embedding trained on Google News. 2. glove.840B - Global Vectors for Word Representation proposed by Stanford NLP Group, trained based on Common Crawl with …","url":["https://arxiv.org/pdf/1909.04917"]} -{"year":"2019","title":"Comprehensive trait attributions show that face impressions are organized in four dimensions","authors":["C Lin, U Keles, R Adolphs - 2019"],"snippet":"… we represented each of them with a vector of 300 computationally extracted semantic features (describing word embeddings and text classification) using a state-of-the-art neural network provided within the FastText library (61) …","url":["https://psyarxiv.com/87nex/download?format=pdf"]} -{"year":"2019","title":"Compressing Inverted Indexes with Recursive Graph Bisection: A Reproducibility Study","authors":["J Mackenzie, A Mallia, M Petri, JS Culpepper, T Suel"],"snippet":"… tool8, – Gov2 is a crawl of .gov domains from 2004, – ClueWeb09 and ClueWeb12 both correspond to the 'B' portion of the 2009 and 2012 ClueWeb crawls of the world wide web, respectively, and – CC-News contains English …","url":["https://jmmackenzie.io/pdf/mm+19-ecir.pdf"]} -{"year":"2019","title":"Computational Argumentation Synthesis as a Language Modeling Task","authors":["R El Baff, H Wachsmuth, K Al-Khatib, M Stede, B Stein"],"snippet":"Page 1. 
Computational Argumentation Synthesis as a Language Modeling Task Roxanne El Baff 1 Henning Wachsmuth 2 Khalid Al-Khatib 1 Manfred Stede 3 Benno Stein 1 1 Bauhaus-Universität Weimar, Weimar, Germany …","url":["https://webis.de/downloads/publications/papers/stein_2019y.pdf"]} -{"year":"2019","title":"Conceptor Debiasing of Word Representations Evaluated on WEAT","authors":["S Karve, L Ungar, J Sedoc - arXiv preprint arXiv:1906.05993, 2019"],"snippet":"… For context-independent embeddings, we used off-the-shelf Fasttext subword embeddings6, which were trained with subword information on the Common Crawl (600B tokens), the GloVe embeddings 7 trained on Wikipedia and …","url":["https://arxiv.org/pdf/1906.05993"]} -{"year":"2019","title":"Constructing the Wavelet Tree and Wavelet Matrix in Distributed Memory","authors":["P Dinklage, J Fischer, F Kurpicz - 2020 Proceedings of the Twenty-Second Workshop on …"],"snippet":"Page 1. Constructing the Wavelet Tree and Wavelet Matrix in Distributed Memory ∗ Patrick Dinklage † Johannes Fischer† Florian Kurpicz† Abstract The wavelet tree (Grossi et al. [SODA,2003]) is a compact index for texts …","url":["https://epubs.siam.org/doi/abs/10.1137/1.9781611976007.17"]} -{"year":"2019","title":"Content Similarity Analysis of Written Comments under Posts in Social Media","authors":["M Mozafari, R Farahbakhsh, N Crespi"],"snippet":"… The GloVe vectors were trained from 840 billion tokens of Common Crawl web data and have 300 dimensions [23]. 
This feature is extracted similar to the Google-word2vec similarity by using equation 6 for each post and comment pair …","url":["http://servicearchitecture.wp.imtbs-tsp.eu/files/2019/09/RC_SNAMS2019_37.pdf"]} -{"year":"2019","title":"Context Matters: Recovering Human Semantic Structure from Machine Learning Analysis of Large-Scale Text Corpora","authors":["MC Iordan, T Giallanza, CT Ellis, N Beckage, JD Cohen - arXiv preprint arXiv …, 2019"],"snippet":"… We also compared performance of the four Word2Vec embedding spaces to another commonly used embedding space known as GloVe28 for two main reasons; first, the GloVe embeddings are learned from the Common …","url":["https://arxiv.org/pdf/1910.06954"]} -{"year":"2019","title":"Context-Aware Crosslingual Mapping","authors":["H Aldarmaki, M Diab - arXiv preprint arXiv:1903.03243, 2019"],"snippet":"… data. We trained monolingual ELMo and FastText with de- fault parameters. We used the WMT'13 commoncrawl data for cross-lingual mapping, and the WMT'13 test sets for evaluating sentence translation retrieval. For all …","url":["https://arxiv.org/pdf/1903.03243"]} -{"year":"2019","title":"Continual Learning for Sentence Representations Using Conceptors","authors":["T Liu, L Ungar, J Sedoc - arXiv preprint arXiv:1904.09187, 2019"],"snippet":"… zero-shot CA. Best results are in boldface and the second best results are underscored. dimensional GloVe vectors (trained on the 840 billion token Common Crawl) (Pennington et al., 2014). Additional experiments with Word2Vec …","url":["https://arxiv.org/pdf/1904.09187"]} -{"year":"2019","title":"CONTRIBUTIONS TO CLINICAL INFORMATION EXTRACTION IN PORTUGUESE: CORPORA, NAMED ENTITY RECOGNITION, WORD EMBEDDINGS","authors":["FA da Costa Lopes - 2019"],"snippet":"Page 1. 
Fábio André da Costa Lopes CONTRIBUTIONS TO CLINICAL INFORMATION EXTRACTION IN PORTUGUESE:CORPORA,NAMED ENTITY RECOGNITION,WORD EMBEDDINGS Thesis submitted to the Faculty of Science …","url":["https://www.researchgate.net/profile/Fabio_Lopes17/publication/335639414_Contributions_to_Clinical_Information_Extraction_in_Portuguese_Corpora_Named_Entity_Recognition_Word_Embeddings/links/5d717af2a6fdcc9961b1facd/Contributions-to-Clinical-Information-Extraction-in-Portuguese-Corpora-Named-Entity-Recognition-Word-Embeddings.pdf"]} -{"year":"2019","title":"Controlling Grammatical Error Correction Using Word Edit Rate","authors":["K Hotate, M Kaneko, S Katsumata, M Komachi - … of the 57th Conference of the …, 2019"],"snippet":"… the lowest WER). In the ranking experiment, we used a 5-gram KenLM (Heafield, 2011) with Kneser-Ney smoothing trained on the web-scale Common Crawl corpus (Junczys-Dowmunt and Grundkiewicz, 2016). As an evaluation …","url":["https://www.aclweb.org/anthology/P19-2020"]} -{"year":"2019","title":"Conversing by Reading: Contentful Neural Conversation with On-demand Machine Reading","authors":["L Qin, M Galley, C Brockett, X Liu, X Gao, B Dolan… - arXiv preprint arXiv …, 2019"],"snippet":"… informative response. To enable reproducibility of our experiments, we crawled web pages using Common Crawl (http://commoncrawl.org), a service that crawls web pages and makes its historical crawls available to the public. 
We …","url":["https://arxiv.org/pdf/1906.02738"]} -{"year":"2019","title":"Correlation Coefficients and Semantic Textual Similarity","authors":["V Zhelezniak, A Savkov, A Shen, NY Hammerla - arXiv preprint arXiv:1905.07790, 2019"],"snippet":"… In all experiments we rely on the following publicly available word embeddings: GloVe (Pennington et al., 2014) trained on Common Crawl (840B tokens), fastText (Bojanowski et al., 2017) trained on Common Crawl …","url":["https://arxiv.org/pdf/1905.07790"]} -{"year":"2019","title":"Correlations between Word Vector Sets","authors":["V Zhelezniak, A Shen, D Busbridge, A Savkov…"],"snippet":"… For methods involving pretrained word embeddings, we use fastText (Bo- janowski et al., 2017) trained on Common Crawl (600B tokens), as previous evaluations have in- dicated that fastText vectors have uniformly the best …","url":["https://www.april.sh/assets/files/emnlp2019.pdf"]} -{"year":"2019","title":"Creation of Sentence Embeddings Based on Topical Word Representations","authors":["P Wenig"],"snippet":"Page 1. Topical Sentence Embeddings Creation of Sentence Embeddings Based on Topical Word Representations Phillip Wenig 160361 Master's thesis to obtain the degree of Master of Science in Information Systems University of Liechtenstein …","url":["https://www.researchgate.net/profile/Phillip_Wenig/publication/330761695_Creation_of_Sentence_Embeddings_Based_on_Topical_Word_Representations/links/5c531d44458515a4c74d4719/Creation-of-Sentence-Embeddings-Based-on-Topical-Word-Representations.pdf"]} -{"year":"2019","title":"Cross-collection Multi-aspect Sentiment Analysis","authors":["H Kaporo - Computer Science On-line Conference, 2019"],"snippet":"… 3.1, we set \\(n=10\\), \\(m=5\\) and run the algorithm over topic-words returned by CPTM and word vectors from glove pre-trained embeddings 2 . 
These embeddings are trained over common crawl (Google data) and contain 2.2 Million words …","url":["https://link.springer.com/chapter/10.1007/978-3-030-19810-7_11"]} -{"year":"2019","title":"Cross-Domain Sentiment Classification With Bidirectional Contextualized Transformer Language Models","authors":["B Myagmar, J Li, S Kimura - IEEE Access, 2019"],"snippet":"… For pre-training data, in addition to the BookCorpus and English Wikipedia datasets, cased XLNet-Large model, refered to simply as XLNet henceforth, uses Giga5 (16GB text) [35], ClueWeb 2012-B [36] and Common Crawl [37] as part of its pre-training data …","url":["https://ieeexplore.ieee.org/iel7/6287639/8600701/08894409.pdf"]} -{"year":"2019","title":"Cross-Layer Optimization of Big Data Transfer Throughput and Energy Consumption","authors":["L Di Tacchio, MDSQZ Nine, T Kosar, MF Bulut… - 2019 IEEE 12th …, 2019"],"snippet":"… The algorithms have been compared using four different datasets: i) a small files dataset, including 20,000 HTML files form the Common Crawl project [1]; ii) a medium files dataset, consisting of 5,000 image files from Flickr …","url":["https://ieeexplore.ieee.org/abstract/document/8814571/"]} -{"year":"2019","title":"Cross-Lingual Alignment of Word & Sentence Embeddings","authors":["H Aldarmaki - 2019"],"snippet":"Page 1. Cross-Lingual Alignment of Word & Sentence Embeddings by Hanan Aldarmaki B.Sc. in Computer Engineering, May 2008, The American University of Sharjah M.Phil. 
in Computer Speech, Text, and Internet Technology, May 2009, University of Cambridge …","url":["http://search.proquest.com/openview/97f58b5d99e2ed81065054a170f2dcda/1?pq-origsite=gscholar&cbl=18750&diss=y"]} -{"year":"2019","title":"Cross-lingual Data Transformation and Combination for Text Classification","authors":["J Jiang, S Pang, X Zhao, L Wang, A Wen, H Liu… - arXiv preprint arXiv …, 2019"],"snippet":"… There are word vectors for 157 languages1, trained on Common Crawl and Wikipedia, and these models were trained using CBOW with position-weights, in dimension 300, with character n-grams of length 5, a window size of 5 and 10 negatives …","url":["https://arxiv.org/pdf/1906.09543"]} -{"year":"2019","title":"Cross-lingual Parsing with Polyglot Training and Multi-treebank Learning: A Faroese Case Study","authors":["J Barry, J Wagner, J Foster - arXiv preprint arXiv:1910.07938, 2019"],"snippet":"… the source languages. We use the precomputed Word2Vec embeddings11 released as part of the 2017 CoNLL shared task on UD parsing (Zeman et al., 2017) which were trained on CommonCrawl and Wikipedia. In order to …","url":["https://arxiv.org/pdf/1910.07938"]} -{"year":"2019","title":"Cross-Platform Evaluation for Italian Hate Speech Detection","authors":["M Corazza, S Menini, E Cabrio, S Tonelli, S Villata…"],"snippet":"… Generic embeddings: we use embedding spaces obtained directly from the Fasttext website4 for Italian. 
In particular, we use the Italian embeddings trained on Common Crawl and Wikipedia (Grave et al., 2018) with size 300 …","url":["http://ceur-ws.org/Vol-2481/paper22.pdf"]} -{"year":"2019","title":"CrossLang: the system of cross-lingual plagiarism detection","authors":["O Bakhteev, A Ogaltsov, A Khazov, K Safin… - 2019"],"snippet":"… They were obtained from open-source parallel OPUS [46] corpora, but also we mine parallel sentences from Common Crawl.4 Algo … the machine translation stage generates texts that differ too much from 3https://tensorflow …","url":["http://ml4ed.cc/attachments/Bakhteev.pdf"]} -{"year":"2019","title":"CUNI Submission for Low-Resource Languages in WMT News 2019","authors":["T Kocmi, O Bojar - Proceedings of the Fourth Conference on Machine …, 2019"],"snippet":"… Words in English Commoncrawl Russian-English 878k 17.4M 18.8M … Words News crawl 2018 EN 15.4M 344.3M Common Crawl KK 12.5M 189.2M News commentary KK 13.0k 218.7k News crawl Kk 772.9k 10.3M Common Crawl …","url":["https://www.aclweb.org/anthology/W19-5322"]} -{"year":"2019","title":"CUNI System for the WMT19 Robustness Task","authors":["J Helcl, J Libovický, M Popel - arXiv preprint arXiv:1906.09246, 2019"],"snippet":"… Corpus # Sentences P arallel 109 English-French Corpus 22,520k Europarl 2,007k News Commentary 200k UN Corpus 12,886k Common Crawl 3,224k M onoFrench News Crawl ('08–'14) 37,320k English News Crawl ('11–'17) 127,554k …","url":["https://arxiv.org/pdf/1906.09246"]} -{"year":"2019","title":"Curriculum Learning for Domain Adaptation in Neural Machine Translation","authors":["X Zhang, P Shapiro, G Kumar, P McNamee, M Carpuat… - arXiv preprint arXiv …, 2019"],"snippet":"… and WMT 2017 (Bojar et al., 2017), which contains data from several domains, eg parliamentary proceedings (Europarl, UN Parallel Corpus), political/economic news (news commentary, Rapid corpus), and …","url":["https://arxiv.org/pdf/1905.05816"]} -{"year":"2019","title":"Customizing Neural Machine 
Translation for Subtitling","authors":["E Matusov, P Wilken, Y Georgakopoulou - Proceedings of the Fourth Conference on …, 2019"],"snippet":"… These data included all other publicly available training data, including ParaCrawl, CommonCrawl, EUbookshop, JRCAcquis, EMEA, and other corpora from the OPUS collection … This was done to avoid oversampling …","url":["https://www.aclweb.org/anthology/W19-5209"]} -{"year":"2019","title":"D-NET: A Simple Framework for Improving the Generalization of Machine Reading Comprehension","authors":["H Li, X Zhang, Y Liu, Y Zhang, Q Wang, X Zhou, J Liu…"],"snippet":"… by introducing two-stream self attention. Besides BooksCorpus and Wikipedia, on which the BERT is trained, XLNET uses more corpus in its pretraining, including Giga5, ClueWeb and Common Crawl. In our system, we use …","url":["https://mrqa.github.io/assets/papers/64_Paper.pdf"]} -{"year":"2019","title":"Détection automatique de la thématique et adaptation des modèles de langage","authors":["S ZHANG"],"snippet":"Page 1. Si ZHANG SIGMA 2018/2019 AUTHÔT 52Av. Pierre Sémard 94200 Ivry-sur-Seine Détection automatique de la thématique et adaptation des modèles de langage from 01/03/2019 to 16/08/2019 Confidentiality : yes …","url":["ftp://ftp.irit.fr/IRIT/SAMOVA/INTERNSHIPS/zhang_si_2019.pdf"]} -{"year":"2019","title":"Danish Stance Classification and Rumour Resolution","authors":["AE Lillie, ER Middelboe - arXiv preprint arXiv:1907.01304, 2019"],"snippet":"Page 1. IT University of Copenhagen MSc in Software Development Thesis project KISPECI1SE Danish Stance Classification and Rumour Resolution Authors: Anders E. Lillie aedl@itu.dk Emil R. Middelboe erem@itu.dk …","url":["https://arxiv.org/pdf/1907.01304"]} -{"year":"2019","title":"Data Augmentation in Deep Learning for Hate Speech Detection in Lower Resource Settings","authors":["M Benk"],"snippet":"Page 1. 
Masterarbeit zur Erlangung des akademischen Grades Master of Arts der Philosophischen Fakultät der Universität Zürich Data Augmentation in Deep Learning for Hate Speech Detection in Lower Resource Settings …","url":["https://www.cl.uzh.ch/dam/jcr:57406b34-02c8-496d-9b95-9968cee3a134/benk_ma_data_augmentation.pdf"]} -{"year":"2019","title":"Data4UrbanMobility: Towards Holistic Data Analytics for Mobility Applications in Urban Regions","authors":["N Tempelmeier, Y Rietz, I Lishchuk, T Kruegel… - arXiv preprint arXiv …, 2019"],"snippet":"… Web Event-centric Web markup Annotated Web pages, eg us- ing schema.org. Web Data Commons event subset: 263 × 106 facts until November 2017 Common Crawl ToU RDFa, MicroData Focused crawls Event-centric crawls, news11 …","url":["https://arxiv.org/pdf/1903.12064"]} -{"year":"2019","title":"Debiasing Embeddings for Reduced Gender Bias in Text Classification","authors":["F Prost, N Thain, T Bolukbasi - Proceedings of the First Workshop on Gender Bias in …, 2019"],"snippet":"… 2 Classification Task This work utilizes the BiosBias dataset introduced in (De-Arteaga et al., 2019). This dataset consists of biographies identified within the Common Crawl 397,340 biographies were extracted from sixteen crawls from 2014 to 2018 …","url":["https://www.aclweb.org/anthology/W19-3810"]} -{"year":"2019","title":"DebiasingWord Embeddings Improves Multimodal Machine Translation","authors":["T Hirasawa, M Komachi - arXiv preprint arXiv:1905.10464, 2019"],"snippet":"Page 1. Debiasing Word Embeddings Improves Multimodal Machine Translation Tosho Hirasawa Tokyo Metropolitan University hirasawa-tosho@ed.tmu.ac.jp Mamoru Komachi Tokyo Metropolitan University komachi@tmu.ac.jp Abstract …","url":["https://arxiv.org/pdf/1905.10464"]} -{"year":"2019","title":"Deca: A Garbage Collection Optimizer for In-Memory Data Processing","authors":["X Shi, Z Ke, Y Zhou, H Jin, L Lu, X Zhang, L He, Z Hu… - ACM Transactions on …, 2019"],"snippet":"Page 1. 
3 Deca: A Garbage Collection Optimizer for In-Memory Data Processing XUANHUA SHI and ZHIXIANG KE, Huazhong University of Science and Technology, China YONGLUAN ZHOU, University of Copenhagen, Denmark …","url":["https://dl.acm.org/ft_gateway.cfm?id=3310361&type=pdf"]} -{"year":"2019","title":"DECO: A Dataset of Annotated Spreadsheets for Layout and Table Recognition","authors":["E Koci, M Thiele, J Rehak, O Romero, W Lehner - the 15th IAPR International …, 2019"],"snippet":"… 50 files). The performance is manually assessed per file. 1http://info.nuix.com/Enron. html 2http://commoncrawl.org/ 3http://lemurproject.org/clueweb09.php/ Koci et al. [15] use a dataset of 216 annotated spreadsheets. Unlike …","url":["https://wwwdb.inf.tu-dresden.de/wp-content/uploads/deco_paper.pdf"]} -{"year":"2019","title":"Deep Contextualized Word Embeddings in Transition-Based and Graph-Based Dependency Parsing--A Tale of Two Parsers Revisited","authors":["A Kulmizev, M de Lhoneux, J Gontrum, E Fano, J Nivre - arXiv preprint arXiv …, 2019"],"snippet":"… 2018), who train ELMo on 20 million words randomly sampled from raw WikiDump and Common Crawl datasets for … In other words, while the standalone ELMo models were trained on the tokenized WikiDump and …","url":["https://arxiv.org/pdf/1908.07397"]} -{"year":"2019","title":"Deep Learning for NLP and Speech Recognition","authors":["U Kamath, J Liu, J Whitaker"],"snippet":"Page 1. Uday Kamath · John Liu · James Whitaker Deep Learning for NLP and Speech Recognition Page 2. Deep Learning for NLP and Speech Recognition Page 3. Uday Kamath • John Liu • James Whitaker Deep …","url":["https://link.springer.com/content/pdf/10.1007/978-3-030-14596-5.pdf"]} -{"year":"2019","title":"Deep learning for pollen allergy surveillance from twitter in Australia","authors":["J Rong, S Michalska, S Subramani, J Du, H Wang - BMC Medical Informatics and …, 2019"],"snippet":"… embeddings - as alternative. 
The pre-trained Common Crawl 840B tokens GloVe embeddings were downloaded from the website 2 . Both 50 dimensions (min) and 300 dimensions (max) options were tested. The HF embeddings …","url":["https://link.springer.com/article/10.1186/s12911-019-0921-x"]} -{"year":"2019","title":"Deep learning models for speech recognition","authors":["A Hannun, C Case, J Casper, B Catanzaro, G DIAMOS… - US Patent App. 16/542,243, 2019"],"snippet":"US20190371298A1 - Deep learning models for speech recognition - Google Patents. Deep learning models for speech recognition. Download PDF Info. Publication number US20190371298A1. US20190371298A1 US16/542,243 …","url":["https://patents.google.com/patent/US20190371298A1/en"]} -{"year":"2019","title":"Deep Learning vs. Classic Models on a New Uzbek Sentiment Analysis Dataset","authors":["E Kuriyozov, S Matlatipov, MA Alonso…"],"snippet":"… We use as input the FastText pre-trained word embeddings of size 300 (Grave et al., 2018) for Uzbek language, that were created from Wiki pages and CommonCrawl, 9 which, to our knowledge, are the only available pre-trained …","url":["http://www.grupolys.org/biblioteca/KurMatAloGom2019a.pdf"]} -{"year":"2019","title":"Deep Learning-based Categorical and Dimensional Emotion Recognition for Written and Spoken Text","authors":["BT Atmaja, K Shirai, M Akagi - INA-Rxiv. June, 2019"],"snippet":"… meaning. Glove captured the global corpus statistics from the corpus, for example, a Wikipedia document or a common crawl document. 
In GloVe model, the cost function is given by V ∑ i,j=1 f(Xi,j)(uT i,jvj + bi + cj − log Xi,j)2 (2) …","url":["https://osf.io/fhu29/download/?format=pdf"]} -{"year":"2019","title":"Deep Structured Semantic Model for Recommendations in E-commerce","authors":["A Larionova, P Kazakova, N Nikitinsky - International Conference on Hybrid Artificial …, 2019"],"snippet":"… We generated a vector representation for each text by inferring FastText embeddings [4] from their tokens and averaging them (FastText model is pretrained on the Russian language subset of the Common Crawl corpus [10]) …","url":["https://link.springer.com/chapter/10.1007/978-3-030-29859-3_8"]} -{"year":"2019","title":"Deepening Hidden Representations from Pre-trained Language Models for Natural Language Understanding","authors":["J Yang, H Zhao - arXiv preprint arXiv:1911.01940, 2019","JYH Zhao"],"snippet":"… objective during pre-training on the other hand. In addition to BooksCorpus and English Wikipedia, it also uses Giga5, ClueWeb 2012-B and Common Crawl for pre-training. Trained with dynamic masking, large mini-batches …","url":["https://arxiv.org/pdf/1911.01940","https://deeplearn.org/arxiv/101390/deepening-hidden-representations-from-pre-trained-language-models-for-natural-language-understanding"]} -{"year":"2019","title":"Defending Against Neural Fake News","authors":["R Zellers, A Holtzman, H Rashkin, Y Bisk, A Farhadi… - arXiv preprint arXiv …, 2019"],"snippet":"… Dataset. We present RealNews, a large corpus of news articles from Common Crawl … Thus, we construct one by scraping dumps from Common Crawl, limiting ourselves to the 5000 news domains indexed by Google News …","url":["https://arxiv.org/pdf/1905.12616"]} -{"year":"2019","title":"Deliverable 4.2: Data Integration (v. 1)","authors":["A Haller, JD Fernández, A Polleres, MR Kamdar - Work, 2019"],"snippet":"Page 1. 
Cyber-Physical Social Systems for City-wide Infrastructures Deliverable 4.2: Data Integration (v.1) Authors : Armin Haller, Javier D. Fernández, Axel Polleres, Maulik R. Kamdar Dissemination Level : Public Due date …","url":["http://cityspin.net/wp-content/uploads/2017/10/D4.2-Data-Integration.pdf"]} -{"year":"2019","title":"Design and implementation of an open source Greek POS Tagger and Entity Recognizer using spaCy","authors":["E Partalidou, E Spyromitros-Xioufis, S Doropoulos… - IEEE/WIC/ACM International …, 2019"],"snippet":"… 3.4 Evaluation and comparison of results In the first experiment the model was trained using pretrained vectors extracted from two different sources, Common Crawl and Wikipedia and can be found at the official FastText …","url":["https://dl.acm.org/citation.cfm?id=3352543"]} -{"year":"2019","title":"Detecting Aggression and Toxicity using a Multi Dimension Capsule Network","authors":["S Srivastava, P Khurana - Proceedings of the Third Workshop on Abusive …, 2019"],"snippet":"… The code for tokenization was taken from (Devlin et al., 2018) which seems to properly separate the word tokens and special characters. For training all our classification models, we have used fastText embeddings of dimension 300 trained on a common crawl …","url":["https://www.aclweb.org/anthology/W19-3517"]} -{"year":"2019","title":"Detecting associations between dietary supplement intake and sentiments within mental disorder tweets","authors":["Y Wang, Y Zhao, J Zhang, J Bian, R Zhang - Health Informatics Journal, 2019"],"snippet":"Many patients with mental disorders take dietary supplement, but their use patterns remain unclear. 
In this study, we developed a method to detect signals of associations between dietary supplement...","url":["https://journals.sagepub.com/doi/full/10.1177/1460458219867231"]} -{"year":"2019","title":"Detecting Clitics Related Orthographic Errors in Turkish","authors":["U Arıkan, O Güngör, S Uskudarli"],"snippet":"… For this task, GloVe was used with the dimension size of 300 and window size of 15. The pretrained word vectors for Turkish were obtained from the model trained on Common Crawl and Wikipedia using fastText (Grave et al., 2018) …","url":["https://www.researchgate.net/profile/Onur_Guengoer2/publication/337425054_Detecting_Clitics_Related_Orthographic_Errors_in_Turkish/links/5dd7e92792851c1feda68471/Detecting-Clitics-Related-Orthographic-Errors-in-Turkish.pdf"]} -{"year":"2019","title":"Detecting Hacker Threats: Performance of Word and Sentence Embedding Models in Identifying Hacker Communications","authors":["AL Queiroz, S Mckeever, B Keegan"],"snippet":"… model Source Dim. size MDL-1 SVM WEMB Word2vec Google News 300 MDL-2 SVM WEMB Glove Common Crawl 300 … MDL-4 SVM SEMB InferSent Wikipedia 4096 MDL-5 SVM SEMB SentEncoder Wiki, Web News, SNLI …","url":["http://aics2019.datascienceinstitute.ie/papers/aics_13.pdf"]} -{"year":"2019","title":"Detecting Incivility and Impoliteness in Online Discussions. Classification Approaches for German User Comments.","authors":["A Stoll, M Ziegele, O Quiring - 2019"],"snippet":"Page 1. DETECTING INCIVILITY AND IMPOLITENESS Preprint on SSRN, 22.11.2019 Anke Stoll HHU Düsseldorf anke.stoll@hhu.de Marc Ziegele HHU Düsseldorf marc.ziegele@hhu.de Oliver Quiring JGU Main quiring@uni-mainz.de ABSTRACT …","url":["https://osf.io/preprints/socarxiv/a47ch/download"]} -{"year":"2019","title":"Detecting offensive language using transfer learning","authors":["A de Bruijn, V Muhonen, T Albinonistraat, W Fokkink… - 2019"],"snippet":"Page 1. Detecting offensive language using transfer learning Alissa de Bruijn September 2019 Page 2. 
Master Thesis Business Analytics Detecting offensive language using transfer learning Author: Alissa de Bruijn …","url":["https://beta.vu.nl/nl/Images/stageverslag-bruijn_tcm235-926516.pdf"]} -{"year":"2019","title":"Detecting Relational States in Online Social Networks","authors":["J Zhang, L Tan, X Tao, T Pham, X Zhu, H Li, L Chang - 2018 5th International …, 2019"],"snippet":"… Since the social network we obtain from the repositories of common crawl contains missing links and partial information, stochastic estimations are used to measure the accuracy and reliability of our experimental MVVA results [19] …","url":["https://ieeexplore.ieee.org/abstract/document/8697237/"]} -{"year":"2019","title":"Detecting Topic-Oriented Speaker Stance in Conversational Speech}}","authors":["C Lai, B Alex, JD Moore, L Tian, T Hori, G Francesca - Proc. Interspeech 2019, 2019"],"snippet":"… 3.3. Lexical Features We build lexical representations over turns and topic segments using 300 dimensional GloVe word embeddings (Common Crawl, 840B tokens) [26]. We perform basic tokenization to map between the CallHome transcripts and word embeddings …","url":["https://www.isca-speech.org/archive/Interspeech_2019/pdfs/2632.pdf"]} -{"year":"2019","title":"Detection of contradictions in pairs of texts in Kazakh","authors":["Y Yamalutdinova - 2019"],"snippet":"Page 1. BACHELOR THESIS Yuliya Yamalutdinova Detection of contradictions in pairs of texts in Kazakh Institute of Formal and Applied Linguistics Supervisor of the bachelor thesis: Mgr. Rudolf Rosa, Ph.D. Study programme: Computer Science …","url":["https://dspace.cuni.cz/bitstream/handle/20.500.11956/109076/130266752.pdf?sequence=1"]} -{"year":"2019","title":"Determining How Citations Are Used in Citation Contexts","authors":["M Färber, A Sampath - International Conference on Theory and Practice of …, 2019"],"snippet":"… 2. See https://fasttext.cc/. 
The pretrained vectors were trained on Common Crawl and Wikipedia using the CBOW model of fastText. fastText operates at the character level, and therefore can generate vectors for words not seen in the training corpus …","url":["https://link.springer.com/chapter/10.1007/978-3-030-30760-8_38"]} -{"year":"2019","title":"Development of a Song Lyric Corpus for the English Language","authors":["MAG Rodrigues, A de Paiva Oliveira, A Moreira - International Conference on …, 2019"],"snippet":"… languages. According to the authors, the texts that compose the corpus were extracted from CommonCrawl (commoncrawl.org), the largest publicly available general Web crawl to date with about 2 billion crawled URLs. The …","url":["https://link.springer.com/chapter/10.1007/978-3-030-23281-8_33"]} -{"year":"2019","title":"Development of an End-to-End Deep Learning Pipeline","authors":["M Nitsche, S Halbritter"],"snippet":"Page 1. Development of an End-to-End Deep Learning Pipeline Matthias Nitsche, Stephan Halbritter {matthias.nitsche, stephan.halbritter}@hawhamburg.de Hamburg University of Applied Sciences, Department of …","url":["https://users.informatik.haw-hamburg.de/~ubicomp/projekte/master2019-proj/nitsche-halbritter.pdf"]} -{"year":"2019","title":"Digital audio track suggestions for moods identified using analysis of objects in images from video content","authors":["N Brochu - US Patent App. 15/392,705, 2019"],"snippet":"… the response from the natural language model 240. Exemplary training corpuses of words can include, for example, Common Crawl (eg, 840B tokens, 2.2M vocabulary terms). The natural language model 240 can be improved …","url":["http://www.freepatentsonline.com/10276189.html"]} -{"year":"2019","title":"Diversicon: Pluggable Lexical Domain Knowledge","authors":["G Bella, F McNeill, D Leoni, FJQ Real, F Giunchiglia - Journal on Data Semantics, 2019"],"snippet":"Page 1. 
Journal on Data Semantics https://doi.org/10.1007/s13740-019-00107-1 ORIGINAL ARTICLE Diversicon: Pluggable Lexical Domain Knowledge Gábor Bella1 · Fiona McNeill2 · David Leoni1 · Francisco José Quesada Real3 · Fausto Giunchiglia1 …","url":["https://link.springer.com/article/10.1007/s13740-019-00107-1"]} -{"year":"2019","title":"Do It Like a Syntactician: Using Binary Gramaticality Judgements to Train Sentence Encoders and Assess Their Sensitivity to Syntactic Structure","authors":["P Gonzalez Martinez - 2019"],"snippet":"Page 1. City University of New York (CUNY) CUNY Academic Works Dissertations, Theses, and Capstone Projects Graduate Center 9-2019 Do It Like a Syntactician: Using Binary Gramaticality Judgements to Train …","url":["https://academicworks.cuny.edu/cgi/viewcontent.cgi?article=4521&context=gc_etds"]} -{"year":"2019","title":"Do It Like a Syntactician: Using Binary Grammaticality Judgements to Train Sentence Encoders and Assess Their Sensitivity to Syntactic Structure","authors":["PG Martinez - 2019"],"snippet":"Page 1. Do it like a syntactician: using binary grammaticality judgments to train sentence encoders and assess their sensitivity to syntactic structure by Pablo González Mart´ınez A dissertation submitted to the Graduate Faculty in Linguistics in partial fulfillment of the …","url":["http://search.proquest.com/openview/f9bad35a6cc78921f32a9f8f6e0efde3/1?pq-origsite=gscholar&cbl=18750&diss=y"]} -{"year":"2019","title":"Do We Really Need Fully Unsupervised Cross-Lingual Embeddings?","authors":["I Vulić, G Glavaš, R Reichart, A Korhonen - arXiv preprint arXiv:1909.01638, 2019"],"snippet":"… Monolingual Embeddings. We use the 300-dim vectors of Grave et al. 
(2018) for all 15 languages, pretrained on Common Crawl and Wikipedia with fastText (Bojanowski et al., 2017).7 We trim all 5While BLI is an intrinsic task, as discussed by Glavaš et al …","url":["https://arxiv.org/pdf/1909.01638"]} -{"year":"2019","title":"Document Embedding Models on Environmental Legal Documents","authors":["S Kralj, Ž Urbancic, E Novak, K Kenda"],"snippet":"… Instead of having aa large vocabulary of pre-computed word embeddings trained on Wikipedia and Common Crawl, this newly trained model is trained on documents from a more specific domain - resulting in a vocabulary …","url":["http://ailab.ijs.si/dunja/SiKDD2019/Papers/Kralj_Urbancic_Final.pdf"]} -{"year":"2019","title":"Document Summarization Using Sentence-Level Semantic Based on Word Embeddings","authors":["K Al-Sabahi, Z Zuping - International Journal of Software Engineering and …, 2019"],"snippet":"… Training is performed on aggregated global word-word co-occurrence statistics from a corpus. In this work, we use the one trained on Common Crawl (840B tokens, 2.2M vocab, cased, 300d vectors, 2.03 GB download): glove.840B. 300d.zip …","url":["https://www.worldscientific.com/doi/abs/10.1142/S0218194019500086"]} -{"year":"2019","title":"Domain adaptation for part-of-speech tagging of noisy user-generated text","authors":["L März, D Trautmann, B Roth - arXiv preprint arXiv:1905.08920, 2019"],"snippet":"… The pretrained vectors for German are based on Wikipedia articles and data from Common Crawl3. We obtain 97.988 different embeddings for the tokens in TIGER and the Twitter corpus of which 75.819 were already contained …","url":["https://arxiv.org/pdf/1905.08920"]} -{"year":"2019","title":"Domain-specific word embeddings for patent classification","authors":["J Risch, R Krestel - Data Technologies and Applications, 2019"],"snippet":"… used to train word embeddings. 
It contains more than twice the number of tokens of the English Wikipedia (16bn) and is only exceeded by the Common Crawl data set, which consists of 600bn tokens. We assume that the embeddings …","url":["https://www.emeraldinsight.com/doi/abs/10.1108/DTA-01-2019-0002"]} -{"year":"2019","title":"Dual Monolingual Cross-Entropy Delta Filtering of Noisy Parallel Data","authors":["A Axelrod, A Kumar, S Sloto - Proceedings of the Fourth Conference on Machine …, 2019"],"snippet":"… gual English corpus comparable in size and content to the Sinhala one by randomly selecting 150k lines from Wikipedia and 6M lines from Common Crawl … Each SentencePiece model was trained on 1M lines of monolin …","url":["https://www.aclweb.org/anthology/W19-5433"]} -{"year":"2019","title":"Dynamic Packed Compact Tries Revisited","authors":["K Tsuruta, D Köppl, S Kanda, Y Nakashima, S Inenaga… - arXiv preprint arXiv …, 2019"],"snippet":"… name column. • commoncrawl is a web crawl containing the ASCII-encoded content (without HTML tags) of random web pages extracted from Common Crawl. • vital is the main text extracted from the most vital Wikipedia articles …","url":["https://arxiv.org/pdf/1904.07467"]} -{"year":"2019","title":"Dynamically Route Hierarchical Structure Representation to Attentive Capsule for Text Classification","authors":["W Zheng, Z Zheng, H Wan, C Chen"],"snippet":"Page 1. Dynamically Route Hierarchical Structure Representation to Attentive Capsule for Text Classification Wanshan Zheng1,2 , Zibin Zheng1,2 , Hai Wan1 , Chuan Chen1,2 1School of Data and Computer Science, Sun Yat …","url":["https://www.ijcai.org/proceedings/2019/0759.pdf"]} -{"year":"2019","title":"EasyChair Preprint","authors":["NS Resolution - 2019"],"snippet":"… Fancellu et al. (2016) show in their work that Page 4. Crawl data set with 840B tokens. Additionally we will try 300-dimensional pre-trained fastText2 that were also trained on Common Crawl but on a subset of 600B tokens. 
This differs from Fancellu et al …","url":["https://easychair.org/publications/preprint_download/QHml"]} -{"year":"2019","title":"EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks","authors":["JW Wei, K Zou - arXiv preprint arXiv:1901.11196, 2019"],"snippet":"… We suspect that EDA will work with any thesaurus. Word embeddings. We use 300-dimensional Common-Crawl word embeddings trained using GloVe (Pennington et al., 2014). We suspect that EDA will work with any pre-trained word embeddings. CNN …","url":["https://arxiv.org/pdf/1901.11196"]} -{"year":"2019","title":"Edge Computing for User-Centric Secure Search on Cloud-Based Encrypted Big Data","authors":["S Ahmad, SM Zobaed, R Gottumukkala, MA Salehi - arXiv preprint arXiv:1908.03668, 2019"],"snippet":"… We used two datasets, namely Amazon Common Crawl Corpus (ACCC) [35] and Request For Comments (RFC) [36], that have distinct characteristics and volumes. ACCC is ≈ 150 terabytes, contains web contents, and is not domain-specific …","url":["https://arxiv.org/pdf/1908.03668"]} -{"year":"2019","title":"EDUCE: Explaining model Decisions through Unsupervised Concepts Extraction","authors":["D Bouchacourt, L Denoyer - arXiv preprint arXiv:1905.11852, 2019"],"snippet":"… We test on the full test dataset composed of 7, 600 samples. We use pre-trained word vectors trained on Common Crawl [8], and keep them fixed … We use pre-trained word vectors trained on Common Crawl [8], and keep them fixed …","url":["https://arxiv.org/pdf/1905.11852"]} -{"year":"2019","title":"Efficient Classification and Unsupervised Keyphrase Extraction for Web Pages","authors":["T Haarman - 2019"],"snippet":"Page 1. 
MASTER'S THESIS Efficient Classification and Unsupervised Keyphrase Extraction for Web Pages Tim Haarman s2404184 Department of Artificial Intelligence University of Groningen, The Netherlands Primary …","url":["https://www.ai.rug.nl/~mwiering/Thesis_Tim_Haarman.pdf"]} -{"year":"2019","title":"Efficient Contextual Representation Learning With Continuous Outputs","authors":["LH Li, PH Chen, CJ Hsieh, KW Chang - Transactions of the Association for …, 2019"],"snippet":"Create a new account. Email. Returning user. Can't sign in? Forgot your password? Enter your email address below and we will send you the reset instructions. Email. Cancel. If the address matches an existing account you will …","url":["https://www.mitpressjournals.org/doi/full/10.1162/tacl_a_00289"]} -{"year":"2019","title":"Efficient Contextual Representation Learning Without Softmax Layer","authors":["LH Li, PH Chen, CJ Hsieh, KW Chang - arXiv preprint arXiv:1902.11269, 2019"],"snippet":"… The output layer is a sampled softmax with 8192 negative samples per batch. This model is provided in AllenNLP by Peters et al. (2018a). • ELMO-S: The input layer is the FastText embedding trained on Common Crawl (Mikolovetal …","url":["https://arxiv.org/pdf/1902.11269"]} -{"year":"2019","title":"Efficient Sentence Embedding using Discrete Cosine Transform","authors":["N Almarwani, H Aldarmaki, M Diab - arXiv preprint arXiv:1909.03104, 2019"],"snippet":"… CoordInv Coordination Inversion Table 1: Probing Tasks 3.2 Experimental setup For the word embeddings, we use pre-trained FastText embeddings of size 300 (Mikolov et al., 2018) trained on Common-Crawl. 
We generate DCT …","url":["https://arxiv.org/pdf/1909.03104"]} -{"year":"2019","title":"Embedding Imputation with Grounded Language Information","authors":["Z Yang, C Zhu, V Sachidananda, E Darve - arXiv preprint arXiv:1906.03753, 2019"],"snippet":"… KG2Vec 0.02% 7% 0.04% 12% 58.6 56.9 60.1 54.3 GloVe Common Crawl 1% 29% 2% 44% 44.0 33.0 45.1 27.3 … We test on two types of pre-trained word vectors GloVe (Common crawl, cased 300d) and ConceptNet Numberbatch (300d) …","url":["https://arxiv.org/pdf/1906.03753"]} -{"year":"2019","title":"EmbNum+: Effective, Efficient, and Robust Semantic Labeling for Numerical Values","authors":["P Nguyen, K Nguyen, R Ichise, H Takeda - New Generation Computing, 2019"],"snippet":"… Data portals. For example, 233 million tables were extracted from the July 2015 version of the Common Crawl [10].1 Additionally, 200,000 tables from 232 Open Data portals were analyzed by Mitlohner et al. [12]. These resources …","url":["https://link.springer.com/article/10.1007/s00354-019-00076-w"]} -{"year":"2019","title":"Emerging Cross-lingual Structure in Pretrained Language Models","authors":["A Conneau, S Wu, H Li, L Zettlemoyer, V Stoyanov - … of the 58th Annual Meeting of …, 2020","S Wu, A Conneau, H Li, L Zettlemoyer, V Stoyanov - arXiv preprint arXiv:1911.01464, 2019"],"snippet":"… We consider domain difference by training on Wikipedia for English and a random subset of Common Crawl of the same size for the other languages (Wiki-CC). 
We also consider a model trained with Wikipedia only, the same as XLM (Default) for comparison …","url":["https://arxiv.org/pdf/1911.01464","https://www.aclweb.org/anthology/2020.acl-main.536.pdf"]} -{"year":"2019","title":"Emoji Powered Capsule Network to Detect Type and Target of Offensive Posts in Social Media","authors":["H Hettiarachchi, T Ranasinghe"],"snippet":"… Also character embeddings handle in- frequent words better than word2vec embedding as later one suffers from lack of enough training opportunity for those rare words. We used fasttext embeddings pre trained on Common Crawl (Mikolov et al., 2018) …","url":["https://www.researchgate.net/profile/Tharindu_Ranasinghe2/publication/336775156_Emoji_Powered_Capsule_Network_to_Detect_Type_and_Target_of_Offensive_Posts_in_Social_Media/links/5db1a79992851c577eba8219/Emoji-Powered-Capsule-Network-to-Detect-Type-and-Target-of-Offensive-Posts-in-Social-Media.pdf"]} -{"year":"2019","title":"EmoLabel: Semi-Automatic Methodology for Emotion Annotation of Social Media Text","authors":["L Canales, W Daelemans, E Boldrini, P Martínez-Barco - IEEE Transactions on …, 2019"],"snippet":"Page 1. 1949-3045 (c) 2019 IEEE. Personal use is permitted, but republication/ redistribution requires IEEE permission. See http://www.ieee.org/ publications_standards/publications/rights/index.html for more information. This …","url":["https://ieeexplore.ieee.org/abstract/document/8758380/"]} -{"year":"2019","title":"EMOMINER at SemEval-2019 Task 3: A Stacked BiLSTM Architecture for Contextual Emotion Detection in Text","authors":["N Chakravartula, V Indurthi - Proceedings of the 13th International Workshop on …, 2019"],"snippet":"… them with GloVe vectors. As a result, Glove vectors will have syntactic information of words (Rezaeinia et al., 2017). 
3.2 Feature Extraction • Word Embeddings: Glove840B - common crawl (Pennington et al., 2014) pre-trained …","url":["https://www.aclweb.org/anthology/S19-2033"]} -{"year":"2019","title":"Emotional Embeddings: Refining Word Embeddings to Capture Emotional Content of Words","authors":["A Seyeditabari, N Tabari, S Gholizadeh, W Zadrozny - arXiv preprint arXiv …, 2019"],"snippet":"… vector spaces used here are: • Word2Vec trained full English Wikipedia dump • GloVe from their own website • fastText trained with subword information on Common Crawl • ConceptNet Numberbatch It is clear that each emotionally …","url":["https://arxiv.org/pdf/1906.00112"]} -{"year":"2019","title":"Encoder-Decoder Network with Cross-Match Mechanism for Answer Selection","authors":["Z Xie, X Yuan, J Wang, S Ju - China National Conference on Chinese Computational …, 2019"],"snippet":"… 4.2 Implementation Details. We initialized word embedding with 300d-GloVe vectors pre-trained from the 840B Common Crawl corpus [8], while the word embeddings for the out-of-vocabulary words were initialized randomly …","url":["https://link.springer.com/chapter/10.1007/978-3-030-32381-3_6"]} -{"year":"2019","title":"End-to-End Learning of Co-Speech Gesture Generation for Humanoid Robots","authors":["Y Yoon, WR Ko, M Jang, J Lee, J Kim"],"snippet":"… We used the pretrained word embedding model GloVe, trained on the Common Crawl corpus [5]. The dimension of word embedding is 300, and a zero vector is used for unknown words. A gesture is represented as a sequence of human poses …","url":["http://robotics.auckland.ac.nz/wp-content/uploads/2018/06/Final_ETRI_YoungwooYoon.pdf"]} -{"year":"2019","title":"End-to-end Neural Information Retrieval","authors":["W Yang - 2019"],"snippet":"Page 1. 
End-to-end Neural Information Retrieval by Wei Yang A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the degree of Master in Computer Science Waterloo, Ontario, Canada, 2019 c Wei Yang 2019 Page 2 …","url":["https://uwspace.uwaterloo.ca/bitstream/handle/10012/14597/Yang_Wei.pdf?sequence=4&isAllowed=y"]} -{"year":"2019","title":"End-to-End Speech Recognition","authors":["U Kamath, J Liu, J Whitaker - Deep Learning for NLP and Speech Recognition, 2019"],"snippet":"… of certain words. Therefore, an n-gram language model was trained using the KenLM [Hea+ 13] toolkit on the Common Crawl Repository, 1 using the 400,000 most frequent words from 250 million lines of text. The decoding …","url":["https://link.springer.com/chapter/10.1007/978-3-030-14596-5_12"]} -{"year":"2019","title":"English-Czech Systems in WMT19: Document-Level Transformer","authors":["M Popel, D Macháček, M Auersperger, O Bojar… - arXiv preprint arXiv …, 2019"],"snippet":"… brevity. 
sentence words (k) data set pairs (k) EN CS CzEng 1.7 57 065 618 424 543 184 Europarl v7 647 15 625 13 000 News Commentary v12 211 4 544 4 057 CommonCrawl 162 3 349 2 927 WikiTitles 361 896 840 EN NewsCrawl …","url":["https://arxiv.org/pdf/1907.12750"]} -{"year":"2019","title":"Enhancing AMR-to-Text Generation with Dual Graph Representations","authors":["LFR Ribeiro, C Gardent, I Gurevych - arXiv preprint arXiv:1909.00352, 2019"],"snippet":"… 5 Experiments and Discussion Implementation Details We extract vocabularies (size of 20,000) from the training sets and initialize the node embeddings from GloVe word em- beddings (Pennington et al., 2014) on Common Crawl …","url":["https://arxiv.org/pdf/1909.00352"]} -{"year":"2019","title":"Enhancing Semantic Word Representations by Embedding Deeper Word Relationships","authors":["A Nugaliyadde, KW Wong, F Sohel, H Xie - arXiv preprint arXiv:1901.07176, 2019"],"snippet":"… 2 https://commoncrawl.org/ differentiating similarity from association and relatedness which is reflected in table 1. The context to create the word embedding in order to test on SimLex-999 is created based on Common …","url":["https://arxiv.org/pdf/1901.07176"]} -{"year":"2019","title":"Environmental hazards, rigid institutions, and transformative change: How drought affects the consideration of water and climate impacts in infrastructure management","authors":["N Ulibarri, TA Scott - Global Environmental Change, 2019"],"snippet":"Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0959378019302213"]} -{"year":"2019","title":"eTranslation's Submissions to the WMT 2019 News Translation Task","authors":["C Oravecz, K Bontcheva, A Lardilleux, L Tihanyi… - Proceedings of the Fourth …, 2019"],"snippet":"… En→De the reduction in ParaCrawl was from 31M to 18M segments and in CommonCrawl from 2.3M to 1.4M segments with a drop of 0.2 BLEU points compared to us- ing the full sets3. 
No additional cleaning was ap- plied to …","url":["https://www.aclweb.org/anthology/W19-5334"]} -{"year":"2019","title":"Europarl-ST: A Multilingual Corpus For Speech Translation Of Parliamentary Debates","authors":["J Iranzo-Sánchez, JA Silvestre-Cerdà, J Jorge… - arXiv preprint arXiv …, 2019"],"snippet":"… De↔Fr eubookshop, JRC-Acquis, 14.3 TildeModel En↔Es commoncrawl, eubookshop, 21.1 EU-TT2, UN, Wikipedia En↔Fr commoncrawl, giga, 38.2 undoc, news-commentary Es↔Fr DGT, eubookshop, 37.2 JRC-Acquis, UNPC …","url":["https://arxiv.org/pdf/1911.03167"]} -{"year":"2019","title":"Evaluating Commonsense in Pre-trained Language Models","authors":["X Zhou, Y Zhang, L Cui, D Huang - arXiv preprint arXiv:1911.11931, 2019"],"snippet":"… Note that XLNet-base is trained with the same data as BERT, while XLNet-large is trained with a larger dataset that consists of 32.98B subword pieces coming from Wiki, BookCorpus, Giga5, ClueWeb, and Common Crawl. RoBERTa (Liu et al …","url":["https://arxiv.org/pdf/1911.11931"]} -{"year":"2019","title":"Evaluating KGR10 Polish word embeddings in the recognition of temporal expressions using BiLSTM-CRF.","authors":["J Kocoń, M Gawor - arXiv preprint arXiv:1904.04055, 2019"],"snippet":"… The second one, called FASTTEXT4, is original FastText word embeddings set, created for 157 languages (including Polish). 
Authors used Wikipedia and Common Crawl5 as the linguistic data source … C2 Common Crawl …","url":["https://arxiv.org/pdf/1904.04055"]} -{"year":"2019","title":"Evaluating the Supervised and Zero-shot Performance of Multi-lingual Translation Models","authors":["C Hokamp, J Glover, D Gholipour - arXiv preprint arXiv:1906.09675, 2019"],"snippet":"… We use all available parallel data from the WMT19 news-translation task for training, with the exception of commoncrawl, which we found to be very noisy after manually checking a sample of the data, and paracrawl, which …","url":["https://arxiv.org/pdf/1906.09675"]} -{"year":"2019","title":"Evaluation of basic modules for isolated spelling error correction in Polish texts","authors":["S Rutkowski - arXiv preprint arXiv:1905.10810, 2019"],"snippet":"… How this representation is constructed is informed by the whole corpus on which the embedder was trained. The pretrained ELMo model that we used (Che et al., 2018) was trained on Wikipedia and Common Crawl corpora of Polish …","url":["https://arxiv.org/pdf/1905.10810"]} -{"year":"2019","title":"Evaluation of Czech Distributional Thesauri","authors":["P Rychlý - RASLAN 2019 Recent Advances in Slavonic Natural …, 2019"],"snippet":"… The results are summarized in Table 2. The czTenTen12 corpus was evaluated with Sketch Engine thesaurus and also with word vectors compiled by FastText. We have also included prebuild model from Common Crawl. Table …","url":["http://raslan2019.nlp-consulting.net/proceedings/raslan19.pdf#page=145","https://nlp.fi.muni.cz/raslan/raslan19.pdf#page=145"]} -{"year":"2019","title":"Evaluation of State Of Art Open-source ASR Engines with Local Inferencing","authors":["B Rizk"],"snippet":"Page 1. 
Institute Of Information Systems (iisys) Hof University in exchange for Media Engineering and Technology Faculty German University in Cairo Evaluation of State Of Art Open-source ASR Engines with Local …","url":["https://www.researchgate.net/profile/Basem_Rizk/publication/335524542_Evaluation_of_State_Of_Art_Open-source_ASR_Engines_with_Local_Inferencing/links/5d6aa4ae299bf1808d5c87dd/Evaluation-of-State-Of-Art-Open-source-ASR-Engines-with-Local-Inferencing.pdf"]} -{"year":"2019","title":"Evaluation of vector embedding models in clustering of text documents","authors":["T Walkowiak, M Gniewkowski"],"snippet":"… The second group of sources of word2vec models for Polish are web pages of word embedding tools like fastText, ELMo and BERT. They were trained on Polish Common Crawl and Wikipedia. However, the BERT …","url":["https://acl-bg.org/proceedings/2019/RANLP%202019/pdf/RANLP149.pdf"]} -{"year":"2019","title":"Event-Argument Linking in Hindi for Information Extraction in Disaster Domain","authors":["SK Sahoo, S Saha, A Ekbal, P Bhattacharyya…"],"snippet":"… 3.3 Word Embedding For word embedding (WE) of each word, we use pre-trained fastText [5] word vectors. These embeddings were trained on Hindi Common Crawl and Wikipedia dataset. The size of the word embedding used in our experiments is 300 …","url":["https://www.iitp.ac.in/~ai-nlp-ml/papers/Sovan_CICLing_2019.pdf"]} -{"year":"2019","title":"Evolution of the PAN Lab on Digital Text Forensics","authors":["P Rosso, M Potthast, B Stein, E Stamatatos, F Rangel… - … Retrieval Evaluation in a …, 2019"],"snippet":"… The static web search environment is comprised of the web search engine ChatNoir, which indexes the ClueWeb 2009, the ClueWeb 2012, and (as of 2017) the CommonCrawl, delivering search results in milliseconds while …","url":["https://link.springer.com/chapter/10.1007/978-3-030-22948-1_19"]} -{"year":"2019","title":"Example-Driven Question Answering","authors":["D Wang - 2019"],"snippet":"Page 1. 
August 19, 2019 DRAFT Example-Driven Question Answering Di Wang August 2019 School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 Thesis Committee: Eric Nyberg (chair) (Carnegie Mellon …","url":["http://www.cs.cmu.edu/~diw1/thesis.pdf"]} -{"year":"2019","title":"Explicit Discourse Argument Extraction for German","authors":["P Bourgonje, M Stede - International Conference on Text, Speech, and …, 2019"],"snippet":"… The word embeddings are trained on Common Crawl and Wikipedia [6]. We generated the part-of-speech embeddings from the TIGER corpus [4]. We use a CNN with four fully connected layers. Training this on all classes from Table 1 results in an accuracy of 94.52 …","url":["https://link.springer.com/chapter/10.1007/978-3-030-27947-9_3"]} -{"year":"2019","title":"Exploitation vs. exploration—computational temporal and semantic analysis explains semantic verbal fluency impairment in Alzheimer's disease","authors":["J Tröger, N Linz, A König, P Robert, J Alexandersson… - Neuropsychologia, 2019"],"snippet":"… For deriving semantic metrics, the semantic distance between produced words was calculated based on a fastText (Joulin et al., 2016) neural word embedding, pre-trained on the French Common Crawl and Wikipedia corpora (Grave et al., 2018; Linz et al., 2017) …","url":["https://www.sciencedirect.com/science/article/pii/S0028393218305116"]} -{"year":"2019","title":"Exploiting EuroVoc's Hierarchical Structure for Classifying Legal Documents","authors":["E Filtz, S Kirrane, A Polleres, G Wohlgenannt - … \" On the Move to Meaningful Internet …, 2019","G Wohlgenannt - On the Move to Meaningful Internet Systems: OTM …"],"snippet":"… First, we tested large-scale pre-trained language models trained with generalpurpose text corpora such as GoogleNews and the CommonCrawl, but as expected both performed badly on the legal dataset, for example the Common 
…","url":["http://books.google.de/books?hl=en&lr=lang_en&id=hm21DwAAQBAJ&oi=fnd&pg=PA164&dq=commoncrawl&ots=pdUzWNZpkR&sig=nBv58MNJuj5jkkROfHpNIYLbyTs","https://link.springer.com/chapter/10.1007/978-3-030-33246-4_10"]} -{"year":"2019","title":"Exploiting knowledge graphs for entity-centric prediction","authors":["S Jiang - 2018"],"snippet":"Page 1. © 2018 Shan Jiang Page 2. EXPLOITING KNOWLEDGE GRAPHS FOR ENTITY-CENTRIC PREDICTION BY SHAN JIANG DISSERTATION Submitted in partial fulfillment of the requirements for the degree of Doctor of …","url":["https://www.ideals.illinois.edu/bitstream/handle/2142/102463/JIANG-DISSERTATION-2018.pdf?sequence=1"]} -{"year":"2019","title":"Exploiting Temporal Relationships in Video Moment Localization with Natural Language","authors":["S Zhang, J Su, J Luo - arXiv preprint arXiv:1908.03846, 2019"],"snippet":"… extracted from VGG [24] fc7 layer, optical flow features are extracted from the penultimate layer [27] and the 300-d Glove feature [21] pretrained on Common Crawl (42 billion tokens) are used as the word embedding. The segment …","url":["https://arxiv.org/pdf/1908.03846"]} -{"year":"2019","title":"Exploiting the Hierarchical Structure of a Thesaurus for Document Classification","authors":["E Filtz, S Kirrane, A Polleres, G Wohlgenannt"],"snippet":"… First, we tested large-scale pre-trained language models trained with generalpropose text corpora such as GoogleNews and the CommonCrawl, but as ex- pected both performed badly on the legal dataset, for example the …","url":["https://aic.ai.wu.ac.at/~polleres/publications/filt-etal-2019COOPIS.pdf"]} -{"year":"2019","title":"Explore FREDDY","authors":["M Günther, M Thiele, W Lehner, Z Yanakiev - BTW 2019, 2019"],"snippet":"… The configuration of the search function is defined in the sidebar (Figure 3b) just as in the query view. 
4 Screencast on our FREDDY website https://wwwdb.inf.tu-dresden.de/research-projects/freddy/ 5 https://dblp.uni …","url":["https://dl.gi.de/bitstream/handle/20.500.12116/21558/E08-1.pdf?sequence=1&isAllowed=y"]} -{"year":"2019","title":"Explore FREDDY: Fast Word Embeddings in Database Systems","authors":["M Günther, Z Yanakiev, M Thiele, W Lehner"],"snippet":"… The configuration of the search function is defined in the sidebar (Figure 3b) just as in the query view. 4 Screencast on our FREDDY website https://wwwdb.inf.tu-dresden.de/research-projects/freddy/ 5 https://dblp.uni …","url":["https://btw.informatik.uni-rostock.de/download/tagungsband/E08-1.pdf"]} -{"year":"2019","title":"Exploring Numeracy in Word Embeddings","authors":["A Naik, A Ravichander, C Rose, E Hovy - Proceedings of the 57th Conference of the …, 2019"],"snippet":"… FastText (Bojanowski et al., 2017): Extended Skipgram model representing words as character n-grams to incorporate sub-word information. We evaluate Wikipedia and Common Crawl variants. 3.1 Retrained Word Vectors …","url":["https://www.aclweb.org/anthology/P19-1329"]} -{"year":"2019","title":"Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer","authors":["C Raffel, N Shazeer, A Roberts, K Lee, S Narang… - arXiv preprint arXiv …, 2019"],"snippet":"… unsupervised pre-training for NLP is particularly attractive because unlabeled text data is available en masse thanks to the Internet – for example, the Common Crawl project2 produces about 20TB of text data extracted from web pages each month …","url":["https://arxiv.org/pdf/1910.10683"]} -{"year":"2019","title":"Extending Cross-Domain Knowledge Bases with Long Tail Entities using Web Table Data","authors":["Y Oulabi, C Bizer - genre, 2019"],"snippet":"… In a second experiment, we apply the system to a large corpus of web tables extracted from the Common Crawl. 
This experiment allows us to get an overall im- pression of the potential of web tables for augmenting knowledge bases with long tail entities …","url":["https://www.uni-mannheim.de/media/Einrichtungen/dws/Files_Research/Web-based_Systems/pub/OulabiBizer-LongTailEntities-EDBT2019.pdf"]} -{"year":"2019","title":"Extracting and Analyzing Context Information in User-Support Conversations on Twitter","authors":["D Martens, W Maalej - arXiv preprint arXiv:1907.13395, 2019"],"snippet":"… As the list of marketing names also includes common words (eg, 'five', 'go', or 'plus'), we used the natural language processing library spaCy [33] to remove words that appear in the vocabulary of the included …","url":["https://arxiv.org/pdf/1907.13395"]} -{"year":"2019","title":"Extracting Novel Facts from Tables for Knowledge Graph Completion (Extended version)","authors":["B Kruit, P Boncz, J Urbani - arXiv preprint arXiv:1907.00083, 2019"],"snippet":"… The first one is the T2D dataset [23], which contains a subset of the WDC Web Tables Corpus – a set of tables extracted from the CommonCrawl web scrape6. We use the latest available version of this dataset (v2, released 2017/02). In …","url":["https://arxiv.org/pdf/1907.00083"]} -{"year":"2019","title":"Extracting Novel Facts from Tables for Knowledge Graph Completion","authors":["B Kruit, P Boncz, J Urbani - International Semantic Web Conference, 2019"],"snippet":"… The first one is the T2D dataset [25], which contains a subset of the WDC Web Tables Corpus – a set of tables extracted from the CommonCrawl web scrape 2 . 
We use the latest available version of this dataset (v2, released 2017/02) …","url":["https://link.springer.com/chapter/10.1007/978-3-030-30793-6_21"]} -{"year":"2019","title":"Facebook AI's WAT19 Myanmar-English Translation Task Submission","authors":["PJ Chen, J Shen, M Le, V Chaudhary, A El-Kishky… - arXiv preprint arXiv …, 2019"],"snippet":"… For Myanmar language, we take five snapshots of the Commoncrawl dataset and combine them with the raw data from Buck et al. (2014) … The Myanmar monolingual data we collect from Commoncrawl contains text in both Unicode and Zawgyi encodings …","url":["https://arxiv.org/pdf/1910.06848"]} -{"year":"2019","title":"Facebook FAIR's WMT19 News Translation Task Submission","authors":["N Ng, K Yee, A Baevski, M Ott, M Auli, S Edunov - arXiv preprint arXiv:1907.06616, 2019"],"snippet":"… We train two language models LI and LN on Newscrawl and Commoncrawl respectively, then score every sentence s in Commoncrawl by HI(s)−HN (s). We select a cu- toff of 0.01, and use all sentences that score higher than …","url":["https://arxiv.org/pdf/1907.06616"]} -{"year":"2019","title":"Facilitating access to health web pages with different language complexity levels","authors":["M Alfano, B Lenzitti, D Taibi, M Helfert - 2019"],"snippet":"… The Web Data Commons (WDC) (Meusel, 2014) contains all Microformat, Microdata and RDFa data extracted from the open repository of web crawl data named Common Crawl (CC)16 … 15 http://webdatacommons.org/ 16 http://commoncrawl.org …","url":["http://doras.dcu.ie/23104/1/ICT4AWE_2019_30_CR.pdf"]} -{"year":"2019","title":"Fast and Accurate Network Embeddings via Very Sparse Random Projection","authors":["H Chen, SF Sultan, Y Tian, M Chen, S Skiena - arXiv preprint arXiv:1908.11512, 2019"],"snippet":"… WWW-200K and WWW-10K [11]: these graphs are derived from the Web graph provided by Common Crawl, where the nodes are hostnames and the edges are the hyperlinks between these websites. 
For simplicity, we treat this graph as an undirected graph …","url":["https://arxiv.org/pdf/1908.11512"]} -{"year":"2019","title":"Faster Neural Network Training with Data Echoing","authors":["D Choi, A Passos, CJ Shallue, GE Dahl - arXiv preprint arXiv:1907.05550, 2019"],"snippet":"… 2http://commoncrawl.org/2017/07/june-2017-crawl-archive-now-available/ 3Each time a training example is read from disk, it counts as a fresh example. 420k steps for LM1B, 60k for Common Crawl, 110k for ImageNet, 150k for CIFAR-10, and 30k for COCO …","url":["https://arxiv.org/pdf/1907.05550"]} -{"year":"2019","title":"FastSV: A Distributed-Memory Connected Component Algorithm with Fast Convergence","authors":["Y Zhang, A Azad, Z Hu - arXiv preprint arXiv:1910.05971, 2019"],"snippet":"Page 1. FastSV: A Distributed-Memory Connected Component Algorithm with Fast Convergence Yongzhe Zhang ∗ Ariful Azad † Zhenjiang Hu ‡ Abstract This paper presents a new distributed-memory algorithm called FastSV …","url":["https://arxiv.org/pdf/1910.05971"]} -{"year":"2019","title":"FastText-Based Intent Detection for Inflected Languages","authors":["K Balodis, D Deksne - Information, 2019"],"snippet":"… For the word embeddings released by Facebook, we used the ones trained on Wikipedia (https: //fasttext.cc/docs/en/pretrained-vectors.html) because the ones trained on Common Crawl (https: //fasttext.cc/docs/en/crawl-vectors.html) showed inferior results in our tests …","url":["https://www.mdpi.com/2078-2489/10/5/161/pdf"]} -{"year":"2019","title":"Feature Engineering for Text Representation","authors":["D Sarkar - Text Analytics with Python, 2019"],"snippet":"In the previous chapters, we saw how to understand, process, and wrangle text data. 
However, all machine learning or deep learning models are limited because they cannot understand text data directly...","url":["https://link.springer.com/chapter/10.1007/978-1-4842-4354-1_4"]} -{"year":"2019","title":"Feature-Dependent Confusion Matrices for Low-Resource NER Labeling with Noisy Labels","authors":["L Lange, MA Hedderich, D Klakow - arXiv preprint arXiv:1910.06061, 2019"],"snippet":"… clustering. While the Brown clustering was trained on the relatively small Europarl corpus, k- Means clustering seems to benefit from the word embeddings trained on documents from the much larger common crawl. 7 Analysis …","url":["https://arxiv.org/pdf/1910.06061"]} -{"year":"2019","title":"Feature2Vec: Distributional semantic modelling of human property knowledge","authors":["S Derby, P Miller, B Devereux - arXiv preprint arXiv:1908.11439, 2019"],"snippet":"… For our experiments, we make use of the pretrained GloVe embeddings (Pennington et al., 2014) provided in the Spacy1 package trained on the Common Crawl2. The GloVe model includes 685,000 tokens … 1https://spacy …","url":["https://arxiv.org/pdf/1908.11439"]} -{"year":"2019","title":"Feeling Anxious? Perceiving Anxiety in Tweets using Machine Learning","authors":["D Gruda, S Hasan - Computers in Human Behavior, 2019"],"snippet":"… tweets. Words-to-vectors mapping is based on the deep neural network learning GloVe (Pennington, Socher, & Manning, 2014) embedding space built from the Common Crawl Web Data (42 Billion tokens, 1.9M vocab). The …","url":["https://www.sciencedirect.com/science/article/pii/S0747563219301608"]} -{"year":"2019","title":"FIESTA: Fast IdEntification of State-of-The-Art models using adaptive bandit algorithms","authors":["HB Moss, A Moore, DS Leslie, P Rayson - arXiv preprint arXiv:1906.12230, 2019"],"snippet":"… optimiser settings and the same regularisation. All words are lower cased and we use the same Glove common crawl 840B token 300 dimension word embedding (Pennington et al., 2014). 
We use variational (Gal and Ghahramani …","url":["https://arxiv.org/pdf/1906.12230"]} -{"year":"2019","title":"Figurative Usage Detection of Symptom Words to Improve Personal Health Mention Detection","authors":["A Iyer, A Joshi, S Karimi, R Sparks, C Paris - arXiv preprint arXiv:1906.05466, 2019"],"snippet":"… The first four are a random initialisation, and three pre-trained embeddings. The pretrained embeddings are: (a) word2vec (Mikolov et al., 2013); (b) GloVe (trained on Common Crawl) (Pennington et al., 2014); and, (c) Numberbatch (Speer et al., 2017) …","url":["https://arxiv.org/pdf/1906.05466"]} -{"year":"2019","title":"Finding Generalizable Evidence by Learning to Convince Q&A Models","authors":["E Perez, S Karamcheti, R Fergus, J Weston, D Kiela… - arXiv preprint arXiv …, 2019"],"snippet":"… fastText We define a function BoWFT that computes the average bag-of-words representation of some text using fastText embeddings (Joulin et al., 2017). We use 300-dimensional fastText word vectors pretrained on Common Crawl …","url":["https://arxiv.org/pdf/1909.05863"]} -{"year":"2019","title":"Findings of the First Shared Task on Machine Translation Robustness","authors":["X Li, P Michel, A Anastasopoulos, Y Belinkov… - arXiv preprint arXiv …, 2019"],"snippet":"… To explore effective approaches to leverage abundant out-of-domain parallel data. • To explore novel approaches to leverage abundant monolingual data on the Web (eg, tweets, Reddit comments, commoncrawl, etc.). 
• To …","url":["https://arxiv.org/pdf/1906.11943"]} -{"year":"2019","title":"Findings of the WMT 2019 Shared Task on Parallel Corpus Filtering for Low-Resource Conditions","authors":["P Koehn, F Guzmán, V Chaudhary, J Pino - Proceedings of the Fourth Conference on …, 2019"],"snippet":"… Corpus Sentences Words Wikipedia Sinhala 155,946 4,695,602 Nepali 92,296 2,804,439 English 67,796,935 1,985,175,324 CommonCrawl Sinhala 5,178,491 110,270,445 Nepali 3,562,373 102,988,609 English 380,409,891 8,894,266,960 …","url":["https://www.aclweb.org/anthology/W19-5404"]} -{"year":"2019","title":"FlauBERT: Unsupervised Language Model Pre-training for French","authors":["H Le, L Vial, J Frej, V Segonne, M Coavoux… - arXiv preprint arXiv …, 2019"],"snippet":"… Common Crawl).11 The data were collected from three main sources: (1) monolingual data for French provided in WMT19 shared tasks (Li et al., 2019, 4 sub-corpora); (2) French text corpora offered in the OPUS collection …","url":["https://arxiv.org/pdf/1912.05372"]} -{"year":"2019","title":"Frame Augmented Alternating Attention Network for Video Question Answering","authors":["W Zhang, S Tang, Y Cao, S Pu, F Wu, Y Zhuang - IEEE Transactions on Multimedia, 2019"],"snippet":"Page 1. 1520-9210 (c) 2019 IEEE. Personal use is permitted, but republication/ redistribution requires IEEE permission. See http://www.ieee.org/ publications_standards/publications/rights/index.html for more information. This …","url":["https://ieeexplore.ieee.org/abstract/document/8811730/"]} -{"year":"2019","title":"Frequency, acceptability, and selection: A case study of clause-embedding","authors":["AS White, K Rawlins"],"snippet":"Page 1. 
Frequency, acceptability, and selection: A case study of clause-embedding Aaron Steven White University of Rochester aaron.white@rochester.edu Kyle Rawlins Johns Hopkins University kgr@jhu.edu Abstract We investigate …","url":["https://ling.auf.net/lingbuzz/004596/current.pdf"]} -{"year":"2019","title":"From Legal to Technical Concept: Towards an Automated Classification of German Political Twitter Postings as Criminal Offenses","authors":["F Zufall, T Horsmann, T Zesch"],"snippet":"… We use a bi-directional LSTM (Hochreiter and Schmidhuber, 1997) for classification.30 We use the 300-dimensional German pre-trained word embeddings provided by Grave et al. (2018), which are trained on the German common crawl …","url":["https://www.researchgate.net/profile/Frederike_Zufall/publication/331475806_From_Legal_to_Technical_Concept_Towards_an_Automated_Classification_of_German_Political_Twitter_Postings_as_Criminal_Offenses/links/5ccbe9b0a6fdcc4719838905/From-Legal-to-Technical-Concept-Towards-an-Automated-Classification-of-German-Political-Twitter-Postings-as-Criminal-Offenses.pdf"]} -{"year":"2019","title":"Frontiers in pattern recognition and artificial intelligence","authors":["B Marleah, N Nicola, SC Yee - 2019"],"snippet":""} -{"year":"2019","title":"Frowning Frodo, Wincing Leia, and a Seriously Great Friendship: Learning to Classify Emotional Relationships of Fictional Characters","authors":["E Kim, R Klinger - arXiv preprint arXiv:1903.12453, 2019"],"snippet":"… We obtain word vectors for the embedding layer from GloVe (pre-trained on Common Crawl, d = 300, Pennington et al., 2014) and initialize out- of-vocabulary terms with zeros (including the po- sition indicators).
4 Experiments Experimental Setting …","url":["https://arxiv.org/pdf/1903.12453"]} -{"year":"2019","title":"Fusing Vector Space Models for Domain-Specific Applications","authors":["L Rettig, J Audiffren, P Cudré-Mauroux - arXiv preprint arXiv:1909.02307, 2019"],"snippet":"… Despite the convenience they bring, using such readilyavailable, pre-trained models is often suboptimal in vertical applications [2], [3]; as these models are pre-trained on large, non-specific sources (eg, Wikipedia and the Common …","url":["https://arxiv.org/pdf/1909.02307"]} -{"year":"2019","title":"Gating Mechanisms for Combining Character and Word-level Word Representations: An Empirical Study","authors":["JA Balazs, Y Matsuo - arXiv preprint arXiv:1904.05584, 2019"],"snippet":"Page 1. Gating Mechanisms for Combining Character and Word-level Word Representations: An Empirical Study Jorge A. Balazs and Yutaka Matsuo Graduate School of Engineering The University of Tokyo {jorge, matsuo}@weblab.tu-tokyo.ac.jp Abstract …","url":["https://arxiv.org/pdf/1904.05584"]} -{"year":"2019","title":"General Purpose Vector Representation for Swedish Documents: An application of Neural Language Models","authors":["S Hedström - 2019"],"snippet":"Page 1. General Purpose Vector Representation for Swedish Documents An application of Neural Language Models Simon Hedström Master's Thesis in Engineering Physics, Department of Physics, Umeå University, 2019 Page …","url":["https://umu.diva-portal.org/smash/get/diva2:1323994/FULLTEXT01.pdf"]} -{"year":"2019","title":"Generalizable prediction of academic performance from short texts on social media","authors":["I Smirnov - arXiv preprint arXiv:1912.00463, 2019"],"snippet":"… We obtained significantly better results with a model that used word-embeddings (see Methods). We also find that embeddings trained on the VK corpus outperform models trained on the Wikipedia and Common Crawl corpora (Table 1). 
3 Page 4 …","url":["https://arxiv.org/pdf/1912.00463"]} -{"year":"2019","title":"Generalizing Question Answering System with Pre-trained Language Model Fine-tuning","authors":["D Su, Y Xu, GI Winata, P Xu, H Kim, Z Liu, P Fung - … of the 2nd Workshop on Machine …, 2019"],"snippet":"… (2009)) and Common Crawl (Buck et al., 1https://github.com/mrqa/MRQA-SharedTask-2019 2014) for pre-training … 2014. N-gram counts and language models from the common crawl. In LREC, volume 2, page 4. Citeseer …","url":["https://mrqa.github.io/assets/papers/63_Paper.pdf"]} -{"year":"2019","title":"Generating Black-Box Adversarial Examples for Text Classifiers Using a Deep Reinforced Model","authors":["P Vijayaraghavan, D Roy - arXiv preprint arXiv:1909.07873, 2019"],"snippet":"… These paraphrase datasets together contains text from various sources: Common Crawl, CzEng1.6, Europarl, News Commentary, Quora questions, and Twitter trending topic tweets. We do not use all the data for our pretraining …","url":["https://arxiv.org/pdf/1909.07873"]} -{"year":"2019","title":"Generating composite SQL queries from natural language","authors":["M De Groote - 2018"],"snippet":"… of the questions. We decided to use the Common Crawl embedding that is trained on 42 billion tokens, consists of a vocabulary of 1.9 million tokens and embeds these tokens in the 300-dimensional vector space5. All the words …","url":["https://lib.ugent.be/fulltxt/RUG01/002/494/903/RUG01-002494903_2018_0001_AC.pdf"]} -{"year":"2019","title":"Generating Language-Independent Neural Sentence Embeddings for Natural Language Classification Tasks","authors":["S Erhardt"],"snippet":"… [Rud17] At the time this thesis was written, there are Word Embeddings for more than 150 languages, trained on Common Crawl1 and Wikipedia, available. 
[Rud17] 1An open repository of web crawl data that can be …","url":["https://www.social.in.tum.de/fileadmin/w00bwc/www/Gerhard_Hagerer/thesis.pdf"]} -{"year":"2019","title":"Generic Web Content Extraction with Open-Source Software","authors":["A Barbaresi"],"snippet":"… Because of the vastly increasing variety of corpora, text types and use cases, it becomes more and more difficult to assess the usefulness and appropriateness of certain web texts 1https://commoncrawl.org for given research objectives …","url":["https://corpora.linguistik.uni-erlangen.de/data/konvens/proceedings/papers/kaleidoskop/camera_ready_barbaresi.pdf"]} -{"year":"2019","title":"Geo-spatial text-mining from Twitter–a feature space analysis with a view toward building classification in urban regions","authors":["M Häberle, M Werner, XX Zhu - European Journal of Remote Sensing, 2019"],"snippet":"Skip to Main Content …","url":["https://www.tandfonline.com/doi/full/10.1080/22797254.2019.1586451"]} -{"year":"2019","title":"Ghmerti at SemEval-2019 Task 6: A Deep Word-and Character-based Approach to Offensive Language Identification","authors":["E Doostmohammadi, H Sameti, A Saffar - … of the 13th International Workshop on …, 2019"],"snippet":"… The indices include 256 of the most common characters, plus 0 for padding and 1 for un- known characters. 2. xw which is the embeddings of the words in the input tweet based on FastText's 600Btoken common crawl model (Mikolov et al., 2018) …","url":["https://www.aclweb.org/anthology/S19-2110"]} -{"year":"2019","title":"GLOSS: Generative Latent Optimization of Sentence Representations","authors":["SP Singh, A Fan, M Auli - arXiv preprint arXiv:1907.06385, 2019"],"snippet":"… representations. This could be as simple as using a bag-of-words averaging of Glove (Pennington et al., 2014) word embeddings trained on a corpus such as CommonCrawl, which we re- fer to as Glove-BoW. 
Methods such …","url":["https://arxiv.org/pdf/1907.06385"]} -{"year":"2019","title":"GraphVite: A High-Performance CPU-GPU Hybrid System for Node Embedding","authors":["Z Zhu, S Xu, M Qu, J Tang - arXiv preprint arXiv:1903.00757, 2019"],"snippet":"Page 1. GraphVite: A High-Performance CPU-GPU Hybrid System for Node Embedding Zhaocheng Zhu Mila - Québec AI Institute Université de Montréal zhaocheng.zhu@ umontreal.ca Shizhen Xu Tsinghua University xsz12@mails.tsinghua.edu.cn …","url":["https://arxiv.org/pdf/1903.00757"]} -{"year":"2019","title":"Green AI","authors":["R Schwartz, J Dodge, NA Smith, O Etzioni - arXiv preprint arXiv:1907.10597, 2019"],"snippet":"… For example, the June 2019 Common Crawl contains 242 TB of uncompressed data,12 so even simple filtering to extract usable text is difficult … 11https://opensource.google.com/ projects/open-images-dataset 12http://commoncrawl.org/2019/07 …","url":["https://arxiv.org/pdf/1907.10597"]} -{"year":"2019","title":"Grounded Response Generation Task at DSTC7","authors":["M Galley, C Brockett, X Gao, J Gao, B Dolan"],"snippet":"… Turn 4 still pretty incredible , but quite a bit different that 10,000 meters . Table 1: Sample of the DSTC7 Sentence Generation data, which combines Reddit data (Turns 1-4) along with documents (extracted from Common Crawl) discussed in the conversations …","url":["http://workshop.colips.org/dstc7/papers/DSTC7_Task_2_overview_paper.pdf"]} -{"year":"2019","title":"Happy Together: Learning and Understanding Appraisal From Natural Language","authors":["A Rajendran, C Zhang, M Abdul-Mageed"],"snippet":"… language models (ULMFiT). Exploiting Simple GloVe Embeddings For the embedding layer, we obtain the 300-dimensional embedding vector for tokens using GloVe's Common Crawl pre-trained model [13]. 
GloVe embeddings …","url":["https://mageed.sites.olt.ubc.ca/files/2019/01/AffCon_aaai2019_happyDB.pdf"]} -{"year":"2019","title":"HARE: a Flexible Highlighting Annotator for Ranking and Exploration","authors":["D Newman-Griffis, E Fosler-Lussier - arXiv preprint arXiv:1908.11302, 2019"],"snippet":"… ated three commonly used benchmark embedding sets: word2vec skipgram (Mikolov et al., 2013) using GoogleNews,6 FastText skipgram with subword information on WikiNews,7 and GloVe (Pennington et al., 2014) on 840 …","url":["https://arxiv.org/pdf/1908.11302"]} -{"year":"2019","title":"HATEMINER at SemEval-2019 Task 5: Hate speech detection against Immigrants and Women in Twitter using a Multinomial Naive Bayes Classifier","authors":["N Chakravartula - Proceedings of the 13th International Workshop on …, 2019"],"snippet":"… Word Embeddings: Glove840B - common crawl, GloveTwitter27B - twitter crawl (Pennington et al., 2014) and fasttext - common crawl (Mikolov et al., 2018) pre-trained word embeddings are used to analyze their impact on the classification …","url":["https://www.aclweb.org/anthology/S19-2071"]} -{"year":"2019","title":"HealthSuggestions: moving beyond the beta version","authors":["PMP dos Santos - 2019"],"snippet":"Page 1. 
FACULDADE DE ENGENHARIA DA UNIVERSIDADE DO PORTO Health Suggestions: Moving Beyond the Beta Version Paulo Miguel Pereira dos Santos Master in Informatics and Computing Engineering Supervisor …","url":["https://repositorio-aberto.up.pt/bitstream/10216/121948/2/347008.2.pdf"]} -{"year":"2019","title":"Hierarchical Meta-Embeddings for Code-Switching Named Entity Recognition","authors":["GI Winata, Z Lin, J Shin, Z Liu, P Fung - arXiv preprint arXiv:1909.08504, 2019"],"snippet":"… We use FastText word embeddings trained from Common Crawl and Wikipedia (Grave et al., 2018) for English (es), Spanish (es), including four Romance languages: Catalan (ca), Portuguese (pt), French (fr), Italian …","url":["https://arxiv.org/pdf/1909.08504"]} -{"year":"2019","title":"High Quality ELMo Embeddings for Seven Less-Resourced Languages","authors":["M Ulčar, M Robnik-Šikonja - arXiv preprint arXiv:1911.10049, 2019"],"snippet":"… They used 20-million-words data randomly sampled from the raw text released by the CoNLL 2017 Shared Task - Automatically Annotated Raw Texts and Word Embeddings (Ginter et al., 2017), which is a combination of Wikipedia dump and common crawl …","url":["https://arxiv.org/pdf/1911.10049"]} -{"year":"2019","title":"Hitachi at MRP 2019: Unified Encoder-to-Biaffine Network for Cross-Framework Meaning Representation Parsing","authors":["Y Koreeda, G Morio, T Morishita, H Ozaki, K Yanai - arXiv preprint arXiv:1910.01299, 2019"],"snippet":"… Named entity label Named entity (NE) recognition is applied to the input text (see Section 7.1). GloVe We use 300-dimensional GloVe (Pennington et al., 2014) pretrained on Common Crawl2 which are kept fixed during the training …","url":["https://arxiv.org/pdf/1910.01299"]} -{"year":"2019","title":"HMM, Is This Ethical? Predicting the Ethics of Reddit Life Protips","authors":["M Coots, P Lu, L Wang"],"snippet":"… large corpus of text. 
GloVe representations have been trained on several large datasets that are publicly available, including corpuses from Wikipedia, Gigaword, Twitter, and Common Crawl [4]. 3. Task Definition Our problem is …","url":["https://madisoncoots.com/files/ethics.pdf"]} -{"year":"2019","title":"How Decoding Strategies Affect the Verifiability of Generated Text","authors":["L Massarelli, F Petroni, A Piktus, M Ott, T Rocktäschel… - arXiv preprint arXiv …, 2019"],"snippet":"… consisting of roughly 3 Billion Words; (iv) CC- NEWS, a de-duplicated subset of the English portion of the CommonCrawl news dataset (Nagel, 2016; Bakhtin et al., 2019; Liu et al., 2019a), which totals around 16 Billion words …","url":["https://arxiv.org/pdf/1911.03587"]} -{"year":"2019","title":"How to Ask Better Questions? A Large-Scale Multi-Domain Dataset for Rewriting Ill-Formed Questions","authors":["Z Chu, M Chen, J Chen, M Wang, K Gimpel, M Faruqui… - arXiv preprint arXiv …, 2019"],"snippet":"… and En↔Fr. The English-German translation models are trained on WMT datasets, including News Commentary 13, Europarl v7, and Common Crawl, and evaluated on newstest2013 for early stopping. On the newstest2013 …","url":["https://arxiv.org/pdf/1911.09247"]} -{"year":"2019","title":"How Well Do Embedding Models Capture Non-compositionality? A View from Multiword Expressions","authors":["N Nandakumar, T Baldwin, B Salehi - Proceedings of the 3rd Workshop on Evaluating …, 2019"],"snippet":"… It tokenises text at the character level. 
fastText We used the 300-dimensional fastText model pre-trained on Common Crawl and Wikipedia using CBOW (fastTextpre), as well as one trained over the same Wikipedia corpus4 us- ing skip-gram (fastText) …","url":["https://www.aclweb.org/anthology/W19-2004"]} -{"year":"2019","title":"Hybrid Rule-Based Model for Phishing URLs Detection","authors":["KS Adewole, AG Akintola, SA Salihu, N Faruk… - International Conference for …, 2019","N Faruk, RG Jimoh - … International Conference, iCETiC 2019, London, UK …, 2019"],"snippet":"… 1. From this figure, data collected from different servers such as Yahoo, Alexa, Common Crawl, PhishTank and OpenPhish are preprocessed in order to extract meaningful features that can be used for categorizing phishing websites from legitimate ones …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=QF6mDwAAQBAJ&oi=fnd&pg=PA119&dq=commoncrawl&ots=T7vreYeKah&sig=sO3M90XucnzXO7OeF6horBncwb4","https://link.springer.com/chapter/10.1007/978-3-030-23943-5_9"]} -{"year":"2019","title":"Hybrid Words Representation for Airlines Sentiment Analysis","authors":["U Naseem, SK Khan, I Razzak, IA Hameed"],"snippet":"… GloVe uses ratios of co-occurrence probabilities. It is favourable to concatenate ELMo embeddings with traditional word embeddings. In this work, we have used pre-trained GloVe embedding (trained on 840 billion token from common crawl) of 300 dimensions …","url":["https://www.researchgate.net/profile/Ibrahim_Hameed/publication/336579383_Hybrid_Words_Representation_for_Airlines_Sentiment_Analysis/links/5da6e53892851caa1ba6f8c6/Hybrid-Words-Representation-for-Airlines-Sentiment-Analysis.pdf"]} -{"year":"2019","title":"Hyper: Distributed Cloud Processing for Large-Scale Deep Learning Tasks","authors":["D Buniatyan - arXiv preprint arXiv:1910.07172, 2019"],"snippet":"… [4] MinIO high performance object storage server compatible with Amazon S3 API. https://github.com/minio/minio, 2018. [Online; accessed 31- May-2019]. 
[5] Common Crawl Dataset. https://commoncrawl.org, 2019. [Online; accessed 31-May-2019] …","url":["https://arxiv.org/pdf/1910.07172"]} -{"year":"2019","title":"Hyperparameter Tuning for Deep Learning in Natural Language Processing","authors":["A Aghaebrahimian, M Cieliebak - 2019"],"snippet":"… on the Common Crawl, one on 42 and the other on 840 billion tokens), FastText (Bojanowski et al., 2016), dependency based (Levy and Goldberg, 2014), and ELMo (Peters et al., 2018). As shown in Ta- ble 1, the Glove …","url":["http://ceur-ws.org/Vol-2458/paper5.pdf"]} -{"year":"2019","title":"Identification Of Bot Accounts In Twitter Using 2D CNNs On User-generated Contents","authors":["M Polignano, MG de Pinto, P Lops, G Semeraro - 2019"],"snippet":"… FastTextEmb)8: 300 dimensionality vectors, composed by a vocabulary of 2 million words and n-grams of the words, case sensitive and obtained from 600 billion of tokens trained on data crawled from generic Internet web pages by Common Crawl nonprofit organization; …","url":["https://www.researchgate.net/profile/Marco_Polignano/publication/334636395_Identification_Of_Bot_Accounts_In_Twitter_Using_2D_CNNs_On_User-generated_Contents/links/5d373c10a6fdcc370a59e892/Identification-Of-Bot-Accounts-In-Twitter-Using-2D-CNNs-On-User-generated-Contents.pdf"]} -{"year":"2019","title":"Identification of Good and Bad News on Twitter","authors":["P Aggarwal, A Aker"],"snippet":"… We use tf-idf representation for each vocabulary term. 5.2.3 Embeddings Finally, we also use fasttext based embedding (Mikolov et al., 2018) vectors which are trained on common crawl having 600 billion tokens. 
5.3 Classifiers …","url":["https://www.researchgate.net/profile/Ahmet_Aker3/publication/334825190_Identification_of_Good_and_Bad_News_on_Twitter/links/5d42c34992851cd04697548a/Identification-of-Good-and-Bad-News-on-Twitter.pdf"]} -{"year":"2019","title":"Identifying and Addressing Structural Inequalities in the Representativeness of Geographic Technologies","authors":["IL Johnson - 2019"],"snippet":"… knowledge graphs (Wikipedia and Google [289]), word embeddings (Wikipedia, Twitter, and Common Crawl in GloVe embeddings [238]), object detection (Instagram hashtags and Facebook [292])—and adding …","url":["http://search.proquest.com/openview/dccae6679751f41f283b33f555947aa8/1?pq-origsite=gscholar&cbl=18750&diss=y"]} -{"year":"2019","title":"Identifying transfer models for machine learning tasks","authors":["P Watson, B Bhattacharjee, NC CODELLA… - US Patent App. 15/982,622, 2019"],"snippet":"US20190354850A1 - Identifying transfer models for machine learning tasks - Google Patents. Identifying transfer models for machine learning tasks. Download PDF Info. Publication number US20190354850A1. US20190354850A1 …","url":["https://patents.google.com/patent/US20190354850A1/en"]} -{"year":"2019","title":"Idiap Abstract Text Summarization System for German Text Summarization Task","authors":["S Parida, P Motlicek - 2019"],"snippet":"… The experiments performed over 1http://opennmt.net/OpenNMT-py/ Summarization.html 2https://www.swisstext.org/ 3http://commoncrawl.org/ these datasets are described in the Section 4 (de- noted as S1 experimental …","url":["http://ceur-ws.org/Vol-2458/paper9.pdf"]} -{"year":"2019","title":"IIT Varanasi at HASOC 2019: Hate Speech and Offensive Content Identification in Indo-European Languages","authors":["A Mishra, S Pal - Proceedings of the 11th annual meeting of the Forum …"],"snippet":"… embedding. 
One of the pretrained glove embeddings is based on the common crawl which represents each word in the dimension of 300, and the other one is based on Twitter data which represents each word in the dimension of 200 …","url":["http://irlab.daiict.ac.in/~Parth/T3-22.pdf"]} -{"year":"2019","title":"IIT-BHU at CIQ 2019: Classification of Insincere Questions","authors":["A Mishra, S Pal"],"snippet":"… Different versions of glove pre-trained em- bedding exist; however, we use embedding trained of dimension 300 on common crawl using 840B tokens and 2.2M vocabulary3. We generated random embedding of dimension 300 for out of vocabulary words …","url":["http://irlab.daiict.ac.in/~Parth/T5-4.pdf"]} -{"year":"2019","title":"Impact of Debiasing Word Embeddings on Information Retrieval","authors":["E Gerritse - 2019"],"snippet":"… Bolukbasi et al. [1] show that there is a high correlation in bias in Word2Vec trained on Google News and Glove trained on the common crawl, so we still cannot infer whether the method or the dataset is more important for creating the bias …","url":["http://www.emmagerritse.com/pdfs/FDIA_2019_paper.pdf"]} -{"year":"2019","title":"Improved Quality Estimation of Machine Translation with Pre-trained Language Representation","authors":["G Miao, H Di, J Xu, Z Yang, Y Chen, K Ouchi - CCF International Conference on …, 2019"],"snippet":"… The former is mainly obtained from the open news datasets of the WMT17 and WMT18 MT evaluation tasks, including five data sets: Europarl v7, Europarl v12, Europarl v13, Common Crawl corpus, and Rapid corpus of EU press releases …","url":["https://link.springer.com/chapter/10.1007/978-3-030-32233-5_32"]} -{"year":"2019","title":"Improving Conditioning in Context-Aware Sequence to Sequence Models","authors":["X Wang, J Weston, M Auli, Y Jernite - arXiv preprint arXiv:1911.09728, 2019"],"snippet":"… 2019) for LFQA. 
The dataset consists of 272,000 complex questions and answer pairs, along with supporting documents created by gathering and concatenating passages from CommonCrawl web pages which are relevant to the question …","url":["https://arxiv.org/pdf/1911.09728"]} -{"year":"2019","title":"Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data","authors":["W Zhao, L Wang, K Shen, R Jia, J Liu - arXiv preprint arXiv:1903.00138, 2019"],"snippet":"… We do not use reranking when evaluating the CoNLL-2014 data sets. But we rerank the top 12 hypothesizes us- ing the language model trained on Common Crawl (Junczys-Dowmunt and Grundkiewicz, 2016) for …","url":["https://arxiv.org/pdf/1903.00138"]} -{"year":"2019","title":"Improving Implicit Stance Classification in Tweets Using Word and Sentence Embeddings","authors":["R Schaefer, M Stede - Joint German/Austrian Conference on Artificial …, 2019"],"snippet":"… combinations. 4.2 fastText Embeddings. We use pre-trained 300-dimensional fastText [11] word vectors that have been trained on Wikipedia and Common Crawl data. For training, an extension of the CBOW model has been used …","url":["https://link.springer.com/chapter/10.1007/978-3-030-30179-8_26"]} -{"year":"2019","title":"Improving Named Entity Recognition with Commonsense Knowledge Pre-training","authors":["G Dekhili, NT Le, F Sadat - Pacific Rim Knowledge Acquisition Workshop, 2019"],"snippet":"… which is the concatenation of ConceptNet PPMI embeddings with Word2Vec embeddings trained on 100 billion words of Google News using skip-grams with negative sampling [14] and GloVe 1.2 embeddings trained on 840 billion words of the Common Crawl [16] …","url":["https://link.springer.com/chapter/10.1007/978-3-030-30639-7_2"]} -{"year":"2019","title":"Improving Neural Machine Translation of Subtitles with Finetuning","authors":["S Reinsperger - 2019"],"snippet":"… 3 Results 53 3.1 ParallelCorpora . . . . . 54 3.1.1 Europarl. . . . .
54 3.1.2 Common Crawl . . . . . 54 3.1.3 NewsCommentary . . . . . 56 3.1.4 Subtitles …","url":["http://www.simonrsp.com/masterthesis.pdf"]} -{"year":"2019","title":"Improving Neural Machine Translation Robustness via Data Augmentation: Beyond Back Translation","authors":["Z Li, L Specia - arXiv preprint arXiv:1910.03009, 2019"],"snippet":"… 3.1 Corpora We used all parallel corpora from the WMT19 Robustness Task on Fr↔En. For out-of-domain training, we used the WMT15 Fr↔En News Translation Task data, including Europarl v7, Common Crawl, UN, News Commentary v10, and Gigaword Corpora …","url":["https://arxiv.org/pdf/1910.03009"]} -{"year":"2019","title":"Improving Neural Machine Translation with Pre-trained Representation","authors":["R Weng, H Yu, S Huang, W Luo, J Chen - arXiv preprint arXiv:1908.07688, 2019"],"snippet":"… We use newstest2015 (NST15) as our validation set, and newstest2016 (NST16) as test sets 4. We use 40 million monolingual sentences from WMT-16 Common Crawl data-set … We use 5 million monolingual sentences …","url":["https://arxiv.org/pdf/1908.07688"]} -{"year":"2019","title":"Improving orienteering-based tourist trip planning with social sensing","authors":["F Persia, G Pilato, M Ge, P Bolzoni, D D'Auria… - Future Generation …, 2019"],"snippet":"… This is a popular technique in machine learning for uncovering subsymbolic meanings, such as word analogies. 
We utilized a pre-trained word vector encoding for Italian provided by fastText [32], which was trained on Common Crawl and Wikipedia …","url":["https://www.sciencedirect.com/science/article/pii/S0167739X19303929"]} -{"year":"2019","title":"Improving Quality Estimation of Machine Translation by Using Pre-trained Language Representation","authors":["G Miao, H Di, J Xu, Z Yang, Y Chen, K Ouchi - China Conference on Machine …, 2019","Y Chen, K Ouchi - Machine Translation: 15th China Conference, CCMT …, 2019"],"snippet":"… Metrics We first train the bilingual expert model [9] with large-scale parallel corpus released for the WMT17/WMT18 News Machine Translation Task, which mainly consists of five data sets, including Europarl v7, Europarl v12 …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=WuK_DwAAQBAJ&oi=fnd&pg=PA11&dq=commoncrawl&ots=XTi4UL5q8i&sig=lCeqF4TBBuqQrg4EE0rN09FZeVs","https://link.springer.com/chapter/10.1007/978-981-15-1721-1_2"]} -{"year":"2019","title":"Improving Question Answering over Incomplete KBs with Knowledge-Aware Reader","authors":["W Xiong, M Yu, S Chang, X Guo, WY Wang - arXiv preprint arXiv:1905.07098, 2019"],"snippet":"… Page 6. A Implementation Details Throughout our experiments, we use the 300-dimension GloVe embeddings trained on the Common Crawl corpus. The hidden dimension of LSTM and the dimension of entity embeddings are both 100 …","url":["https://arxiv.org/pdf/1905.07098"]} -{"year":"2019","title":"In-call virtual assistant","authors":["R Raanani, R Levy, MY Breakstone - US Patent App. 
16/165,566, 2019"],"snippet":"… At the same time, natural language processing (NLP) approaches to both topic modeling and world-knowledge modeling, have become much more efficient due to the availability of large, freely accessible natural language corpora (eg, CommonCrawl), as well as freely …","url":["https://patentimages.storage.googleapis.com/b2/cd/2c/a7fa39e3002b4f/US20190057698A1.pdf"]} -{"year":"2019","title":"Incendiary News Detection","authors":["EB Coban, E Filatova - 2019"],"snippet":"… features. We run classification experiments for unigrams, and combination of uniand bi-grams. 10https://www.nltk.org/ 11http://scikit-learn.org 12https://github.com/ otuncelli/turkish-stemmer-python 13http://commoncrawl.org/ For …","url":["https://pdfs.semanticscholar.org/8c78/f9da879fc5936ef84dc7128db691d7042fef.pdf"]} -{"year":"2019","title":"Incorporating Domain Knowledge into Natural Language Inference on Clinical Texts","authors":["M Lu, Y Fang, F Yan, M Li - IEEE Access, 2019"],"snippet":"… two domain-specific corpus: • GloVe[CC]: GloVe embeddings [21], trained on Common Crawl. • fastText[BioASQ]: fastText embeddings [22], trained on PubMed abstracts from the BioASQ challenge [23]. • fastText[MIMIC-III]: fastText …","url":["https://ieeexplore.ieee.org/iel7/6287639/6514899/08701433.pdf"]} -{"year":"2019","title":"Incorporating Syntactic Knowledge in Neural Quality Estimation for Machine Translation","authors":["N Ye, Y Wang, D Cai - China Conference on Machine Translation, 2019"],"snippet":"… One is the large-scale bilingual dataset for training the feature extraction module. 
It comes from the parallel corpus of WMT machine translation task, including Europarl v7, Common Crawl corpus, News Commentary v11 and so on …","url":["https://link.springer.com/chapter/10.1007/978-981-15-1721-1_3"]} -{"year":"2019","title":"Inducing Relational Knowledge from BERT","authors":["Z Bouraoui, J Camacho-Collados, S Schockaert - arXiv preprint arXiv:1911.12753, 2019"],"snippet":"… As static word embeddings for the baselines, we will use the Skip-gram word vectors that were pre-trained from the 100B words Google News data set6 (SG-GN) and GloVe word vectors which were pre-trained from the …","url":["https://arxiv.org/pdf/1911.12753"]} -{"year":"2019","title":"Inducing Schema. org Markup from Natural Language Context","authors":["GK Shahi, D Nandini, S Kumari - Kalpa Publications in Computing, 2019"],"snippet":"… extension, in 2012 another data hub called Web Data Commons [5] came up with structured data extracted from the Common Crawl … 5http:// commoncrawl.org/ 6http://webdatacommons.org/ 7The WARC file format …","url":["https://easychair.org/publications/download/DXGr"]} -{"year":"2019","title":"Inferring Concept Hierarchies from Text Corpora via Hyperbolic Embeddings","authors":["M Le, S Roller, L Papaxanthos, D Kiela, M Nickel - arXiv preprint arXiv:1902.00913, 2019"],"snippet":"Page 1. Inferring Concept Hierarchies from Text Corpora via Hyperbolic Embeddings Matt Le1 and Stephen Roller1 and Laetitia Papaxanthos2 Douwe Kiela1 and Maximilian Nickel1 1Facebook AI Research, New York …","url":["https://arxiv.org/pdf/1902.00913"]} -{"year":"2019","title":"Information extraction","authors":["S Razniewski"],"snippet":"… 8 Page 9. Taxi [Panchenko et al., 2016] 1. Crawl domain-specific text corpora in addition to WP, Commoncrawl 2. Candidate hypernymy extraction 1. 
Via substrings • “biomedical science” isA “science” • “microbiology” isA “biology” • “toast with bacon” isA “toast” …","url":["https://www.mpi-inf.mpg.de/fileadmin/inf/d5/teaching/ws19-20_ie/5_Taxonomy_induction_coreference_disambiguation.pdf"]} -{"year":"2019","title":"InriaFBK Drawing Attention to Offensive Language at Germeval2019","authors":["M Corazza, S Menini, E Cabrio, S Tonelli, S Villata…"],"snippet":"… This is the main reason why we chose to use FastText embeddings (Bojanowski et al., 2016), pretrained on Common Crawl and Wikipedia 3. 4.3 Recurrent model We develop a simple recurrent neural network model and use it for all subtasks …","url":["https://corpora.linguistik.uni-erlangen.de/data/konvens/proceedings/papers/germeval/Germeval_Task_2_2019_paper_1.INRIA.pdf"]} -{"year":"2019","title":"Integrating Grammatical Features into CNN Model for Emotion Classification","authors":["AC Le - 2018 5th NAFOSTED Conference on Information and …, 2018"],"snippet":"… a sentence s = 11 In this study we used the vector set GloVe [16], it is pretrained word vectors for Common Crawl (glove.42B.300d) with 300 dimensions for word embeddings to use for English data. For Vietnamese emotion …","url":["https://ieeexplore.ieee.org/abstract/document/8606875/"]} -{"year":"2019","title":"Integrating UMLS for Early Detection of Sings of Anorexia","authors":["FM Plaza-del-Arco, P López-Úbeda, MC Dıaz-Galiano… - 2019"],"snippet":"… Specifically, we use Page 6. the available pre-trained statistical models for English ”en core web md” wich version is 1.2.0. 
It is composed of 685k keys, 20k unique vectors (300 dimensions) and it was trained on OntoNotes …","url":["http://www.dei.unipd.it/~ferro/CLEF-WN-Drafts/CLEF2019/paper_76.pdf"]} -{"year":"2019","title":"Integrating word embeddings and document topics with deep learning in a video classification framework","authors":["Z Kastrati, AS Imran, A Kurti - Pattern Recognition Letters, 2019"],"snippet":"… GloVe contains word embeddings for a vocabulary of 400K words trained on 42 billion words from Wikipedia pages and newswire, and fastText includes word embeddings for a vocabulary of 2 million words trained on 600 billion tokens from Common Crawl …","url":["https://www.sciencedirect.com/science/article/pii/S0167865519302326"]} -{"year":"2019","title":"Intelligent sentiment analysis approach using edge computing‐based deep learning technique","authors":["H Sankar, V Subramaniyaswamy, V Vijayakumar… - Software: Practice and Experience"],"snippet":"… Word2Vec, 300d, 3 Million, 100 Billion. Common Crawl, 300d, 42 Billion, 1.9 Million. Common Crawl, 300d, 840 Billion, 2.2 Million. The main drawback of unsupervised word embedding learning is that it does not hold the sentiment …","url":["https://onlinelibrary.wiley.com/doi/abs/10.1002/spe.2687"]} -{"year":"2019","title":"Interactive Language Learning by Question Answering","authors":["X Yuan, MA Cote, J Fu, Z Lin, C Pal, Y Bengio… - arXiv preprint arXiv …, 2019"],"snippet":"Page 1. Interactive Language Learning by Question Answering Xingdi Yuan♥∗ Marc-Alexandre Côté♥∗ Jie Fu♣♠ Zhouhan Lin♦♠ Christopher Pal♣♠ Yoshua Bengio♦♠ Adam Trischler♥ ♥Microsoft Research, Montréal ♣Polytechnique …","url":["https://arxiv.org/pdf/1908.10909"]} -{"year":"2019","title":"Interactive Machine Comprehension with Information Seeking Agents","authors":["X Yuan, J Fu, MA Cote, Y Tay, C Pal, A Trischler - arXiv preprint arXiv:1908.10449, 2019"],"snippet":"… Word embeddings are initialized by the 300-dimension fastText (Mikolov et al. 
2018) vectors trained on Common Crawl (600B tokens), and are fixed during training. Character embeddings are initialized by 200-dimension random vectors …","url":["https://arxiv.org/pdf/1908.10449"]} -{"year":"2019","title":"Internet of Things Anomaly Detection using Multivariate Analysis","authors":["S Ezekiel, AA Alshehri, L Pearlstein, XW Wu, A Lutz - The 3rd ICICPE 2019 Conference …"],"snippet":"… Our model uses the GloVe (Pennington et al., 2014) 300-dimensional vectors trained on the Common Crawl corpus with 42B tokens as word level features, as this resulted in the best performance in preliminary experiments …","url":["http://icicpe.org/wp-content/uploads/2019/12/ICICPE-2019-vol.31.pdf#page=90"]} -{"year":"2019","title":"Iot-based call assistant device","authors":["R Raanani, R Levy, MY Breakstone - US Patent App. 16/168,663, 2019"],"snippet":"… At the same time, natural language processing (NLP) approaches to both topic modeling and world-knowledge modeling, have become much more efficient due to the availability of large, freely accessible natural language corpora (eg, CommonCrawl), as well as freely …","url":["https://patentimages.storage.googleapis.com/c3/a1/97/799532a8db7406/US20190057079A1.pdf"]} -{"year":"2019","title":"Iterative Keyword Optimization","authors":["A Elyashar, M Reuben, R Puzis"],"snippet":"… The model was trained on Common Crawl 4 and Wikipedia 5 using the fastText library 6. We used Euclidean as the distance measure … 4 http://commoncrawl.org/ 5 https://www.wikipedia.org/ 6 https://fasttext …","url":["http://sbp-brims.org/2019/proceedings/papers/working_papers/Elyashar.pdf"]} -{"year":"2019","title":"JHU 2019 Robustness Task System Description","authors":["M Post, K Duh - Proceedings of the Fourth Conference on Machine …, 2019"],"snippet":"… the best million lines each of CommonCrawl, Gigaword, and the UN corpus; and • the MTNT training data. Data sizes are indicated in Table 1. 
dataset segments words Europarl 2.0m 50.2m News Commentary 200k 4.4m …","url":["https://www.aclweb.org/anthology/W19-5366"]} -{"year":"2019","title":"Johns Hopkins University Submission for WMT News Translation Task","authors":["K Marchisio, YK Lal, P Koehn - Proceedings of the Fourth Conference on Machine …, 2019"],"snippet":"… sampled bitext (x2). ParaCrawl1 and Common Crawl2 are filtered similarly, and added to form the training set for the final models. We … Crawl. ParaCrawl and Common Crawl were combined into a single corpus before filtering …","url":["https://www.aclweb.org/anthology/W19-5329"]} -{"year":"2019","title":"Joint Training for Neural Machine Translation","authors":["Y Cheng"],"snippet":"Page 1. Springer Recognizing Theses Outstanding Ph.D. Research Yong Cheng Joint Neural Translation Training Machine for Page 2. Springer Theses Recognizing Outstanding Ph.D. Research Page 3. Aims and Scope The …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=KIOrDwAAQBAJ&oi=fnd&pg=PR5&dq=commoncrawl&ots=vy1Stpb4X-&sig=1d6kjXbtaE3McDjxvY7O9-JJQOk"]} -{"year":"2019","title":"Jointly Learning to Align and Translate with Transformer Models","authors":["S Garg, S Peitz, U Nallasamy, M Paulik - arXiv preprint arXiv:1909.02074, 2019","SGSPU Nallasamy, M Paulik"],"snippet":"… by Vilar et al. (2006). We use all available bilingual data (Europarl v7, Common Crawl corpus, News Commentary v13 and Rapid corpus of EU press releases) excluding the ParalCrawl corpus. 
We remove sentences longer …","url":["https://arxiv.org/pdf/1909.02074","https://www.researchgate.net/profile/Stephan_Peitz/publication/336996532_Jointly_Learning_to_Align_and_Translate_with_Transformer_Models/links/5ec41124458515626cb813b1/Jointly-Learning-to-Align-and-Translate-with-Transformer-Models.pdf"]} -{"year":"2019","title":"JParaCrawl: A Large Scale Web-Based English-Japanese Parallel Corpus","authors":["M Morishita, J Suzuki, M Nagata - arXiv preprint arXiv:1911.10668, 2019"],"snippet":"… To select the candidate domains, we first identified the language of all the Common Crawl text data by CLD26 and counted how much … Since the crawled data stored on Common Crawl may not contain the entire website or might …","url":["https://arxiv.org/pdf/1911.10668"]} -{"year":"2019","title":"KaWAT: A Word Analogy Task Dataset for Indonesian","authors":["K Kurniawan - arXiv preprint arXiv:1906.09912, 2019"],"snippet":"… We used fastText pretrained embeddings introduced in (Bojanowskietal.,2017) and (Grave et al., 2018), which have been trained on Indonesian Wikipedia and Indonesian Wikipedia plus Common Crawl data respectively. We …","url":["https://arxiv.org/pdf/1906.09912"]} -{"year":"2019","title":"Keyphrase Extraction from Scholarly Articles as Sequence Labeling using Contextualized Embeddings","authors":["D Sahrawat, D Mahata, M Kulkarni, H Zhang… - arXiv preprint arXiv …, 2019"],"snippet":"… and OpenAI GPT-2 (small, medium). As a baseline, we also use 300 dimensional fixed embeddings from Glove2, Word2Vec3, and FastText4 (common-crawl, wiki-news). 
We also compare the proposed architecture against …","url":["https://arxiv.org/pdf/1910.08840"]} -{"year":"2019","title":"KiloGrams: Very Large N-Grams for Malware Classification","authors":["E Raff, W Fleming, R Zak, H Anderson, B Finlayson… - arXiv preprint arXiv …, 2019"],"snippet":"… A ccuracy s = 1 s = ⌈n/4⌉ Figure 1: Balanced Accuracy results (y-axis) on the Public PDF dataset as we increase then-gram size (x-axis, log-scale), and alter the hashing stride s. Using a hashing-stride retains more …","url":["https://arxiv.org/pdf/1908.00200"]} -{"year":"2019","title":"KIT's Submission to the IWSLT 2019 Shared Task on Text Translation","authors":["F Schneider, A Waibel"],"snippet":"… We made use of all allowed data, which is broken down in table 1. The allowed parallel data from WMT consists of Commoncrawl, CzEng (which makes up the vast majority of the parallel training data), Europarl, news commentrary and paracrawl …","url":["https://zenodo.eu/record/3525496/files/IWSLT2019_paper_30.pdf"]} -{"year":"2019","title":"Knowledge empowered prominent aspect extraction from product reviews","authors":["Z Luo, S Huang, KQ Zhu - Information Processing & Management, 2019"],"snippet":"Skip to main content …","url":["https://www.sciencedirect.com/science/article/pii/S0306457318305193"]} -{"year":"2019","title":"Knowledge Graph-Driven Conversational Agents","authors":["J Bockhorst, D Conathan, G Fung"],"snippet":"… We use a CNN with max pooling and pretrained Glove embeddings trained on the Common Crawl 840B dataset [6] [7]. By applying our CNN classifier as a straightforward 1-of-k document classification task, we are able to achieve …","url":["https://kr2ml.github.io/2019/papers/KR2ML_2019_paper_42.pdf"]} -{"year":"2019","title":"Knowledge-based Conversational Search","authors":["S Vakulenko - arXiv preprint arXiv:1912.06859, 2019"],"snippet":"Page 1. arXiv:1912.06859v1 [cs.IR] 14 Dec 2019 Page 2. Page 3. 
Knowledge-based Conversational Search DISSERTATION submitted in partial fulfillment of the requirements for the degree of Doktorin der Technischen Wissenschaften by Svitlana Vakulenko, MSc …","url":["https://arxiv.org/pdf/1912.06859"]} -{"year":"2019","title":"Kyoto University participation to the WMT 2019 news shared task","authors":["F Cromieres, S Kurohashi - Proceedings of the Fourth Conference on Machine …, 2019"],"snippet":"… Page 2. 164 3 Data preprocessing 3.1 Data used For bilingual data, we used the provided corpora: europarl (≈ 1.7M sentence pairs), common crawl(≈ 620k sentence pairs) and newscommentary (≈ 255k sentence pairs). We did not use the paracrawl corpus …","url":["https://www.aclweb.org/anthology/W19-5312"]} -{"year":"2019","title":"Language Modelling Makes Sense: Propagating Representations through WordNet for Full-Coverage Word Sense Disambiguation","authors":["D Loureiro, A Jorge - arXiv preprint arXiv:1906.10007, 2019"],"snippet":"… tokens in the sentence. We choose fastText (Bojanowski et al., 2017) embeddings (pretrained on CommonCrawl), which are biased towards morphology, and avoid Out-of-Vocabulary issues as explained in §2.1. 
We use fastText …","url":["https://arxiv.org/pdf/1906.10007"]} -{"year":"2019","title":"Language Models are Unsupervised Multitask Learners","authors":["A Radford, J Wu, R Child, D Luan, D Amodei…"],"snippet":"… A promising source of diverse and nearly unlimited text is web scrapes such as Common Crawl … Trinh & Le (2018) used Common Crawl in their work on commonsense reasoning but noted a large amount of documents “whose content are mostly unintelligible” …","url":["https://www.techbooky.com/wp-content/uploads/2019/02/Better-Language-Models-and-Their-Implications.pdf"]} -{"year":"2019","title":"Language Models with Pre-Trained (GloVe) Word Embeddings","authors":["L Rokach, B Shapira, V Makarenkov"],"snippet":"… Despite the huge size of the Common Crawl corpus, some words may not exist with the embeddings, so we set these words to random vectors, and use the same embeddings consistently if we encounter the same unseen word again in the text …","url":["https://deepai.org/publication/language-models-with-pre-trained-glove-word-embeddings"]} -{"year":"2019","title":"Large Memory Layers with Product Keys","authors":["G Lample, A Sablayrolles, MA Ranzato, L Denoyer… - arXiv preprint arXiv …, 2019","MA Ranzato, L Denoyer, H Jégou"],"snippet":"… Experiments Page 20. Dataset 20 ▶ Extracted from the public Common Crawl. ▶ 40 million English news articles in training set, 5000 in validation and test set each. ▶ Did not shuffle sentences, allowing the model to learn …","url":["https://arxiv.org/pdf/1907.05242","https://pdfs.semanticscholar.org/3a54/100803474df3b98e54a1693010d12c9718b5.pdf"]} -{"year":"2019","title":"Large Scale Linguistic Processing of Tweets to Understand Social Interactions among Speakers of Less Resourced Languages: The Basque Case","authors":["J Fernandez de Landa, R Agerri, I Alegria - Information, 2019"],"snippet":"… resourced languages such as Basque. 
However, FastText provides pre-trained models for many languages, including Basque [33] by using the common crawl data (http://commoncrawl.org). The Basque model they distribute …","url":["https://www.mdpi.com/2078-2489/10/6/212/pdf"]} -{"year":"2019","title":"Last-Mile TLS Interception: Analysis and Observation of the Non-Public HTTPS Ecosystem","authors":["XC de Carnavalet - 2019"],"snippet":"Page 1. Last-Mile TLS Interception: Analysis and Observation of the Non-Public HTTPS Ecosystem Xavier de Carné de Carnavalet A thesis in The Concordia Institute for Information Systems Engineering Presented …","url":["http://users.encs.concordia.ca/~mmannan/student-resources/Thesis-PhD-Carnavalet-2019.pdf"]} -{"year":"2019","title":"Latent Question Interpretation Through Parameter Adaptation","authors":["T Parshakova, F Rameau, A Serdega, I Kweon, DS Kim - IEEE/ACM Transactions on …, 2019"],"snippet":"… A. Implementation Details For the sake of reproducibility, we provide the technical details related to the implementation of our approach. 
First of all, the initial word embeddings are initialized with GloVe embeddings, which …","url":["https://www.researchgate.net/profile/Francois_Rameau/publication/334633405_Latent_Question_Interpretation_Through_Parameter_Adaptation/links/5d37e05ca6fdcc370a5a3a43/Latent-Question-Interpretation-Through-Parameter-Adaptation.pdf"]} -{"year":"2019","title":"Laying the foundations for benchmarking open data automatically: a method for surveying data portals from the whole web","authors":["A Sheffer Correa, F Soares Correa Da Silva - 20th Annual International Conference …, 2019"],"snippet":"… KEYWORDS Open Data, Common Crawl, CKAN, Socrata, ArcGIS, OpenDataSoft … Common Crawl conducts crawls once a month and persists all the content in Web Archive (WARC) file format to allow multibillion web page archives with hundreds of terabytes in size …","url":["https://dl.acm.org/citation.cfm?id=3325257"]} -{"year":"2019","title":"LCEval: Learned Composite Metric for Caption Evaluation","authors":["N Sharif, L White, M Bennamoun, W Liu, SAA Shah"],"snippet":"… Table 1: The details of pre-trained embeddings used in our experiments Name Source Dimensions Corpus Corpus Size Vocabulary Size GloVE 840B 300d [40] 300 Common Crawl 8.40E+11 2.20E+06 Word2vec Google 300d [34] …","url":["https://www.researchgate.net/profile/Naeha_Sharif2/publication/334760575_LCEval_Learned_Composite_Metric_for_Caption_Evaluation/links/5d429677a6fdcc370a715269/LCEval-Learned-Composite-Metric-for-Caption-Evaluation.pdf"]} -{"year":"2019","title":"Learning as the Unsupervised Alignment of Conceptual Systems","authors":["BD Roads, BC Love - arXiv preprint arXiv:1906.09012, 2019"],"snippet":"… We found that alignment correlations positively correlated with mapping accuracy across a variety of scenarios (Figure 3A-C). 
The three conceptual systems were derived from a Common Crawl text corpus (Pennington et …","url":["https://arxiv.org/pdf/1906.09012"]} -{"year":"2019","title":"Learning from Personal Longitudinal Dialog Data","authors":["C Welch, V Pérez-Rosas, JK Kummerfeld, R Mihalcea…"],"snippet":"… Message Embeddings: We also obtain word vector representations for each message using the GloVe Common Crawl pre-trained model.19 We chose this word embedding over other off-theshelf options because the Common …","url":["https://sentic.net/personal-longitudinal-dialog-data.pdf"]} -{"year":"2019","title":"Learning multilingual topics through aspect extraction from monolingual texts","authors":["J Huber, M Spiliopoulou - Proceedings of the Fifth International Workshop on …, 2019"],"snippet":"… Xu et al., 2018). It was trained on the CommonCrawl corpus, a general-purpose text corpus that includes text from several billion web pages; the GloVe embeddings were trained on 840 billion tokens. The GloVe set includes …","url":["http://www.aclweb.org/anthology/W19-0313"]} -{"year":"2019","title":"Learning Outside the Box: Discourse-level Features Improve Metaphor Identification","authors":["J Mu, H Yannakoudakis, E Shutova - arXiv preprint arXiv:1904.02246, 2019"],"snippet":"… To learn representations, we use several widelyused embedding methods:4 GloVe We use 300-dimensional pre-trained GloVe embeddings (Pennington et al., 2014) trained on the Common Crawl corpus as representations of a lemma and its arguments …","url":["https://arxiv.org/pdf/1904.02246"]} -{"year":"2019","title":"Learning Relational Fractals for Deep Knowledge Graph Embedding in Online Social Networks","authors":["J Zhang, L Tan, X Tao, D Wang, JJC Ying, X Wang - International Conference on Web …, 2019"],"snippet":"… Our twitter dataset was live streamed from a twitter API account and contains a maximum of 1675882 nodes and 160799842 links. 
The Google dataset was obtained from the repositories of common crawl and was sentilyzed from the stripped down WET file contents …","url":["https://link.springer.com/chapter/10.1007/978-3-030-34223-4_42"]} -{"year":"2019","title":"Learning to Generate Personalized Product Descriptions","authors":["G Elad, I Guy, K Radinsky, S Novgorodov, B Kimelfeld - 2019"],"snippet":"… For the title representation, we used fastText word embeddings2 pre-trained on Common Crawl and Wikipedia [25, 33], weighted based on each word's TF-IDF score [4].3 In addition, we included as features the participant's demo …","url":["http://www.kiraradinsky.com/files/Learning_to_Generate_Personalized_Product_Descriptions.pdf"]} -{"year":"2019","title":"Learning to Speak and Act in a Fantasy Text Adventure Game","authors":["J Urbanek, A Fan, S Karamcheti, S Jain, S Humeau… - arXiv preprint arXiv …, 2019","JUA Fan, SKSJS Humeau, EDT Rocktäschel…"],"snippet":"Page 1. Learning to Speak and Act in a Fantasy Text Adventure Game Jack Urbanek1 Angela Fan1,2 Siddharth Karamcheti1 Saachi Jain1 Samuel Humeau1 Emily Dinan1 Tim Rocktäschel1,3 Douwe Kiela1 Arthur Szlam1 Jason …","url":["https://arxiv.org/pdf/1903.03094","https://research.fb.com/wp-content/uploads/2019/11/Learning-to-Speak-and-Act-in-a-Fantasy-Text-Adventure-Game.pdf"]} -{"year":"2019","title":"Learning Word Ratings for Empathy and Distress from Document-Level User Responses","authors":["J Sedoc, S Buechel, Y Nachmany, A Buffone, L Ungar - arXiv preprint arXiv …, 2019"],"snippet":"… (2013) using 10-fold crossvalidation. For word embeddings we used off-the-shelf Fasttext subword embeddings (Mikolov et al., 2018).4 The embeddings are trained with subword information on Common Crawl (600B tokens) …","url":["https://arxiv.org/pdf/1912.01079"]} -{"year":"2019","title":"Leveraging Distributional and Relational Semantics for Knowledge Extraction from Textual Corpora","authors":["G ROSSIELLO, G SEMERARO, M DI CIANO - 2019"],"snippet":"Page 1. 
Page 2 …","url":["https://www.researchgate.net/profile/Gaetano_Rossiello/publication/333448156_Leveraging_Distributional_and_Relational_Semantics_for_Knowledge_Extraction_from_Textual_Corpora/links/5cee4fcca6fdcc18c8e9913b/Leveraging-Distributional-and-Relational-Semantics-for-Knowledge-Extraction-from-Textual-Corpora.pdf"]} -{"year":"2019","title":"Leveraging End-to-End Speech Recognition with Neural Architecture Search","authors":["A Baruwa, M Abisiga, I Gbadegesin, A Fakunle - arXiv preprint arXiv:1912.05946, 2019"],"snippet":"… We train a 3-gram, 5-gram and a 7-gram language model on common crawl 1. The relative performances are summarised in tables 1 and 2. Decoding is done by beam-searching for the output y that maximizes φ(c) given by …","url":["https://arxiv.org/pdf/1912.05946"]} -{"year":"2019","title":"Leveraging Hierarchical Representations for Preserving Privacy and Utility in Text","authors":["O Feyisetan, T Diethe, T Drake - arXiv preprint arXiv:1910.08917, 2019"],"snippet":"Page 1. Leveraging Hierarchical Representations for Preserving Privacy and Utility in Text Oluwaseyi Feyisetan Amazon sey@amazon.com Tom Diethe Amazon tdiethe@amazon.co.uk Thomas Drake Amazon draket@amazon.com …","url":["https://arxiv.org/pdf/1910.08917"]} -{"year":"2019","title":"Leveraging Pretrained Image Classifiers for Language-Based Segmentation","authors":["D Golub, R Martín-Martín, A El-Kishky, S Savarese - arXiv preprint arXiv:1911.00830, 2019"],"snippet":"… With Word2Vec we first embed the target labels l and the labels in the set of possible proxy labels in a shared vector space using 300-dimensional GloVe embeddings [29] trained on the Common Crawl 840B word corpus. 
For labels that contains multiple words …","url":["https://arxiv.org/pdf/1911.00830"]} -{"year":"2019","title":"Leveraging Unpaired Out-of-Domain Data for Image Captioning","authors":["X Chen, M Zhang, Z Wang, L Zuo, B Li, Y Yang - Pattern Recognition Letters, 2018"],"snippet":"Skip to main content …","url":["https://www.sciencedirect.com/science/article/pii/S0167865518309358"]} -{"year":"2019","title":"Leveraging Web Semantic Knowledge in Word Representation Learning","authors":["H Liu, L Fang, JG Lou, Z Li - 2019"],"snippet":"… We extract a large collection of semantic lists from the Common Crawl data7 using the patterns defined in Table 1 and filter out entries that do not exist in the vocabulary of the training data … 6http://dumps.wikimedia.org/enwiki/ 7http://commoncrawl.org/ Page 5 …","url":["https://www.aaai.org/Papers/AAAI/2019/AAAI-LiuHaoyan.142.pdf"]} -{"year":"2019","title":"Limsi-multisem at the ijcai semdeep-5 wic challenge: Context representations for word usage similarity estimation","authors":["AG Soler, M Apidianaki, A Allauzen - Proceedings of the 5th Workshop on Semantic …, 2019"],"snippet":"… Di- mensionality reduction is applied to a weighted average of the vectors of words in a sentence. Weighting is based on word frequency in Common Crawl. We use SIF in combination with 300- d GloVe vectors trained …","url":["https://www.aclweb.org/anthology/W19-5802"]} -{"year":"2019","title":"Lingua Custodia at WMT'19: Attempts to Control Terminology","authors":["F Burlot - arXiv preprint arXiv:1907.04618, 2019"],"snippet":"… to the decoder. Page 2. 2 Baseline The training parallel data provided for the task consisted of nearly 10M sentences, including Europarl (Koehn, 2005), Common-crawl, Newscommentary and Bicleaner07. 
The former was …","url":["https://arxiv.org/pdf/1907.04618"]} -{"year":"2019","title":"Linked Open Data Validity--A Technical Report from ISWS 2018","authors":["TA Ghor, E Agrawal, M Alam, O Alqawasmeh… - arXiv preprint arXiv …, 2019"],"snippet":"Page 1. Linked Open Data Validity A Technical Report from ISWS 2018 April 1, 2019 Bertinoro, Italy arXiv:1903.12554v1 [cs.DB] 26 Mar 2019 Page 2. Authors Main Editors Mehwish Alam, Semantic Technology Lab, ISTC-CNR …","url":["https://arxiv.org/pdf/1903.12554"]} -{"year":"2019","title":"Linking artificial and human neural representations of language","authors":["J Gauthier, R Levy - arXiv preprint arXiv:1910.01244, 2019"],"snippet":"… contrasts between the 384 sentences tested. 9We use publicly available GloVe vectors computed on Common Crawl, available in the spaCy toolkit as en vectors web lg. Page 6. 3 Results We first present the performance of …","url":["https://arxiv.org/pdf/1910.01244"]} -{"year":"2019","title":"LINSPECTOR: Multilingual Probing Tasks for Word Representations","authors":["GG Şahin, C Vania, I Kuznetsov, I Gurevych - arXiv preprint arXiv:1903.09442, 2019"],"snippet":"Page 1. LINSPECTOR Multilingual Probing Tasks for Word Representations Gözde Gül Sahin∗ UKP Lab / TU Darmstadt Clara Vania∗∗ ILCC / University of Edinburgh Ilia Kuznetsov UKP Lab / TU Darmstadt Iryna Gurevych UKP Lab / TU Darmstadt …","url":["https://arxiv.org/pdf/1903.09442"]} -{"year":"2019","title":"LIUM's Contributions to the WMT2019 News Translation Task: Data and Systems for German-French Language Pairs","authors":["F Bougares, J Wottawa, A Baillot, L Barrault, A Bardet - … 2: Shared Task Papers, Day 1 …, 2019"],"snippet":"… As it can be seen from tables 1 and 2, the effect of the cleaning step is more pronounced for the noisy parallel corpora (ie ParaCrawl and Common Crawl) … Page 3. 
131 #lines #token FR #token DE europarl-v7 1.7M 45.9M 40.9 …","url":["https://www.aclweb.org/anthology/W19-5307"]} -{"year":"2019","title":"Local bow-tie structure of the web","authors":["Y Fujita, Y Kichikawa, Y Fujiwara, W Souma, H Iyetomi - Applied Network Science, 2019"],"snippet":"… This fact means that the absence of self-similarity between page level and host/domain levels. Meusel et al. (2014, 2015) investigated the publicly accessible crawl of the web gathered by the Common Crawl Foundation in 2012 (CC12) (Meusel et al. 2014; 2015) …","url":["https://link.springer.com/article/10.1007/s41109-019-0127-2"]} -{"year":"2019","title":"Logical Layout Analysis using Deep Learning","authors":["A Zulfiqar, A Ul-Hasan, F Shafait"],"snippet":"… of the text zones. GloVE provides 300 dimensional vectors, one vector for each word. We have used the one trained on common crawl having 840 billion tokens and vectors for a total of 2.2 million words. Since we also want …","url":["https://tukl.seecs.nust.edu.pk/members/projects/conference/Logical-Layout-Analysis-using-Deep-Learning.pdf"]} -{"year":"2019","title":"Longitudinal Analysis of Misuse of Bitcoin⋆","authors":["K Eldefrawy, A Gehani, A Matton"],"snippet":"… its labels). Seed data was used from previously published onion data sets, references to onions in a large collection of DNS resolver logs, and an open repository of (non-onion) web crawl data, called the Common Crawl. The …","url":["http://www.csl.sri.com/users/gehani/papers/ACNS-2019.Bitcoin_Study.pdf"]} -{"year":"2019","title":"Look Who's Talking: Inferring Speaker Attributes from Personal Longitudinal Dialog","authors":["C Welch, V Pérez-Rosas, JK Kummerfeld, R Mihalcea - arXiv preprint arXiv …, 2019"],"snippet":"… The word embedding inputs to the context encoder are 300 dimensional. 8 Features Word Embeddings: We obtain word vector representations for each message using the GloVe Common Crawl pre-trained model [12]. 
We …","url":["https://arxiv.org/pdf/1904.11610"]} -{"year":"2019","title":"Low Resource Sequence Tagging with Weak Labels","authors":["E Simpson, J Pfeiffer, I Gurevych"],"snippet":"… For FAMULUS, we use 300-dimensional German fastText embeddings (Grave et al. 2018), and for NER and PICO we use 300-dimensional English GloVe 3 embeddings trained on 840 billion tokens from Common Crawl. To …","url":["https://public.ukp.informatik.tu-darmstadt.de/UKP_Webpage/publications/2020/2020_AAAI_SE_LowResourceSequence.pdf"]} -{"year":"2019","title":"Low Supervision, Low Corpus size, Low Similarity! Challenges in cross-lingual alignment of word embeddings: An exploration of the limitations of cross-lingual word …","authors":["A Dyer - 2019"],"snippet":"Page 1. Low Supervision, Low Corpus size, Low Similarity! Challenges in cross-lingual alignment of word embeddings An exploration of the limitations of cross-lingual word embedding alignment in truly low resource scenarios Andrew Dyer …","url":["http://www.diva-portal.org/smash/get/diva2:1365879/FULLTEXT01.pdf"]} -{"year":"2019","title":"LSTM for Dialogue Breakdown Detection: Exploration of Different Model Types and Word Embeddings","authors":["M Hendriksen, A Leeuwenberg, MF Moens"],"snippet":"… The words are uncased. GloVe Common Crawl … The results presented in the Table 2, allow to conclude that GloVe Common Crawl demonstrate the best performance, the GloVe Twitter being the second best, the word2vec Google News is the worst. Page 9 …","url":["http://workshop.colips.org/wochat/@iwsds2019/documents/dbdc4-mariya-hendriksen-etal.pdf"]} -{"year":"2019","title":"LTL-UDE at SemEval-2019 Task 6: BERT and Two-Vote Classification for Categorizing Offensiveness","authors":["P Aggarwal, T Horsmann, M Wojatzki, T Zesch - … of the 13th International Workshop on …, 2019"],"snippet":"… word representations. The resulting posting vector is re-scaled into the range zero to one. We use the pre-trained embeddings provided by Mikolov et al. 
(2018), which are trained on the common crawl corpus. Classifiers We …","url":["https://www.aclweb.org/anthology/S19-2121"]} -{"year":"2019","title":"ltl. uni-due at SemEval-2019 Task 5: Simple but Effective Lexico-Semantic Features for Detecting Hate Speech in Twitter","authors":["H Zhang, M Wojatzki, T Horsmann, T Zesch - … of the 13th International Workshop on …, 2019"],"snippet":"… of LSTMs and CNNs (LSTM + CNN). We initialize all setups with the 300-dimensional word embeddings provided by Mikolov et al. (2018), which were trained on the common crawl corpus. Furthermore, in all setups, we use …","url":["https://www.aclweb.org/anthology/S19-2078"]} -{"year":"2019","title":"Machine Reading of Clinical Notes for Automated ICD Coding","authors":["M Morisio, S Malacrino"],"snippet":"Page 1. Master degree course in Computer Engineering Master Degree Thesis Machine Reading of Clinical Notes for Automated ICD Coding Supervisor Prof. Maurizio Morisio Candidate Stefano Malacrin`o Internship tutors …","url":["https://webthesis.biblio.polito.it/10958/1/tesi.pdf"]} -{"year":"2019","title":"Machine Translation of Restaurant Reviews: New Corpus for Domain Adaptation and Robustness","authors":["A Bérard, I Calapodescu, M Dymetman, C Roux… - arXiv preprint arXiv …, 2019"],"snippet":"… data, we built a new training corpus named UGC (User Generated Content), closer to our domain, by combining: Multi UN, OpenSubtitles, Wikipedia, Books, Tatoeba, TED talks, ParaCrawl11 and Gourmet12 (See Table 3) …","url":["https://arxiv.org/pdf/1910.14589"]} -{"year":"2019","title":"Mapping languages and demographics with georeferenced corpora","authors":["J Dunn, B Adams - 2019"],"snippet":"… To answer this question, we collect and analyze two large global-scale datasets: web-crawled data from the Common Crawl (16.65 billion words) and social media data from Twitter (4.14 billion words). 
This paper evaluates demographic-type informa …","url":["https://ir.canterbury.ac.nz/bitstream/handle/10092/17132/GeoComputation_19.pdf?sequence=2"]} -{"year":"2019","title":"Massive vs. Curated Word Embeddings for Low-Resourced Languages. The Case of Yor\\ub\\'a and Twi","authors":["JO Alabi, K Amponsah-Kaakyire, DI Adelani… - arXiv preprint arXiv …, 2019"],"snippet":"… The resource par excellence is Wikipedia2, an online encyclopedia currently available in 307 languages3. Other initiatives such as Common Crawl4 or the Jehovahs Witnesses site5 are also repositories for multilingual …","url":["https://arxiv.org/pdf/1912.02481"]} -{"year":"2019","title":"Massively multilingual transfer for NER","authors":["A Rahimi, Y Li, T Cohn - Proceedings of the 57th Conference of the Association …, 2019"],"snippet":"Page 1. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 151–164 Florence, Italy, July 28 - August 2, 2019. c 2019 Association for Computational Linguistics 151 Massively Multilingual Transfer for NER …","url":["https://www.aclweb.org/anthology/P19-1015"]} -{"year":"2019","title":"MASTER UNIVERSITARIO EN INGENIERÍA DE TELECOMUNICACION","authors":["DB SANCHEZ - 2019"],"snippet":"Page 1. 
M´ASTER UNIVERSITARIO EN INGENIERÍA DE TELECOMUNICACI´ON TRABAJO FIN DE M´ASTER DESING AND DEVELOPMENT OF A HATE SPEECH DETECTOR IN SOCIAL NETWORKS BASED ON DEEP LEARNING TECHNOLOGIES …","url":["http://oa.upm.es/55618/1/TESIS_MASTER_DIEGO_BENITO_SANCHEZ_2019.pdf"]} -{"year":"2019","title":"Measuring Gender Bias in Word Embeddings across Domains and Discovering New Gender Bias Word Categories","authors":["K Chaloner, A Maldonado - Proceedings of the First Workshop on Gender Bias in …, 2019"],"snippet":"… WEAT's authors applied these tests to the publicly-available GloVe embeddings trained on the English-language “Common Crawl” corpus (Pennington et al., 2014) as well as the Skip-Gram (word2vec) embeddings …","url":["https://www.aclweb.org/anthology/W19-3804"]} -{"year":"2019","title":"Medical Word Embeddings for Spanish: Development and Evaluation","authors":["F Soares, M Villegas, A Gonzalez-Agirre, M Krallinger… - Proceedings of the 2nd …, 2019"],"snippet":"… makes available Word2Vec models pre-trained on about 100 billion words from Google News corpus in English1. Regarding other languages, on FastText website2 one can download pre-trained embeddings for 157 lan …","url":["https://www.aclweb.org/anthology/W19-1916"]} -{"year":"2019","title":"Meemi: Finding the Middle Ground in Cross-lingual Word Embeddings","authors":["Y Doval, J Camacho-Collados, L Espinosa-Anke… - arXiv preprint arXiv …, 2019"],"snippet":"… 10 Page 11. 
the WaCky project [23], containing 2 and 0.8 billion words, respectively.6 For Finnish and Russian, we use their corresponding Common Crawl monolingual corpora from the Machine Translation of News Shared Task 20167, composed of …","url":["https://arxiv.org/pdf/1910.07221"]} -{"year":"2019","title":"Membership Inference Attacks on Sequence-to-Sequence Models","authors":["S Hisamoto, M Post, K Duh - arXiv preprint arXiv:1904.05506, 2019"],"snippet":"… For example, e (d) i with d = l1 and i = 1 might refer to the first sentence in the Europarl subcorpus, while e (d) i with d = l2 and i = 1 might refer to the first sentence in the CommonCrawl subcorpus … CommonCrawl 5,000 5,000 2,389,123 2,379,123 N/A …","url":["https://arxiv.org/pdf/1904.05506"]} -{"year":"2019","title":"Metaphor Interpretation Using Word Embeddings","authors":["K Bar, N Dershowitz, L Dankin"],"snippet":"… relatively large corpus. Specifically, we use DepCC,1 a dependency-parsed “web-scale corpus” based on CommonCrawl.2 There are 365 million documents in the corpus, comprising about 252B tokens. Among other preprocessing …","url":["https://pdfs.semanticscholar.org/2033/a3f7b8b53ea277a811ac450139422793b08b.pdf"]} -{"year":"2019","title":"Methods and apparatus for detection of malicious documents using machine learning","authors":["JD Saxe, R HARANG - US Patent App. 16/257,749, 2019"],"snippet":"… decision tree, etc.). The memory 120 includes one or more datasets 112 (eg, a VirusTotal dataset and/or a Common Crawl dataset, as described in further detail below) and one or more training models 124. 
The malware detection …","url":["https://patentimages.storage.googleapis.com/fa/f8/d7/5843fb31e01d95/US20190236273A1.pdf"]} -{"year":"2019","title":"Microsoft Research Asia's Systems for WMT19","authors":["Y Xia, X Tan, F Tian, F Gao, W Chen, Y Fan, L Gong…"],"snippet":"… Dataset We concatenate “Europarl v9”, “News Commentary v14”, “Common Crawl corpus” and “Document-split Rapid corpus” as the ba- sic bilingual … We merge the “commoncrawl”, “europarl-v7” and part of “de-fr.bicleaner07” …","url":["http://www.statmt.org/wmt19/pdf/WMT0048.pdf"]} -{"year":"2019","title":"MIDAS: A Dialog Act Annotation Scheme for Open Domain Human Machine Spoken Conversations","authors":["D Yu, Z Yu - arXiv preprint arXiv:1908.10023, 2019"],"snippet":"… An example can be seen in the last USER2 utterance in Table 1. Word em- beddings are pre-trained with fastText (Mikolov et al., 2018) using Common Crawl. We evaluate the segmentation model on human labeled 2K human utterances of collected data …","url":["https://arxiv.org/pdf/1908.10023"]} -{"year":"2019","title":"Mining Discourse Markers for Unsupervised Sentence Representation Learning","authors":["D Sileo, T Van-De-Cruys, C Pradel, P Muller - arXiv preprint arXiv:1903.11850, 2019"],"snippet":"… We use sentences from the Depcc corpus (Panchenko et al., 2017), which consists of En- glish texts harvested from commoncrawl web data … Word embeddings are fixed GloVe embeddings with 300 dimensions, trained …","url":["https://arxiv.org/pdf/1903.11850"]} -{"year":"2019","title":"Mix-review: Alleviate Forgetting in the Pretrain-Finetune Framework for Neural Language Generation Models","authors":["T He, J Liu, K Cho, M Ott, B Liu, J Glass, F Peng - arXiv preprint arXiv:1910.07117, 2019"],"snippet":"… For pre-training, we use the large-scale CCNEWS data (Bakhtin et al., 2019) which is a de- duplicated subset of the English portion of the CommonCrawl news data-set1. 
The dataset contains news articles published worldwide …","url":["https://arxiv.org/pdf/1910.07117"]} -{"year":"2019","title":"MLT-DFKI at CLEF eHealth 2019: Multi-label Classification of ICD-10 Codes with BERT","authors":["S Amin, G Neumann, K Dunfield, A Vechkaeva… - CLEF (Working Notes), 2019"],"snippet":"… have stronger linguistic signals to classify the classes where German models make mistakes [1]. The baseline proved to be a strong one, with the highest precision of all and outperforming HAN and CNN models, for both German …","url":["https://www.researchgate.net/profile/Saadullah_Amin2/publication/335681972_MLT-DFKI_at_CLEF_eHealth_2019_Multi-label_Classification_of_ICD-10_Codes_with_BERT/links/5d742a00299bf1cb809043cd/MLT-DFKI-at-CLEF-eHealth-2019-Multi-label-Classification-of-ICD-10-Codes-with-BERT.pdf"]} -{"year":"2019","title":"Mono-and Cross-lingual Semantic Word Similarity for Urdu Language","authors":["G Fatima - 2019"],"snippet":"Page 1. I Monoand Cross-lingual Semantic Word Similarity for Urdu Language By Ghazeefa Fatima CIIT/FA17-RCS-016/LHR MS Thesis In Computer Science COMSATS University Islamabad Lahore Campus Page …","url":["http://dspace.cuilahore.edu.pk/xmlui/bitstream/handle/123456789/1571/Thesis.pdf?sequence=1"]} -{"year":"2019","title":"MoRTy: Unsupervised Learning of Task-specialized Word Embeddings by Autoencoding","authors":["N Rethmeier, B Plank - Proceedings of the 4th Workshop on Representation …, 2019"],"snippet":"… Hence, we demonstrate the method's application for single-task, multi-task, small, medium and web-scale (common crawl) corpus-size settings (Section 4). 
Learning to scale-up by pretraining on more (un-)labeled data is both: (a) not always possible in low-resource …","url":["https://www.aclweb.org/anthology/W19-4307"]} -{"year":"2019","title":"Multi-class Document Classification Using Improved Word Embeddings","authors":["BA Rabut, AC Fajardo, RP Medina - Proceedings of the 2nd International Conference …, 2019"],"snippet":"… ACM ISBN 978-1-4503-7290-9/19/10…$15.00 https://doi.org/10.1145/3366650.3366661 42 Page 2. Common crawl)[7]. The pre-trained word embedding vectors serve as input in the classification algorithm for evaluation and prediction …","url":["https://dl.acm.org/citation.cfm?id=3366661"]} -{"year":"2019","title":"Multi-domain Dialogue State Tracking as Dynamic Knowledge Graph Enhanced Question Answering","authors":["L Zhou, K Small - arXiv preprint arXiv:1911.06192, 2019"],"snippet":"… For experiments with GloVe embeddings, we use GloVe embeddings pre-trained on Common Crawl dataset.3 The dimension of GloVe embeddings is 300, and the dimension of character-level embeddings is 100, such that Dw = 400 …","url":["https://arxiv.org/pdf/1911.06192"]} -{"year":"2019","title":"Multi-Granular Text Encoding for Self-Explaining Categorization","authors":["Z Wang, Y Zhang, M Yu, W Zhang, L Pan, L Song, K Xu… - arXiv preprint arXiv …, 2019"],"snippet":"… for each set. Hyperparameters We use the 300-dimensional GloVe word vectors pre-trained from the 840B Common Crawl corpus (Pennington et al., 2014), and set the hidden size as 100 for node embeddings. We apply dropout …","url":["https://arxiv.org/pdf/1907.08532"]} -{"year":"2019","title":"Multi-Hop Paragraph Retrieval for Open-Domain Question Answering","authors":["Y Feldman, R El-Yaniv - arXiv preprint arXiv:1906.06606, 2019"],"snippet":"Page 1. 
Multi-Hop Paragraph Retrieval for Open-Domain Question Answering Yair Feldman and Ran El-Yaniv Department of Computer Science Technion – Israel Institute of Technology Haifa, Israel {yairf11, rani}@cs.technion.ac.il Abstract …","url":["https://arxiv.org/pdf/1906.06606"]} -{"year":"2019","title":"Multi-Resolution Models for Learning Multilevel Abstract Representation with Application to Information Retrieval","authors":["T Cakaloglu - 2019"],"snippet":"Page 1. MULTI-RESOLUTION MODELS FOR LEARNING MULTILEVEL ABSTRACT REPRESENTATION WITH APPLICATION TO INFORMATION RETRIEVAL A Dissertation Submitted to the Graduate School University of Arkansas at Little Rock …","url":["http://search.proquest.com/openview/4bce4201a6d742c4c771e08b17dec0cb/1?pq-origsite=gscholar&cbl=18750&diss=y"]} -{"year":"2019","title":"Multi-Team: A Multi-attention, Multi-decoder Approach to Morphological Analysis.","authors":["A Ustün, R van der Goot, G Bouma, G van Noord"],"snippet":"… 2018). For FastText, two sets of pre-trained embeddings are available: one is trained only on Wikipedia (Bojanowski et al., 2017), whereas the newer versions are also trained on CommonCrawl (Grave et al., 2018). Whenever …","url":["http://www.robvandergoot.com/doc/sigmorphon2019.pdf"]} -{"year":"2019","title":"Multilingual Culture-Independent Word Analogy Datasets","authors":["M Ulčar, M Robnik-Šikonja - arXiv preprint arXiv:1911.10038, 2019"],"snippet":"… language is shown in the Table 6. Table 6: Percentage of constructed analogy pairs covered by the first 200,000 word vectors from common crawl fastText embeddings. Language Coverage (%) Croatian 81.67 English 97.05 …","url":["https://arxiv.org/pdf/1911.10038"]} -{"year":"2019","title":"Multilingual Fake News Detection with Satire","authors":["G Guibon, L Ermakova, H Seffih, A Firsov…"],"snippet":"… Detection of Deception. Non-verbal communication (2014), https://nvc.uvt.nl/pdf/7.pdf 6. 
Bevendorff, J., Stein, B., Hagen, M., Potthast, M.: Elastic chatnoir: Search engine for the clueweb and the common crawl. In: Pasi, G., Piwowarski …","url":["https://www.researchgate.net/profile/Guillaume_Le_Noe-Bienvenu/publication/332803834_Multilingual_Fake_News_Detection_with_Satire_on_Vaccination_Topic/links/5d24917a458515c11c1f8724/Multilingual-Fake-News-Detection-with-Satire-on-Vaccination-Topic.pdf"]} -{"year":"2019","title":"Multilingual is not enough: BERT for Finnish","authors":["A Virtanen, J Kanerva, R Ilo, J Luoma, J Luotolahti… - arXiv preprint arXiv …, 2019"],"snippet":"… Second, we selected texts from the Common Crawl project6 by running aa map-reduce language detection job on the plain text material from Common Crawl. These sources were supplemented with plain text extracted …","url":["https://arxiv.org/pdf/1912.07076"]} -{"year":"2019","title":"Multilingual Sentence-Level Bias Detection in Wikipedia","authors":["D Aleksandrova, F Lareau, PA Ménard"],"snippet":"… Same BOW n-gram size and BOW size and value type as SGD. 5Available for 157 languages, pretrained on Common Crawl and Wikipedia (Grave et al., 2018) https:// fasttext.cc/docs/en/crawl-vectors.html 6Version 0.21.2 of the sklearn toolkit …","url":["https://www.researchgate.net/profile/Desislava_Aleksandrova/publication/334612399_Multilingual_Sentence-Level_Bias_Detection_in_Wikipedia/links/5d5bd0c392851c37636bfdf2/Multilingual-Sentence-Level-Bias-Detection-in-Wikipedia.pdf"]} -{"year":"2019","title":"Multimodal deep networks for text and image-based document classification","authors":["N Audebert, C Herold, K Slimani, C Vidal - APIA"],"snippet":"… For both methods, we use the SpaCy small English model [33] to perform the tokenization and punctuation removal. 
Individual word embeddings are then inferred using FastText [29] pretrained on the Common Crawl dataset …","url":["https://www.irit.fr/pfia2019/wp-content/uploads/2019/07/Actes_CH_PFIA2019.pdf#page=14"]} -{"year":"2019","title":"Multimodal Machine Translation with Embedding Prediction","authors":["T Hirasawa, H Yamagishi, Y Matsumura, M Komachi - arXiv preprint arXiv …, 2019"],"snippet":"… model. “+ pretrained” models are initialized with pretrained embeddings. 2018). These word embeddings are trained on Wikipedia and Common Crawl using the CBOW algorithm, and the dimension is 300. The embedding …","url":["https://arxiv.org/pdf/1904.00639"]} -{"year":"2019","title":"Multimodal Sentiment Analysis Using Deep Learning","authors":["R Sharma, N Le Tan, F Sadat - 2018 17th IEEE International Conference on Machine …, 2018"],"snippet":"… For the CNN model we used pre-trained word embeddings (GloVe 840B.300d). This is a 300-dimensional word embedding trained on 840 billion tokens from the common crawl dataset. The maximum sequence length is 200 …","url":["https://ieeexplore.ieee.org/abstract/document/8614265/"]} -{"year":"2019","title":"Named entity recognition for Polish","authors":["M Marcińczuk, A Wawer - Poznan Studies in Contemporary Linguistics, 2019"],"snippet":"AbstractIn this article we discuss the current state-of-the-art for named entity recognition for Polish. We present publicly available resources and open-source tools for named entity recognition. 
The overview includes various …","url":["https://www.degruyter.com/view/j/psicl.2019.55.issue-2/psicl-2019-0010/psicl-2019-0010.xml"]} -{"year":"2019","title":"Named Entity Recognition for Social Media Text","authors":["Y Zhang - 2019"],"snippet":"… We use two different pre-trained word embeddings based on Common Crawl data, which contains 840 billion tokens and 2.2 million vocabulary and Twitter data which contains 2 billion tweets, 27 billion tokens, and 1.2 million vocabulary …","url":["https://uu.diva-portal.org/smash/get/diva2:1366031/FULLTEXT01.pdf"]} -{"year":"2019","title":"Named Entity Recognition Using Gazetteer of Hierarchical Entities","authors":["M Štravs, J Zupančič - … Conference on Industrial, Engineering and Other …, 2019"],"snippet":"… To summarize, the proposed entity recognition method was tested using two languages (Slovenian and English), six different distance measures, and two different vector embeddings from Wikipedia (Wiki WV) and Common Crawl (CC WV) …","url":["https://link.springer.com/chapter/10.1007/978-3-030-22999-3_65"]} -{"year":"2019","title":"Named-entity recognition in Czech historical texts: Using a CNN-BiLSTM neural network model","authors":["H Hubková - 2019"],"snippet":"… We also tried to work with published pretrained word embeddings of contemporary Czech words provided by fastText6. These were trained on more than 178 millions of tokens from Wikipedia and 13 billions tokens based on common crawl (Grave et al., 2018) …","url":["http://www.diva-portal.org/smash/get/diva2:1325355/FULLTEXT01.pdf"]} -{"year":"2019","title":"Natural Language Processing for Book Recommender Systems","authors":["H Alharthi - 2019"],"snippet":"Page 1. 
Natural Language Processing for Book Recommender Systems by Haifa Alharthi Thesis submitted in partial fulfillment of the requirements for the PhD degree in Computer Science School of Electrical Engineering and Computer Science Faculty of Engineering …","url":["https://www.ruor.uottawa.ca/bitstream/10393/39134/1/Alharthi_Haifa_2019_thesis.pdf"]} -{"year":"2019","title":"Natural language processing using context-specific word vectors","authors":["B McCann, C Xiong, R Socher - US Patent App. 15/982,841, 2018"],"snippet":"… in the second language. In some examples, training of an MT-LSTM of the encoder 310 uses fixed 300-dimensional word vectors, such as the CommonCrawl-840B GloVe model for English word vectors. These word vectors …","url":["https://patentimages.storage.googleapis.com/49/87/1a/0d4e316e8e4194/US20180373682A1.pdf"]} -{"year":"2019","title":"Naver Labs Europe's Systems for the WMT19 Machine Translation Robustness Task","authors":["A Bérard, I Calapodescu, C Roux - arXiv preprint arXiv:1907.06488, 2019"],"snippet":"… 3.1 Pre-processing CommonCrawl filtering We first spent efforts on filtering and cleaning the WMT data (in particular CommonCrawl) … We filtered CommonCrawl as follows: we trained a baseline FR→EN model on WMT without …","url":["https://arxiv.org/pdf/1907.06488"]} -{"year":"2019","title":"Nested Variational Autoencoder for Topic Modeling on Microtexts with Word Vectors","authors":["T Trinh, T Quan, T Mai - arXiv preprint arXiv:1905.00195, 2019"],"snippet":"Page 1. Noname manuscript No. 
(will be inserted by the editor) Nested Variational Autoencoder for Topic Modeling on Microtexts with Word Vectors Trung Trinh · Tho Quan · Trung Mai Received: date / Accepted: date Abstract …","url":["https://arxiv.org/pdf/1905.00195"]} -{"year":"2019","title":"NeuMorph: Neural Morphological Tagging for Low-Resource Languages—An Experimental Study for Indic Languages","authors":["A Chakrabarty, A Chaturvedi, U Garain - ACM Transactions on Asian and Low …, 2019"],"snippet":"Page 1. 16 NeuMorph: Neural Morphological Tagging for Low-Resource Languages— An Experimental Study for Indic Languages ABHISEK CHAKRABARTY, AKSHAY CHATURVEDI, and UTPAL GARAIN, Indian Statistical Institute, India …","url":["https://dl.acm.org/citation.cfm?id=3342354"]} -{"year":"2019","title":"Neural Conversation Recommendation with Online Interaction Modeling","authors":["X Zeng, J Li, L Wang, KF Wong"],"snippet":"Page 1. Neural Conversation Recommendation with Online Interaction Modeling Xingshan Zeng1,2, Jing Li3∗, Lu Wang4, Kam-Fai Wong1,2 1The Chinese University of Hong Kong, Hong Kong, China 2MoE Key Laboratory …","url":["https://www.ccs.neu.edu/home/luwang/papers/EMNLP2019_zeng_li_wang_wong.pdf"]} -{"year":"2019","title":"Neural Facet Detection on Medical Resources","authors":["T Steffek - 2019"],"snippet":"Page 1. Neural Facet Detection on Medical Resources Thomas Steffek April 2, 2019 Page 2. Page 3. Beuth Hochschule für Technik Fachbereich VI - Informatik und Medien Database Systems and Text-based Information Systems (DATEXIS) Bachelor's thesis …","url":["https://prof.beuth-hochschule.de/fileadmin/prof/aloeser/Bachelorarbeit_Thomas-Steffek_with-title-page-1.1.pdf"]} -{"year":"2019","title":"Neural Feature Extraction for Contextual Emotion Detection","authors":["E Mohammadi, H Amini, L Kosseim"],"snippet":"… pretrained word embeddings. 
As the first word embedder, we chose GloVe (Pennington et al., 2014), which is pretrained on 840B tokens of web data from Common Crawl, and provides 300d vectors as word embeddings. As our sec …","url":["https://www.researchgate.net/profile/Hessam_Amini/publication/335704122_Neural_Feature_Extraction_for_Contextual_Emotion_Detection/links/5d76d6764585151ee4ab0908/Neural-Feature-Extraction-for-Contextual-Emotion-Detection.pdf"]} -{"year":"2019","title":"Neural Grammatical Error Correction by Simulating the Human Learner and the Human Proofreader","authors":["F Gaim, JW Chung, JC Park - 한국정보과학회 학술발표논문집, 2018"],"snippet":"… For this and the contrastive learning, we use a large 5-gram language model trained on the Common Crawl data [8]. Training and Decoding: To effectively handle out-of- vocabulary words, we use sub-word level tokenization and …","url":["http://www.dbpia.co.kr/Journal/ArticleDetail/NODE07613671"]} -{"year":"2019","title":"Neural Machine Translation for English–Kazakh with Morphological Segmentation and Synthetic Data","authors":["A Toral, L Edman, G Yeshmagambetova, J Spenader - … 2: Shared Task Papers, Day 1 …, 2019"],"snippet":"… 7.5 0.19 0.16 Wikititles 117.0 0.23 0.19 Table 1: Preprocessed EN–KK parallel training data. Words (M) Corpus Sentences (k) EN RU Common crawl 871.8 20.82 19.97 News-comm … Corpus Threshold Pairs left (k) CommonCrawl 0.7323 568.50 News Comm …","url":["https://www.aclweb.org/anthology/W19-5343"]} -{"year":"2019","title":"Neural network learning engine","authors":["CM Ormerod - US Patent App. 
16/286,566, 2019"],"snippet":"… skill and not to limit the invention to any one embodiment, commercial word embedding tools can include Google News word embedding, which has been trained on an extensive corpus of news items, and/or GloVe word …","url":["https://patentimages.storage.googleapis.com/94/fd/43/d4a3cbb7706fec/US20190266234A1.pdf"]} -{"year":"2019","title":"Neural network-based approaches for biomedical relation classification: A review","authors":["Y Zhang, H Lin, Z Yang, J Wang, Y Sun, B Xu, Z Zhao - Journal of Biomedical …, 2019"],"snippet":"… Word2vec, Google news, https://code.google.com/archive/p/word2vec. GloVe, Wikipedia, Gigaword, Common Crawl, Twitter, https://nlp.stanford. edu/projects/glove. fastText, Wikipedia, UMBC corpus, news corpus …","url":["https://www.sciencedirect.com/science/article/pii/S1532046419302138"]} -{"year":"2019","title":"Neural NLP models under low-supervision scenarios","authors":["Y Zhang - 2019"],"snippet":"Page 1. Copyright by Ye Zhang 2019 Page 2. The Dissertation Committee for Ye Zhang certifies that this is the approved version of the following dissertation: Neural NLP Models Under Low-supervision Scenarios Committee: Matthew A Lease, Supervisor …","url":["https://repositories.lib.utexas.edu/bitstream/handle/2152/75032/ZHANG-DISSERTATION-2019.pdf?sequence=1"]} -{"year":"2019","title":"Neural Text Style Transfer via Denoising and Reranking","authors":["J Lee, Z Xie, C Wang, M Drach, D Jurafsky, AY Ng - … of the Workshop on Methods for …, 2019"],"snippet":"… 3. Fluency The post-transfer sentence should remain grammatical and fluent. 
We use the average log probability of the sentence posttransfer with respect to a language model trained on CommonCrawl as our measure of fluency …","url":["https://www.aclweb.org/anthology/W19-2309"]} -{"year":"2019","title":"NLNDE: The Neither-Language-Nor-Domain-Experts' Way of Spanish Medical Document De-Identification","authors":["L Lange, H Adel, J Strötgen - 2019"],"snippet":"… S2 (FLAIR+fastText): In contrast to all other runs, the second run uses only domain-independent embeddings, ie, embeddings that have been trained on standard narrative and news data from Common Crawl and Wikipedia …","url":["http://ceur-ws.org/Vol-2421/MEDDOCAN_paper_5.pdf"]} -{"year":"2019","title":"NLP@ UIOWA at SemEval-2019 Task 6: Classifying the Crass using Multi-windowed CNNs","authors":["J Rusert, P Srinivasan - Proceedings of the 13th International Workshop on …, 2019"],"snippet":"… Word embeddings for Non-Out of Vocabulary (OOV) words are obtained from Glove (Pennington et al., 2014) which has been trained on Twitter data3. Experiments were also conducted with Glove common crawl data, but no visible improvement was found …","url":["https://www.aclweb.org/anthology/S19-2125"]} -{"year":"2019","title":"Noisy Parallel Corpus Filtering through Projected Word Embeddings","authors":["M Kurfalı, R Östling - Proceedings of the Fourth Conference on Machine …, 2019"],"snippet":"… Larger monolingual corpora based on Wikipedia and common crawl data were also provided.2 To train our model, we use all the parallel data available for the English-Sinhala and EnglishNepali pairs (summarized …","url":["https://www.aclweb.org/anthology/W19-5438"]} -{"year":"2019","title":"NRC Parallel Corpus Filtering System for WMT 2019","authors":["G Bernier-Colborne, C Lo - Proceedings of the Fourth Conference on Machine …, 2019"],"snippet":"… embedding models. Common Crawl data was not used to train the bilingual word embeddings. 2.2 … representation layer. 
We used XLM to train a model using almost all the available data, except for the monolingual English Common Crawl data. This …","url":["https://www.aclweb.org/anthology/W19-5434"]} -{"year":"2019","title":"Observing Dialogue in Therapy: Categorizing and Forecasting Behavioral Codes","authors":["J Cao, M Tanana, ZE Imel, E Poitras, DC Atkins…"],"snippet":"Page 1. Observing Dialogue in Therapy: Categorizing and Forecasting Behavioral Codes Jie Cao†, Michael Tanana‡, Zac E. Imel‡, Eric Poitras‡, David C. Atkins♦, Vivek Srikumar† †School of Computing, University of Utah …","url":["https://svivek.com/research/publications/cao2019observing.pdf"]} -{"year":"2019","title":"Observing LOD Using Equivalent Set Graphs: It Is Mostly Flat and Sparsely Linked","authors":["L Asprino, W Beek, P Ciancarini, F van Harmelen… - International Semantic Web …, 2019"],"snippet":"… The two largest available crawls of LOD available today are WebDataCommons and LOD-a-lot. WebDataCommons 2 [12] consists of \\(\\sim \\)31B triples that have been extracted from the CommonCrawl datasets (November 2018 version) …","url":["https://link.springer.com/chapter/10.1007/978-3-030-30793-6_4"]} -{"year":"2019","title":"Observing the LOD Cloud using Equivalent Set Graphs: the LOD Cloud is mostly flat and sparsely linked","authors":["L Asprino, W Beek, P Ciancarini, F van Harmelen…"],"snippet":"… The two largest available crawls of LOD available today are WebDataCommons and LOD-a-lot. WebDataCommons5 [12] consists of ∼31B triples that have been extracted from the CommonCrawl datasets (November 2018 version) …","url":["https://www.cs.vu.nl/~frankh/postscript/ISWC2019-LODanalytics.pdf"]} -{"year":"2019","title":"OECD Analytical Database on Individual Multinationals and their Affiliates (ADIMA)","authors":["G Pilgrim, N Ahmad, D Doyle - 2019"],"snippet":"… Secondly, information from MNE webpages is used from an open source 'copy of the internet' generated via web crawling from the Common Crawl 4 . 
This process develops a graph of the links between companies, from …","url":["https://www.gtap.agecon.purdue.edu/resources/download/9310.docx"]} -{"year":"2019","title":"Offensive Language and Hate Speech Detection for Danish","authors":["GI Sigurbergsson, L Derczynski - arXiv preprint arXiv:1908.04531, 2019"],"snippet":"… sample of text. Pre-trained Embeddings. The pre-trained FastText [24] embeddings are trained on data from the Common Crawl project and Wikipedia, in 157 languages (including English and Danish). FastText also provides …","url":["https://arxiv.org/pdf/1908.04531"]} -{"year":"2019","title":"On extracting data from tables that are encoded using HTML","authors":["JC Roldán, P Jiménez, R Corchuelo - Knowledge-Based Systems, 2019"],"snippet":"Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S095070511930509X"]} -{"year":"2019","title":"On Implementing the Binary Interpolative Coding Algorithm","authors":["GE PIBIRI - 2019"],"snippet":"… Table 4. Decoding time measured in average nanoseconds spent per decoded integer, for the run-aware implementation (ra) and for the not run-aware implementation. • CCNews is an English subset of the freely available news from CommonCrawl 3, consisting of …","url":["http://pages.di.unipi.it/pibiri/papers/BIC.pdf"]} -{"year":"2019","title":"On Measuring and Mitigating Biased Inferences of Word Embeddings","authors":["S Dev, T Li, J Phillips, V Srikumar - arXiv preprint arXiv:1908.09369, 2019"],"snippet":"Page 1. arXiv:1908.09369v1 [cs.CL] 25 Aug 2019 On Measuring and Mitigating Biased Inferences of Word Embeddings Sunipa Dev, Tao Li, Jeff Phillips, Vivek Srikumar School of Computing University of Utah Abstract Word …","url":["https://arxiv.org/pdf/1908.09369"]} -{"year":"2019","title":"On Measuring Social Biases in Sentence Encoders","authors":["C May, A Wang, S Bordia, SR Bowman, R Rudinger - arXiv preprint arXiv:1903.10561, 2019"],"snippet":"Page 1. 
On Measuring Social Biases in Sentence Encoders Chandler May1 Alex Wang2 Shikha Bordia2 Samuel R. Bowman2 Rachel Rudinger1 1Johns Hopkins University 2New York University {cjmay,rudinger}@jhu.edu {alexwang,sb6416,bowman}@nyu.edu Abstract …","url":["https://arxiv.org/pdf/1903.10561"]} -{"year":"2019","title":"On Optimally Partitioning Variable-Byte Codes","authors":["GE Pibiri, R Venturini - IEEE Transactions on Knowledge and Data …, 2019"],"snippet":"Page 1. 1041-4347 (c) 2018 IEEE. Personal use is permitted, but republication/ redistribution requires IEEE permission. See http://www.ieee.org/ publications_standards/publications/rights/index.html for more information. This …","url":["https://ieeexplore.ieee.org/abstract/document/8691421/"]} -{"year":"2019","title":"On relevance of enriching word embeddings in solving Natural Language Inference problem","authors":["T Wesołowski"],"snippet":"Page 1. Jagiellonian University Faculty of Mathematics and Computer Science Theoretical Computer Science Stationary Studies Index number: 1079621 Tomasz Wesołowski On relevance of enriching word embeddings in solving Natural Language Inference problem …","url":["http://algo.edu.pl/OnRelevanceOfWordEmbeddings.pdf"]} -{"year":"2019","title":"On Slicing Sorted Integer Sequences","authors":["GE Pibiri - arXiv preprint arXiv:1907.01032, 2019"],"snippet":"… 2009. • CCNews is a dataset of news freely available from CommonCrawl: http://commoncrawl.org/ 2016/10/news-dataset-available. Precisely, the datasets consists of the news appeared from 09/01/16 to 30/03/18. Identifiers …","url":["https://arxiv.org/pdf/1907.01032"]} -{"year":"2019","title":"On the Effect of Low-Frequency Terms on Neural-IR Models","authors":["S Hofstätter, N Rekabsaz, C Eickhoff, A Hanbury - arXiv preprint arXiv:1904.12683, 2019"],"snippet":"… collection. 
The details of the resulting 1Provided in the form of evaluation tuples: top1000.dev.tsv 242B lower-cased (CommonCrawl) from: https://nlp.stanford.edu/ projects/glove/ Table 1: Left: Details of the vocabularies. Right …","url":["https://arxiv.org/pdf/1904.12683"]} -{"year":"2019","title":"On the Robustness of Unsupervised and Semi-supervised Cross-lingual Word Embedding Learning","authors":["Y Doval, J Camacho-Collados, L Espinosa-Anke… - arXiv preprint arXiv …, 2019"],"snippet":"… google.com/site/rmyeid/projects/polyglot 2The sources of the web-corpora are: UMBC (Han et al., 2013), 1-billion (Cardellino, 2016), itWaC and sdeWaC (Ba- roni et al., 2009), Hamshahri (AleAhmad et al., 2009), and Common Crawl downloaded from http://www …","url":["https://arxiv.org/pdf/1908.07742"]} -{"year":"2019","title":"On Using Machine Learning to Identify Knowledge in API Reference Documentation","authors":["D Fucci, A Mollaalizadehbahnemiri, W Maalej - arXiv preprint arXiv:1907.09807, 2019"],"snippet":"… For the deep learning classifiers in our benchmark, we train GloVe [19] embeddings based on four large corpora, summarized in Table 3. The Common Crawl (CC) is a pre-trained embedding downloaded in …","url":["https://arxiv.org/pdf/1907.09807"]} -{"year":"2019","title":"On Using SpecAugment for End-to-End Speech Translation","authors":["P Bahar, A Zeyer, R Schlüter, H Ney"],"snippet":"… For MT training, we use the TED, and the OpenSubtitles2018 corpora, as well as the data provided by the WMT 2018 evaluation (Europarl, ParaCrawl, CommonCrawl, News Commentary, and Rapid), a total of 65M lines of parallel sentences …","url":["https://www-i6.informatik.rwth-aachen.de/publications/download/1122/Bahar-IWSLT-2019.pdf"]} -{"year":"2019","title":"One Epoch Is All You Need","authors":["A Komatsuzaki - arXiv preprint arXiv:1906.06669, 2019"],"snippet":"… Trinh & Le (2018) pointed out that CommonCrawl contains a large portion of corrupt samples, which makes it unsuitable for the training. 
The proportion of the corrupt samples in CommonCrawl is substantially higher than 50 …","url":["https://arxiv.org/pdf/1906.06669"]} -{"year":"2019","title":"Online Parallel Data Extraction with Neural Machine Translation","authors":["D Ruiter - 2019"],"snippet":"Page 1. Universität des Saarlandes Master's Thesis Online Parallel Data Extraction with Neural Machine Translation submitted in fulfillment of the degree requirements of the MSc in Language Science and Technology at Saarland University …","url":["https://www.clubs-project.eu/assets/publications/other/MSc_Thesis_Ruiter.pdf"]} -{"year":"2019","title":"Ontological Traceability using Natural Language Processing","authors":["R Benitez - 2019"],"snippet":"Page 1. Ontological Traceability using Natural Language Processing A master thesis presented by Edder de la Rosa Benitez Submitted to the Department of Organization and Information in partial fulfillment of the …","url":["https://dspace.library.uu.nl/bitstream/handle/1874/383214/Master_Thesis_E_De_la_Rosa.pdf?sequence=2"]} -{"year":"2019","title":"OpenCeres: When Open Information Extraction Meets the Semi-Structured Web","authors":["C Lockard, P Shiralkar, XL Dong"],"snippet":"… 5.1 Experimental Setup Datasets: Our primary dataset is the augmented SWDE corpus described in Section 4. 
In addition, we used the set of 315 movie websites (comprising 433,000 webpages) found in Common …","url":["http://lunadong.com/publication/openCeres_naacl.pdf"]} -{"year":"2019","title":"OPTIMIZE THE LEARNING RATE OF NEURAL ARCHITECTURE IN MYANMAR STEMMER","authors":["Y Oo, KM Soe"],"snippet":"… Word vector pre-trained on large text corpora have been released on [10] \"Learning Word Vectors for 157 Languages \" that trained on 3 billion words from Wikipedia and Common Crawl using Continuous bag-of-words (CBOW) 300-dimension …","url":["https://www.academia.edu/download/61248451/120191117-8847-1ko3nhm.pdf"]} -{"year":"2019","title":"Optimizer Comparison with Dropout for Neural Sequence Labeling in Myanmar Stemmer","authors":["O Yadanar, KM Soe - 2019 IEEE International Conference on Industry 4.0 …, 2019"],"snippet":"… Parameter initialization: It has used Learning Word Vectors for 157 Languages that trained on 3 billion words from Wikipedia and Common Crawl using CBOW 300-dimension (E. Grave, P. Bojanowski, P. Gupta, A. Joulin, T. Mikolov,2018) for both word and character …","url":["https://ieeexplore.ieee.org/abstract/document/8784850/"]} -{"year":"2019","title":"Optimizing Social Media Data Using Genetic Algorithm","authors":["S Das, AK Kolya, D Das - Metaheuristic Approaches to Portfolio Optimization, 2019"],"snippet":"Page 1. 126 Copyright © 2019, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. 
Chapter 6 DOI: 10.4018/978-1-5225-8103-1.ch006 ABSTRACT Twitter-based …","url":["https://www.igi-global.com/chapter/optimizing-social-media-data-using-genetic-algorithm/233176"]} -{"year":"2019","title":"Overview of the CLEF eHealth Evaluation Lab 2019","authors":["E Kanoulas, D Li, L Azzopardi, R Spijker, G Zuccon… - Experimental IR Meets …"],"snippet":"… More specifically, for the Abstract and Title Screening subtask the PubMed Document Identifiers (PMIDs) of potentially relevant 4http://commoncrawl. org/(last accessed on 28 May 2019) … It consists of web pages acquired from the CommonCrawl …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=LqGsDwAAQBAJ&oi=fnd&pg=PA322&dq=commoncrawl&ots=8duC39Wv1R&sig=Tt9HYmgrR17eWjcTJPZvsij9B5g"]} -{"year":"2019","title":"P-SIF: Document Embeddings Using Partition Averaging","authors":["V Gupta, A Saw, P Nokhiz, P Netrapalli, P Rai…"],"snippet":"… Page 5. evaluation. We use the PARAGRAM-SL999 (PSL) as word embeddings, obtained by training on the PPDB dataset. 7 We use the fixed weighting parameter a value of 10−3, and the word frequencies p(w) are estimated from the commoncrawl dataset …","url":["https://vgupta123.github.io/docs/AAAI-GuptaV.3656.pdf"]} -{"year":"2019","title":"P2L: Predicting Transfer Learning for Images and Semantic Relations","authors":["B Bhattacharjee, N Codella, JR Kender, S Huo… - arXiv preprint arXiv …, 2019"],"snippet":"… We use the CC-DBP [12] dataset: the text of Common Crawl1 and the semantic relations schema and training data from DBpedia [1]. DBpedia is a knowledge graph extracted from the infoboxes from Wikipedia … 4.3.2 Validation on Common Crawl - DBpedia …","url":["https://arxiv.org/pdf/1908.07630"]} -{"year":"2019","title":"PaDAWaNS","authors":["TLM Brands"],"snippet":"Page 1. 
PaDAWaNS Proactive Domain Abuse Warning and Notification System by TLM Brands to obtain the degree of Master of Science at the Delft University of Technology, to be defended publicly on Tuesday January 15, 2019 at 11:00 AM …","url":["https://www.sidnlabs.nl/downloads/theses/thesis_brands_padawans.pdf"]} -{"year":"2019","title":"Parallel External Memory Wavelet Tree and Wavelet Matrix Construction","authors":["J Ellert, F Kurpicz - International Symposium on String Processing and …, 2019"],"snippet":"… CC \\((\\sigma =242)\\) contains websites (without HTML tags) that have been crawled by the Common Crawl corpus (http://commoncrawl.org), and. Wiki \\((\\sigma =213)\\) are recent Wikipedia dumps containing XML files that …","url":["https://link.springer.com/chapter/10.1007/978-3-030-32686-9_28"]} -{"year":"2019","title":"Paraphrase-Sense-Tagged Sentences","authors":["A Cocos, C Callison-Burch, S Chen, D Khashabi… - Transactions, 2019"],"snippet":"Skip to main content …","url":["http://callison-burch.github.io/publications.html"]} -{"year":"2019","title":"PDRCNN: Precise Phishing Detection with Recurrent Convolutional Neural Networks","authors":["W Wang, F Zhang, X Luo, S Zhang - Security and Communication Networks, 2019"],"snippet":"… This method first encodes the URL string using the one-hot encoding method, and then inputs each encoded character vector into the LSTM neurons for training and testing. The method achieved an accuracy of 0.935 on the …","url":["http://downloads.hindawi.com/journals/scn/2019/2595794.pdf"]} -{"year":"2019","title":"Peer Review and the Production of Scholarly Knowledge: Automated Textual Analysis of Manuscripts Revised for Publication in Administrative Science Quarterly","authors":["D Strang, F Dokshin - The Production of Managerial Knowledge and …, 2019"],"snippet":"… numbers, and filter out “stop words.” Stop words are the most common words in the English language (eg, “the,” “not,” “a”). 
2 Next, for each word in the pre-processed sentences, we generate word vectors from a GloVe model …","url":["https://www.emeraldinsight.com/doi/abs/10.1108/S0733-558X20190000059006"]} -{"year":"2019","title":"PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization","authors":["J Zhang, Y Zhao, M Saleh, PJ Liu - arXiv preprint arXiv:1912.08777, 2019"],"snippet":"… T5 (Raffel et al., 2019) generalized the text-to- text framework to a variety of NLP tasks and showed the advantage of scaling up model size (to 11 billion parameters) and pre-training corpus, introducing C4, a massive text corpus …","url":["https://arxiv.org/pdf/1912.08777"]} -{"year":"2019","title":"People represent mental states in terms of rationality, social impact, and valence: Validating the 3d Mind Model","authors":["MA Thornton, D Tamir"],"snippet":"Page 1. Running head: MENTAL STATE DIMENSIONS 1 People represent mental states in terms of rationality, social impact, and valence: Validating the 3d Mind Model Mark A. Thornton* and Diana I. Tamir Department of …","url":["https://psyarxiv.com/akhpq/download?format=pdf"]} -{"year":"2019","title":"PhishFry–A Proactive Approach to Classify Phishing Sites using SCIKIT Learn","authors":["D Brites, M Wei"],"snippet":"… [Online]. Available: http://5000best.com/websites/. [Accessed 2019]. [26] OpenPhish, \"OpenPhish,\" 2019. [Online]. Available: https://openphish.com/. [27] Amazon Web Services, \"Common Crawl,\" Amazon, 2019. [Online] …","url":["https://www.shsu.edu/mxw032/publication/19gc-bw.pdf"]} -{"year":"2019","title":"Phishing Detection Based on Machine Learning and Feature Selection Methods","authors":["M Almseidin, AMA Zuraiq, M Al-kasassbeh, N Alnidami - International Journal of …, 2019"],"snippet":"… Phishing webpages are collected from Phish-Tank and Open-Phish, while legitimate web-pages are collected from Alexa and Common Crawl. 
These web-pages are downloaded on two distinct sessions, from January to May 2015 and through May to June 2017 …","url":["https://onlinejour.journals.publicknowledgeproject.org/index.php/i-jim/article/download/11411/6259"]} -{"year":"2019","title":"Phishing URL Detection Via Capsule-Based Neural Network","authors":["Y Huang, J Qin, W Wen - 2019 IEEE 13th International Conference on Anti …, 2019"],"snippet":"… [27] VirusTotal, https://www.virustotal.com/ [28] Common Crawl, https://commoncrawl.org/ [29] J. Ma, LK Saul, S. Savage, and GM Voelker, “Beyond blacklists: learning to detect malicious web sites from suspicious …","url":["https://ieeexplore.ieee.org/abstract/document/8925000/"]} -{"year":"2019","title":"Pivot-based Transfer Learning for Neural Machine Translation between Non-English Languages","authors":["Y Kim, P Petrov, P Petrushkov, S Khadivi, H Ney - arXiv preprint arXiv:1909.09524, 2019"],"snippet":"Page 1. Pivot-based Transfer Learning for Neural Machine Translation between Non-English Languages Yunsu Kim1∗ Petre Petrov1,2∗ Pavel Petrushkov2 Shahram Khadivi2 Hermann Ney1 1RWTH Aachen University, Aachen …","url":["https://arxiv.org/pdf/1909.09524"]} -{"year":"2019","title":"PKUSE at SemEval-2019 Task 3: Emotion Detection with Emotion-Oriented Neural Attention Network","authors":["L Ma, L Zhang, W Ye, W Hu - Proceedings of the 13th International Workshop on …, 2019"],"snippet":"… Table 1: Datasets for Semeval-2019 Task 3. 4.2 Experiments The model is implemented using Keras 2.0 (Chollet et al., 2017). We experiment with Stanford's GloVe 300 dimensional word embeddings trained on 840 billion words from Common Crawl …","url":["https://www.aclweb.org/anthology/S19-2049"]} -{"year":"2019","title":"PLAGO: A SYSTEM FOR PLAGIARISM DETECTION AND INTERVENTION IN MASSIVE COURSES","authors":["CT Guida - 2019"],"snippet":"… Web Crawl: Used for queuing and monitoring of importing web pages from the CommonCrawl.org public dataset (described in 3.5.2). 
• Admin Options … pages. Common Crawl is a non-profit organization which offers a public …","url":["https://smartech.gatech.edu/bitstream/handle/1853/61787/GUIDA-THESIS-2019.pdf?sequence=1&isAllowed=y"]} -{"year":"2019","title":"Poetry: Identification, Entity Recognition, and Retrieval","authors":["IV Foley, J John - 2019"],"snippet":"Page 1. University of Massachusetts Amherst ScholarWorks@UMass Amherst Doctoral Dissertations Dissertations and Theses 2019 Poetry: Identification, Entity Recognition, and Retrieval John J. Foley IV Follow this and additional …","url":["https://scholarworks.umass.edu/cgi/viewcontent.cgi?article=2628&context=dissertations_2"]} -{"year":"2019","title":"Populating Web Scale Knowledge Graphs using Distantly Supervised Relation Extraction and Validation","authors":["A Gliozzo, MR Glass, S Dash, M Canim - arXiv preprint arXiv:1908.08104, 2019"],"snippet":"… Also, a web-scale experiment conducted to extend DBPedia with knowledge from Common Crawl shows that our system is not only scalable but also does not require any adaptation cost, while yielding substantial accuracy gain. 1 Introduction …","url":["https://arxiv.org/pdf/1908.08104"]} -{"year":"2019","title":"Precise Detection of Content Reuse in the Web","authors":["C Ardi, J Heidemann - ACM SIGCOMM Computer Communication Review, 2019"],"snippet":"… We verify our algorithm and its choices with controlled experiments over three web datasets: Common Crawl (2009/10), GeoCities (1990s–2000s), and a phishing corpus (2014) … In the Common Crawl dataset of 40.5×109 chunks, we set the threshold to 105 …","url":["https://dl.acm.org/citation.cfm?id=3336940"]} -{"year":"2019","title":"Predicting ConceptNet Path Quality Using Crowdsourced Assessments of Naturalness","authors":["Y Zhou, S Schockaert, JA Shah - arXiv preprint arXiv:1902.07831, 2019"],"snippet":"… The number in parenthesis after each feature name indicates the dimension of that feature. 
Vertex embedding (300) This feature is taken directly from the 300dimensional GloVe (25) embedding, pre-trained on the Common Crawl2 dataset with 840 billion tokens …","url":["https://arxiv.org/pdf/1902.07831"]} -{"year":"2019","title":"Predicting Word Concreteness and Imagery","authors":["J Charbonnier, C Wartena - Proceedings of the 13th International Conference on …, 2019"],"snippet":"… The other two version (also available with and without subword information) with 2 million word vectors trained on the Common Crawl with 600B tokens. In our experiments we used the version trained on Common Crawl without …","url":["https://www.aclweb.org/anthology/W19-0415"]} -{"year":"2019","title":"Probing Contextualized Sentence Representations with Visual Awareness","authors":["Z Zhang, R Wang, K Chen, M Utiyama, E Sumita… - arXiv preprint arXiv …, 2019"],"snippet":"… We used newsdev2016 as the dev set and newstest2016 as the test set. 2) For the EN-DE translation task, 4.43M bilingual sentence pairs of the WMT14 dataset were used as training data, including Common Crawl, News Commentary, and Europarl v7 …","url":["https://arxiv.org/pdf/1911.02971"]} -{"year":"2019","title":"Product Classification Using Microdata Annotations","authors":["Z Zhang, M Paramita - International Semantic Web Conference, 2019"],"snippet":"… dimension of the continuous vector representation of each word. In this work, we use the GloVe word embedding vectors pre-trained on the Common Crawl corpus 3 with 300 dimensions. Since we are dealing with content from e …","url":["https://link.springer.com/chapter/10.1007/978-3-030-30793-6_41"]} -{"year":"2019","title":"Provision and Usage of Provenance Data in the WebIsALOD Knowledge Graph","authors":["S Hertling, H Paulheim - CEUR Workshop Proceedings, 2018"],"snippet":"… As described in [6], the Copyright c 2018 for this paper by its authors. Copying permitted for private and academic purposes. 1 https://commoncrawl.org 2 NP stands for noun phrase. 
3 https://www.w3.org/TR/skos-reference/ Page 2. isa:concept/_Gmail …","url":["http://ceur-ws.org/Vol-2317/article-06.pdf"]} -{"year":"2019","title":"PT-CoDE: Pre-trained Context-Dependent Encoder for Utterance-level Emotion Recognition","authors":["W Jiao, MR Lyu, I King - arXiv preprint arXiv:1910.08916, 2019"],"snippet":"… Here, we utilize the 300-dimensional pre-trained GloVe word vectors1 (Pennington et al., 2014) trained over 840B Common Crawl to initialize the word embedding layer. Those words that cannot be found in the GloVe …","url":["https://arxiv.org/pdf/1910.08916"]} -{"year":"2019","title":"QE BERT: Bilingual BERT using Multi-task Learning for Neural Quality Estimation","authors":["H Kim, JH Lim, HK Kim, SH Na - Proceedings of the Fourth Conference on Machine …, 2019"],"snippet":"… We used parallel data provided for the WMT19 news machine translation task6 to pre-train QE BERT. The English-Russian parallel data set consisted of the ParaCrawl corpus, Common Crawl corpus, News Commentary corpus, and Yandex …","url":["https://www.aclweb.org/anthology/W19-5407"]} -{"year":"2019","title":"QED: A Fact Verification and Evidence Support System","authors":["J Luken - 2019"],"snippet":"… embedding layers, as described below. 4.3.2 Embedding We use GloVe word embeddings (Pennington et al., 2014) with 300 dimensions pretrained using CommonCrawl to get a vector representation of the evidence sentence. We","url":["https://etd.ohiolink.edu/!etd.send_file?accession=osu1555074124008897&disposition=inline"]} -{"year":"2019","title":"Quantifying the Semantic Core of Gender Systems","authors":["DBPH Wallach"],"snippet":"… 4The FASTTEXT word embeddings were trained using Common Crawl and Wikipedia data, using CBOW with po- sition weights, with character n-grams of length 5. 
For more information, see http://fasttext.cc/docs/en …","url":["https://openreview.net/pdf?id=ByxcApoPwS"]} -{"year":"2019","title":"QuAVONet: Answering Questions on the SQuAD Dataset with QANet and Answer Verifier","authors":["J Cervantes"],"snippet":"… 5.2 Implementation Details For the word embeddings, I used the starter code's 300-dimensional GloVE vectors trained on the CommonCrawl dataset [6]. These embeddings remained unchanged and were not trained for any of my models …","url":["https://pdfs.semanticscholar.org/f71e/5c6cdd9e06068625eb82b3d9647823e80503.pdf"]} -{"year":"2019","title":"Quick and (maybe not so) Easy Detection of Anorexia in Social Media Posts","authors":["E Mohammadi, H Amini, L Kosseim - 2019"],"snippet":"… As shown in Figure 1, these token vectors are then fed to the hidden layer. Two different pretrained word embeddings were experimented with. The first word embedder was the 300d version of GloVe [26] that was pretrained …","url":["https://www.researchgate.net/profile/Hessam_Amini/publication/334848955_Quick_and_maybe_not_so_Easy_Detection_of_Anorexia_in_Social_Media_Posts/links/5d434b9992851cd04699c9ce/Quick-and-maybe-not-so-Easy-Detection-of-Anorexia-in-Social-Media-Posts.pdf"]} -{"year":"2019","title":"Quotient Hash Tables-Efficiently Detecting Duplicates in Streaming Data","authors":["R Géraud, M Lombard-Platet, D Naccache - arXiv preprint arXiv:1901.04358, 2019"],"snippet":"Page 1. arXiv:1901.04358v1 [cs.DS] 14 Jan 2019 Quotient Hash Tables - Efficiently Detecting Duplicates in Streaming Data Rémi Gérauda,c, Marius Lombard-Platet∗ a,b, and David Naccachea,c aDépartement d'informatique …","url":["https://arxiv.org/pdf/1901.04358"]} -{"year":"2019","title":"Racial bias in legal language","authors":["D Rice, JH Rhodes, T Nteta - Research & Politics, 2019"],"snippet":"Although racial bias in the law is widely recognized, it remains unclear how these biases are in entrenched in the language of the law, judicial opinions. 
In th...","url":["https://journals.sagepub.com/doi/pdf/10.1177/2053168019848930"]} -{"year":"2019","title":"Random Projection in Deep Neural Networks","authors":["PI Wójcik - arXiv preprint arXiv:1812.09489, 2018"],"snippet":"Page 1. Akademia Górniczo-Hutnicza im. Stanisława Staszica w Krakowie Wydział Informatyki, Elektroniki i Telekomunikacji Katedra Informatyki Rozprawa doktorska Zastosowania metody rzutu przypadkowego w głębokich …","url":["https://arxiv.org/pdf/1812.09489"]} -{"year":"2019","title":"Real or Fake? Learning to Discriminate Machine from Human Generated Text","authors":["A Bakhtin, S Gross, M Ott, Y Deng, MA Ranzato… - arXiv preprint arXiv …, 2019"],"snippet":"… CCNews: We collect a de-duplicated subset of the English portion of the CommonCrawl news dataset [Nagel, 2016], which totals around 16 Billion words … Sebastian Nagel. Cc-news. http://web.archive.org/save/http …","url":["https://arxiv.org/pdf/1906.03351"]} -{"year":"2019","title":"Real-time Claim Detection from News Articles and Retrieval of Semantically-Similar Factchecks","authors":["B Adler, G Boscaini-Gilroy - arXiv preprint arXiv:1907.02030, 2019"],"snippet":"… new problem. Many unsupervised text embeddings are trained on the CommonCrawl 1 dataset of approx. 840 billion tokens. This … dataset. Supervised datasets are 1CommonCrawl found at http://commoncrawl.org/ unlikely ever …","url":["https://arxiv.org/pdf/1907.02030"]} -{"year":"2019","title":"Real-time event detection using recurrent neural network in social sensors","authors":["VQ Nguyen, TN Anh, HJ Yang - International Journal of Distributed Sensor Networks, 2019"],"snippet":"We proposed an approach for temporal event detection using deep learning and multi-embedding on a set of text data from social media. 
First, a convolutional neural network augmented with multiple w...","url":["https://journals.sagepub.com/doi/pdf/10.1177/1550147719856492"]} -{"year":"2019","title":"Real-world Conversational AI for Hotel Bookings","authors":["B Li, N Jiang, J Sham, H Shi, H Fazal - arXiv preprint arXiv:1908.10001, 2019"],"snippet":"… We compare the following models: 1) Averaged GloVe + feedforward: We use 100dimensional, trainable GloVe embeddings [17] trained on Common Crawl, and produce sentence embeddings for each of the two inputs by averaging across all tokens …","url":["https://arxiv.org/pdf/1908.10001"]} -{"year":"2019","title":"Recommendation System with Aspect-Based Sentiment Analysis","authors":["Q Du, D Zhu, W Duan"],"snippet":"… The word vectors model we use is the \"en_core_web_lg\" model in spaCy. The model contains English multi-task CNN trained on OntoNotes 5[3], with GloVe[8] vectors trained on Common Crawl. It provides 300dimensional …","url":["http://rafaelsilva.com/wp-content/uploads/2018/12/014-Aspect-based-sentiment-analysis.pdf"]} -{"year":"2019","title":"Refining Word Representations by Manifold Learning","authors":["C Yonghe, H Lin, L Yang, Y Diao, S Zhang, F Xiaochao"],"snippet":"… judgment. This is exemplified by the WS353[Finkelstein et al., 2001]word similarity ground truth in Figure 1. Based on the Common Crawl corpus (42B), the Glove model is used to train 300-dimensional word vectors. The similarity …","url":["https://www.ijcai.org/proceedings/2019/0749.pdf"]} -{"year":"2019","title":"Regressing Word and Sentence Embeddings for Regularization of Neural Machine Translation","authors":["IJ Unanue, EZ Borzeshi, M Piccardi - arXiv preprint arXiv:1909.13466, 2019"],"snippet":"… De-En: The German-English dataset (de-en) has been taken from the WMT18 news translation shared task1. 
The training set contains over 5M sentence pairs collected from the Europarl, CommonCrawl and Newscommentary parallel corpora …","url":["https://arxiv.org/pdf/1909.13466"]} -{"year":"2019","title":"Rel4KC: A Reinforcement Learning Agent for Knowledge Graph Completion and Validation","authors":["X Lin, P Subasic, H Yin - 2019"],"snippet":"… The fact triples extracted from free text (Common Crawl) are then fed to the trained RL agent to determine their trustworthiness. If passing the validation, a triple is entered into target KG … The free text used in this study is Common Crawl corpus …","url":["http://www.cse.msu.edu/~zhaoxi35/DRL4KDD/1.pdf"]} -{"year":"2019","title":"Repositioning privacy concerns: Web servers controlling URL metadata","authors":["R Ferreira, RL Aguiar - Journal of Information Security and Applications, 2019"],"snippet":"… on empirical observation of web browsers and HTTP server implementations, and while some implementations allow longer URLs (eg, 100.000 octets) this value remains a reasonable assumption for practical purposes 1 . Our …","url":["https://www.sciencedirect.com/science/article/pii/S2214212618302588"]} -{"year":"2019","title":"Representation Learning for Question Classification via Topic Sparse Autoencoder and Entity Embedding","authors":["D Li, J Zhang, P Li - IEEE Big Data, 2018"],"snippet":"… WordNet 2. The embeddings of entity-related information are also trained with skip-gram. The word embeddings are initialized with the 300 dimensional pretrained vectors 3 from the Common Crawl of 840 billion tokens and 2.2 …","url":["http://research.baidu.com/Public/uploads/5c1c9ab3069f4.pdf"]} -{"year":"2019","title":"Representing Overlaps in Sequence Labeling Tasks with a Novel Tagging Scheme: bigappy-unicrossy","authors":["G Berk, B Erden, T Güngör"],"snippet":"… a language-independent system based on the bidirectional LSTM-CRF model provided by [7]. 
Similar to Deep-BGT system [2], we make use of the pretrained word embeddings provided by fastText [6]. The word embeddings …","url":["https://www.cmpe.boun.edu.tr/~gungort/papers/Representing%20Overlaps%20in%20Sequence%20Labeling%20Tasks%20with%20a%20Novel%20Tagging%20Scheme%20-%20bigappy-unicrossy.pdf"]} -{"year":"2019","title":"Review and Visualization of Facebook's FastText Pretrained Word Vector Model","authors":["JC Young, A Rusli - … International Conference on Engineering, Science, and …, 2019"],"snippet":"… Machine Learning (ML). Currently, FastText provides pretrained Word2Vec model for 157 language that trained on Common Crawl and Wikipedia (Bahasa Indonesia is one from the provided model) [15]. In its Word2Vec model …","url":["https://ieeexplore.ieee.org/abstract/document/8863015/"]} -{"year":"2019","title":"RIPPED: Recursive Intent Propagation using Pretrained Embedding Distances","authors":["M Ball - 2019"],"snippet":"… GloVe (Pennington et al., 2014) is a word embedding model trained on data from the Common Crawl corpus6. GloVe is a log-bilinear regression model that incorporates both local context windows and global matrix …","url":["https://cs.brown.edu/research/pubs/theses/ugrad/2019/ball.michael.pdf"]} -{"year":"2019","title":"RNN Embeddings for Identifying Difficult to Understand Medical Words","authors":["H Pylieva, A Chernodub, N Grabar, T Hamon - … of the 18th BioNLP Workshop and …, 2019"],"snippet":"… improve classification accuracy for our specific problem. We note that FastText word embeddings trained on Wikipedia and Common Crawl5 texts have an important part of words from our dataset. According to our analysis, the …","url":["https://www.aclweb.org/anthology/W19-5011"]} -{"year":"2019","title":"RoBERTa: A Robustly Optimized BERT Pretraining Approach","authors":["Y Liu, M Ott, N Goyal, J Du, M Joshi, D Chen, O Levy… - arXiv preprint arXiv …, 2019"],"snippet":"… (16GB). 
• CC-NEWS, which we collected from the En- glish portion of the CommonCrawl News dataset (Nagel, 2016) … STORIES, a dataset introduced in Trinh and Le (2018) containing a subset of CommonCrawl data filtered …","url":["https://arxiv.org/pdf/1907.11692"]} -{"year":"2019","title":"Robust Argument Unit Recognition and Classification","authors":["D Trautmann, J Daxenberger, C Stab, H Schütze… - arXiv preprint arXiv …, 2019"],"snippet":"… 2http://commoncrawl.org/2016/02/ february-2016-crawl-archive-now-available/ 3https://www.elastic.co/products/ elasticsearch the topic. Each document was checked for its corresponding WARC file at the Common Crawl In …","url":["https://arxiv.org/pdf/1904.09688"]} -{"year":"2019","title":"Robust Named Entity Recognition with Truecasing Pretraining","authors":["S Mayhew, N Gupta, D Roth - arXiv preprint arXiv:1912.07095, 2019"],"snippet":"… and Kauchak (2011) and used in Susanto, Chieu, and Lu (2016), and a specially preprocessed large dataset from English Common Crawl (CC).1 … 1commoncrawl.org 2In a naming clash, the moses script is called …","url":["https://arxiv.org/pdf/1912.07095"]} -{"year":"2019","title":"SACABench: Benchmarking Suffix Array Construction","authors":["J Bahne, N Bertram, M Böcker, J Bode, J Fischer… - International Symposium on …, 2019"],"snippet":"… We removed every character but A, C, G, and T. CommonCrawl (\\(\\sigma =242,\\mathrm {avg\\_lcp}=3,995, \\mathrm {max\\_lcp}=605,632\\)), which is a crawl of the web done by the CommonCrawl Corpus (http://commoncrawl.org) without any HTML tags …","url":["https://link.springer.com/chapter/10.1007/978-3-030-32686-9_29"]} -{"year":"2019","title":"Samsung and University of Edinburgh's System for the IWSLT 2019","authors":["J Wetesko, M Chochowski, P Przybysz, P Williams… - 2019"],"snippet":"… CommonCrawl and NewsCrawl corpora we used the approach de- scribed in [5]. 
Two RNN language models were constructed using Marian toolkit: in-domain trained with MUST-C corpus and out-of-domain created using …","url":["https://www.zora.uzh.ch/id/eprint/176328/1/IWSLT2019_paper_34.pdf"]} -{"year":"2019","title":"Satellite System Graph: Towards the Efficiency Up-Boundary of Graph-Based Approximate Nearest Neighbor Search","authors":["C Fu, C Wang, D Cai - arXiv preprint arXiv:1907.06146, 2019"],"snippet":"Page 1. Satellite System Graph: Towards the Efficiency Up-Boundary of Graph-Based Approximate Nearest Neighbor Search Cong Fu, Changxu Wang, Deng Cai ∗ The State Key Lab of CAD&CG, College of Computer Science …","url":["https://arxiv.org/pdf/1907.06146"]} -{"year":"2019","title":"SberQuAD--Russian Reading Comprehension Dataset: Description and Analysis","authors":["P Efimov, L Boytsov, P Braslavski - arXiv preprint arXiv:1912.09723, 2019"],"snippet":"… We tokenized text using spaCy16. To initialize the embedding layer for BiDAF, DocQA, DrQA, and R-Net we use Russian case-sensitive fastText embeddings trained on Common Crawl and Wikipedia17. This initialization is used for both questions and paragraphs …","url":["https://arxiv.org/pdf/1912.09723"]} -{"year":"2019","title":"SC-UPB at the VarDial 2019 Evaluation Campaign: Moldavian vs. Romanian Cross-Dialect Topic Identification","authors":["C Onose, DC Cercel, S Trausan-Matu - Proceedings of the Sixth Workshop on NLP …, 2019"],"snippet":"… (2018), Nordic Language Processing Laboratory (NLPL) word embedding repository (Kutuzov et al., 2017) and Common Crawl (CC) word vectors (Grave et al., 2018). 
The relevant details for each word vector representation model can be viewed in Table 2 …","url":["https://www.aclweb.org/anthology/W19-1418"]} -{"year":"2019","title":"Scalable Cross-Lingual Transfer of Neural Sentence Embeddings","authors":["H Aldarmaki, M Diab - arXiv preprint arXiv:1904.05542, 2019"],"snippet":"… We used WMT'12 Common Crawl data for crosslingual alignment, and WMT'12 test sets for evaluations. We used the augmented SNLI data de- scribed in (Dasgupta et al., 2018) and their translations for training the mono-lingual and joint InferSent models …","url":["https://arxiv.org/pdf/1904.05542"]} -{"year":"2019","title":"SECNLP: A Survey of Embeddings in Clinical Natural Language Processing","authors":["K KS, S Sangeetha - arXiv preprint arXiv:1903.01039, 2019","KK Subramanyam, S Sivanesan - Journal of Biomedical Informatics, 2019"],"snippet":"Skip to main content Skip to article …","url":["https://arxiv.org/pdf/1903.01039","https://www.sciencedirect.com/science/article/pii/S1532046419302436"]} -{"year":"2019","title":"Security In Plain TXT","authors":["A Portier, H Carter, C Lever"],"snippet":"… These seed domains are compiled from a combination of sources, including the Alexa top 1 million, the TLD zone files for COM, NAME, NET, ORG, and BIZ, sites captured by the Common Crawl project, multiple public domain …","url":["http://www.henrycarter.org/papers/plaintxt19.pdf"]} -{"year":"2019","title":"Security Posture Based Incident Forecasting","authors":["D Mulugeta - 2019"],"snippet":"Page 1. Page 2. Page 3. 
Security Posture Based Incident Forecasting A Thesis Submitted to the Faculty of Drexel University by Dagmawi Mulugeta in partial fulfillment of the requirements for the degree of Master of Science June 2019 Page 4 …","url":["http://search.proquest.com/openview/a6f070655e6045b93b595adc3b0965ae/1?pq-origsite=gscholar&cbl=18750&diss=y"]} -{"year":"2019","title":"See-Through-Text Grouping for Referring Image Segmentation","authors":["DJ Chen, S Jia, YC Lo, HT Chen, TL Liu - … of the IEEE International Conference on …, 2019"],"snippet":"… The representation st is visual-attended and its goodness is linked to the predicted segmentation map Pt−1. The GloVe model in our implementation is pre-trained on Common Crawl in 840B tokens. Following …","url":["http://openaccess.thecvf.com/content_ICCV_2019/papers/Chen_See-Through-Text_Grouping_for_Referring_Image_Segmentation_ICCV_2019_paper.pdf"]} -{"year":"2019","title":"Semantic Characteristics of Schizophrenic Speech","authors":["K Bar, V Zilberstein, I Ziv, H Baram, N Dershowitz… - arXiv preprint arXiv …, 2019"],"snippet":"… Specifically, we used Hebrew pretrained vectors provided by fastText (Grave et al., 2018), which were created from Wikipedia,3 as well as from other content extracted from the web with Common Crawl.4 Overall, 97% of the words in our corpus exist in fastText …","url":["https://arxiv.org/pdf/1904.07953"]} -{"year":"2019","title":"Semantic similarity measure for Thai language","authors":["P Wongchaisuwat"],"snippet":"… In this paper, pre-trained word vectors from fastText [10] and Thai2vec [1] corpus are used to compute the similarity between given words. 
The facebook research distributed the word vector trained on a common crawl and Wikipedia using the fastText model …","url":["https://saki.siit.tu.ac.th/isai-nlp2018/uploads_final/5__a25c56af02784c266f98ef0378499ff1/iSAI-NLP2018_0005_final.pdf"]} -{"year":"2019","title":"Semantic Textual Similarity Measures for Case-Based Retrieval of Argument Graphs","authors":["M Lenz, S Ollinger, P Sahitaj, R Bergmann - International Conference on Case-Based …, 2019"],"snippet":"… Word2vec GoogleNews 3 vectors are trained on the Google News dataset on about 100B tokens. GloVe 4 is trained on the Common Crawl dataset on 840B tokens. fastText 5 vectors are trained on Wikipedia and Common Crawl …","url":["https://link.springer.com/chapter/10.1007/978-3-030-29249-2_15"]} -{"year":"2019","title":"Semi-supervised machine learning with word embedding for classification in price statistics","authors":["H Martindale, E Rowland, T Flower - 16th Meeting of the Ottawa Group on Price …, 2019"],"snippet":"Page 1. Office for National Statistics 1 Semi-supervised machine learning with word embedding for classification: April 2019 26/04/2019 Semi-supervised machine learning with word embedding for classification in price statistics …","url":["https://eventos.fgv.br/sites/eventos.fgv.br/files/arquivos/u161/semi-supervised_ml_for_price_stats-ottawa_group.pdf"]} -{"year":"2019","title":"Semi-supervised Neural Machine Translation via Marginal Distribution Estimation","authors":["Y Wang, Y Xia, L Zhao, J Bian, T Qin, E Chen, TY Liu - IEEE/ACM Transactions on …, 2019"],"snippet":"Page 1. 2329-9290 (c) 2019 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information. 
This …","url":["https://ieeexplore.ieee.org/abstract/document/8732422/"]} -{"year":"2019","title":"SENPAI: Supporting Exploratory Text Analysis through Semantic & Syntactic Pattern Inspection","authors":["M Samory, T Mitra - 2019"],"snippet":"… lemmatization, so as to remove surface form variations which do not alter the meaning of a word, eg the lemma for both “moved” and “moves” is “move.” Then, we encode lemmas with the corresponding 300-dimensional word …","url":["http://people.cs.vt.edu/tmitra/public/papers/icwsm19-SENPAI.pdf"]} -{"year":"2019","title":"Sense disambiguation for Punjabi language using supervised machine learning techniques","authors":["VP Singh, P Kumar - Sādhanā, 2019"],"snippet":"… The character n-grams of length 5 have been applied to words in window of size 5 with 10 negative samples [10]. It has been trained on the Punjabi Wikipedia and the raw web data fetched by common crawl method. 6 Working of WSD System for Punjabi language …","url":["https://link.springer.com/article/10.1007/s12046-019-1206-x"]} -{"year":"2019","title":"Sentence and Word Weighting for Neural Machine Translation Domain Adaptation","authors":["PP Chen"],"snippet":"Page 1. Sentence and Word Weighting for Neural Machine Translation Domain Adaptation Pinzhen (Patrick) Chen Undergraduate Dissertation Artificial Intelligence and Software Engineering School of Informatics The …","url":["https://project-archive.inf.ed.ac.uk/ug4/20191530/ug4_proj.pdf"]} -{"year":"2019","title":"Sentence Classification and Information Retrieval for Petroleum Engineering","authors":["TF Ferraz, GABA Ferreira, FG Cozman, I Santos"],"snippet":"… Accordingly, we used a word embedding representation in order to represent the words as vectors and then be able to define and compute distances between terms. We used a pre-trained embedding model called ”Common Crawl” [Pennington et al. 
2014] …","url":["http://www.bracis2019.ufba.br/Camera_Ready/199118_1.pdf"]} -{"year":"2019","title":"Sentence Mover's Similarity: Automatic Evaluation for Multi-Sentence Texts","authors":["E Clark, A Celikyilmaz, NA Smith"],"snippet":"… We obtain GloVe embeddings, which are type-based, 300-dimensional embeddings trained on Common Crawl,9 using spaCy,10 while the ELMo em- beddings are character-based, 1,024-dimensional, contextual …","url":["https://homes.cs.washington.edu/~nasmith/papers/clark+celikyilmaz+smith.acl19.pdf"]} -{"year":"2019","title":"Sentence-Level Content Planning and Style Specification for Neural Text Generation","authors":["X Hua, L Wang - arXiv preprint arXiv:1909.00734, 2019"],"snippet":"… Statistics are shown in Table 1. Input Keyphrases and Label Construction. To obtain the input keyphrase candidates and their sentence-level selection labels, we first construct queries to retrieve passages from Wikipedia and news articles collected from commoncrawl …","url":["https://arxiv.org/pdf/1909.00734"]} -{"year":"2019","title":"Sentiment Analysis","authors":["D Sarkar - Text Analytics with Python, 2019"],"snippet":"In this chapter, we cover one of the most interesting and widely used aspects pertaining to natural language processing (NLP), text analytics, and machine learning. The problem at hand is sentiment...","url":["https://link.springer.com/chapter/10.1007/978-1-4842-4354-1_9"]} -{"year":"2019","title":"Separate Chaining Meets Compact Hashing","authors":["D Köppl - arXiv preprint arXiv:1905.00163, 2019"],"snippet":"Page 1. 
Separate Chaining Meets Compact Hashing Dominik Köppl Department of Informatics, Kyushu University, Japan Society for Promotion of Science Abstract While separate chaining is a common strategy for resolving …","url":["https://arxiv.org/pdf/1905.00163"]} -{"year":"2019","title":"Sequence Labeling to Detect Stuttering Events in Read Speech","authors":["S Alharbi, M Hasan, AJH Simons, S Brumfitt, P Green - Computer Speech & …, 2019"],"snippet":"… In the present study, we used a pre-trained GloVe model to generate word embeddings for each utterance. This model was trained on the Common Crawl (CC) corpus (1.9 M vocab) Pennington et al. (2014). 6. Automatic Speech Recognition System …","url":["https://www.sciencedirect.com/science/article/pii/S0885230819302967"]} -{"year":"2019","title":"Sequence Time Expression Recognition in the Spanish Clinical Narrative","authors":["A Ruiz-de-la-Cuadra, JL López-Cuadrado… - 2019 IEEE 32nd …, 2019"],"snippet":"… embedding (Table 1). Name Training Words Size Resource Glo200Ve Non-zero entries [37] 840 B 300 Common Crawl Spanish Billion Word [38] Word2Vec [39] 1.5 B 300 Sensem, Ancora Corpus, OPUS Project, etc. 
EVEX Word2Vec …","url":["https://ieeexplore.ieee.org/abstract/document/8787434/"]} -{"year":"2019","title":"Sequence-to-sequence Pre-training with Data Augmentation for Sentence Rewriting","authors":["Y Zhang, T Ge, F Wei, M Zhou, X Sun - arXiv preprint arXiv:1909.06002, 2019"],"snippet":"… Specifically, for a correct sentence, a back translation model trained with the public GEC data first generates 10 best outputs; then a 5-gram language model (JunczysDowmunt and Grundkiewicz, 2016) trained on Common …","url":["https://arxiv.org/pdf/1909.06002"]} -{"year":"2019","title":"Sequential Attention-based Network for Noetic End-to-End Response Selection","authors":["Q Chen, W Wang - arXiv preprint arXiv:1901.02609, 2019"],"snippet":"… Embedding Training corpus #Words glove.6B.300d Wikipedia + Gigaword 0.4M glove.840B.300d Common Crawl 2.2M glove.twitter.27B.200d Twitter 1.2M … 1.0M crawl-300d-2M.vec Common Crawl 2.0M word2vec.300d Linux manual pages 0.3M …","url":["https://arxiv.org/pdf/1901.02609"]} -{"year":"2019","title":"Sequential Matching Model for End-to-end Multi-turn Response Selection","authors":["Q Chen, W Wang - ICASSP 2019-2019 IEEE International Conference on …, 2019"],"snippet":"… Re- sults on the Ubuntu development set are shown in Table 3. We can see that word2vec embedding trained on the training dataset achieves better results than Fasttext [23] embedding trained on the unlabeled corpus …","url":["https://ieeexplore.ieee.org/abstract/document/8682538/"]} -{"year":"2019","title":"Sequential transfer learning in NLP for text summarization","authors":["P Fecht"],"snippet":"… With W and ˜W, the model generates two sets of word vectors which are supposed to perform equally if X is symmetric [64]. 
The GloVe model has been trained on varying sized datasets from one up to 42 billion (Common Crawl) tokens of data …","url":["https://www.inovex.de/fileadmin/files/Fachartikel_Publikationen/Theses/sequential-transfer-learning-in-nlp-for-text-summarization-pascal-fecht-2019.pdf"]} -{"year":"2019","title":"Should John Be More Likely A Physician Than Lisa: Bias-Performance Trade-Off for Gendered Pronoun Resolution","authors":["S Goel, J Li, H Zheng"],"snippet":"… the female gendered words. For our case, we are using the pre-trained Glove6 (these contain 840B tokens and are trained on the Common Crawl corpus) embeddings to get the hard-debiased embeddings. To obtain these …","url":["https://shivankgoel.github.io/notes/ds/Gendered_Pronoun_Resolution.pdf"]} -{"year":"2019","title":"Similarity Driven Approximation for Text Analytics","authors":["G Hu, Y Zhang, S Rigo, TD Nguyen - arXiv preprint arXiv:1910.07144, 2019"],"snippet":"… For example, the Google Books Ngram data set contains 2.2 TB of data [1], and the Common Crawl corpus contains petabytes of data [2]. Processing such large text data sets can be computationally expensive, especially if it involves sophisticated algorithms …","url":["https://arxiv.org/pdf/1910.07144"]} -{"year":"2019","title":"Situating Sentence Embedders with Nearest Neighbor Overlap","authors":["LH Lin, NA Smith - arXiv preprint arXiv:1909.10724, 2019"],"snippet":"… GloVe average 100 Wikipedia 2014 + Gigaword 5 (6B tokens, uncased) 300 Wikipedia 2014 + Gigaword 5 (6B tokens, uncased) 300 Common Crawl (840B tokens, cased) FastText average 300 Wikipedia + UMBC + statmt.org …","url":["https://arxiv.org/pdf/1909.10724"]} -{"year":"2019","title":"Six dimensions describe action understanding: the ACT-FASTaxonomy","authors":["MA Thornton, D Tamir, PS Hall - PsyArXiv. June, 2019"],"snippet":"… different algorithm. 
For the present purposes, we used a pre-trained version of GloVe based on the Common Crawl: a set of 840 billion tokens generated by scraping the entire web. For model comparison, we derived an …","url":["https://psyarxiv.com/gt6bw/download/?format=pdf"]} -{"year":"2019","title":"SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization","authors":["H Jiang, P He, W Chen, X Liu, J Gao, T Zhao - arXiv preprint arXiv:1911.03437, 2019"],"snippet":"… For example, the well-known “Common Crawl project” is producing text data extracted from web pages at a rate of about 20TB per month. The resulting extremely large text corpus allows us to train extremely large neural network-based general language models …","url":["https://arxiv.org/pdf/1911.03437"]} -{"year":"2019","title":"Social Relation Extraction from Chatbot Conversations: A Shortest Dependency Path Approach","authors":["M Glas - SKILL 2019-Studierendenkonferenz Informatik, 2019"],"snippet":"… The dictionary used here, 5 https://github.com/zalandoresearch/flair 6 https://spacy.io/ 7 http://commoncrawl.org/ 8 https://catalog.ldc.upenn.edu/LDC2013T19 Page 8. 8 Markus Glas Fig. 3: Example of a dependency path within a sentence containing two entities …","url":["https://dl.gi.de/bitstream/handle/20.500.12116/28989/SKILL2019-01.pdf?sequence=1"]} -{"year":"2019","title":"Social Sensing for Improving the User Experience in Orienteering","authors":["F Persia, S Helmer, S Pugacs, G Pilato - 2019 IEEE 13th International Conference on …, 2019"],"snippet":"… ing. 
In particular, we have used the spaCy “en core web md” language model, which is an “English multi-task Convolutional Neural Network trained on OntoNotes [34], with GloVe [35] vectors trained on Common Crawl [36]” …","url":["https://ieeexplore.ieee.org/abstract/document/8665498/"]} -{"year":"2019","title":"SOK: A Comprehensive Reexamination of Phishing Research from the Security Perspective","authors":["A Das, S Baki, AE Aassal, R Verma, A Dunbar - arXiv preprint arXiv:1911.00953, 2019"],"snippet":"Page 1. REEXAMINING PHISHING RESEARCH 1 SOK: A Comprehensive Reexamination of Phishing Research from the Security Perspective Avisha Das, Shahryar Baki, Ayman El Aassal, Rakesh Verma, and Arthur Dunbar …","url":["https://arxiv.org/pdf/1911.00953"]} -{"year":"2019","title":"Sparse Victory–A Large Scale Systematic Comparison of Count-Based and Prediction-Based Vectorizers for Text Classification","authors":["R Chakraborty, K Arora, A Elhence"],"snippet":"… Corpus (100 billion words). For greater ease of comparison both the GloVe and fastText models have a dimension of 300 and have been trained on the Common Crawl Corpus (640 billion words). The ELMo embedding has …","url":["https://acl-bg.org/proceedings/2019/RANLP%202019/pdf/RANLP022.pdf"]} -{"year":"2019","title":"ST-Sem: A Multimodal Method for Points-of-Interest Classification Using Street-Level Imagery","authors":["SS Noorian, A Psyllidis, A Bozzon - International Conference on Web Engineering, 2019"],"snippet":"… representing each word as a bag of character n-grams. We use pre-trained word vectors for 2 languages (English and German), trained on Common Crawl and Wikipedia 6 . 
According to the detected language l, the corresponding pre …","url":["https://link.springer.com/chapter/10.1007/978-3-030-19274-7_3"]} -{"year":"2019","title":"STAR-GCN: Stacked and Reconstructed Graph Convolutional Networks for Recommender Systems","authors":["J Zhang, X Shi, S Zhao, I King - arXiv preprint arXiv:1905.13129, 2019"],"snippet":"… For movie features, we concatenate the title name, release year, and one-hot encoded genres. We process title names by averaging the off-the-shelf 300-dimensional GloVe CommonCrawl word vector [Pennington et al., 2014] of each word …","url":["https://arxiv.org/pdf/1905.13129"]} -{"year":"2019","title":"STD: An Automatic Evaluation Metric for Machine Translation Based on Word Embeddings","authors":["P Li, C Chen, W Zheng, Y Deng, F Ye, Z Zheng - IEEE/ACM Transactions on Audio …, 2019"],"snippet":"… H and M are their means respectively. The word embedding used in our STD implementation is the freely-available fastText word embedding1 [11], which has 2 million word vectors trained on Common Crawl (600B tokens) …","url":["https://ieeexplore.ieee.org/abstract/document/8736840/"]} -{"year":"2019","title":"Streaming Infrastructure and Natural Language Modeling with Application to Streaming Big Data","authors":["Y Du - 2019"],"snippet":"… In our research, we try to find an alternative resource to study such data. Common Crawl is a massive multi-petabyte dataset hosted by Amazon. It contains archived HTML web page data from 2008 to date. Common …","url":["https://tigerprints.clemson.edu/all_dissertations/2329/"]} -{"year":"2019","title":"Structured Two-Stream Attention Network for Video Question Answering","authors":["L Gao, P Zeng, J Song, YF Li, W Liu, T Mei, HT Shen - Proceedings of the AAAI …, 2019"],"snippet":"… consisting of M words, is first converted into a sequence Q = {qm}M m=1, where qm is a one-hot vector representing the word at position m. 
Next, we employ the word embedding GloVe (Pennington, Socher, and Manning …","url":["https://www.aaai.org/ojs/index.php/AAAI/article/view/4602/4480"]} -{"year":"2019","title":"Study of Tibetan Text Classification based on fastText","authors":["W Ma, H Yu, J Ma - 3rd International Conference on Computer Engineering …"],"snippet":"… Every single text in all data is a line, and the \"__label__ + tag\" is added at the beginning of each line. Pre-training data set: fastText publishes word vectors in 157 languages [13], which are trained on Common Crawl and Wikipedia using fastText …","url":["https://download.atlantis-press.com/article/125913150.pdf"]} -{"year":"2019","title":"SUBMISSION OF WRITTEN WORK","authors":["O ERSITY, F CO"],"snippet":"Page 1. IT U N IV ERSITY O F CO PEN H A G EN SUBMISSION OF WRITTEN WORK Class code: Name of course: Course manager: Course e-portfolio: Thesis or project title: Supervisor: Full Name: Birthdate (dd/mm-yyyy) …","url":["http://www.derczynski.com/itu/docs/Multilingual%20hate%20speech%20detection.pdf","https://www.derczynski.com/itu/docs/Multilingual%20hate%20speech%20detection.pdf"]} -{"year":"2019","title":"Subword-based Compact Reconstruction of Word Embeddings","authors":["S Sasaki, J Suzuki, K Inui - Proceedings of the 2019 Conference of the North …, 2019"],"snippet":"… or embedding vectors), especially those trained on a vast amount of text data, such as the Common Crawl (CC) cor … word embeddings trained from GloVe.840B and fastText.600B are available: https://github.com/losyer …","url":["https://www.aclweb.org/anthology/N19-1353"]} -{"year":"2019","title":"SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems","authors":["A Wang, Y Pruksachatkun, N Nangia, A Singh…"],"snippet":"… We also include a baseline where for each task we simply predict the majority class, as well as a bag-of-words baseline where each input is represented as an average of its tokens' GloVe word vectors (300-dimensional and 
trained …","url":["https://w4ngatang.github.io/static/papers/superglue.pdf"]} -{"year":"2019","title":"Supervised Multimodal Bitransformers for Classifying Images and Text","authors":["D Kiela, S Bhooshan, H Firooz, D Testuggine - arXiv preprint arXiv:1909.02950, 2019"],"snippet":"… We describe each of the baselines in more detail below. • Bag of words (Bow) We sum 300-dimensional GloVe embeddings (Pennington, Socher, and Manning 2014) (trained on Common Crawl) for all words in the text …","url":["https://arxiv.org/pdf/1909.02950"]} -{"year":"2019","title":"Supplementary Material for “Multi-task Learning of Hierarchical Vision-Language Representation”","authors":["DK Nguyen, T Okatani"],"snippet":"… Questions and captions were tokenized using Python Natural Language Toolkit (nltk) [2]. We used the vocabulary provided by the CommonCrawl-840B GloVe model for English word vectors [8], and set out-of-vocabulary words to unk …","url":["https://pdfs.semanticscholar.org/83a6/fd8eadd36c22bdac861bd2b20aba87968c3d.pdf"]} -{"year":"2019","title":"Survey on Publicly Available Sinhala Natural Language Processing Tools and Research","authors":["N de Silva - arXiv preprint arXiv:1906.02358, 2019"],"snippet":"… [21] further provided two monolingual corpora for Sinhala. Those were a 155k+ sentences of filtered Sinhala Wikipedia8 and 5178k+ sentences of Sinhala common crawl9. 2.2 Data Sets Specific data sets for Sinhala, as expected. is scarce …","url":["https://arxiv.org/pdf/1906.02358"]} -{"year":"2019","title":"Synchronous Bidirectional Neural Machine Translation","authors":["L Zhou, J Zhang, C Zong - Transactions of the Association for Computational …, 2019"],"snippet":"Create a new account. Email. Returning user. Can't sign in? Forgot your password? Enter your email address below and we will send you the reset instructions. Email. Cancel. 
If the address matches an existing account you will …","url":["https://www.mitpressjournals.org/doi/full/10.1162/tacl_a_00256"]} -{"year":"2019","title":"Syntactic dependencies correspond to word pairs with high mutual information","authors":["R Futrell, P Qian, E Gibson, E Fedorenko, IA Blank"],"snippet":"… 2.5 Dataset We use the Common Crawl corpus (Buck et al., 2014) of English web text … Entropy, 19:275–307. Buck, C., Heafield, K., and Van Ooyen, B. (2014). N-gram counts and language models from the common crawl. In …","url":["http://socsci.uci.edu/~rfutrell/papers/futrell2019syntactic.pdf"]} -{"year":"2019","title":"Syntactically Supervised Transformers for Faster Neural Machine Translation","authors":["N Akoury, K Krishna, M Iyyer - arXiv preprint arXiv:1906.02780, 2019"],"snippet":"… For English-German, we evaluate on WMT 2014 En↔De as well as IWSLT 2016 En→De, while for English-French we train on the Europarl / Common Crawl subset of the full WMT 2014 En→Fr data and evaluate over the full dev/test sets …","url":["https://arxiv.org/pdf/1906.02780"]} -{"year":"2019","title":"Syntax-aware Multilingual Semantic Role Labeling","authors":["S He, Z Li, H Zhao - arXiv preprint arXiv:1909.00310, 2019"],"snippet":"… The pre-trained word em- bedding is 100-dimensional GloVe vectors (Pennington et al., 2014) for English, 300-dimensional fastText vectors (Grave et al., 2018) trained on Common Crawl and Wikipedia for other languages …","url":["https://arxiv.org/pdf/1909.00310"]} -{"year":"2019","title":"Syntax-Aware Sentence Matching with Graph Convolutional Networks","authors":["Y Lei, Y Hu, X Wei, L Xing, Q Liu - International Conference on Knowledge Science …, 2019"],"snippet":"… 4.2 Experiment Setting. In order to compare with the baseline, we use the same setting as BiMPM. 
We initialize word embeddings in the word representation layer with the 300-dimensional GloVe word vectors …","url":["https://link.springer.com/chapter/10.1007/978-3-030-29563-9_31"]} -{"year":"2019","title":"System and method for chat community question answering","authors":["N Londhe, S Kannan, N Bojja - US Patent App. 16/272,142, 2019"],"snippet":"US20190260694A1 - System and method for chat community question answering - Google Patents. System and method for chat community question answering. Download PDF Info. Publication number US20190260694A1. US20190260694A1 …","url":["https://patentimages.storage.googleapis.com/0c/f5/b6/7687c26806b141/US20190260694A1.pdf"]} -{"year":"2019","title":"System and method for concise display of query results via thumbnails with indicative images and differentiating terms","authors":["TP O'hara - US Patent 10,459,999, 2019"],"snippet":"… grams). In the case of a meta-search engine without access to the underlying indexes, one approach is to use data from the Common Crawl to derive global n-gram counts for TF-IDF and language modeling filtering. Another …","url":["http://www.freepatentsonline.com/10459999.html"]} -{"year":"2019","title":"System for creating a reasoning graph and for ranking of its nodes","authors":["B Agapiev - US Patent App. 15/793,751, 2019"],"snippet":"… View, Calif. and Common Crawl Foundation of Beverly Hills, Calif.) are processed (20) to identify statements of causal relationships (22, 24), which are then analyzed to extract causes and associated effect pairs (26). These …","url":["https://patentimages.storage.googleapis.com/ca/d2/fd/8b3a7f8fa4ec15/US20190073420A1.pdf"]} -{"year":"2019","title":"TüBa-D/DP Stylebook","authors":["D de Kok, S Pütz - 2019"],"snippet":"… Table 1: Subcorpora of the TüBa-D/DP. 
Subcorpus Genre Sentences Tokens Europarl Parliamentary proceedings 2.2M 55M taz (1986-2009) Newspaper 29.9M 393.7M Wikipedia (2019) Encyclopedia 42.2M …","url":["https://sfb833-a3.github.io/tueba-ddp/stylebook/stylebook-r4.pdf"]} -{"year":"2019","title":"TabbyXL: Rule-Based Spreadsheet Data Extraction and Transformation","authors":["A Shigarov, V Khristyuk, A Mikhailov, V Paramonov - International Conference on …, 2019"],"snippet":"… a spreadsheet-like format. Barik et al. [2] extracted 0.25M unique spreadsheets from Common Crawl 1 archive. Chen and Cafarella [6] reported about 0.4M spreadsheets of ClueWeb09 Crawl 2 archive. Spreadsheets can be …","url":["https://link.springer.com/chapter/10.1007/978-3-030-30275-7_6"]} -{"year":"2019","title":"Tackling Graphical NLP problems with Graph Recurrent Networks","authors":["L Song - 2019"],"snippet":"Page 1. Tackling Graphical NLP problems with Graph Recurrent Networks by Linfeng Song Submitted in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy Supervised by Professor Daniel Gildea Department of Computer Science …","url":["https://www.cs.rochester.edu/~lsong10/papers/Linfeng_Song_PhD_thesis.pdf"]} -{"year":"2019","title":"TARGER: Neural Argument Mining at Your Fingertips","authors":["A Chernodub, O Oliynyk, P Heidenreich, A Bondarenko…"],"snippet":"… Our background collection for the retrieval of argumentative sentences is formed by the DepCC corpus (Panchenko et al., 2018), a linguistically pre-processed subset of the Common Crawl containing 14.3 … Building a …","url":["https://webis.de/downloads/publications/papers/bondarenko_2019b.pdf"]} -{"year":"2019","title":"Task definition, annotated dataset, and supervised natural language processing models for symptom extraction from unstructured clinical notes","authors":["JM Steinkamp, W Bala, A Sharma, JJ Kantrowitz - Journal of Biomedical Informatics, 2019"],"snippet":"… Our word embeddings consisted of 300-dimensional Global Vectors (GloVe) [35] 
trained on the web common crawl data set concatenated with 300-dimensional custom trained FastText [28] vectors trained on the entirety …","url":["https://www.sciencedirect.com/science/article/pii/S153204641930276X"]} -{"year":"2019","title":"Team EP at TAC 2018: Automating data extraction in systematic reviews of environmental agents","authors":["A Nowak, P Kunstman - arXiv preprint arXiv:1901.02081, 2019"],"snippet":"… The model architecture is shown in Figure 3. Embeddings layer: Each token is represented by 1452 dimensional vector, consisting of: • 300-dimensional GloVe (Pennington et al., 2014) embedding (cased, trained on 840B tokens from Common Crawl) …","url":["https://arxiv.org/pdf/1901.02081"]} -{"year":"2019","title":"Techniques for Inverted Index Compression","authors":["GE Pibiri, R Venturini - arXiv preprint arXiv:1908.10598, 2019"],"snippet":"Page 1. Techniques for Inverted Index Compression GIULIO ERMANNO PIBIRI, ISTI-CNR, Italy ROSSANO VENTURINI, University of Pisa, Italy The data structure at the core of large-scale search engines is the inverted index …","url":["https://arxiv.org/pdf/1908.10598"]} -{"year":"2019","title":"Tell me you can read me","authors":["CE SUM, T THEOR"],"snippet":"Page 55. Complying with the obligation of transparency imposes indeed on the data controller the prior obligation to determine–deliberately or not, consciously or not–who are the targeted data subjects, and what are they supposed to find intelligible and easily accessible …","url":["https://pdfs.semanticscholar.org/8c2a/8c105a49e59c457c68b8390b49694c4c4c20.pdf#page=55"]} -{"year":"2019","title":"Temporal Context-Aware Representation Learning for Question Routing","authors":["X Zhang, W Cheng, B Zong, Y Chen, J Xu, D Li…"],"snippet":"… The state-of-the-art document embedding model, InferSent [3], is applied to compute the similarity between questions. 
We use the pre-trained 300-dimensional word vectors from fastText[19], which is trained on Common Crawl containing 600B tokens …","url":["https://xuczhang.github.io/papers/wsdm20_tcqr.pdf"]} -{"year":"2019","title":"Temporally Grounding Language Queries in Videos by Contextual Boundary-aware Prediction","authors":["J Wang, L Ma, W Jiang - arXiv preprint arXiv:1909.05010, 2019"],"snippet":"… 2015) features are adopted for all compared methods. Each word from the query is represented by GloVe (Pennington, Socher, and Manning 2014) word embedding vectors pre-trained on Common Crawl. We set hidden neuron size of LSTM to 512 …","url":["https://arxiv.org/pdf/1909.05010"]} -{"year":"2019","title":"Text Classification Using SVM Enhanced by Multithreading and CUDA","authors":["S Chatterjee, PG Jose, D Datta - International Journal of Modern Education and …, 2019"],"snippet":"Page 1. IJ Modern Education and Computer Science, 2019, 1, 11-23 Published Online January 2019 in MECS (http://www.mecs-press.org/) DOI: 10.5815/ijmecs.2019.01.02 Copyright © 2019 MECS IJ Modern Education and Computer Science, 2019, 1, 11-23 …","url":["http://search.proquest.com/openview/ab6d5a2cbbb23e2cba642a09784b043e/1?pq-origsite=gscholar&cbl=2026674"]} -{"year":"2019","title":"Text Corpus for NLP","authors":["C Room"],"snippet":"… Sep 2019. Common Crawl publishes 240 TiB of uncompressed data from 2.55 billion web pages. Of these, 1 billion URLs were not present in previous crawls. Common Crawl started in 2008. In 2013, they moved from ARC to Web ARChive (WARC) file format …","url":["https://devopedia.org/text-corpus-for-nlp"]} -{"year":"2019","title":"TEXT QUALITY EVALUATION METHODS AND PROCESSES","authors":["AA Pala, A Kagoshima, M Tober - US Patent App. 
15/863,408, 2019"],"snippet":"… In one possible implementation, the reference text 2000 can be parts, or the complete version, of Wikipedia, for a given language, or one or more books, or Common Crawl, or any other corpus that consists of human-written high quality text …","url":["http://www.freepatentsonline.com/y2019/0213247.html"]} -{"year":"2019","title":"The AFRL WMT19 Systems: Old Favorites and New Tricks","authors":["J Gwinnup, G Erdmann, T Anderson - Proceedings of the Fourth Conference on …, 2019"],"snippet":"… Corpus Total Retained CommonCrawl 723,256 655,069 newscommentary 290,866 264,089 Yandex 1,000,000 901,307 ParaCrawl 12,061,155 5,173,675 UN2016 11,365,709 9,871,406 Total Lines 25,440,968 16,865,546 …","url":["https://www.aclweb.org/anthology/W19-5318"]} -{"year":"2019","title":"The BEA-2019 Shared Task on Grammatical Error Correction","authors":["C Bryant, M Felice, ØE Andersen, T Briscoe - … Workshop on Innovative Use of NLP for …, 2019"],"snippet":"Page 1. Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 52–75 Florence, Italy, August 2, 2019. c 2019 Association for Computational Linguistics 52 The BEA-2019 …","url":["https://www.aclweb.org/anthology/W19-4406"]} -{"year":"2019","title":"The BLCU System in the BEA 2019 Shared Task","authors":["L Yang, C Wang - Proceedings of the Fourteenth Workshop on Innovative …, 2019"],"snippet":"… JunczysDowmunt and Grundkiewicz (2016); JunczysDowmunt et al. (2018) utilize the Common Crawl corpus to train the language model and pre-train part of the NMT model. 
Inspired by these studies, we also try to use a monolingual corpus for data augmentation …","url":["https://www.aclweb.org/anthology/W19-4421"]} -{"year":"2019","title":"The Geometry of Culture: Analyzing the Meanings of Class through Word Embeddings","authors":["AC Kozlowski, M Taddy, JA Evans - American Sociological Review, 2019"],"snippet":"We argue word embedding models are a useful tool for the study of culture using a historical analysis of shared understandings of social class as an empirical case. Word embeddings represent semant...","url":["https://journals.sagepub.com/doi/abs/10.1177/0003122419877135"]} -{"year":"2019","title":"The impact of individual audit partners on their clients' narrative disclosures","authors":["C Mauritz, M Nienhaus, C Oehler - 2019"],"snippet":"Page 1. The impact of individual audit partners on their clients' narrative disclosures ∗ Christoph Mauritz1, Martin Nienhaus2, and Christopher Oehler2 1University of Münster 2Goethe-University Frankfurt September 5, 2019 Abstract …","url":["http://www.geaba.de/wp-content/uploads/2019/09/Mauritz-Nienhaus-Oehler_2.pdf"]} -{"year":"2019","title":"The LAIX Systems in the BEA-2019 GEC Shared Task","authors":["R Li, C Wang, Y Zha, Y Yu, S Guo, Q Wang, Y Liu… - … on Innovative Use of NLP for …, 2019"],"snippet":"… Table 1 lists the data sets used in Restricted Track and Unrestricted Track, including FCE (Yannakoudakis et al., 2011), Lang-82 (Mizumoto et al., 2012), NUCLE (Ng et al., 2014), W&I+LOCNESS (Bryant et al., 2019) and Com …","url":["https://www.aclweb.org/anthology/W19-4416"]} -{"year":"2019","title":"The LIG system for the English-Czech Text Translation Task of IWSLT 2019","authors":["L Vial, B Lecouteux, D Schwab, H Le, L Besacier - arXiv preprint arXiv:1911.02898, 2019"],"snippet":"… C is a speech translation corpus of TED talks, similar to the test data of the task, and we added the News Commentary corpus, which consists of political and economic commentaries, be- cause it was the 
second smallest corpus …","url":["https://arxiv.org/pdf/1911.02898"]} -{"year":"2019","title":"The Linked Open Data cloud is more abstract, flatter and less linked than you may think!","authors":["L Asprino, W Beek, P Ciancarini, F van Harmelen… - arXiv preprint arXiv …, 2019"],"snippet":"… The two largest available crawls of LOD that are available today are WebDataCommons and LOD-a-lot. WebDataCommons5 [12] consists of ∼31B triples that have been extracted from the CommonCrawl datasets (November 2018 version) …","url":["https://arxiv.org/pdf/1906.08097"]} -{"year":"2019","title":"The NiuTrans Machine Translation Systems for WMT19","authors":["B Li, Y Li, C Xu, Y Lin, J Liu, H Liu, Z Wang, Y Zhang…"],"snippet":"… For EN↔RU, we used the following resource provided by WMT, including News Commentaryv14, ParaCrawl-v3, CommonCrawl and Yandex … corpus via random samplimng from 2M monolingual data selected by Xenc in the …","url":["http://nlplab.com/members/xiaotong_files/2019-wmt.pdf"]} -{"year":"2019","title":"The Quest to Automate Fact-checking","authors":["C Li"],"snippet":"… The model contains 300-dimensional vectors for 3 million words and phrases. https://code.google.com/archive/p/word2vec/ 2: Global Vectors for Word Representation using The Common Crawl corpus which contains …","url":["https://pdfs.semanticscholar.org/13e0/ef9f40c767060b510e2aa75740a3eda60ad4.pdf"]} -{"year":"2019","title":"The relationship between implicit intergroup attitudes and beliefs","authors":["B Kurdi, TC Mann, TES Charlesworth, MR Banaji - Proceedings of the National …, 2019"],"snippet":"Skip to main content. Submit; About: Editorial Board; PNAS Staff; FAQ; Rights and Permissions; Site Map. 
Contact; Journal Club; Subscribe: Subscription Rates; Subscriptions FAQ; Open Access; Recommend PNAS to Your …","url":["https://www.pnas.org/content/early/2019/02/26/1820240116.short"]} -{"year":"2019","title":"The RWTH Aachen University Machine Translation Systems for WMT 2019","authors":["J Rosendahl, C Herold, Y Kim, M Graça, W Wang… - Proceedings of the Fourth …, 2019"],"snippet":"… For De→En, we use data from CommonCrawl, Europarl, NewsCommentary and Rapid … (2017)), but without tied embedding weights, on the data from CommonCrawl, Europarl, NewsCommentary and Rapid ie about 6M sentence pairs …","url":["https://www.aclweb.org/anthology/W19-5338"]} -{"year":"2019","title":"The Semantic Web: Two Decades On","authors":["A Hogan"],"snippet":"Page 1. Semantic Web 0 (0) 1 1 IOS Press 1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8 9 9 10 10 11 11 12 12 13 13 14 14 15 15 16 16 17 17 18 18 19 19 20 20 21 21 22 22 23 23 24 24 25 25 26 26 27 27 28 28 29 29 30 30 31 31 32 32 33 …","url":["http://www.semantic-web-journal.net/system/files/swj2303.pdf"]} -{"year":"2019","title":"The Source-Target Domain Mismatch Problem in Machine Translation","authors":["J Shen, PJ Chen, M Le, J He, J Gu, M Ott, M Auli… - arXiv preprint arXiv …, 2019"],"snippet":"… For Myanmar monolingual data, we use the language split Commoncrawl data from (Buck et al., 2014) which includes texts in various domains crawled from the web. 
We use the myanmar-tools2 library to classify and convert all Zawgyi text to Unicode …","url":["https://arxiv.org/pdf/1909.13151"]} -{"year":"2019","title":"The TALP-UPC Machine Translation Systems for WMT19 News Translation Task: Pivoting Techniques for Low Resource MT","authors":["N Casas, JAR Fonollosa, C Escolano, C Basta… - Proceedings of the Fourth …, 2019"],"snippet":"… 4.2 English-Russian The available parallel English-Russian corpora for the shared task included News Commentary v14, Wiki Titles v1, Common Crawl corpus, ParaCrawl v3, Yandex Corpus and the United Nations Parallel Corpus v1.0 (Ziemski et al., 2016) …","url":["https://www.aclweb.org/anthology/W19-5311"]} -{"year":"2019","title":"The Universitat d'Alacant submissions to the English-to-Kazakh news translation task at WMT 2019","authors":["VM Sánchez-Cartagena, JA Pérez-Ortiz…"],"snippet":"… 556 corpus lang. raw cleaned News Crawl kk 783k 783k Wiki dumps kk 1.7M 1.7M Common Crawl kk 10.9M 5.4M News Crawl en 200M 200M … The same filtering was applied to the monolingual Kazakh Common Crawl corpus …","url":["https://www.dlsi.ua.es/~fsanchez/pub/pdf/sanchez-cartagena19a.pdf"]} -{"year":"2019","title":"The University of Helsinki submissions to the WMT19 news translation task","authors":["A Talman, U Sulubacak, R Vázquez, Y Scherrer… - arXiv preprint arXiv …, 2019"],"snippet":"… removing all sentence pairs with a length difference ratio above a certain threshold: for CommonCrawl, ParaCrawl and Rapid we used a threshold of 3, for WikiTitles a threshold of 2, and for all other data sets a threshold of 9; …","url":["https://arxiv.org/pdf/1906.04040"]} -{"year":"2019","title":"The University of Sydney's Machine Translation System for WMT19","authors":["L Ding, D Tao - arXiv preprint arXiv:1907.00494, 2019"],"snippet":"… 3 Data Preparation We used all available parallel corpus 3 for Finnish→ English except the “Wiki Headlines” due to the large number of incomplete sentences, and for monolingual target side 
English data, we selected all …","url":["https://arxiv.org/pdf/1907.00494"]} -{"year":"2019","title":"The Web is missing an essential part of infrastructure: an Open Web Index","authors":["D Lewandowski - arXiv preprint arXiv:1903.03846, 2019"],"snippet":"… A search engine needs to keep its index current, meaning it needs to update at least a part of it every minute. This is an important requirement that is not being met by any of the current projects (like Common Crawl) …","url":["https://arxiv.org/pdf/1903.03846"]} -{"year":"2019","title":"TiFi: Taxonomy Induction for Fictional Domains [Extended version]","authors":["CX Chu, S Razniewski, G Weikum - arXiv preprint arXiv:1901.10263, 2019"],"snippet":"Page 1. TiFi: Taxonomy Induction for Fictional Domains [Extended version] ∗ Cuong Xuan Chu Max Planck Institute for Informatics Saarbrücken, Germany cxchu@mpi-inf. mpg.de Simon Razniewski Max Planck Institute for Informatics …","url":["https://arxiv.org/pdf/1901.10263"]} -{"year":"2019","title":"TLR at BSNLP2019: A Multilingual Named Entity Recognition System","authors":["JG Moreno, EL Pontes, M Coustaty, A Doucet - Proceedings of the 7th Workshop on …, 2019"],"snippet":"… in Figure 1. 3.1 FastText Embedding In this layer, we used pre-trained embeddings for each language trained on Common Crawl and Wikipedia using fastText (Bojanowski et al., 2017; Grave et al., 2018). These models were …","url":["https://www.aclweb.org/anthology/W19-3711"]} -{"year":"2019","title":"TMU Transformer System Using BERT for Re-ranking at BEA 2019 Grammatical Error Correction on Restricted Track","authors":["M Kaneko, K Hotate, S Katsumata, M Komachi - … Workshop on Innovative Use of NLP …, 2019"],"snippet":"… The 5-gram language model for re-ranking was trained on a subset of the Common Crawl corpus (Chollampatt and Ng, 2018a).5 We used a Python spell checker tool6 on the GEC model hy- pothesis sentences. 
3.3 Evaluation …","url":["https://www.aclweb.org/anthology/W19-4422"]} -{"year":"2019","title":"Top-K Attention Mechanism for Complex Dialogue System","authors":["CU Shina, JW Chab - 2019"],"snippet":"… Then, the model submit the candidate with the highest value among the given candidates as the final correct an- swer. They randomly sampled one of the 99 negative samples to prevent bias during learning and used …","url":["http://workshop.colips.org/dstc7/papers/33.pdf"]} -{"year":"2019","title":"Toponym Identification in Epidemiology Articles--A Deep Learning Approach","authors":["MR Davari, L Kosseim, TD Bui - arXiv preprint arXiv:1904.11018, 2019"],"snippet":"… In order to measure the effect of such domain specific information, we experimented with 2 other pretrained word embedding models: Google News Word2vec [11], and a GloVe Model trained on Common Crawl [24] … Common Crawl GloVe 2.2M 300 29.84 …","url":["https://arxiv.org/pdf/1904.11018"]} -{"year":"2019","title":"Toward Automated Worldwide Monitoring of Network-Level Censorship","authors":["Z Weinberg - 2018"],"snippet":"Page 1. Toward Automated Worldwide Monitoring of Network-level Censorship Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Electrical and Computer Engineering Zachary Weinberg BA Chemistry, Columbia University …","url":["http://search.proquest.com/openview/11a5908644ea63a6b01b3f0c4d23ce4e/1?pq-origsite=gscholar&cbl=18750&diss=y"]} -{"year":"2019","title":"Toward Gender-Inclusive Coreference Resolution","authors":["YT Cao, H Daumé III - arXiv preprint arXiv:1910.13913, 2019"],"snippet":"Page 1. 
Toward Gender-Inclusive Coreference Resolution YANG TRISTA CAO, University of Maryland HAL DAUMÉ III, Microsoft Research & University of Maryland ABSTRACT Correctly resolving textual mentions of people …","url":["https://arxiv.org/pdf/1910.13913"]} -{"year":"2019","title":"Towards a Global Perspective on Web Tracking","authors":["N Samarasinghe, M Mannan - Computers & Security, 2019"],"snippet":"… Schelter et al. Schelter and Kunegis (2016) performed a large scale analysis of third-party trackers using the Common Crawl 2012 corpus. The corpus may contain tracking information of residential as well as institutional users …","url":["https://www.sciencedirect.com/science/article/pii/S0167404818314007"]} -{"year":"2019","title":"Towards an Automated Extraction of ABAC Constraints from Natural Language Policies","authors":["M Alohaly, H Takabi, E Blanco - IFIP International Conference on ICT Systems …, 2019"],"snippet":"… model. To configure the model, we set one hyper-parameter value at a time. Our default settings: dropout = 0, decay rate = 0, number of BiLSTM cells (ie, layers) = 1, and GloVe (Common crawl) with 300 dimensions. 
To determine …","url":["https://link.springer.com/chapter/10.1007/978-3-030-22312-0_8"]} -{"year":"2019","title":"Towards an automated method to assess data portals in the deep web","authors":["AS Correa, RM de Souza, FSC da Silva - Government Information Quarterly, 2019"],"snippet":"Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0740624X18305185"]} -{"year":"2019","title":"Towards Content Expiry Date Determination: Predicting Validity Periods of Sentences","authors":["A Almquist, A Jatowt2r0000"],"snippet":"… For this, we use Common Crawl dataset 16 which is a web dump composed of billions of websites with plain text versions available … For each sentence found in the Common Crawl dataset we identify DATE, TIME and DURATION …","url":["http://www.dl.kuis.kyoto-u.ac.jp/~adam/ecir19a.pdf"]} -{"year":"2019","title":"Towards Content Transfer through Grounded Text Generation","authors":["RE Dataset","S Prabhumoye, C Quirk, M Galley - arXiv preprint arXiv:1905.05293, 2019"],"snippet":"05/13/19 - Recent work in neural generation has attracted significant interest in controlling the form of text, such as style, persona, and p...","url":["https://arxiv.org/pdf/1905.05293","https://deepai.org/publication/towards-content-transfer-through-grounded-text-generation"]} -{"year":"2019","title":"Towards countering hate speech and personal attack in social media","authors":["P Charitidis, S Doropoulos, S Vologiannidis… - arXiv preprint arXiv …, 2019"],"snippet":"… each language. After conducting some preliminary experiments, the best pre-trained embedding choice for Greek and French language was using fastText embeddings [45], trained on Common Crawl and Wikipedia. 
For English …","url":["https://arxiv.org/pdf/1912.04106"]} -{"year":"2019","title":"Towards Functionally Similar Corpus Resources for Translation","authors":["M Kunilovskaya, S Sharoff"],"snippet":"… Secondly, we used lemmatised texts, with stop words filtered out (biLSTMlex in Table 1). For both scenarios we used pre-trained word embeddings of size 300, trained on the English Wikipedia and CommonCrawl data, using …","url":["http://corpus.leeds.ac.uk/serge/publications/2019-RANLP.pdf"]} -{"year":"2019","title":"Towards Multimodal Emotion Recognition in German Speech Events in Cars using Transfer Learning","authors":["D Cevher, S Zepf, R Klinger - arXiv preprint arXiv:1909.02764, 2019"],"snippet":"… We use a neural network with an embedding layer (frozen weights, pretrained on Common Crawl and Wikipedia (Grave et al., 2018)), a bidirectional LSTM (Schuster and Paliwal, 1997), and two dense layers followed by a soft max output layer …","url":["https://arxiv.org/pdf/1909.02764"]} -{"year":"2019","title":"Towards Multimodal Sarcasm Detection (An _Obviously_ Perfect Paper)","authors":["S Castro, D Hazarika, V Pérez-Rosas, R Zimmermann… - arXiv preprint arXiv …, 2019"],"snippet":"… 768. We also considered averaging Common Crawl pre-trained 300 dimensional GloVe word vectors (Pennington et al., 2014) for each token; however, it resulted in lower performance as compared to BERT-based features …","url":["https://arxiv.org/pdf/1906.01815"]} -{"year":"2019","title":"Towards Non-task-specific Distillation of BERT via Sentence Representation Approximation","authors":["B Wu, H Zhang, M Li, Z Wang, Q Feng, J Huang… - arXiv preprint arXiv …, 2020","HZ Bowen Wu, M Li, Z Wang, Q Feng, J Huang…"],"snippet":"… paper. 4.3 Hyperparameters For the student model in our proposed distilling method, we employ the 300-dimension GloVe (840B Common Crawl version; Pennington et al., 2014) to initialize the word embeddings. 
The number …","url":["https://arxiv.org/pdf/2004.03097","https://www.researchgate.net/profile/Bowen_Wu10/publication/337113946_Towards_Non-task-specific_Distillation_of_BERT_via_Sentence_Representation_Approximation/links/5dc5cffc4585151435f7df39/Towards-Non-task-specific-Distillation-of-BERT-via-Sentence-Representation-Approximation.pdf"]} -{"year":"2019","title":"Towards Robust Named Entity Recognition for Historic German","authors":["S Schweter, J Baiter - arXiv preprint arXiv:1906.07592, 2019"],"snippet":"… 69.59% Common Crawl 68.97% Wikipedia + Common Crawl 72.00% Wikipedia + Common Crawl + Character 74.50 … 69.62% Riedl and Padó (2018) (with transfer-learning) 74.33% ONB Wikipedia 75.80% CommonCrawl 78.70% Wikipedia + CommonCrawl 79.46 …","url":["https://arxiv.org/pdf/1906.07592"]} -{"year":"2019","title":"Towards semantic-rich word embeddings","authors":["G Beringer, M Jabłonski, P Januszewski, A Sobecki…"],"snippet":"… collected (III), for the our approach. We use a pretrained embedding model from spaCy - en_vectors_web_lg, which contains 300-dimensional word vectors trained on Common Crawl with GloVe2. We compare results on the …","url":["https://annals-csis.org/Volume_18/drp/pdf/120.pdf"]} -{"year":"2019","title":"Towards Unsupervised Grammatical Error Correction using Statistical Machine Translation with Synthetic Comparable Corpus","authors":["S Katsumata, M Komachi - arXiv preprint arXiv:1907.09724, 2019"],"snippet":"… makes up for the synthetic target data. To compare the fluency, the outputs of each best iter on JFLEG were evaluated with the perplexity based on the Common Crawl language model10. 
The perplexity of USMTforward in iter …","url":["https://arxiv.org/pdf/1907.09724"]} -{"year":"2019","title":"Tracking Naturalistic Linguistic Predictions with Deep Neural Language Models","authors":["M Heilbron, B Ehinger, P Hagoort, FP de Lange - arXiv preprint arXiv:1909.04400, 2019"],"snippet":"… Non-predictive controls We included two non-predictive and potentially confounding variables: first, frequency which we quantified as unigram surprise (−log p(w)) which was based on a word's lemma count in the CommonCrawl corpus, obtained via spaCy …","url":["https://arxiv.org/pdf/1909.04400"]} -{"year":"2019","title":"Transfer Learning across Languages from Someone Else's NMT Model","authors":["T Kocmi, O Bojar - arXiv preprint arXiv:1909.10955, 2019"],"snippet":"… WMT 2012 WMT 2018 English - French Commoncrawl, Europarl, Giga FREN, News commentary, UN corpus WMT 2013 WMT dis. 2015 … Based on our previous experiments, we ex- clude the noisiest corpus, ie web crawled ParaCrawl or Commoncrawl …","url":["https://arxiv.org/pdf/1909.10955"]} -{"year":"2019","title":"Transfer Learning from Transformers to Fake News Challenge Stance Detection (FNC-1) Task","authors":["V Slovikovskaya - arXiv preprint arXiv:1910.14353, 2019"],"snippet":"… 9XLNet is named after TransformerXL 10These corpora include (1) BOOK CORPUS [Zhu et al., 2015] plus English Wikipedia, the original data used to train BERT (16GB); (2) CC-NEWS, which authors collected from the English …","url":["https://arxiv.org/pdf/1910.14353"]} -{"year":"2019","title":"Transforma at SemEval-2019 Task 6: Offensive Language Analysis using Deep Learning Architecture","authors":["R Ong - arXiv preprint arXiv:1903.05280, 2019"],"snippet":"… This allows us to evaluate the increase in di- mensionality on the performance of our models 3. 
GloVe: Common Crawl (300d) - Trained on 42B tokens, 1.9M vocabulary of unique words … Table 7: T - GloVe Twitter, CC - GloVe Common Crawl …","url":["https://arxiv.org/pdf/1903.05280"]} -{"year":"2019","title":"transformers. zip: Compressing Transformers with Pruning and Quantization","authors":["R Cheong, R Daniel - 2019"],"snippet":"… 9: return M 4 Page 5. 4 Experiments 4.1 Dataset We train and evaluate on the WMT English - German translation task. Specifically, we train on all of Europarl, Common Crawl, and News Commentary, validate on the …","url":["https://pdfs.semanticscholar.org/fe82/735fe8ae2163a37aa2787eee0db8efc745b6.pdf"]} -{"year":"2019","title":"Translating Translationese: A Two-Step Approach to Unsupervised Machine Translation","authors":["N Pourdamghani, N Aldarrab, M Ghazvininejad…"],"snippet":"… For Arabic we use MultiUN (Tiedemann, 2012). For French we use CommonCrawl For German we use a mix of CommonCrawl (1.7M), and NewsCommentary (300K) … For Spanish we use CommonCrawl (1.8M), and Europarl (200K) …","url":["https://www.isi.edu/~jonmay/pubs/acl19.pdf"]} -{"year":"2019","title":"Tree Edit Distance Learning via Adaptive Symbol Embeddings","authors":["BPCGA Micheli, B Hammer"],"snippet":"Deep Learning Monitor. Paper Detail. Close This Page. Tree Edit Distance Learning via Adaptive Symbol Embeddings. 2018-06-18 13:54:45; Benjamin Paaßen, Claudio Gallicchio, Alessio Micheli, Barbara Hammer; 0. 
Abstract …","url":["https://deeplearn.org/arxiv/38595/tree-edit-distance-learning-via-adaptive-symbol-embeddings"]} -{"year":"2019","title":"TU Wien@ TREC Deep Learning'19--Simple Contextualization for Re-ranking","authors":["S Hofstätter, M Zlabinger, A Hanbury - arXiv preprint arXiv:1912.01385, 2019"],"snippet":"… For the full task we generated initial rankings with Anserini using BM25 and utilized the validation sets to tune the re-ranking 1https://github.com/microsoft/BlingFire 242B CommonCrawl lower-cased: https://nlp.stanford.edu/projects/glove …","url":["https://arxiv.org/pdf/1912.01385"]} -{"year":"2019","title":"Twitter Sentiment on Affordable Care Act using Score Embedding","authors":["M Farhadloo - arXiv preprint arXiv:1908.07061, 2019"],"snippet":"… The embeddings pre-trained on Common Crawl data were only available in dimension 300 and were trained on 840 billion tokens with vocabulary … of available unlabeled training data had an impact on the performance …","url":["https://arxiv.org/pdf/1908.07061"]} -{"year":"2019","title":"Two New Evaluation Datasets for Low-Resource Machine Translation: Nepali-English and Sinhala-English","authors":["F Guzmán, PJ Chen, M Ott, J Pino, G Lample, P Koehn… - arXiv preprint arXiv …, 2019"],"snippet":"… M monolingual Wikipedia (en) 67.8M 2.0B Common Crawl (ne) 3.6M 103.0M Wikipedia (ne) 92.3K 2.8M Sinhala–English … 5M monolingual Wikipedia (en) 67.8M 2.0B Common Crawl (si) 5.2M 110.3M Wikipedia (si) 155.9K 4.7M …","url":["https://arxiv.org/pdf/1902.01382"]} -{"year":"2019","title":"Type: Report Dissemination level: Public Due Date (in months): 24 (August 2019)","authors":["CSGOER Network"],"snippet":"Page 1. 
X Modal X Cultural X Lingual X Domain X Site Global OER Network Grant Agreement Number: 761758 Project Acronym: X5GON Project title: Cross Modal, Cross Cultural, Cross Lingual, Cross Domain, and Cross Site …","url":["https://www.x5gon.org/wp-content/uploads/2019/10/D5.2_afterJSTrev_26Aug19.pdf"]} -{"year":"2019","title":"UdS-DFKI Participation at WMT 2019: Low-Resource (en-gu) and Coreference-Aware (en-de) Systems","authors":["C España-Bonet, D Ruiter - Proceedings of the Fourth Conference on Machine …, 2019"],"snippet":"… proportions. Our base system uses CommonCrawl … x1 Parallel CommonCrawl 2,394,878 x1 x4 Europarl 1,775,445 x1 x4 NewsCommentary 328,059 x4 x16 Rapid 1,105,651 x1 x4 ParaCrawlFiltered 12,424,790 x0 x1 Table …","url":["https://www.aclweb.org/anthology/W19-5315"]} -{"year":"2019","title":"Understanding and Mitigating the Security Risks of Content Inclusion in Web Browsers","authors":["S Arshad - 2019"],"snippet":"… 47 5.1 Sample URL grouping. . . . . 73 5.2 Narrowing down the Common Crawl to the candidate set used in our analysis (from left to right) . . . . 79 5.3 Vulnerable pages and sites in the candidate set …","url":["http://search.proquest.com/openview/5a3bdc0060c7ad7004f26c77dae937c2/1?pq-origsite=gscholar&cbl=18750&diss=y"]} -{"year":"2019","title":"Uni-and Multimodal and Structured Representations for Modeling Frame Semantics","authors":["T Botschen - 2019"],"snippet":"Page 1. Uniand Multimodal and Structured Representations for Modeling Frame Semantics Vom Fachbereich Informatik der Technischen Universität Darmstadt genehmigte Dissertation zur Erlangung des akademischen Grades …","url":["http://tuprints.ulb.tu-darmstadt.de/8484/1/Dissertation_TeresaBotschen.pdf"]} -{"year":"2019","title":"Unified Visual-Semantic Embeddings: Bridging Vision and Language with Structured Meaning Representations","authors":["H Wu, J Mao, Y Zhang, Y Jiang, L Li, W Sun, WY Ma - arXiv preprint arXiv:1904.05521, 2019"],"snippet":"Page 1. 
Unified Visual-Semantic Embeddings: Bridging Vision and Language with Structured Meaning Representations Hao Wu1,3,4,6,∗,†, Jiayuan Mao5,6,∗,†, Yufeng Zhang2,6,†, Yuning Jiang6, Lei Li6, Weiwei Sun1,3,4, Wei-Ying Ma6 …","url":["https://arxiv.org/pdf/1904.05521"]} -{"year":"2019","title":"Unraveling the Search Space of Abusive Language in Wikipedia with Dynamic Lexicon Acquisition","authors":["WF Chen, K Al-Khatib, M Hagen, H Wachsmuth…"],"snippet":"… The hidden state is employed to predict the probability of 'not-attack' using a linear regression layer. We use 300-dimensional word embeddings (Pennington et al., 2014) pre-trained on the Common Crawl with 840 …","url":["https://webis.de/downloads/publications/papers/stein_2019z.pdf"]} -{"year":"2019","title":"Unsupervised Cross-lingual Representation Learning at Scale","authors":["A Conneau, K Khandelwal, N Goyal, V Chaudhary… - arXiv preprint arXiv …, 2019"],"snippet":"… As shown in Figure 1, the CommonCrawl Corpus that we collected has significantly more monolingual data than the previously used Wikipedia corpora. 
Figure 3 shows that for the same BERTBase architecture, all models …","url":["https://arxiv.org/pdf/1911.02116"]} -{"year":"2019","title":"Unsupervised Extraction of Partial Translations for Neural Machine Translation","authors":["B Marie, A Fujita - Proceedings of the 2019 Conference of the North …, 2019"],"snippet":"… We extracted monolingual data ourselves from the Common Crawl project8 for Bengali (5.3M lines) and Malay (4.6M lines … 8http://commoncrawl org/ 9https://fasttext.cc/ 10The extraction of 100k partial translations from …","url":["https://www.aclweb.org/anthology/N19-1384"]} -{"year":"2019","title":"Unsupervised Joint Training of Bilingual Word Embeddings","authors":["B Marie, A Fujita - Proceedings of the 57th Conference of the Association …, 2019"],"snippet":"… For en- id, we used English (100M lines) and Indonesian (77M lines) Common Crawl corpora.5 We then mapped the word embeddings into a BWE space using VECMAP,6 one of the best and most robust methods for unsupervised mapping (Glavas et al., 2019) …","url":["https://www.aclweb.org/anthology/P19-1312"]} -{"year":"2019","title":"Unsupervised Lemmatization as Embeddings-Based Word Clustering","authors":["R Rosa, Z Žabokrtský - arXiv preprint arXiv:1908.08528, 2019"],"snippet":"… For the experiments reported in this paper, we use the pretrained word embedding dictionaries available from the FastText website.78 The word embeddings had been trained on Wikipedia9 and Common Crawl10 texts …","url":["https://arxiv.org/pdf/1908.08528"]} -{"year":"2019","title":"Unsupervised Question Answering by Cloze Translation","authors":["P Lewis, L Denoyer, S Riedel - arXiv preprint arXiv:1906.04980, 2019"],"snippet":"… Question Corpus We mine questions from En- glish pages from a recent dump of common crawl using simple selection criteria:3 We select sen … 3http:// commoncrawl.org/ 4We also experimented with language model pretraining …","url":["https://arxiv.org/pdf/1906.04980"]} -{"year":"2019","title":"Updating 
Pre-trained Word Vectors and Text Classifiers using Monolingual Alignment","authors":["P Bojanowski, O Celebi, T Mikolov, E Grave, A Joulin - arXiv preprint arXiv …, 2019"],"snippet":"… Indeed, despite their size, large web data such as Common Crawl lack coverage for highly technical expert fields such as medicine or law … Training data. We take two subsets of the May 2017 dump of the Common Crawl …","url":["https://arxiv.org/pdf/1910.06241"]} -{"year":"2019","title":"Updating verbal fluency analysis for the 21st century: Applications for psychiatry","authors":["TB Holmlund, J Cheng, PW Foltz, AS Cohen, B Elvevåg - Psychiatry Research, 2019"],"snippet":"… To base the analysis on a corpus with a wide variety of animal-word sources, we used a set of pre-trained word vectors calculated from approximately 42 billion tokens from the entire internet, courtesy of the Common Crawl project (Pennington et al., 2014) …","url":["https://www.sciencedirect.com/science/article/pii/S0165178118324181"]} -{"year":"2019","title":"Using Local Knowledge Graph Construction to Scale Seq2Seq Models to Multi-Document Inputs","authors":["A Fan, C Gardent, C Braud, A Bordes - 2019"],"snippet":"… WikiSum Second, we experiment on the WikiSum CommonCrawl (Liu et al., 2018b) summarization dataset4 with 1.5 million examples … denotes results from (Liu et al., 2018b) that use data scraped from unrestricted web search, not the static CommonCrawl version …","url":["https://hal.archives-ouvertes.fr/hal-02277063/document"]} -{"year":"2019","title":"Using logical form encodings for unsupervised linguistic transformation: Theory and applications","authors":["T Gröndahl, N Asokan - arXiv preprint arXiv:1902.09381, 2019"],"snippet":"Page 1. arXiv:1902.09381v1 [cs.CL] 25 Feb 2019 Using logical form encodings for unsupervised linguistic transformation: Theory and applications Tommi Gröndahl N. 
Asokan Abstract We present a novel method to architect …","url":["https://arxiv.org/pdf/1902.09381"]} -{"year":"2019","title":"Using the Semantic Web as a source of training data","authors":["C Bizer, A Primpeli, R Peeters - Datenbank-Spektrum, 2019"],"snippet":"… The Web Data Commons (WDC) project 4 monitors the adoption of schema.org annotations on the Web by analysing the Common Crawl 5 , a series of public web corpora each containing several billion HTML pages [12]. The …","url":["https://link.springer.com/article/10.1007/s13222-019-00313-y"]} -{"year":"2019","title":"Using Whole Document Context in Neural Machine Translation","authors":["V Macé, C Servan - arXiv preprint arXiv:1910.07481, 2019"],"snippet":"… models are evaluated on the same standard corpora that have Page 3. Corpora #lines # EN # DE Common Crawl 2.2M 54M 50M Europarl V9† 1.8M 50M 48M News Comm. V14† 338K 8.2M 8.3M ParaCrawl V3 27.5M 569M …","url":["https://arxiv.org/pdf/1910.07481"]} -{"year":"2019","title":"Variational Auto-Decoder: Neural Generative Modeling from Partial Data","authors":["A Zadeh, YC Lim, PP Liang, LP Morency - arXiv preprint arXiv:1903.00840, 2019"],"snippet":"… CMU-MOSEI consists of 23,500 sentences and CMU-MOSI consists of 2199 sentences. For text modality, the datasets contain GloVe word embeddings (Pennington et al., 2014) trained on 840 billion tokens from the Common Crawl dataset …","url":["https://arxiv.org/pdf/1903.00840"]} -{"year":"2019","title":"Vernon-fenwick at SemEval-2019 Task 4: Hyperpartisan News Detection using Lexical and Semantic Features","authors":["V Srivastava, A Gupta, D Prakash, SK Sahoo, RR Rohit… - Proceedings of the 13th …, 2019"],"snippet":"… semantic space. We have used 300-dimensional Glove embeddings trained on Common Crawl data of 2.2 million words and 840 billion tokens. 
An ar- ticle was tokenized into sentences and further into words to obtain it's article representation …","url":["https://www.aclweb.org/anthology/S19-2189"]} -{"year":"2019","title":"Video Question Answering with Spatio-Temporal Reasoning","authors":["Y Jang, Y Song, CD Kim, Y Yu, Y Kim, G Kim - International Journal of Computer …, 2019"],"snippet":"Page 1. International Journal of Computer Vision https://doi.org/10.1007/s11263-01901189-x Video Question Answering with Spatio-Temporal Reasoning Yunseok Jang1 · Yale Song2 · Chris Dongjoo Kim1 · Youngjae Yu1 · Youngjin Kim1 · Gunhee Kim1 …","url":["https://link.springer.com/article/10.1007/s11263-019-01189-x"]} -{"year":"2019","title":"Vir is to Moderatus as Mulier is to Intemperans Lemma Embeddings for Latin","authors":["R Sprugnoli, M Passarotti, G Moretti"],"snippet":"… Both Facebook and the organizers of the CoNLL shared tasks on multilingual parsing have pre-computed and released word embeddings trained on Latin texts crawled from the web: the former using the fastText model on …","url":["https://www.researchgate.net/profile/Rachele_Sprugnoli/publication/336798734_Vir_is_to_Moderatus_as_Mulier_is_to_Intemperans_Lemma_Embeddings_for_Latin/links/5db2a47e92851c577ec259b4/Vir-is-to-Moderatus-as-Mulier-is-to-Intemperans-Lemma-Embeddings-for-Latin.pdf"]} -{"year":"2019","title":"Vision-based Page Rank Estimation with Graph Networks","authors":["TI Denk, S Güner"],"snippet":"… The Open PageRank initiative provides freely available data that was built on top of Common Crawl [do/19], which provides high quality crawl data of webp ages since 2013. 
Open PageRank uses the number of backlinks of …","url":["https://www.researchgate.net/profile/Timo_Denk/publication/334824445_Vision-based_Page_Rank_Estimation_with_Graph_Networks/links/5d429cb692851cd04696fd56/Vision-based-Page-Rank-Estimation-with-Graph-Networks.pdf"]} -{"year":"2019","title":"VizNet: Towards A Large-Scale Visualization Learning and Benchmarking Repository","authors":["K Hu, N Gaikwad, M Bakker, M Hulsebos, E Zgraggen…"],"snippet":"… Corpora The first category of corpora includes data tables harvested from the web. In particular, we use horizontal relational tables from the WebTables 2015 corpus [6], which extracts structured tables from the Common Crawl …","url":["https://hci.stanford.edu/~cagatay/projects/viznet/VizNet-CHI19-Submission.pdf"]} -{"year":"2019","title":"Wanca in Korp: Text corpora for underresourced Uralic languages","authors":["H Jauhiainen, T Jauhiainen, K Lindén - DATA AND HUMANITIES (RDHUM) 2019 …"],"snippet":"… In addition to conducting our own crawling, we also used the pre-crawled corpus distributed by the Common Crawl Foundation … 2 In addition to conducting our own crawling, we used the pre-crawled corpus distributed by the Common Crawl Foundation …","url":["https://researchportal.helsinki.fi/files/126205806/Proceedings_RDHum2019.pdf#page=23"]} -{"year":"2019","title":"WDC Product Data Corpus and Gold Standard for Large-Scale Product Matching-Version 2.0","authors":["R Peeters, A Primpeli, C Bizer"],"snippet":"… methods. The Web Data Commons project regularly extracts schema.org annotations from the Common Crawl, a large public web corpus. 
November 2017 version of the WDC schema.org data set contains 365 million offers …","url":["http://webdatacommons.org/largescaleproductcorpus/v2/"]} -{"year":"2019","title":"Weakly-Supervised Concept-based Adversarial Learning for Cross-lingual Word Embeddings","authors":["H Wang, J Henderson, P Merlo - arXiv preprint arXiv:1904.09446, 2019"],"snippet":"… (2018a)11, we use their pretrained CBOW embeddings of 300 dimensions. For English, Italian and German, the models are trained on the WacKy corpus. The Finnish model is trained from Common Crawl and the Spanish model is trained from WMT News Crawl …","url":["https://arxiv.org/pdf/1904.09446"]} -{"year":"2019","title":"Web Archive Analysis Using Hive and SparkSQL","authors":["X Wang, Z Xie - 2019 ACM/IEEE Joint Conference on Digital Libraries …, 2019"],"snippet":"… Keywords web archive, big data, distributed computation 1 Introduction Web preservation organizations such as Common Crawl or Internet Archive are common sources of web archive data … We use a data set from Common Crawl May 2018 collection …","url":["https://ieeexplore.ieee.org/abstract/document/8791112/"]} -{"year":"2019","title":"Web Engineering: 19th International Conference, ICWE 2019, Daejeon, South Korea, June 11–14, 2019, Proceedings","authors":["M Bakaev"],"snippet":"Page 1. Maxim Bakaev Flavius Frasincar In-Young Ko (Eds.) Web Engineering 19th International Conference, ICWE 2019 Daejeon, South Korea, June 11–14, 2019 Proceedings 123 Page 2. Lecture Notes in Computer Science …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=5R6VDwAAQBAJ&oi=fnd&pg=PR5&dq=commoncrawl&ots=X57GCPV1TC&sig=41aU_I70hr0H-D_h9MbSG1Ruryc"]} -{"year":"2019","title":"Web table integration and profiling for knowledge base augmentation","authors":["O Lehmberg - 2019"],"snippet":"Page 1. 
Web Table Integration and Profiling for Knowledge Base Augmentation Inauguraldissertation zur Erlangung des akademischen Grades eines Doktors der Naturwissenschaften der Universität Mannheim …","url":["https://madoc.bib.uni-mannheim.de/52346/1/thesis.pdf"]} -{"year":"2019","title":"Web View: Measuring & Monitoring Representative Information on Websites","authors":["A Saverimoutou, B Mathieu, S Vaton - ICIN 2019-QOE-MANAGEMENT 2019, 2019"],"snippet":"… XRay [8] and AdFisher run automated personalization detection experiments and Common Crawl 7 uses an Apache Nutch based crawler … 4http://phantomjs. org/ 5https://www.seleniumhq.org/ 6https://github.com/ghostwords/chameleon …","url":["https://hal.archives-ouvertes.fr/hal-02072471/document"]} -{"year":"2019","title":"WebIsAGraph: A Very Large Hypernymy Graph from a Web Corpus","authors":["F Stefano, I Finocchi, SP Ponzetto, V Paola - Sixth Italian Conference on …, 2019","S Faralli, I Finocchi, SP Ponzetto, P Velardi - 2019"],"snippet":"… Abstract In this paper, we present WebIsAGraph, a very large hypernymy graph compiled from a dataset of is-a relationships ex- tracted from the CommonCrawl … This is because, due to their large size, source input corpora …","url":["https://iris.luiss.it/handle/11385/192535","https://www.researchgate.net/profile/Stefano_Faralli2/publication/336899588_WebIsAGraph_A_Very_Large_Hypernymy_Graph_from_a_Web_Corpus/links/5db9a6c24585151435d5b691/WebIsAGraph-A-Very-Large-Hypernymy-Graph-from-a-Web-Corpus.pdf"]} -{"year":"2019","title":"What a neural language model tells us about spatial relations","authors":["M Ghanimifard, S Dobnik - Proceedings of the Combined Workshop on Spatial …, 2019"],"snippet":"… Finally, we also use pre-trained GloVe embeddings on the Common Crawl (CC) dataset with 42B tokens4 … On multi-word test suite the P-vectors perform slightly better. 
On both test suites, GloVe trained on Common Crawl performs …","url":["https://www.aclweb.org/anthology/W19-1608"]} -{"year":"2019","title":"What are Links in Linked Open Data? A Characterization and Evaluation of Links between Knowledge Graphs on the Web","authors":["A Haller, JD Fernández, MR Kamdar, A Polleres - Working Papers on Information …, 2019"],"snippet":"Page 1. What are Links in Linked Open Data? A Characterization and Evaluation of Links between Knowledge Graphs on the Web Armin Haller, Javier D. Fernández, Maulik R. Kamdar, Axel Polleres Arbeitspapiere zum Tätigkeitsfeld …","url":["http://epub.wu.ac.at/7193/1/20191002ePub_LOD_link_analysis.pdf"]} -{"year":"2019","title":"What does Neural Bring? Analysing Improvements in Morphosyntactic Annotation and Lemmatisation of Slovenian, Croatian and Serbian","authors":["N Ljubešić, K Dobrovoljc - Proceedings of the 7th Workshop on Balto-Slavic …, 2019"],"snippet":"… neural morphosyntactic taggers, we also experiment with various embeddings, mostly (1) the original CoNLL 2017 word2vec (w2v) embeddings for Slovenian and Croatian (Ginter et al., 2017) (there are none available for …","url":["https://www.aclweb.org/anthology/W19-3704"]} -{"year":"2019","title":"Who Needs Words? Lexicon-Free Speech Recognition","authors":["T Likhomanenko, G Synnaeve, R Collobert - arXiv preprint arXiv:1904.04479, 2019"],"snippet":"… char GCNN-20B no 6.4 2.7 3.6 1.5 4https://github.com/facebookresearch/wav2letter 5Speaker adaptation; pronunciation lexicon 612k hours AM train set and common crawl LM 7Speaker adaptation; 3k acoustic states 8Data augmentation; n-gram LM …","url":["https://arxiv.org/pdf/1904.04479"]} -{"year":"2019","title":"WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia","authors":["H Schwenk, V Chaudhary, S Sun, H Gong, F Guzmán - arXiv preprint arXiv …, 2019"],"snippet":"… recall. In this work, we chose the global mining op- tion. 
This will allow us to scale the same ap- proach to other, potentially huge, corpora for which document-level alignments are not easily available, eg Common Crawl. An …","url":["https://arxiv.org/pdf/1907.05791"]} -{"year":"2019","title":"WINOGRANDE: An Adversarial Winograd Schema Challenge at Scale","authors":["K Sakaguchi, RL Bras, C Bhagavatula, Y Choi - arXiv preprint arXiv:1907.10641, 2019"],"snippet":"… Ensemble Neural LMs Trinh and Le (2018) is one of the first attempts to apply a neural language model which is pre-trained on a very large corpora (including LM-1-Billion, CommonCrawl, SQuAD, and Gutenberg Books). In …","url":["https://arxiv.org/pdf/1907.10641"]} -{"year":"2019","title":"Word Embedding Based Extension of Text Categorization Topic Taxonomies","authors":["T Eljasik-Swoboda, F Engel, M Kaufmann, M Hemmje"],"snippet":"… ArgumenText is a practical implementation of an AM engine (Stab et al., 2018). It employs a two-step mechanism in which a large collection of documents (http://commoncrawl.org/, in Stab et al.'s experiment with 683 …","url":["http://ceur-ws.org/Vol-2348/paper01.pdf"]} -{"year":"2019","title":"Word Embedding Models for Query Expansion in Answer Passage Retrieval","authors":["S MASTER"],"snippet":"Page 1. MASTER'S THESIS Word Embedding Models for Query Expansion in Answer Passage Retrieval NIRMAL ROY Page 2. Page 3. Word Embedding Models for Query Expansion in Answer Passage Retrieval THESIS submitted …","url":["https://pdfs.semanticscholar.org/f436/c49151fd8d00c59655a939bbbd552f1577c4.pdf"]} -{"year":"2019","title":"Word Embedding Visualization Via Dictionary Learning","authors":["J Zhang, Y Chen, B Cheung, BA Olshausen - arXiv preprint arXiv:1910.03833, 2019"],"snippet":"… similar. For simplicity, we show the results for the 300 dimensional GloVe word vectors[30] pretrained on CommonCrawl [2]. We shall discuss the difference across different embedding models at the end in this section. 
Once …","url":["https://arxiv.org/pdf/1910.03833"]} -{"year":"2019","title":"Word Embeddings (Also) Encode Human Personality Stereotypes","authors":["O Agarwal, F Durupınar, NI Badler, A Nenkova - … of the Eighth Joint Conference on …, 2019"],"snippet":"… or profession. We experimented with GloVe representations (Pennington et al., 2014) trained on Common crawl (6B tokens, 400K vocab, 300d) and symmetric pattern (SP) based representations (Schwartz et al., 2015). We …","url":["https://www.aclweb.org/anthology/S19-1023"]} -{"year":"2019","title":"Word Embeddings and Gender Stereotypes in Swedish and English","authors":["R Précenth - 2019"],"snippet":"Page 1. UUDM Project Report 2019:15 Examensarbete i matematik, 30 hp Handledare: David Sumpter Examinator: Denis Gaidashev Maj 2019 Department of Mathematics Uppsala University Word Embeddings and Gender Stereotypes in Swedish and English …","url":["https://uu.diva-portal.org/smash/get/diva2:1313459/FULLTEXT01.pdf"]} -{"year":"2019","title":"Word Embeddings for Fine-Grained Sentiment Analysis","authors":["D Bacon, R Dalal, MRD Kodandarama, MR Hari…"],"snippet":"… Lastly, we considered the word embedding sub-model. We used the GLoVe word vectoring [11] trained on Common Crawl [https://commoncrawl.org/] as implemented by spaCy [7]. This resulted in a vector-dimension of 300 for each word …","url":["https://divatekodand.github.io/files/word_embeddings.pdf"]} -{"year":"2019","title":"Word Embeddings for Sentiment Analysis: A Comprehensive Empirical Survey","authors":["E Çano, M Morisio - arXiv preprint arXiv:1902.00753, 2019"],"snippet":"… This bundle contains data of Common Crawl (http: //commoncrawl.org/), a nonprofit organization that builds and maintains free and public text sets by crawling the Web. 
CommonCrawl42 is a highly reduced version easier and faster to work with …","url":["https://arxiv.org/pdf/1902.00753"]} -{"year":"2019","title":"Word Embeddings for the Armenian Language: Intrinsic and Extrinsic Evaluation","authors":["K Avetisyan, T Ghukasyan - arXiv preprint arXiv:1906.03134, 2019"],"snippet":"… A year later, Facebook released another batch of fastText embeddings, trained on Common Crawl and Wikipedia [2]. Other publicly available embeddings include 4 … these embeddings were trained on Wikipedia and Common Crawl, using CBOW architecture with …","url":["https://arxiv.org/pdf/1906.03134"]} -{"year":"2019","title":"Word Embeddings in Low Resource Gujarati Language","authors":["I Joshi, P Koringa, S Mitra - 2019 International Conference on Document Analysis …, 2019"],"snippet":"… (2014) released GloVe models trained on Wikipedia, Gigaword and Common Crawl (840B tokens). A notable effort is the work of Al-Rfou et al … Word embeddings for Gujarati language were released as a part of …","url":["https://ieeexplore.ieee.org/abstract/document/8893052/"]} -{"year":"2019","title":"Word Similarity Datasets for Thai: Construction and Evaluation","authors":["P Netisopakul, G Wohlgenannt, A Pulich - arXiv preprint arXiv:1904.04307, 2019"],"snippet":"… The models are trained on Common Crawl and Wikipedia corpora using fastText [13], regarding settings they report the us- age of the CBOW algorithm, 300 dimensions, a window size of 5 and 10 negatives. The model is large and contains 2M vectors …","url":["https://arxiv.org/pdf/1904.04307"]} -{"year":"2019","title":"Word Usage Similarity Estimation with Sentence Representations and Automatic Substitutes","authors":["AG Soler, M Apidianaki, A Allauzen - arXiv preprint arXiv:1905.08377, 2019"],"snippet":"… al., 2014). We use 300-dimensional GloVe embeddings pre-trained on Common Crawl (840B tokens).5 The representation of a sentence is obtained by averaging the GloVe embeddings of the words in the sentence. 
SIF (Smooth …","url":["https://arxiv.org/pdf/1905.08377"]} -{"year":"2019","title":"Word-embedding data as an alternative to questionnaires for measuring the affective meaning of concepts","authors":["A van Loon, J Freese - 2019"],"snippet":"… Here we include information from both algorithms. The GloVe embeddings we use have been trained on text obtained from Wikipedia, Twitter, and Common Crawl. The Word2vec embeddings we use are trained on the Google News Corpus …","url":["https://osf.io/preprints/socarxiv/r7ewx/download"]} -{"year":"2019","title":"Word-Embeddings and Grammar Features to Detect Language Disorders in Alzheimer's Disease Patients","authors":["JS Guerrero-Cristancho, JC Vásquez-Correa… - TecnoLógicas, 2020"],"snippet":"… occurrence in a document [13]. Said authors considered a pre-trained model with the Common Crawl dataset, whose vocabulary size exceeds the 2 million and contains 840 billion words. A logistic regression classifier and …","url":["https://revistas.itm.edu.co/index.php/tecnologicas/article/download/1387/1456"]} -{"year":"2019","title":"WTMED at MEDIQA 2019: A Hybrid Approach to Biomedical Natural Language Inference","authors":["Z Wu, Y Song, S Huang, Y Tian, F Xia - Proceedings of the 18th BioNLP Workshop …, 2019"],"snippet":"Page 1. Proceedings of the BioNLP 2019 workshop, pages 415–426 Florence, Italy, August 1, 2019. 
c 2019 Association for Computational Linguistics 415 WTMED at MEDIQA 2019: A Hybrid Approach to Biomedical Natural Language Inference …","url":["https://www.aclweb.org/anthology/W19-5044"]} -{"year":"2019","title":"X-WikiRE: A Large, Multilingual Resource for Relation Extraction as Machine Comprehension","authors":["M Abdou, C Sas, R Aralikatte, I Augenstein, A Søgaard - arXiv preprint arXiv …, 2019"],"snippet":"… All monolingual models' word embeddings were initialised using fastText embeddings trained on each language's Wikipedia and common crawl corpora,7 except for the comparison experiments described in sub-section …","url":["https://arxiv.org/pdf/1908.05111"]} -{"year":"2019","title":"XLNet: Generalized Autoregressive Pretraining for Language Understanding","authors":["Z Yang, Z Dai, Y Yang, J Carbonell, R Salakhutdinov… - arXiv preprint arXiv …, 2019"],"snippet":"Page 1. XLNet: Generalized Autoregressive Pretraining for Language Understanding Zhilin Yang∗1, Zihang Dai∗12, Yiming Yang1, Jaime Carbonell1, Ruslan Salakhutdinov1, Quoc V. Le2 1Carnegie Mellon University, 2Google …","url":["https://arxiv.org/pdf/1906.08237"]} -{"year":"2019","title":"YNU Wb at HASOC 2019: Ordered Neurons LSTM with Attention for Identifying Hate Speech and Offensive Language","authors":["B Wang, SL Yunxia Ding, X Zhou - Proceedings of the 11th annual meeting of the …, 2019"],"snippet":"… And the pre-training word vector we used is fastText, which is provided by Mikolov et al. [7]. It is a 2 million word vector trained using subword information on Common Crawl with 600B tokens, and its dimension is 300. 4.3 Result …","url":["http://ceur-ws.org/Vol-2517/T3-2.pdf"]} -{"year":"2019","title":"YNUWB at SemEval-2019 Task 6: K-max pooling CNN with average meta-embedding for identifying offensive language","authors":["B Wang, X Zhou, X Zhang - Proceedings of the 13th International Workshop on …, 2019"],"snippet":"… FastText is provided by Mikolov et al. 
(Mikolov et al., 2018), it is a 2 million word vector trained using subword information on Common Crawl with 600B tokens, and its dimension is 300. Glove is provided by Jeffrey Pennington et al …","url":["https://www.aclweb.org/anthology/S19-2143"]} -{"year":"2019","title":"Zastosowania metody rzutu przypadkowego w głębokich sieciach neuronowych","authors":["PI Wójcik"],"snippet":"Page 1. Akademia Górniczo-Hutnicza im. Stanisława Staszica w Krakowie Wydział Informatyki, Elektroniki i Telekomunikacji Katedra Informatyki Rozprawa doktorska Zastosowania metody rzutu przypadkowego w głębokich …","url":["http://www.doktoraty.iet.agh.edu.pl/_media/2018:pwojcik:phd.pdf"]} -{"year":"2019","title":"Zero-Resource Cross-Lingual Named Entity Recognition","authors":["MS Bari, S Joty, P Jwalapuram - arXiv preprint arXiv:1911.09812, 2019"],"snippet":"… We use FastText embeddings (Grave et al. 2018), which are trained on Common Crawl and Wikipedia, and SGD with a gradient clipping of 5.0 to train the model. We found that the learning rate was crucial for training, and …","url":["https://arxiv.org/pdf/1911.09812"]} -{"year":"2019","title":"Zero-Resource Neural Machine Translation with Monolingual Pivot Data","authors":["A Currey, K Heafield"],"snippet":"… We use all available parallel corpora for EN↔DE (Europarl v7, Common Crawl, and News Commentary v11) and for EN↔RU (Common Crawl, News Commentary v11, Yandex Corpus, and Wiki Headlines) to train the initial …","url":["https://kheafield.com/papers/edinburgh/pivot.pdf"]} -{"year":"2019","title":"Zero-shot Learning and Knowledge Transfer in Music Classification and Tagging","authors":["J Choi, J Lee, J Park, J Nam - arXiv preprint arXiv:1906.08615, 2019"],"snippet":"… We utilized a pretrained GloVe model available online. It contains 19 million vocabularies with 300 dimensional embedding trained from documents in Common Crawl data. 
We then evaluated the model on MTAT and GTZAN …","url":["https://arxiv.org/pdf/1906.08615"]} -{"year":"2019","title":"Zero-Shot Question Classification Using Synthetic Samples","authors":["H Fu, C Yuan, X Wang, Z Sang, S Hu, Y Shi - 2018 5th IEEE International Conference …, 2019"],"snippet":"… The detailed data set is listed in Table 1. All experiments follow the principle of counterpart parameters. The Chinese and English word vectors are pre-trained using Glove respectively on Samsung and Common Crawl corpus. The word dimension is 300 …","url":["https://ieeexplore.ieee.org/abstract/document/8691209/"]} -{"year":"2019","title":"Zero-Shot Semantic Segmentation via Variational Mapping","authors":["N Kato, T Yamasaki, K Aizawa - Proceedings of the IEEE International Conference on …, 2019"],"snippet":"… Dataset Unseen classes PASCAL-50 aeroplane, bicycle, bird, boat, bottle PASCAL-51 bus, car, cat, chair, cow PASCAL-52 diningtable, dog, horse, motorbike, person PASCAL-53 potted plant, sheep, sofa, train, tv/monitor …","url":["http://openaccess.thecvf.com/content_ICCVW_2019/papers/MDALC/Kato_Zero-Shot_Semantic_Segmentation_via_Variational_Mapping_ICCVW_2019_paper.pdf"]} -{"year":"2020","title":"18 Evaluation of Greek Word Embeddings","authors":["S Outsios, C Karatsalos, K Skianis, M Vazirgiannis"],"snippet":"… wiki. The last model has been trained on Common Crawl and Wikipedia data using FastText based on CBOW model with position-weights (Grave et al., 2018), mentioned as cc+wiki. 
Category gr_def gr_neg1 0 cc.el.300 …","url":["http://www.eleto.gr/download/Conferences/12th%20Conference/Papers-and-speakers/12th_18-02-20_OutsiosStamatis-KaratsalosChristos-SkianisK-VazirgiannisMichalis_Paper1_V04.pdf"]} -{"year":"2020","title":"\" Thy algorithm shalt not bear false witness\": An Evaluation of Multiclass Debiasing Methods on Word Embeddings","authors":["T Schlender, G Spanakis - arXiv preprint arXiv:2010.16228, 2020"],"snippet":"… However, surprisingly, the WEAT score measured in ConceptNet is the worst of all three. The GloVe embeddings seem to carry the most bias concerning the RNSB and MAC metrics, which is intuitive when considering the common crawl data it was trained on …","url":["https://arxiv.org/pdf/2010.16228"]} -{"year":"2020","title":"A Benchmark of Rule-Based and Neural Coreference Resolution in Dutch Novels and News","authors":["C Poot, A van Cranenburgh - arXiv preprint arXiv:2011.01615, 2020"],"snippet":"… 5 Evaluation Before presenting our main benchmark results, we discuss the issue of coreference evaluation metrics. 6We use Fasttext common crawl embeddings, https://fasttext.cc/docs/en/crawl-vectors.html Page 6 …","url":["https://arxiv.org/pdf/2011.01615"]} -{"year":"2020","title":"A Better Use of Audio-Visual Cues: Dense Video Captioning with Bi-modal Transformer","authors":["V Iashin, E Rahtu - arXiv preprint arXiv:2005.08271, 2020"],"snippet":"Page 1. 
IASHIN, RAHTU: A BETTER USE OF AUDIO-VISUAL CUES 1 A Better Use of Audio-Visual Cues: Dense Video Captioning with Bi-modal Transformer Vladimir Iashin vladimir.iashin@tuni.fi Esa Rahtu esa.rahtu@tuni.fi …","url":["https://arxiv.org/pdf/2005.08271"]} -{"year":"2020","title":"A brief tour to the NLP Sesame Street","authors":["E Montoya"],"snippet":"… In addition to the strategy to verify fake news this research provided of a large corpus of news articles from Common Crawl named RealNews, as Grover needed a large corpus of news with metadata which was not available or …","url":["https://chatbotslife.com/a-brief-tour-to-the-nlp-sesame-street-7bba02d75ae3"]} -{"year":"2020","title":"A Call for More Rigor in Unsupervised Cross-lingual Learning","authors":["M Artetxe, S Ruder, D Yogatama, G Labaka, E Agirre - arXiv preprint arXiv …, 2020"],"snippet":"… However, as of November 2019, Wikipedia exists in only 307 languages3 of which nearly half have less than 10,000 articles. While one could hope to overcome this by taking the entire web as a corpus, as …","url":["https://arxiv.org/pdf/2004.14958"]} -{"year":"2020","title":"A Character-Level BiGRU-Attention for Phishing Classification","authors":["L Yuan, Z Zeng, Y Lu, X Ou, T Feng - International Conference on Information and …, 2019"],"snippet":"… In addition, Common Crawl that stored a great deal of websites is an open website for crawler learners. There are 800,000 websites provided as legitimate websites data … Phish urls. Legal urls. Data sources. Phish Tank. Common Crawl …","url":["https://link.springer.com/chapter/10.1007/978-3-030-41579-2_43"]} -{"year":"2020","title":"A Comprehensive Survey of Grammar Error Correction","authors":["Y Wang, Y Wang, J Liu, Z Liu - arXiv preprint arXiv:2005.06600, 2020"],"snippet":"… Common Crawl. The Common Crawl corpus [10] is a repository of web crawl data which is open to everyone. It completes crawls monthly since 2011. 
• EVP … [28] Word-Level L1 Yes None Error Selection Wikipedia, 2014 Common Crawl …","url":["https://arxiv.org/pdf/2005.06600"]} -{"year":"2020","title":"A Comprehensive Survey on Word Representation Models: From Classical to State-Of-The-Art Word Representation Language Models","authors":["U Naseem, I Razzak, SK Khan, M Prasad - arXiv preprint arXiv:2010.15036, 2020"],"snippet":"Page 1. A Comprehensive Survey on Word Representation Models: From Classical to State-Of-The-Art Word Representation Language Models USMAN NASEEM∗, School of Computer Science, The University of Sydney, Australia …","url":["https://arxiv.org/pdf/2010.15036"]} -{"year":"2020","title":"A Cross-lingual Natural Language Processing Framework for Infodemic Management","authors":["R Pal, R Pandey, V Gautam, K Bhagat, T Sethi - arXiv preprint arXiv:2010.16357, 2020"],"snippet":"… The algorithm effectively minimizes this function to learn meaningful vector representations. The version of Glove used for experimentation is the publicly available common Crawl (840B tokens, 2.2M vocab, cased, 300 dimension vectors) …","url":["https://arxiv.org/pdf/2010.16357"]} -{"year":"2020","title":"A Deep Learning Approach to Interest Analysis","authors":["T Meer - 2020"],"snippet":"Page 1. A Deep Learning Approach to Interest Analysis Thomas van der Meer A thesis submitted for the degree of Master of Business Informatics Department of Information and Computing Sciences Utrecht University The …","url":["https://dspace.library.uu.nl/bitstream/handle/1874/398939/scriptie_eindversie_tvdm.pdf?sequence=1"]} -{"year":"2020","title":"A Deep Learning-Based Approach for Identifying the Medicinal Uses of Plant-Derived Natural Compounds. Front. Pharmacol. 11: 584875. 
doi: 10.3389/fphar …","authors":["S Yoo, HC Yang, S Lee, J Shin, S Min, E Lee, M Song… - Frontiers in Pharmacology …, 2020"],"snippet":"… alphaisothiocyanatotoluene.” In this study, we used the pre-trained fastText model with Wikipedia and Common Crawl (Grave et al., 2018). The model additionally learned from the DrugBank indication and PubMed literature …","url":["https://pdfs.semanticscholar.org/2105/4ac827a06c594f54e1ffe9c865fcbb994980.pdf"]} -{"year":"2020","title":"A deep search method to survey data portals in the whole web: toward a machine learning classification model","authors":["AS Correa, A Melo Jr, FSC da Silva - Government Information Quarterly, 2020"],"snippet":"… Later, the same authors (AS Correa & da Silva, 2019) took advantage of the URL index of the Common Crawl project (an open repository of web crawl data) to survey potential data portals by searching the URL text strings …","url":["https://www.sciencedirect.com/science/article/pii/S0740624X20302896"]} -{"year":"2020","title":"A Deep-Learning-Based Blocking Technique for Entity Linkage","authors":["F Azzalini, M Renzi, L Tanca - International Conference on Database Systems for …, 2020"],"snippet":"… attribute value \\(t[A_{k}]\\) is transformed into a real-valued vector \\(\\mathbf{v} (w)\\). 
The fastText model we use is crawl-300d-2M-subword [3] where each word is represented as a 300-dimensional vector and the …","url":["https://link.springer.com/chapter/10.1007/978-3-030-59410-7_37"]} -{"year":"2020","title":"A Focused Study to Compare Arabic Pre-training Models on Newswire IE Tasks","authors":["W Lan, Y Chen, W Xu, A Ritter - arXiv preprint arXiv:2004.14519, 2020"],"snippet":"… three times; (2) add the Arabic shuffled Os- car data (Ortiz Suárez et al., 2019), a large-scale multilingual dataset obtained by language identification and filtering of the Common Crawl corpus … XLM-Rbase CommonCrawl 55.6B …","url":["https://arxiv.org/pdf/2004.14519"]} -{"year":"2020","title":"A Framework for Word Embedding Based Automatic Text Summarization and Evaluation","authors":["TT Hailu, J Yu, TG Fantaye - Information, 2020"],"snippet":"Text summarization is a process of producing a concise version of text (summary) from one or more information sources. If the generated summary preserves meaning of the original text, it will help the users to make fast and …","url":["https://www.mdpi.com/2078-2489/11/2/78/pdf"]} -{"year":"2020","title":"A German Language Voice Recognition System using DeepSpeech","authors":["J Xu, K Matta, S Islam, A Nürnberger"],"snippet":"… 1, pp. 517–520, 1992. [14] Christopher Cieri, David Miller and Kevin Walker, “The Fisher Corpus: a Resource for the Next Generations of Speech-to-Text,” LREC, 2004. [15] CommomCrawl, “English language model,” http://commoncrawl.org/, 2020 …","url":["https://www.researchgate.net/profile/Kaveen_Matta_Kumaresh/publication/342657372_German_Voice_Recognition_System_using_DeepSpeech/links/5efee6e3a6fdcc4ca447681a/German-Voice-Recognition-System-using-DeepSpeech.pdf"]} -{"year":"2020","title":"A Gradient Boosting-Seq2Seq System for Latin POS Tagging and Lemmatization","authors":["GGA Celano - LREC 2020 Workshop Language Resources and …"],"snippet":"… prefixes, infixes, or suffixes to be weighted. 
Some models for Latin, such as the one based on texts from Common Crawl and Wikipedia, have already been computed and are freely available. 7 However, since the data released …","url":["https://www.academia.edu/download/63734156/LT4HALAbook20200624-19244-er3k3d.pdf#page=126"]} -{"year":"2020","title":"A graph based framework for structured prediction tasks in sanskrit","authors":["A Krishna, A Gupta, P Goyal, B Santra, P Satuluri - Computational Linguistics, 2020"],"snippet":"Page 1. A Graph Based Framework for Structured Prediction Tasks in Sanskrit Amrith Krishna* University of Cambridge Bishal Santra Indian Institute of Technology Kharagpur Ashim Gupta† University of Utah Pavankumar Satuluri Chinmaya Vishwavidyapeeth …","url":["https://www.mitpressjournals.org/doi/pdf/10.1162/coli_a_00390"]} -{"year":"2020","title":"A Graph-Theoretic Approach for the Detection of Phishing Webpages","authors":["CL Tan, KL Chiew, KSC Yong, J Abdullah, Y Sebastian - Computers & Security, 2020"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S016740482030078X"]} -{"year":"2020","title":"A Hybrid Approach for Aspect-Based Sentiment Analysis Using Deep Contextual Word Embeddings and Hierarchical Attention","authors":["MM Trusca, D Wassenberg, F Frasincar, R Dekker - arXiv preprint arXiv:2004.08673, 2020","R Dekker - Web Engineering: 20th International Conference, ICWE …"],"snippet":"… The last two conditions for f are necessary to prevent overweighting of either rare or frequent co-occurrences. In this paper, we choose to use 300-dimension GloVe word embeddings trained on the Common Crawl (42 billion words)[14]. 
Word2vec …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=XpnqDwAAQBAJ&oi=fnd&pg=PA365&dq=commoncrawl&ots=-hereKCQai&sig=PCXnLnE9TjyMtqs05nolBDRi69g","https://arxiv.org/pdf/2004.08673"]} -{"year":"2020","title":"A Large Scale Study on Health Information Retrieval for Laypersons","authors":["Z Liu - 2020"],"snippet":"… 3.1 Description of Document Collection The consumer-oriented health search task uses a dataset called clefehealth2018 corpus, which was created by acquiring web pages from various health do- mains(websites) using the CommonCrawl platform1 …","url":["https://cs.anu.edu.au/courses/CSPROJECTS/20S1/reports/u6022937_report.pdf"]} -{"year":"2020","title":"A Large-Scale Multi-Document Summarization Dataset from the Wikipedia Current Events Portal","authors":["D Gholipour Ghalandari, C Hokamp, J Glover, G Ifrim - arXiv, 2020","DG Ghalandari, C Hokamp, NT Pham, J Glover, G Ifrim - arXiv preprint arXiv …, 2020"],"snippet":"… We also automatically extend these source articles by looking for related articles in the Common Crawl archive … Table 1: Example event summary and linked source ar- ticles from the Wikipedia Current Events Portal, and …","url":["https://arxiv.org/pdf/2005.10070","https://ui.adsabs.harvard.edu/abs/2020arXiv200510070G/abstract"]} -{"year":"2020","title":"A Large-Scale Semi-Supervised Dataset for Offensive Language Identification","authors":["S Rosenthal, P Atanasova, G Karadzhov, M Zampieri… - arXiv preprint arXiv …, 2020"],"snippet":"… The first layer of the LSTM model is an embedding layer, which we initialize with a concatenation of the GloVe 300-dimensional (Pennington et al., 2014) and FastText's Common Crawl 300dimensional embeddings (Grave et al., 2018). 
The Page 5 …","url":["https://arxiv.org/pdf/2004.14454"]} -{"year":"2020","title":"A Longitudinal Analysis of Job Skills for Entry-Level Data Analysts","authors":["T Dong, J Triche - Journal of Information Systems Education, 2020"],"snippet":"… Therefore, we used the Common Crawl dataset to address this problem (http:// commoncrawl.org/). Common Crawl is a non-profit organization that builds and maintains an open repository of web crawl data that is, in essence, a copy of the Internet …","url":["http://jise.org/Volume31/n4/JISEv31n4p312.pdf"]} -{"year":"2020","title":"A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages","authors":["P Ortiz Suárez, L Romary, B Sagot - arXiv, 2020","PO Suárez, L Romary, B Sagot - arXiv preprint arXiv:2006.06202, 2020"],"snippet":"… al., 2019), a freely available2 multilingual dataset obtained by performing language classification, filtering and cleaning of the whole Common Crawl corpus.3 … 1 https://commoncrawl.org 2 https://traces1.inria.fr/oscar/ 3Snapshot …","url":["https://arxiv.org/pdf/2006.06202","https://ui.adsabs.harvard.edu/abs/2020arXiv200606202O/abstract"]} -{"year":"2020","title":"A Multilingual Evaluation for Online Hate Speech Detection","authors":["M Corazza, S Menini, E Cabrio, S Tonelli, S Villata - ACM Transactions on Internet …, 2020"],"snippet":"… In particular, we use the Italian and German embeddings trained on Common Crawl and Wikipedia [33] with size 300 … English Fasttext Crawl embeddings: English embeddings trained by Fasttext9 on Common Crawl with an embedding size of 300 …","url":["https://dl.acm.org/doi/abs/10.1145/3377323"]} -{"year":"2020","title":"A Neural-based model to Predict the Future Natural Gas Market Price through Open-domain Event Extraction","authors":["MT Chau, D Esteves, J Lehmann"],"snippet":"… Strong baseline We feed the price and sentence embedding of filtered news using spaCy small English (Context tensor trained on [39], 300-d embedding vector) and large 
English model (trained on both [39] and Common Crawl …","url":["http://ceur-ws.org/Vol-2611/paper2.pdf"]} -{"year":"2020","title":"A NOVEL APPROACH FOR NAMED ENTITY RECOGNITION ON HINDI LANGUAGE USING RESIDUAL BILSTM NETWORK","authors":["R Shelke, D Thakore"],"snippet":"… It provides word embeddings for Hindi (and 157 other languages) and is based on the CBOW (Continuous Bag-of-Words) model. The CBOW model learns by predicting the current word based on its context, and it was trained …","url":["http://www.academia.edu/download/63216061/120200506-26612-102sbv8.pdf"]} -{"year":"2020","title":"A novel approach to sentiment analysis in Persian using discourse and external semantic information","authors":["R Dehkharghani, H Emami - arXiv preprint arXiv:2007.09495, 2020"],"snippet":"Page 1. * Corresponding Author A novel approach to sentiment analysis in Persian using discourse and external semantic information *Rahim Dehkharghani, Faculty of Engineering, University of Bonab, Bonab, Iran rdehkharghani …","url":["https://arxiv.org/pdf/2007.09495"]} -{"year":"2020","title":"A Novel BGCapsule Network for Text Classification","authors":["AK Gangwar, V Ravi - arXiv preprint arXiv:2007.04302, 2020"],"snippet":"… GloVe. We used GloVe [21] pretrained model. The GloVe model trained on 2.2 million vocabularies, 840 billion tokens of web data from Common Crawl. This Glove embedding projected each word to a 300-dimensional vector …","url":["https://arxiv.org/pdf/2007.04302"]} -{"year":"2020","title":"A novel reasoning mechanism for multi-label text classification","authors":["R Wang, R Ridley, W Qu, X Dai - Information Processing & Management"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. 
Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0306457320309341"]} -{"year":"2020","title":"A Performance Comparison among Different Amounts of Context on Deep Learning Based Intent Classification Models","authors":["M Jung, J Kim, JY Jang, H Jung, S Shin - 2020 International Conference on …, 2020"],"snippet":"… We also employ word embeddings trained on Common Crawl of fastText [13], a library for efficient text classification and representation learning. We apply a bidirectional LSTM (Bi-LSTM) network [6] to build an LSTM based intent classification model …","url":["https://ieeexplore.ieee.org/abstract/document/9289467/"]} -{"year":"2020","title":"A Practical Approach for Taking Down Avalanche Botnets Under Real-World Constraints","authors":["D Preuveneers, A Duda, W Joosen, M Korczynski"],"snippet":"Page 1. A Practical Approach for Taking Down Avalanche Botnets Under Real-World Constraints Victor Le Pochat∗, Tim Van hamme∗, Sourena Maroofi§, Tom Van Goethem∗, Davy Preuveneers∗, Andrzej Duda§, Wouter Joosen …","url":["https://lirias.kuleuven.be/retrieve/567093/"]} -{"year":"2020","title":"A Practical Guide to Hybrid Natural Language Processing: Combining Neural Models and Knowledge Graphs for NLP","authors":["JM Gomez-Perez, R Denaux, A Garcia-Silva - 2020"],"snippet":"Page 1. Jose Manuel Gomez-Perez Ronald Andres Garcia-Silva Denaux A to Practical Hybrid Natural Guide Language Processing Combining Neural Models and Knowledge Graphs for NLP Page 2. 
A Practical Guide …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=Ou_rDwAAQBAJ&oi=fnd&pg=PR7&dq=commoncrawl&ots=7ExXbVHzPG&sig=WOLotv9GbQ2RA9QHICwuff_hHVM"]} -{"year":"2020","title":"A Recipe for Creating Multimodal Aligned Datasets for Sequential Tasks","authors":["AS Lin, S Rao, A Celikyilmaz, E Nouri, C Brockett… - arXiv preprint arXiv …, 2020"],"snippet":"… We extract text recipes from Common Crawl,2 one of the largest web sources of text … CommonCrawl text-text recipe pairs We randomly choose 200 text-text recipes pairs (spanning 5 dishes) from the test … Table 3: Results for …","url":["https://arxiv.org/pdf/2005.09606"]} -{"year":"2020","title":"A Revised Generative Evaluation of Visual Dialogue","authors":["D Massiceti, V Kulharia, PK Dokania, N Siddharth… - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. A Revised Generative Evaluation of Visual Dialogue Daniela Massiceti Viveka Kulharia Puneet K. Dokania N. Siddharth Philip HS Torr University of Oxford {daniela, viveka, puneet, nsid, phst} @robots.ox.ac.uk Abstract …","url":["https://arxiv.org/pdf/2004.09272"]} -{"year":"2020","title":"A Study on Transformer-based Machine Comprehension with Curriculum Learning","authors":["MQ BUI - 2020"],"snippet":"Page 1. Japan Advanced Institute of Science and Technology JAIST Repository https://dspace.jaist.ac.jp/ Title A Study on Transformer-based Machine Comprehension with Curriculum Learning Author(s) BUI, MINH …","url":["https://dspace.jaist.ac.jp/dspace/bitstream/10119/16864/5/paper.pdf"]} -{"year":"2020","title":"A Survey of Document Grounded Dialogue Systems (DGDS)","authors":["L Ma, WN Zhang, M Li, T Liu - arXiv preprint arXiv:2004.13818, 2020"],"snippet":"Page 1. 
A Survey of Document Grounded Dialogue Systems (DGDS) LONGXUAN MA, Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology, China WEI-NAN ZHANG, Research Center …","url":["https://arxiv.org/pdf/2004.13818"]} -{"year":"2020","title":"A Survey on Contextual Embeddings","authors":["Q Liu, MJ Kusner, P Blunsom - arXiv preprint arXiv:2003.07278, 2020"],"snippet":"… T5 introduces a new pre-training dataset, Colossal Clean Crawled Corpus by cleaning the web pages from Common Crawl … by training a Transformerbased masked language model on one hundred languages, using more …","url":["https://arxiv.org/pdf/2003.07278"]} -{"year":"2020","title":"A Systematic Study of Inner-Attention-Based Sentence Representations in Multilingual Neural Machine Translation","authors":["R Vázquez, A Raganato, M Creutz, J Tiedemann - Computational Linguistics, 2020"],"snippet":"Page 1. Computational Linguistics Just Accepted MS. https://doi.org/10.1162/ COLI_a_00377 © Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license A Systematic Study of Inner-Attention-Based …","url":["https://www.mitpressjournals.org/doi/pdf/10.1162/COLI_a_00377"]} -{"year":"2020","title":"A Text Augmentation Approach using Similarity Measures based on Neural Sentence Embeddings for Emotion Classification on Microblogs","authors":["YK Shyang, JLS Yan - 2020 IEEE 2nd International Conference on Artificial …, 2020"],"snippet":"… vectors. We used two InferSent models available, which were InferSent trained using GloVe on Common Crawl 840B (InferSent-GloVe) and InferSent trained using fastText on Common Crawl 600B (InferSentfastText). Four …","url":["https://ieeexplore.ieee.org/abstract/document/9257826/"]} -{"year":"2020","title":"A Transformer-based Audio Captioning Model with Keyword Estimation","authors":["Y Koizumi, R Masumura, K Nishida, M Yasuda, S Saito - arXiv preprint arXiv …, 2020"],"snippet":"… sions different. 
We use the bottleneck feature of VGGish [11] (Dx = 128) for audio embedding, and fastText [18] trained on the Common Crawl corpus (Dw = 300) for caption-word and keyword embedding, respectively. Since the …","url":["https://arxiv.org/pdf/2007.00222"]} -{"year":"2020","title":"A web analytics approach to map the influence and reach of CCAFS","authors":["B Carneiro, G Resce, Y Ma, G Pacillo, P Läderach - 2020"],"snippet":"Page 1. A web analytics approach to map the influence and reach of CCAFS Working Paper No. 326 CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS) Page 2. A web analytics …","url":["https://cgspace.cgiar.org/bitstream/handle/10568/110588/Working%20Paper-326.pdf?sequence=1&isAllowed=y"]} -{"year":"2020","title":"A WRITTEN TEST FOR ARTIFICIAL GENERAL INTELLIGENCE","authors":["M Casteluccio - Strategic Finance, 2020"],"snippet":"… If you input a few words, GPT-3 will write a completed thought or sentence. The model was trained on data from Common Crawl, a nonprofit that builds and maintains an open repository of web crawl data accessible to the public for free (common crawl.org) …","url":["http://search.proquest.com/openview/e8f242b31761f187b35a3bb15ab0e724/1?pq-origsite=gscholar&cbl=48426"]} -{"year":"2020","title":"Accenture at CheckThat! 2020: If you say so: Post-hoc fact-checking of claims using transformer-based models","authors":["E Williams, P Rodrigues, V Novak - arXiv preprint arXiv:2009.02431, 2020","V Novak"],"snippet":"… BPE) instead of WordPiece. [23] The base-roberta model was pre-trained on 160GB of text extracted from BookCorpus, English Wikipedia, CC-News, OpenWebText, and Stories (a subset of CommonCrawl Data) [14]. 
At the time …","url":["http://ceur-ws.org/Vol-2696/paper_226.pdf","https://arxiv.org/pdf/2009.02431"]} -{"year":"2020","title":"Accurate and Fast URL Phishing Detector: A Convolutional Neural Network Approach","authors":["W Wei, Q Ke, J Nowak, M Korytkowski, R Scherer… - Computer Networks, 2020"],"snippet":"… The database downloaded during the article writing contained 10,604 records. To obtain legitimate websites, the second part of the training dataset was downloaded from the Common Crawl Foundation (http://commoncrawl.org/) …","url":["https://www.sciencedirect.com/science/article/pii/S1389128620301109"]} -{"year":"2020","title":"Active Learning for Spreadsheet Cell Classification","authors":["J Gonsior, J Rehak, M Thiele, E Koci, M Günther…"],"snippet":"… Another recent corpus is Fuse [4], which comprises 249, 376 unique spreadsheets, extracted from Common Crawl2. Each spreadsheet is accompanied by a JSON file, which includes NLP token extraction and metrics …","url":["https://wwwdb.inf.tu-dresden.de/wp-content/uploads/SEAData2.pdf"]} -{"year":"2020","title":"Adaptive GloVe and FastText Model for Hindi Word Embeddings","authors":["V Gaikwad, Y Haribhakta - Proceedings of the 7th ACM IKDD CoDS and 25th …, 2020"],"snippet":"… developed using original GloVe model, FastText model (FastTextHin), Adaptive FastText model (AFM) (trained on Hindi monolingual corpus [11] and FastText embeddings published on the website [25] (FastTextWeb) …","url":["https://dl.acm.org/doi/abs/10.1145/3371158.3371179"]} -{"year":"2020","title":"ADD: Academic Disciplines Detector Based on Wikipedia","authors":["A Gjorgjevikj, K Mishev, D Trajanov - IEEE Access, 2020"],"snippet":"… For representation of the textual data into fixed-length vector form using pre-trained text encoding models, in addition to the model files, word vectors trained with GloVe [38] on Common Crawl (840B tokens)6 and with FastText …","url":["https://ieeexplore.ieee.org/iel7/6287639/8948470/08948031.pdf"]} 
-{"year":"2020","title":"Advanced Semantics for Commonsense Knowledge Extraction","authors":["TP Nguyen, S Razniewski, G Weikum - arXiv preprint arXiv:2011.00905, 2020"],"snippet":"Page 1. Advanced Semantics for Commonsense Knowledge Extraction Tuan-Phong Nguyen Max Planck Institute for Informatics tuanphong@mpi-inf.mpg.de Simon Razniewski Max Planck Institute for Informatics srazniew@mpi-inf.mpg.de …","url":["https://arxiv.org/pdf/2011.00905"]} -{"year":"2020","title":"Advanced Web Crawlers","authors":["JM Patel - Getting Structured Data from the Internet, 2020"],"snippet":"… A good starting point is using common crawl's fork of Apache Nutch 1.x–based crawler. The codebase is open sourced and available on the GitHub repo (https://github.com/commoncrawl), and it's not only well …","url":["https://link.springer.com/chapter/10.1007/978-1-4842-6576-5_8"]} -{"year":"2020","title":"Advancements in Deep Learning Theory and Applications: Perspective in 2020 and beyond","authors":["MN Saadat, M Shuaib - Advances and Applications in Deep Learning, 2020"],"snippet":"The aim of this chapter is to introduce newcomers to deep learning, deep learning platforms, algorithms, applications, and open-source datasets. This chapter will give you a broad overview of the term deep learning, in context …","url":["https://www.intechopen.com/books/advances-and-applications-in-deep-learning/advancements-in-deep-learning-theory-and-applications-perspective-in-2020-and-beyond"]} -{"year":"2020","title":"Advancing Neural Language Modeling in Automatic Speech Recognition","authors":["K Irie - 2020"],"snippet":"Page 1. 
Advancing Neural Language Modeling in Automatic Speech Recognition Von der Fakultät für Mathematik, Informatik und Naturwissenschaften der RWTH Aachen University zur Erlangung des akademischen Grades …","url":["https://www-i6.informatik.rwth-aachen.de/publications/download/1142/IrieKazuki--AdvancingNeuralLanguageModelinginAutomaticSpeechRecognition--2020.pdf"]} -{"year":"2020","title":"Adversarial Self-Supervised Data-Free Distillation for Text Classification","authors":["X Ma, Y Shen, G Fang, C Chen, C Jia, W Lu - arXiv preprint arXiv:2010.04883, 2020"],"snippet":"… DeepFace (Taigman et al., 2014) is trained on user images under confidential policies for protecting users. Further, some datasets, like Common Crawl dataset used in GPT3 (Brown et al., 2020), contain nearly a trillion words and are difficult to transmit and store …","url":["https://arxiv.org/pdf/2010.04883"]} -{"year":"2020","title":"Adversarial Training for Large Neural Language Models","authors":["X Liu, H Cheng, P He, W Chen, Y Wang, H Poon, J Gao - arXiv preprint arXiv …, 2020"],"snippet":"… For continual pre-training of RoBERTa, we use Wikipedia (13GB), OPENWEBTEXT (public Reddit content (Gokaslan and Cohen); 38GB), STORIES (a subset of CommonCrawl (Trinh and Le, 2018); 31GB). 2https://dumps.wikimedia.org/enwiki/ Page 5 …","url":["https://arxiv.org/pdf/2004.08994"]} -{"year":"2020","title":"Affective Conditioning on Hierarchical Attention Networks applied to Depression Detection from Transcribed Clinical Interviews","authors":["D Xezonaki, G Paraskevopoulos, A Potamianos…"],"snippet":"… Next, for both corpora we tokenize the speaker turns by splitting them into words. We use 300D GloVe [31] pretrained word embeddings, trained on the Common Crawl corpus, to extract word representations. 
Implementation …","url":["https://indico2.conference4me.psnc.pl/event/35/contributions/3166/attachments/895/934/Thu-3-1-3.pdf"]} -{"year":"2020","title":"Affective Conditioning on Hierarchical Networks applied to Depression Detection from Transcribed Clinical Interviews","authors":["D Xezonaki, G Paraskevopoulos, A Potamianos… - arXiv preprint arXiv …, 2020"],"snippet":"… Next, for both corpora we tokenize the speaker turns by splitting them into words. We use 300D GloVe [31] pretrained word embeddings, trained on the Common Crawl corpus, to extract word representations. Implementation …","url":["https://arxiv.org/pdf/2006.08336"]} -{"year":"2020","title":"AgglutiFiT: Efficient Low-Resource Agglutinative Language Model Fine-Tuning","authors":["Z Li, X Li, J Sheng, W Slamu - IEEE Access, 2020"],"snippet":"… test set. For crosslingual pre-training language models, we use the XLM − R model loaded from the torch.Hub that It is trained on 2.5TB of CommonCrawl data, in 17 languages and uses a large vocabulary size of 95K. XLM …","url":["https://ieeexplore.ieee.org/iel7/6287639/8948470/09164940.pdf"]} -{"year":"2020","title":"AI4Bharat-IndicNLP Corpus: Monolingual Corpora and Word Embeddings for Indic Languages","authors":["A Kunchukuttan, D Kakwani, S Golla, A Bhattacharyya… - arXiv preprint arXiv …, 2020"],"snippet":"… FastText also provides embeddings trained on Wikipedia + CommonCrawl corpus … 1. 
We augmented our crawls with some data from other sources: Leipzig corpus (Goldhahn et al., 2012) (Tamil and Bengali), WMT NewsCrawl …","url":["https://arxiv.org/pdf/2005.00085"]} -{"year":"2020","title":"Algorithmic Bias: On the Implicit Biases of Social Technology","authors":["G Johnson - 2020"],"snippet":"… This study found that parsing software trained on a dataset called “the common crawl”—an assemblage of 840 billion words collected by crawling the internet—resulted in the program producing “human-like semantic …","url":["http://philsci-archive.pitt.edu/17169/1/Algorithmic%20Bias.pdf"]} -{"year":"2020","title":"ALOD2Vec Matcher Results for OAEI 2020","authors":["H Paulheim"],"snippet":"… like DBpedia [8] – but instead on the whole Web: The dataset consists of hypernymy relations extracted from the Common Crawl3, a … 3 see http://commoncrawl.org/ 4 see http://webisa.webdatacommons.org/concept …","url":["http://disi.unitn.it/~pavel/om2020/papers/oaei20_paper2.pdf"]} -{"year":"2020","title":"An AI-Based System for Formative and Summative Assessment in Data Science Courses","authors":["P Vittorini, S Menini, S Tonelli - International Journal of Artificial Intelligence in …, 2020"],"snippet":"Massive open online courses (MOOCs) provide hundreds of students with teaching materials, assessment tools, and collaborative instruments. The assessment a.","url":["https://link.springer.com/article/10.1007/s40593-020-00230-2"]} -{"year":"2020","title":"An Analysis of Dataset Overlap on Winograd-Style Tasks","authors":["A Emami, A Trischler, K Suleman, JCK Cheung - arXiv preprint arXiv:2011.04767, 2020"],"snippet":"… This is a form of data contamination. 
One of the earliest works that trained a language model on Common Crawl data identified and removed a training documents that overlapped with one of their evaluation datasets (Trinh and Le, 2018) …","url":["https://arxiv.org/pdf/2011.04767"]} -{"year":"2020","title":"An Approach to NMT Re-Ranking Using Sequence-Labeling for Grammatical Error Correction","authors":["B Wang, K Hirota, C Liu, Y Dai, Z Jia"],"snippet":"Page 1. NMT Re-Ranking Using Sequence-Labeling for GEC Paper: An Approach to NMT Re-Ranking Using Sequence-Labeling for Grammatical Error Correction Bo Wang, Kaoru Hirota, Chang Liu, Yaping Dai † , and Zhiyang Jia …","url":["https://www.jstage.jst.go.jp/article/jaciii/24/4/24_557/_pdf"]} -{"year":"2020","title":"An Effective Phishing Detection Model Based on Character Level Convolutional Neural Network from URL","authors":["A Aljofey, Q Jiang, Q Qu, M Huang, JP Niyigena - Electronics, 2020"],"snippet":"Phishing is the easiest way to use cybercrime with the aim of enticing people to give accurate information such as account IDs, bank details, and passwords. 
This type of cyberattack is usually triggered by emails, instant messages, or …","url":["https://www.mdpi.com/2079-9292/9/9/1514/pdf"]} -{"year":"2020","title":"An Empirical Investigation of Performances of Different Word Embedding Algorithms in Comment Clustering","authors":["E Dorani, N Duru, T Yıldız - 2019 Innovations in Intelligent Systems and …"],"snippet":"… In this study, we used the Common Crawl (1.9 million words)1 and (2.0 million word)2 pre-trained word vectors for Glove and FastText, respectively … Such that we used the Common Crawl (1.9 million words) and (2.0 million word) …","url":["https://ieeexplore.ieee.org/abstract/document/8946379/"]} -{"year":"2020","title":"An Empirical Investigation Towards Efficient Multi-Domain Language Model Pre-training","authors":["K Arumae, Q Sun, P Bhatia - arXiv preprint arXiv:2010.00784, 2020"],"snippet":"… We processed publicly available bio-medical and non-bio-medical corpora for pre-training our models. For non-bio-medical data, we use BookCorpus and English Wikipedia data, CommonCrawl Stories (Trinh and Le, 2018) …","url":["https://arxiv.org/pdf/2010.00784"]} -{"year":"2020","title":"An Empirical Study of Pre-trained Transformers for Arabic Information Extraction","authors":["W Lan, Y Chen, W Xu, A Ritter - Proceedings of the 2020 Conference on Empirical …, 2020"],"snippet":"… XLM-Rlarge CommonCrawl 295B/55.6B/2.9B SentencePiece 250k/80k/ 14k yes large 550M … and the Gigaword portion three times; (2) adding the Arabic section of the Oscar corpus (Ortiz Suárez et al., 2019), a large-scale …","url":["https://www.aclweb.org/anthology/2020.emnlp-main.382.pdf"]} -{"year":"2020","title":"An Empirical Study of Transformer-Based Neural Language Model Adaptation","authors":["K Li, Z Liu, T He, H Huang, F Peng, D Povey… - ICASSP 2020-2020 IEEE …, 2020"],"snippet":"… We denote the merged corpus as “Source1” in all tables. 
The second corpus is an English subset (10%) of the Common Crawl News (CCNews) [27], a news corpus contains articles published worldwide between September 2016 and February 2019 …","url":["https://ieeexplore.ieee.org/abstract/document/9053399/"]} -{"year":"2020","title":"An Empirical Study on Explainable Prediction of Text Complexity: Preliminaries for Text Simplification","authors":["C Garbacea, M Guo, S Carton, Q Mei - arXiv preprint arXiv:2007.15823, 2020"],"snippet":"… In our experiments we use the 12 layer XLNeT base pre-trained model on 2https://docs.fast.ai/text.html 4 Page 5. the English Wikipedia and the Books corpus (similar to BERT), and additionally also on Giga5 ClueWeb …","url":["https://arxiv.org/pdf/2007.15823"]} -{"year":"2020","title":"An Empirical Study on Release Engineering of Artificial Intelligence Products","authors":["M Xiu - arXiv preprint arXiv:2012.01403, 2020"],"snippet":"… SPIRAL CelebA HQ 10 (Config: Network Structure) hub.Module Text Embedding Albert Wikipedia, BooksCorpus, Stories, CommonCrawl, Giga5, Clue Web 4 (Config: Model Size) (Data Pre-processing: Cased or Not) hub.Module …","url":["https://arxiv.org/pdf/2012.01403"]} -{"year":"2020","title":"An English–Swahili parallel corpus and its use for neural machine translation in the news domain","authors":["F Sánchez-Martınez, VM Sánchez-Cartagena…"],"snippet":"… 9https://commoncrawl.github.io/ cc-crawl-statistics/plots/languages 10https://github.com/ CLD2Owners/cld2 300 Page 3. of crawling and, from the remaining 3 232, only 908 ended up containing data in both languages. 
Document alignment …","url":["https://www.dlsi.ua.es/~fsanchez/pub/pdf/sanchez-martinez20b.pdf"]} -{"year":"2020","title":"An Enhanced Sentiment Analysis Framework Based on Pre-Trained Word Embedding","authors":["EH Mohamed, MES Moussa, MH Haggag - International Journal of Computational …, 2020"],"snippet":"Login to your account …","url":["https://www.worldscientific.com/doi/abs/10.1142/S1469026820500315"]} -{"year":"2020","title":"An Evaluation Benchmark for Testing the Word Sense Disambiguation Capabilities of Machine Translation Systems","authors":["A Raganato, Y Scherrer, J Tiedemann - … of The 12th Language Resources and …, 2020"],"snippet":"… Page 2. 3669 CS–EN DE–EN FI–EN FR–EN RU–EN Books GlobalVoices Europarl JW300 News-Comm. Tatoeba TED Talks EU Bookshop MultiUN Common Crawl Table 1: Corpora used to extract the MuCoW test suites. The …","url":["https://www.aclweb.org/anthology/2020.lrec-1.452.pdf"]} -{"year":"2020","title":"An Evaluation Model for Auto-generated Cognitive Scripts","authors":["AM ELMougi, YMK Omar, R Hodhod"],"snippet":"… Fig. 6. The Cinema Linear Cognitive Script Converted into Text. B. Computing the GloVe Similarity Ratio Threshold The proposed model uses GloVe vectors of 300 dimensions that are created by training Common Crawl (840B tokens …","url":["https://pdfs.semanticscholar.org/14fd/085addcbcebe7e531198d52f041c4e86a3d9.pdf"]} -{"year":"2020","title":"An Evaluation of Recent Neural Sequence Tagging Models in Turkish Named Entity Recognition","authors":["G Aras, D Makaroglu, S Demir, A Cakir - arXiv preprint arXiv:2005.07692, 2020"],"snippet":"Page 1. 
An Evaluation of Recent Neural Sequence Tagging Models in Turkish Named Entity Recognition Gizem Arasa,, Didem Makaroglua,b, Seniz Demirc and Altan Cakirb aDemiroren Teknoloji AS, Istanbul, Turkey bDepartment …","url":["https://arxiv.org/pdf/2005.07692"]} -{"year":"2020","title":"An Exploration in L2 Word Embedding Alignment","authors":["P Liao"],"snippet":"… It is also run for Wikipedia Chinese fastText embeddings to Common Crawl English fastText embeddings to test the method on a large corpus of a different domain. All fastText embeddings are pretrained and downloaded from their website16 …","url":["https://pdfs.semanticscholar.org/513e/16939e369cf09be34ac4c983d20be53a94b1.pdf"]} -{"year":"2020","title":"An Exploratory Approach to the Corpus Filtering Shared Task WMT20","authors":["A Kejriwal, P Koehn"],"snippet":"… We make the number of lines taken by the wikipedia data be between 40-50% of the number of lines taken by the CommonCrawl data … entropy as defined in Equation 5. We find that the scores we got were ex- tremely …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.108.pdf"]} -{"year":"2020","title":"An Interactive Network for End-to-End Review Helpfulness Modeling","authors":["J Du, L Zheng, J He, J Rong, H Wang, Y Zhang - Data Science and Engineering, 2020"],"snippet":"Review helpfulness prediction aims to prioritize online reviews by quality. 
Existing methods largely combine review texts and star ratings for helpfulness.","url":["https://link.springer.com/article/10.1007/s41019-020-00133-1"]} -{"year":"2020","title":"An Iterative Knowledge Transfer NMT System for WMT20 News Translation Task","authors":["J Kim, S Park, S Kim, Y Choi - Proceedings of the Fifth Conference on Machine …, 2020"],"snippet":"… Kyoto Free Translation Task 0.44M TED Talks 0.24M Monolingual Data (En) Europarl v10 2.29M News Commentary v15 0.6M News Crawl 23.35M News Discussions 63.51M Monolingual Data (Ja) News Crawl …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.11.pdf"]} -{"year":"2020","title":"An Open-Domain Web Search Engine for Answering Comparative Questions","authors":["T Abye, T Sager, AJ Triebel"],"snippet":"… pp. 91–96 (2017) 3. Bevendorff, J., Stein, B., Hagen, M., Potthast, M.: Elastic ChatNoir: Search Engine for the ClueWeb and the Common Crawl. In: Azzopardi, L., Hanbury, A., Pasi, G., Piwowarski, B. (eds.) Advances in Information Retrieval …","url":["http://ceur-ws.org/Vol-2696/paper_130.pdf"]} -{"year":"2020","title":"Analogical frames by constraint satisfaction","authors":["L De Vine - 2020"],"snippet":"Page 1. Analogical Frames by Constraint Satisfaction A THESIS SUBMITTED IN FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY Lance De Vine BMaths …","url":["https://eprints.qut.edu.au/198036/1/Lance_De%20Vine_Thesis.pdf"]} -{"year":"2020","title":"Analysis of the communication of smart city features through social media","authors":["JP Fontanilles"],"snippet":"Page 1. Jordi Pascual Fontanilles Analysis of the communication of smart city features through social media MASTER'S THESIS Supervised by Dr. 
Antonio Moreno Ribas Computer Security Engineering and Artificial …","url":["https://deim.urv.cat/~itaka/itaka2/PDF/acabats/MEMORIA_TFM_JordiPascual.pdf"]} -{"year":"2020","title":"Analyzing Sustainability Reports Using Natural Language Processing","authors":["A Luccioni, E Bailor, N Duchene - arXiv preprint arXiv:2011.08073, 2020"],"snippet":"… In fact, research in financial NLP has found that using general-purpose NLP models trained on corpora such as Wikipedia and the Common Crawl fail to capture domainspecific terms and concepts which are critical for a coherent …","url":["https://arxiv.org/pdf/2011.08073"]} -{"year":"2020","title":"Analyzing the Effect of Community Norms on Gender Bias","authors":["NR Raut - 2020"],"snippet":"… We use word2vec to train embeddings on the comment data for each of the subreddit. For fasttext, we use 2 million word vectors trained with subword information on Common Crawl …","url":["http://search.proquest.com/openview/18f1238b848a27a836459d849f5795c8/1?pq-origsite=gscholar&cbl=18750&diss=y"]} -{"year":"2020","title":"ANDI@ CONCRETEXT: Predicting concreteness in context for English and Italian using distributional models and behavioural norms","authors":["A Rotaru - Proceedings of the 7th evaluation campaign of Natural …, 2020"],"snippet":"… Skip-gram (Google News – 100B) 21 21 21 21 GloVe (Common Crawl – 840B) 21 21 21 21 ConceptNet NumberBatch (ConceptNet + Skip-gram + GloVe) … abs(V(w) - V(c)) Behavioural norms (frequency, etc.) 20 20 20 20 …","url":["http://ceur-ws.org/Vol-2765/paper100.pdf"]} -{"year":"2020","title":"Announcing CzEng 2.0 Parallel Corpus with over 2 Gigawords","authors":["T Kocmi, M Popel, O Bojar - arXiv preprint arXiv:2007.03006, 2020"],"snippet":"… format. New parallel data come from Europarl (v10), News commentary, Wikititles, Commoncrawl, Paracrawl2, WikiMatrix (Schwenketal., 2019), and Tilde MODEL Corpus (EESC, EMA, Rapid; Rozis and Skadinš, 2017). 
We …","url":["https://arxiv.org/pdf/2007.03006"]} -{"year":"2020","title":"Anomaly detection with Generative Adversarial Networks and text patches","authors":["A Drozdyuk, N Eke"],"snippet":"… We used the model containing two million word vectors trained on Common Crawl … The depressive data was similarly cleaned. After cleaning the data it was converted to word vectors using the FastText model trained on Common Crawl and Wikipedia [12] …","url":["https://norberte.github.io/assets/pdf/GAN%20Project%20Report.pdf"]} -{"year":"2020","title":"Another factor to consider is the user's perspective. Everyone uses web search. Using a search engine to look things up on the web is the most popular activity on the …","authors":["D Lewandowski"],"snippet":"… Indexes that offer complete access do, however, exist. At the top of this list is Common Crawl,NOTE 12 a nonprofit project that aims to provide a web index for anyone who's interested … Common Crawl represents an important development …","url":["https://pandoc.networkcultures.org/epub/SotQreader/ch010.xhtml"]} -{"year":"2020","title":"Answering Comparative Questions with Arguments","authors":["A Bondarenko, A Panchenko, M Beloucif, C Biemann… - Datenbank-Spektrum, 2020"],"snippet":"… analyzing Wikidata and DBpedia as additional sources of (structured) information besides the retrieval of sentences/documents from the Common Crawl … Ruppert E, Faralli S, Ponzetto SP, Biemann C (2018) Building a …","url":["https://link.springer.com/article/10.1007/s13222-020-00346-8"]} -{"year":"2020","title":"Answering Event-Related Questions over Long-term News Article Archives","authors":["J Wang, A Jatowt, M Färber, M Yoshikawa"],"snippet":"… We can see that the actual time scope (January, 1988) of the first question is reflected relatively well by its distribution of relevant documents as generally 4 We use Glove [23] embeddings trained on the Common Crawl dataset with 300 dimensions. 
Page 6 …","url":["http://www.aifb.kit.edu/images/1/19/QA_ECIR2020.pdf"]} -{"year":"2020","title":"Application of Machine Learning Techniques for Text Generation","authors":["S Martí Román - 2020"],"snippet":"Page 1. Escola Tècnica Superior d'Enginyeria Informàtica Universitat Politècnica de València Application of Machine Learning Techniques for Text Generation DEGREE FINAL WORK Degree in Computer Engineering Author: Salvador Martí Román …","url":["https://riunet.upv.es/bitstream/handle/10251/149583/Mart%C3%AD%20-%20Uso%20de%20t%C3%A9cnicas%20de%20aprendizaje%20autom%C3%A1tico%20para%20la%20generaci%C3%B3n%20de%20texto.pdf?sequence=1"]} -{"year":"2020","title":"Application of Machine Learning to Classify News Headlines","authors":["P Guttula, RM Aburas, S Srijan"],"snippet":""} -{"year":"2020","title":"AQuaMuSe: Automatically Generating Datasets for Query-Based Multi-Document Summarization","authors":["S Kulkarni, S Chammas, W Zhu, F Sha, E Ie - arXiv preprint arXiv:2010.12694, 2020"],"snippet":"… is a nontrivial task in itself and there are several con1https://commoncrawl org … paragraphs, we use a pre-processed and cleaned version of the Common Crawl corpus (Raffel et al … We illustrate our approach us- ing Google's Natural …","url":["https://arxiv.org/pdf/2010.12694"]} -{"year":"2020","title":"AraWEAT: Multidimensional Analysis of Biases in Arabic Word Embeddings","authors":["A Lauscher, R Takieddin, SP Ponzetto, G Glavaš - arXiv preprint arXiv:2011.01575, 2020"],"snippet":"… For FT, we investigate two models, one trained on the portions of Wikipedia and CommonCrawl corpora written in Modern Standard Arabic (MS) and the other on portions written in Egyptian Arabic.9 We evaluate the four variants …","url":["https://arxiv.org/pdf/2011.01575"]} -{"year":"2020","title":"ArchiMeDe@ DANKMEMES: A New Model Architecture for Meme Detection","authors":["J Setpal, G Sarti"],"snippet":"… We fine-tune representations over the available meme textual data and use them as 
components of our end-to-end system. 1umberto-commoncrawl-cased-v1 in the HuggingFace's model hub (Wolf et al., 2019) Page 4. 2.3 Visual input …","url":["http://ceur-ws.org/Vol-2765/paper138.pdf"]} -{"year":"2020","title":"Are All Good Word Vector Spaces Isomorphic?","authors":["I Vulić, S Ruder, A Søgaard - arXiv preprint arXiv:2004.04070, 2020"],"snippet":"… 3Recent initiatives replace training on Wikipedia with training on larger CommonCrawl data (Grave et al., 2018; Conneau et al., 2020), but the large differences in corpora sizes between high-resource and low-resource languages are not removed …","url":["https://arxiv.org/pdf/2004.04070"]} -{"year":"2020","title":"Are All Languages Created Equal in Multilingual BERT?","authors":["S Wu, M Dredze - arXiv preprint arXiv:2005.09093, 2020"],"snippet":"… (2019) train a multilingual masked language model (Devlin et al., 2019) on 2.5TB of CommonCrawl filtered data covering 100 languages and show it outperforms a Wikipedia-based model on low resource languages (Urdu …","url":["https://arxiv.org/pdf/2005.09093"]} -{"year":"2020","title":"Argumentative relation classification for argumentative dialogue systems","authors":["C Schindler - 2020"],"snippet":"… With this setup, args outperformed ArgumenText in the category related. 3.2.2. ArgumenText The search engine ArgumenText [35] uses the English part of CommonCrawl2 to retrieve relevant documents … 2http://commoncrawl …","url":["https://oparu.uni-ulm.de/xmlui/bitstream/handle/123456789/33973/BScThesis_SchindlerC.pdf?sequence=3&isAllowed=y"]} -{"year":"2020","title":"Argumentative Topology: Finding Loop (holes) in Logic","authors":["S Tymochko, Z New, L Bynum, E Purvine, T Doster… - arXiv preprint arXiv …, 2020"],"snippet":"… Word embeddings are performed using two pretrained models: Word2Vec trained on the Google News dataset [12] and GloVe trained on Common Crawl [13]. 
We compute the persistence diagrams of the topological …","url":["https://arxiv.org/pdf/2011.08952"]} -{"year":"2020","title":"ArgumenText: Argument Classification and Clustering in a Generalized Search Scenario","authors":["J Daxenberger, B Schiller, C Stahlhut, E Kaiser… - Datenbank-Spektrum, 2020"],"snippet":"… Full size image. For the public version of the ArgumenText search engine, we indexed more than 400 million English and German web pages from the CommonCrawl project and segmented all documents into sentences [21] …","url":["https://link.springer.com/article/10.1007/s13222-020-00347-7"]} -{"year":"2020","title":"Artificial Intelligence in mental health and the biases of language based models","authors":["I Straw, C Callison-Burch - PloS one, 2020"],"snippet":"… 52]. As described by Pennington et al. GloVe embeddings were trained on text copora from Wikipedia data, Gigaword and web data from Common Crawl which built a vocabulary of 400,000 frequent words [57]. Word2Vec was …","url":["https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0240376"]} -{"year":"2020","title":"ASAPPpy: a Python Framework for Portuguese STS?","authors":["J Santos, A Alves, HG Oliveira"],"snippet":"… 20 Page 8. From those, CBOW Word2vec and GloVe, both with 300-dimensioned vectors, were selected; (ii) fastText.cc embeddings [9], which provide word vectors for 157 languages, trained on Common Crawl and Wikipedia using fastText …","url":["http://ceur-ws.org/Vol-2583/2_ASAPPpy.pdf"]} -{"year":"2020","title":"Ascent of Pre-trained State-of-the-Art Language Models","authors":["K Nagda, A Mukherjee, M Shah, P Mulchandani… - Advanced Computing …, 2020"],"snippet":"… 6.2 Dataset. Similar to BERT, XLNet was pre-trained using English Wikipedia dataset (13 GB of plain text), as well as CommonCrawl, Giga5 and ClueWeb 2012-B datasets [11]. 
The large variant of the model has a sequence and …","url":["https://link.springer.com/chapter/10.1007/978-981-15-3242-9_26"]} -{"year":"2020","title":"Aspect-Controlled Neural Argument Generation","authors":["B Schiller, J Daxenberger, I Gurevych - Training"],"snippet":"… Consequently, the following preprocessing steps ultimately target retrieval and classification of sentences. To evaluate different data sources, we use a dump from Common-Crawl2 (CC) and Reddit comments3 …","url":["https://public.ukp.informatik.tu-darmstadt.de/UKP_Webpage/publications/2020/2020_PP_BES_aspect_controlled_argument_generation_v0.2.pdf"]} -{"year":"2020","title":"Assessing Demographic Bias in Named Entity Recognition","authors":["S Mishra, S He, L Belli"],"snippet":"… level confidence via the Constrained ForwardBackward algorithm [5]. Different versions of this model were trained on CoNLL 03 NER benchmark dataset [27] by utilizing varying embedding methods: (a) GloVe uses GloVe 840B …","url":["https://kg-bias.github.io/NER_Bias_KG_Bias.pdf"]} -{"year":"2020","title":"Assessing Suitable Word Embedding Model for Malay Language through Intrinsic Evaluation","authors":["YT Phua, KH Yew, OM Foong, MYW Teow - 2020 International Conference on …, 2020"],"snippet":"… This mode was trained on Common Crawl and Wikipedia [26], and pre-trained word vectors for 294 languages were trained on Wikipedia [22]. The results of the evaluation were 0.477 for Pearson correlation coefficient and 0.51 …","url":["https://ieeexplore.ieee.org/abstract/document/9247707/"]} -{"year":"2020","title":"ASSET: A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations","authors":["F Alva-Manchego, L Martin, A Bordes, C Scarton… - arXiv preprint arXiv …, 2020"],"snippet":"… stopwords). We use the 50k most frequent words of the FastText word embeddings vo- cabulary (Bojanowski et al., 2016). 
This vo- cabulary was originally sorted with frequencies of words in the Common Crawl. This score …","url":["https://arxiv.org/pdf/2005.00481"]} -{"year":"2020","title":"Attention-based hierarchical recurrent neural networks for MOOC forum posts analysis","authors":["N Capuano, S Caballé, J Conesa, A Greco - Journal of Ambient Intelligence and …, 2020"],"snippet":"… WikiNER corpora (Nivre et al. 2016) for Italian as well as the OntoNotes (Pradhan and Ramshaw 2017) and the Common Crawl Footnote 2 corpora for English. Text categorization model. Independently from the specific document …","url":["https://link.springer.com/article/10.1007/s12652-020-02747-9"]} -{"year":"2020","title":"Attention-based Model for Evaluating the Complexity of Sentences in English Language","authors":["D Schicchi, G Pilato, GL Bosco - 2020 IEEE 20th Mediterranean Electrotechnical …, 2020"],"snippet":"… The output of the attention layer (context-vector) is given as input to a dense layer, which gives the probability that the sentence belongs to either hard-to-understand or easy-to- 1www.wikipedia.org 2www.commoncrawl …","url":["https://ieeexplore.ieee.org/abstract/document/9140531/"]} -{"year":"2020","title":"Attribute Sentiment Scoring with Online Text Reviews: Accounting for Language Structure and Missing Attributes","authors":["I Chakraborty, M Kim, K Sudhir - 2020"],"snippet":"… 10These embeddings have been trained on different corpus like Wikipedia dumps, Gigaword news dataset and web data from Common Crawl and have more than 5 billion unique tokens. Page 18. 17 lenging text and image classification problems (Wang et al …","url":["https://www.ishitachakra.com/JobMarketPaper_IshitaYale.pdf"]} -{"year":"2020","title":"Augmenting cross-domain knowledge bases using web tables","authors":["Y Oulabi - 2020"],"snippet":"Page 1. 
Augmenting Cross-Domain Knowledge Bases Using Web Tables Inauguraldissertation zur Erlangung des akademischen Grades eines Doktors der Naturwissenschaften der Universität Mannheim vorgelegt von Yaser …","url":["https://madoc.bib.uni-mannheim.de/55962/1/Oulabi2020_PhD_thesis.pdf"]} -{"year":"2020","title":"Author2Vec: A Framework for Generating User Embedding","authors":["X Wu, W Lin, Z Wang, E Rastorgueva - arXiv preprint arXiv:2003.11627, 2020"],"snippet":"… the user. We used the Facebook FastText (https:// fasttext.cc/) pre-trained Word2Vec model: crawl-300d-2M, which is a model with 2 million word vectors trained on Common Crawl (600B to- kens). The baseline implementations …","url":["https://arxiv.org/pdf/2003.11627"]} -{"year":"2020","title":"Autoencoding Improves Pre-trained Word Embeddings","authors":["M Kaneko, D Bollegala - arXiv preprint arXiv:2010.13094, 2020"],"snippet":"… 3M words learnt from the Google News corpus), GloVe2 (300-dimensional word embeddings for ca. 2.1M words learnt from the Common Crawl), and fastText3 (300dimensional embeddings for ca. 2M words learnt from the Common Crawl) …","url":["https://arxiv.org/pdf/2010.13094"]} -{"year":"2020","title":"Automated coding of implicit motives: A machine‑learning approach","authors":["JS Pang, H Ring"],"snippet":"… experiments we decided to use Facebook's FastText subword embeddings of 300 dimensions trained on Common Crawl (600 billion tokens).5 This is the set of pre-trained vectors that we used to derive word features from …","url":["https://link.springer.com/content/pdf/10.1007/s11031-020-09832-8.pdf"]} -{"year":"2020","title":"Automated Short Answer Grading: A Simple Solution for a Difficult Task","authors":["S Menini, S Tonelli, G De Gasperis, P Vittorini"],"snippet":"… combining vectors representing both words and subwords. To generate these embeddings we start from the pre-computed Italian language model3 trained on Common Crawl and Wikipedia. 
The latter, in particular, is suitable …","url":["https://pdfs.semanticscholar.org/9ff2/6a502dd1b3e0c136af4bc2ca9af9b901fce4.pdf"]} -{"year":"2020","title":"Automatic Detection of Machine Generated Text: A Critical Survey","authors":["G Jawahar, M Abdul-Mageed, LVS Lakshmanan - arXiv preprint arXiv:2011.01314, 2020"],"snippet":"… GPT-3 (Brown et al., 2020) fragments from CommonCrawl (570GB / 175B) three previous news articles and title of a proposed article body of the proposed article top-p fake news Table 1: Summary of the characteristics of TGMs that can act as threat models …","url":["https://arxiv.org/pdf/2011.01314"]} -{"year":"2020","title":"Automatic language identification of short texts","authors":["A Avenberg - 2020"],"snippet":"Page 1. UPTEC F 20043 Examensarbete 30 hp September 2020 Automatic language identification of short texts Anna Avenberg Page 2. Teknisknaturvetenskaplig fakultet UTH-enheten Besöksadress: Ångströmlaboratoriet …","url":["https://www.diva-portal.org/smash/get/diva2:1473718/FULLTEXT01.pdf"]} -{"year":"2020","title":"Automatic Metaphor Interpretation Using Word Embeddings","authors":["K Bar, N Dershowitz, L Dankin - arXiv preprint arXiv:2010.02665, 2020"],"snippet":"… relatively large corpus. Specifically, we use DepCC,1 a dependency-parsed “web-scale corpus” based on CommonCrawl.2 There are 365 million documents in the corpus, comprising about 252B tokens. Among other preprocessing …","url":["https://arxiv.org/pdf/2010.02665"]} -{"year":"2020","title":"Automatic Poetry Generation from Prosaic Text","authors":["T Van de Cruys - Proceedings of the 58th Annual Meeting of the …, 2020"],"snippet":"Page 1. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2471–2480 July 5 - 10, 2020. 
c 2020 Association for Computational Linguistics 2471 Automatic Poetry Generation from Prosaic Text …","url":["https://www.aclweb.org/anthology/2020.acl-main.223.pdf"]} -{"year":"2020","title":"Automatic Short Answer Grading using Text-to-Text Transfer Transformer Model","authors":["S Haller - 2020"],"snippet":"Page 1. Faculty of Electrical Engineering, Mathematics & Computer Science Automatic Short Answer Grading using Text-to-Text Transfer Transformer Model Stefan Haller M.Sc. Thesis in Business Information …","url":["http://essay.utwente.nl/83879/7/Haller_MA_EEMCS.pdf"]} -{"year":"2020","title":"Automatic Speech Recognition for ILSE-Interviews: Longitudinal Conversational Speech Recordings covering Aging and Cognitive Decline","authors":["A Abulimiti, J Weiner, T Schultz"],"snippet":"… In addition, we selected text data similar to ILSE from large amounts of Common Crawl Data1 based on Term Frequency–Inverse Document Frequency (tf-idf) for the training of Recurrent Neural Network (RNN) based …","url":["https://indico2.conference4me.psnc.pl/event/35/contributions/3209/attachments/742/780/Thu-SS-1-6-9.pdf"]} -{"year":"2020","title":"Automatic text segmentation based on relevant context","authors":["S Kim, WW Chang, N Lipka, F Dernoncourt, CY Park - US Patent App. 16/368,334, 2020"],"snippet":"US20200311207A1 - Automatic text segmentation based on relevant context - Google Patents. Automatic text segmentation based on relevant context. Download PDF Info. Publication number US20200311207A1. US20200311207A1 …","url":["https://patents.google.com/patent/US20200311207A1/en"]} -{"year":"2020","title":"Automating Just-In-Time Comment Updating","authors":["Z Liu, X Xia, M Yan, S Li","Z Liu, X Xia, M Yan, S Li - Automated Software Engineering, to appear, 2020"],"snippet":"Page 1. 
Automating Just-In-Time Comment Updating Zhongxin Liu∗† Zhejiang University China liu_zx@zju.edu.cn Xin Xia‡ Monash University Australia xin.xia@monash.edu Meng Yan Chongqing University China mengy@cqu.edu.cn …","url":["https://pdfs.semanticscholar.org/0b03/b1526e88c8780214697bbe9163b95a59cdbd.pdf","https://xin-xia.github.io/publication/ase202.pdf"]} -{"year":"2020","title":"Automating Text Naturalness Evaluation of NLG Systems","authors":["E Çano, O Bojar - arXiv preprint arXiv:2006.13268, 2020"],"snippet":"… The model is trained on RealNews, a large news collection they derived from Page 4. 4 E. Çano and O. Bojar Common Crawl1 dumps … They are typically assessed by 1 https://commoncrawl.org/ 2 http://www.statmt.org …","url":["https://arxiv.org/pdf/2006.13268"]} -{"year":"2020","title":"BanFakeNews: A Dataset for Detecting Fake News in Bangla","authors":["MZ Hossain, MA Rahman, MS Islam, S Kar - arXiv preprint arXiv:2004.08789, 2020"],"snippet":"… words in it. We experiment with the Bangla 300 dimensional word vectors pre-trained7 with Fasttext (Grave et al., 2018) on Wikipedia8 and Common Crawl9, where we have a coverage of 55.21%. Additionally, we experiment …","url":["https://arxiv.org/pdf/2004.08789"]} -{"year":"2020","title":"Bangla Text Classification using Transformers","authors":["T Alam, A Khan, F Alam - arXiv preprint arXiv:2011.04446, 2020"],"snippet":"… NSP task is removed and only MLM loss is used for pretraining. XLM-RoBERTa [18] is the multilingual variant of RoBERTa trained with a multilingual MLM. It is trained on one hundred languages, with more than two terabytes of filtered Common Crawl data …","url":["https://arxiv.org/pdf/2011.04446"]} -{"year":"2020","title":"Bankruptcy Map: A System for Searching and Analyzing US Bankruptcy Cases at Scale","authors":["E Choi, G Brassil, K Keller, J Ouyang, K Wang"],"snippet":"… layers [14]. 
The network was pre-trained on the large OntoNotes dataset, with GloVe vectors used for feature creation trained on Common Crawl data [3, 16]. The model recognized named entities and their types. We collected …","url":["https://cpb-us-w2.wpmucdn.com/express.northeastern.edu/dist/d/53/files/2020/02/CJ_2020_paper_57.pdf"]} -{"year":"2020","title":"BARThez: a Skilled Pretrained French Sequence-to-Sequence Model","authors":["MK Eddine, AJP Tixier, M Vazirgiannis - arXiv preprint arXiv:2010.12321, 2020"],"snippet":"… Other than that, BARTHez corpus is similar to FlauBERT's. It primarily consists in the French part of CommonCrawl, NewsCrawl, Wikipedia and other smaller corpora that are listed in Table 1. To clean the corpus from noisy examples …","url":["https://arxiv.org/pdf/2010.12321"]} -{"year":"2020","title":"Benchmarking Neural and Statistical Machine Translation on Low-Resource African Languages","authors":["K Duh, P McNamee, M Post, B Thompson"],"snippet":"… The columns CommonCrawl and Wikipedia indicate the amount of monolingual data on the web, which can be viewed as an indicator of the upper limit of how much web-crawled data we may be able to obtain. CommonCrawl …","url":["https://pdfs.semanticscholar.org/3bde/97a22dab1147b0f3209805315bbff9b82674.pdf"]} -{"year":"2020","title":"BERT-based Ensembles for Modeling Disclosure and Support in Conversational Social Media Text","authors":["K Pant, T Dadu, R Mamidi - 2020"],"snippet":"… Gokaslan, A., Cohen, V.: Openwebtext corpus. http://Skylion007.github.io/ OpenWebTextCorpus (2019) 5. Nagel, S.: Cc-news (2016), http://web.archive.org/save/ http://commoncrawl. org/2016/10/newsdataset-available/ 6. 
Rajendran …","url":["http://web2py.iiit.ac.in/research_centres/publications/download/inproceedings.pdf.83b74a0e278a6d03.424552542d626173656420456e73656d626c657320666f72204d6f64656c696e6720446973636c6f73757265202620537570706f727420696e20436f6e766572736174696f6e616c20536f6369616c204d6564696120546578742e706466.pdf"]} -{"year":"2020","title":"BERT-Based Simplification of Japanese Sentence-Ending Predicates in Descriptive Text","authors":["T Kato, R Miyata, S Sato - Proceedings of the 13th International Conference on …, 2020"],"snippet":"… For any parts having more than one word, the average of the embedding is used. To obtain the embedding vectors, we used existing Japanese pre-trained word vectors that were trained on Common Crawl and Wikipedia using fastText.7 …","url":["https://www.aclweb.org/anthology/2020.inlg-1.31.pdf"]} -{"year":"2020","title":"BERTweet: A pre-trained language model for English Tweets","authors":["DQ Nguyen, T Vu, AT Nguyen - arXiv preprint arXiv:2005.10200, 2020"],"snippet":"… The pre-trained RoBERTa is a strong language model for English, learned from 160GB of texts covering books, Wikipedia, CommonCrawl news, CommonCrawl stories, and web text contents. XLM-R is a cross-lingual variant …","url":["https://arxiv.org/pdf/2005.10200"]} -{"year":"2020","title":"Better Web Corpora For Corpus Linguistics And NLP","authors":["V Suchomel"],"snippet":"Page 1. Masaryk University Faculty of Informatics Better Web Corpora For Corpus Linguistics And NLP Doctoral Thesis Vít Suchomel Brno, Spring 2020 Page 2. Masaryk University Faculty of Informatics Better Web Corpora …","url":["https://is.muni.cz/th/u4rmz/Better_Web_Corpora_For_Corpus_Linguistics_And_NLP.pdf"]} -{"year":"2020","title":"Beyond English-Centric Multilingual Machine Translation","authors":["A Fan, S Bhosale, H Schwenk, Z Ma, A El-Kishky… - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. 
Beyond English-Centric Multilingual Machine Translation Angela Fan∗, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary …","url":["https://arxiv.org/pdf/2010.11125"]} -{"year":"2020","title":"Beyond Instructional Videos: Probing for More Diverse Visual-Textual Grounding on YouTube","authors":["J Hessel, Z Zhu, B Pang, R Soricut - arXiv preprint arXiv:2004.14338, 2020"],"snippet":"… corresponding visual content. However, in contrast to the highly diverse corpora utilized for text-based pretraining (Wikipedia, Common Crawl, etc.), pretraining for web videos so far has been limited to instructional videos. This domain …","url":["https://arxiv.org/pdf/2004.14338"]} -{"year":"2020","title":"Biases as Values: Evaluating Algorithms in Context","authors":["M Díaz - 2020"],"snippet":"Page 1. NORTHWESTERN UNIVERSITY Biases as Values: Evaluating Algorithms in Context A DISSERTATION SUBMITTED TO THE GRADUATE SCHOOL IN PARTIAL FULFILLMENT OF THE REQUIREMENTS for the degree DOCTOR OF PHILOSOPHY …","url":["http://search.proquest.com/openview/83eed19485a394e067ee5a9b03d84ef2/1?pq-origsite=gscholar&cbl=18750&diss=y"]} -{"year":"2020","title":"Biomedical Information Extraction Pipelines for Public Health in the Age of Deep Learning","authors":["AM Ranganatha - 2019"],"snippet":"Page 1. 
Biomedical Information Extraction Pipelines for Public Health in the Age of Deep Learning by Arjun Magge Ranganatha A Dissertation Presented in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy …","url":["http://search.proquest.com/openview/bf6cbc4695dc3135c8e78ff548e670f8/1?pq-origsite=gscholar&cbl=2026366&diss=y"]} -{"year":"2020","title":"Blocking Techniques for Entity Linkage: A Semantics-Based Approach","authors":["F Azzalini, S Jin, M Renzi, L Tanca - Data Science and Engineering, 2020"],"snippet":"… each attribute value \\(t[A_{k}]\\) is transformed into a real-valued vector \\(\\mathbf{v }(w)\\). The fastText model we use is crawl-300d-2M-subword [23] where each word is represented as a 300-dimensional vector and the …","url":["https://link.springer.com/article/10.1007/s41019-020-00146-w"]} -{"year":"2020","title":"Bottom-Up Modeling of Permissions to Reuse Residual Clinical Biospecimens and Health Data","authors":["E Umberfield - 2020"],"snippet":"Page 1. 
Bottom-Up Modelling of Permissions to Reuse Residual Clinical Biospecimens and Health Data by Elizabeth Umberfield A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor …","url":["https://deepblue.lib.umich.edu/bitstream/handle/2027.42/162937/eliewolf_1.pdf?sequence=1"]} -{"year":"2020","title":"BREXIT: Psychometric Profiling the Political Salubrious through Machine Learning","authors":["J Usher, P Dondio"],"snippet":"… We used the English multi-task Convoluted Neural Network trained on OntoNotes, with GloVe vectors trained on Common Crawl, which assigns word vectors, context-specific token vectors, POS tags, dependency parse …","url":["http://wims2020.sigappfr.org/wp-content/uploads/2020/06/WIMS'20/p178-Usher.pdf"]} -{"year":"2020","title":"BREXIT: Psychometric Profiling the Political Salubrious through Machine Learning: Predicting personality traits of Boris Johnson through Twitter political text","authors":["J Usher, P Dondio - Proceedings of the 10th International Conference on …, 2020"],"snippet":"… We used the English multi-task Convoluted Neural Network trained on OntoNotes, with GloVe vectors trained on Common Crawl, which assigns word vectors, context-specific token vectors, POS tags, dependency parse …","url":["https://dl.acm.org/doi/abs/10.1145/3405962.3405981"]} -{"year":"2020","title":"Building a user-generated content north-african arabizi treebank: Tackling hell","authors":["D Seddah, F Essaidi, A Fethi, M Futeral, B Muller… - Proceedings of the 58th …, 2020"],"snippet":"… used data-driven language identification models to extract NArabizi samples among the whole collection of the Common-Crawl-based OSCAR … one based on search query-based web-crawling and the other from a cleaned version …","url":["https://www.aclweb.org/anthology/2020.acl-main.107.pdf"]} -{"year":"2020","title":"Building a Wide Reach Corpus for Secure Parser Development","authors":["T Allison, W Burke, V Constantinou, E Goh…"],"snippet":"… [17] 
CGR Lavanya Pamulaparty and MS Rao, “A novel approach for avoiding overload in the web crawling.” Odisha, India: High Performance Computing and Applications (ICHPCA), 2014. [18] “Common Crawl,” https://commoncrawl.org …","url":["http://spw20.langsec.org/papers/corpus_LangSec2020.pdf"]} -{"year":"2020","title":"Building LARO: Language Agnostic Sentence Embeddings from finetuned RoBERTa⋆","authors":["AS Salvado"],"snippet":"… XLM-RoBERTa is the successor of RoBERTa and a large multi-lingual language model. It is a Transformer-based model, technically a Transformer en- coder, and was trained on 2.5TB of filtered CommonCrawl data in 100 …","url":["https://users.informatik.haw-hamburg.de/~ubicomp/projekte/master2020-proj/soblechero.pdf"]} -{"year":"2020","title":"Building Web Corpora for Minority Languages","authors":["H Jauhiainen, T Jauhiainen, K Lindén - Proceedings of the 12th Web as Corpus …, 2020"],"snippet":"… Common Crawl Foundation3 regularly crawls the Internet and offers the texts it finds for free download. Smith et al … Kanerva et al. (2014) used the morphological analyser OMorFi4 to find Finnish sentences in the Common Crawl corpus …","url":["https://www.aclweb.org/anthology/2020.wac-1.4.pdf"]} -{"year":"2020","title":"Caliskan Et Al-authors-full","authors":["A Caliskan, JJ Bryson, A Narayanan"],"snippet":"… We use the largest of the four corpora provided—the “Common Crawl” corpus obtained from a large-scale crawl of the web, containing 840 billion tokens (roughly, words). 
Tokens in this corpus are casesensitive, resulting in 2.2 million different ones …","url":["https://www.studeersnel.nl/nl/document/technische-universiteit-delft/machine-learning/werkstukessay/caliskan-et-al-authors-full/9896508/view"]} -{"year":"2020","title":"CALM: Continuous Adaptive Learning for Language Modeling","authors":["K Arumae, P Bhatia - arXiv preprint arXiv:2004.03794, 2020"],"snippet":"… We processed publicly available biomedical and non-biomedical corpora for pre-training our models. For non-biomedical data, we use BookCorpus and English Wikipedia data, CommonCrawl Stories (Trinh and Le, 2018), and OpenWebText (Gokaslan and Cohen) …","url":["https://arxiv.org/pdf/2004.03794"]} -{"year":"2020","title":"Can Embeddings Adequately Represent Medical Terminology? New Large-Scale Medical Term Similarity Datasets Have the Answer!","authors":["C Schulz, D Juric - arXiv preprint arXiv:2003.11082, 2020"],"snippet":"… 3) Non-medical: As a comparison, we also include - the GloVe word embedding (Pennington, Socher, and Manning 2014); - 2 Fasttext embeddings trained on Wikipedia and Common Crawl (plus its model (M)) (Mikolov et al. 2018) …","url":["https://arxiv.org/pdf/2003.11082"]} -{"year":"2020","title":"Can Emojis Convey Human Emotions? A Study to Understand the Association between Emojis and Emotions","authors":["AAM Shoeb, G de Melo - Proceedings of the 2020 Conference on Empirical …, 2020"],"snippet":"… Here, sim(v1,v2) denotes the cosine similarity be- tween two vectors. We first consider the widely used 300dimensional GloVe (Pennington et al., 2014) models pretrained on CommonCrawl 840B and Twitter, as these contain emojis …","url":["https://www.aclweb.org/anthology/2020.emnlp-main.720.pdf"]} -{"year":"2020","title":"Can I Take Your Subdomain? Exploring Related-Domain Attacks in the Modern Web","authors":["M Squarcina, M Tempesta, L Veronese, S Calzavara… - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. Can I Take Your Subdomain? 
Exploring Related-Domain Attacks in the Modern Web Marco Squarcina1, Mauro Tempesta1, Lorenzo Veronese1, Stefano Calzavara2, Matteo Maffei1 1TU Wien, 2Università Ca' Foscari Venezia Abstract …","url":["https://arxiv.org/pdf/2012.01946"]} -{"year":"2020","title":"Can Knowledge Rich Sentences Help Language Models to Solve Common Sense Reasoning Problems?","authors":["A Prakash - 2019"],"snippet":"… 20 Page 33. Figure 3. RoBERTa network architecture 2. CommonCrawl News was used, which contained 63 million news articles between September 2016 and February 2019 3. OPENWEBTEXT (Gokaslan and Cohen 2019) which is a text corpus containing …","url":["http://search.proquest.com/openview/08e0d3a7c85dcbbd4875abd0d3c48e17/1?pq-origsite=gscholar&cbl=18750&diss=y"]} -{"year":"2020","title":"Capturing Word Order in Averaging Based Sentence Embeddings","authors":["JH Lee, JC Collados, LE Anke, S Schockaert"],"snippet":"… two million words. These word vectors were trained on Common Crawl with 600 billion tokens [19]. The sentences that we used for training, validation and testing were obtained from an English Wikipedia dump. All sentences …","url":["http://josecamachocollados.com/papers/ECAI2020_Capturing_Word_Order_in_Averaging_Based_Sentence_Embeddings.pdf"]} -{"year":"2020","title":"Caspar: Extracting and Synthesizing User Stories of Problems from App Reviews","authors":["H Guo, MP Singh"],"snippet":"Page 1. Caspar: Extracting and Synthesizing User Stories of Problems from App Reviews Hui Guo Secure Computing Institute North Carolina State University Raleigh, North Carolina hguo5@ncsu.edu Munindar P. Singh Secure …","url":["https://hguo5.github.io/Caspar/docs/Caspar_ICSE_20.pdf"]} -{"year":"2020","title":"CatchPhish: A URL and Anti-Phishing Research Platform","authors":["S Waddell"],"snippet":"Page 1. CatchPhish: A URL and Anti-Phishing Research Platform Stephen Waddell MInf Project (Part 2) Report Master of Informatics School of Informatics University of Edinburgh 2020 Page 2. 
Page 3. 3 Abstract In this work, I …","url":["https://groups.inf.ed.ac.uk/tulips/projects/19-20/waddell-2020.pdf"]} -{"year":"2020","title":"CauseNet: Towards a Causality Graph Extracted from the Web","authors":["S Heindorf, Y Scholten, H Wachsmuth, ACN Ngomo…"],"snippet":"… Web To extract causal relations from the web at scale, we analyze the ClueWeb12 web crawl, which comprises about 733,019,372 English web pages crawled between February and May 2012.3 We chose this crawl over …","url":["https://webis.de/downloads/publications/papers/potthast_2020a.pdf"]} -{"year":"2020","title":"CC-News-En: A Large English News Corpus","authors":["J Mackenzie, R Benham, M Petri, JR Trippas…"],"snippet":"… Temporal Growth. The Common Crawl foundation are constantly adding new documents to CC-News … 10DMOZ is now superseded by Curlie: https://www.curlie. org 11https://github.com/commoncrawl/news-crawl/issues/8 Page 4 …","url":["https://www.johannetrippas.com/papers/mackenzie2020ccnews.pdf"]} -{"year":"2020","title":"CCAligned: A Massive Collection of Cross-Lingual Web-Document Pairs","authors":["A El-Kishky, V Chaudhary, F Guzmán, P Koehn - Proc. of EMNLP, 2020"],"snippet":"… we mined over 392 million aligned documents (100M with English and 292M without English) across 68 Common Crawl snapshots. We assess the efficacy of this rule-based alignment in the next section. We select a small subset …","url":["https://www.researchgate.net/profile/Ahmed_El-Kishky/publication/337273813_A_Massive_Collection_of_Cross-Lingual_Web-Document_Pairs/links/5f992509458515b7cfa40eb4/A-Massive-Collection-of-Cross-Lingual-Web-Document-Pairs.pdf"]} -{"year":"2020","title":"Chart-based Zero-shot Constituency Parsing on Multiple Languages","authors":["T Kim, B Li, S Lee"],"snippet":"… (2019)), the XLM model trained with masked language modeling on 100 languages (XLM, Conneau and Lample (2019)), and the XLM-R and XLM-R-large models that are trained with the filtered CommonCrawl data (Wenzek et al. 
2019) by Conneau et al. (2019) …","url":["https://openreview.net/pdf?id=JY-3BheD5LB"]} -{"year":"2020","title":"ChemTables: A Dataset for Semantic Classification of Tables in Chemical Patents","authors":["Z Zhai, C Druckenbrodt, C Thorne, SA Akhondi… - 2020"],"snippet":"… This model and the baselines it compares to are evaluated on a web table dataset, built by extracting tables from top 500 web pages which contain the highest numbers of tables in a subset of the April 2016 Common Crawl corpus [20] …","url":["https://www.researchsquare.com/article/rs-127219/latest.pdf"]} -{"year":"2020","title":"Circles are like Ellipses, or Ellipses are like Circles? Measuring the Degree of Asymmetry of Static and Contextual Embeddings and the Implications to Representation …","authors":["W Zhang, M Campbell, Y Yu, S Kumaravel - arXiv preprint arXiv:2012.01631, 2020"],"snippet":"Page 1. Circles are like Ellipses, or Ellipses are like Circles? Measuring the Degree of Asymmetry of Static and Contextual Word Embeddings and the Implications to Representation Learning Wei Zhang 1, Murray Campbell 1 …","url":["https://arxiv.org/pdf/2012.01631"]} -{"year":"2020","title":"CiTIUS at the TREC 2020 Health Misinformation Track","authors":["M Fernández-Pichel, DE Losada, JC Pichel…"],"snippet":"… 2 DOCUMENTS AND TOPICS In the TREC 2020 Health Misinformation Track, a news corpus from January 2020 to April 2020 was provided. 
The documents were obtained from CommonCrawl News, which contains news articles from all over the world …","url":["http://persoal.citius.usc.es/jcpichel/docs/2020_TREC_MFernandezPichel.pdf"]} -{"year":"2020","title":"Classification Benchmarks for Under-resourced Bengali Language based on Multichannel Convolutional-LSTM Network","authors":["M Karim, BR Chakravarthi, M Arcan, JP McCrae… - arXiv preprint arXiv …, 2020"],"snippet":"… The fourth one called fastText [17], which is trained on common crawl and Wikipedia using CBOW with position-weights, in di- mension 300, with character n-grams of length 5, a window of size 5 and 10 negatives. Eventually …","url":["https://arxiv.org/pdf/2004.07807"]} -{"year":"2020","title":"Classification of cancer pathology reports with Deep Learning methods","authors":["S Martina"],"snippet":"Page 1. PHD PROGRAM IN SMART COMPUTING DIPARTIMENTO DI INGEGNERIA DELL'INFORMAZIONE (DINFO) Classification of cancer pathology reports with Deep Learning methods Stefano Martina Dissertation presented …","url":["https://flore.unifi.it/bitstream/2158/1187936/1/thesis.pdf"]} -{"year":"2020","title":"Classification of cancer pathology reports: a large-scale comparative study","authors":["S Martina, L Ventura, P Frasconi - IEEE Journal of Biomedical and Health Informatics, 2020"],"snippet":"… GloVe) [20]). It is a common practice to use pre-compiled libraries of word vectors trained on several billion tokens extracted from various sources such as Wikipedia, the English Gigaword 5, Common Crawl, or Twitter. 
These …","url":["https://arxiv.org/pdf/2006.16370"]} -{"year":"2020","title":"Classification of Cyberbullying Text in Arabic","authors":["BA Rachid, H Azza, HHB Ghezala - 2020 International Joint Conference on Neural …, 2020"],"snippet":"… The second set of pre-trained embeddings is the one provided in [24], in which word vectors were trained on online encyclopedia Wikipedia and the Common Crawl corpus using an extension of the fastText model (Fasttext embeddings) …","url":["https://ieeexplore.ieee.org/abstract/document/9206643/"]} -{"year":"2020","title":"Classifying Sequences of Extreme Length with Constant Memory Applied to Malware Detection","authors":["E Raff, W Fleshman, R Zak, HS Anderson, B Filar… - arXiv preprint arXiv …, 2020"],"snippet":"… This better demonstrates the gap between current deep learning and domain knowledge based approaches for classifying malware. We also use the Common Crawl to collect 676,843 benign PDF 103 104 105 106 107 108 …","url":["https://arxiv.org/pdf/2012.09390"]} -{"year":"2020","title":"CLEF eHealth Evaluation Lab 2020","authors":["M Krallinger"],"snippet":"… This collection consists of over 5 million medical webpages from selected domains acquired from the CommonCrawl [7]. Given the positive feedback received for this document collection, it will be used again in the 2020 CHS task …","url":["https://link.springer.com/content/pdf/10.1007/978-3-030-45442-5_76.pdf"]} -{"year":"2020","title":"Clinical XLNet: Modeling Sequential Clinical Notes and Predicting Prolonged Mechanical Ventilation","authors":["K Huang, A Singh, S Chen, ET Moseley, C Deng… - arXiv preprint arXiv …, 2019"],"snippet":"… Pretraining Clinical XLNet. The text representation generated from large pre-training models depends on the corpus it is pre-trained on. 
XLNet is pre-trained on common language corpora such as BookCorpus, Wikipedia, Common Crawl and etc …","url":["https://arxiv.org/pdf/1912.11975"]} -{"year":"2020","title":"CLUE: A Chinese Language Understanding Evaluation Benchmark","authors":["L Xu, X Zhang, L Li, H Hu, C Cao, W Liu, J Li, Y Li… - arXiv preprint arXiv …, 2020"],"snippet":"… CLUECorpus2020 (Xu et al., 2020) It contains 100 GB Chinese raw corpus, which is retrieved from Common Crawl … CLUEOSCAR6 OSCAR is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus …","url":["https://arxiv.org/pdf/2004.05986"]} -{"year":"2020","title":"CLUECorpus2020: A Large-scale Chinese Corpus for Pre-trainingLanguage Model","authors":["L Xu, X Zhang, Q Dong - arXiv preprint arXiv:2003.01355, 2020"],"snippet":"… Common Crawl is an organization that crawls the web and freely provides its archives and datasets to the public. Common Crawl usually crawls internet web content once a month. Common Crawl's web archives consist of petabytes of data collected since 2011 …","url":["https://arxiv.org/pdf/2003.01355"]} -{"year":"2020","title":"Clustering Approach to Topic Modeling in Users Dialogue","authors":["E Feldina, O Makhnytkina - Proceedings of SAI Intelligent Systems Conference, 2020"],"snippet":"… FastText using FastText weights based on the pre-trained CBOW model with a word window size of five trained on Common Crawl and Wikipedia, vector size 300. Table 2. 
Results of the implementation of the clustering …","url":["https://link.springer.com/chapter/10.1007/978-3-030-55187-2_44"]} -{"year":"2020","title":"CO-EVOLUTION OF CULTURE AND MEANING REVEALED THROUGH LARGE-SCALE SEMANTIC ALIGNMENT","authors":["B THOMPSON, S ROBERTS, G LUPYAN"],"snippet":"… We also replicated on word embeddings derived from the OpenSubtitles database (Li- son & Tiedemann, 2016) and a combination of Wikipedia and the Common Crawl dataset (Grave, Bojanowski, Gupta, Joulin, & Mikolov, 2018)) …","url":["https://brussels.evolang.org/proceedings/papers/EvoLang13_paper_62.pdf"]} -{"year":"2020","title":"COD3S: Diverse Generation with Discrete Semantic Signatures","authors":["N Weir, J Sedoc, B Van Durme - arXiv preprint arXiv:2010.02882, 2020"],"snippet":"… The model is trained on the co-released corpus CausalBank, which comprises causal statements harvested from English Common Crawl (Buck et al., 2014) … 2014. N-gram counts and language models from the common crawl …","url":["https://arxiv.org/pdf/2010.02882"]} -{"year":"2020","title":"Combination of Neural Machine Translation Systems at WMT20","authors":["B Marie, R Rubino, A Fujita - Proceedings of the Fifth Conference on Machine …, 2020"],"snippet":"… As En- glish monolingual data, we used all the provided data, but sampled only 200M lines from the “Common Crawl” corpora, except the “News Discussions” and “Wiki Dumps” corpora … corpora but also sampled only …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.23.pdf"]} -{"year":"2020","title":"Combinatorial feature embedding based on CNN and LSTM for biomedical named entity recognition","authors":["M Cho, C Park, J Ha, S Park - Journal of Biomedical Informatics, 2020"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. 
Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S1532046420300083"]} -{"year":"2020","title":"Combining Character and Word Embeddings for Affect in Arabic Informal Social Media Microblogs","authors":["AI Alharbi, M Lee - International Conference on Applications of Natural …, 2020"],"snippet":"… Here, the researchers derived the training data from three separate sources: Wikipedia, Twitter and Common Crawl webpages crawl data; they employed two word-level models to learn word representations for general NLP tasks …","url":["https://link.springer.com/chapter/10.1007/978-3-030-51310-8_20"]} -{"year":"2020","title":"Combining Different Parsers and Datasets for CAPITEL UD Parsing","authors":["F Sánchez-León - Proceedings of the Iberian Languages Evaluation …, 2020"],"snippet":"… Besides, we have used fastText word embeddings trained on Common Crawl and Wikipedia corpora.5 Table 1 shows results on development set of a model built with the training material using different word embeddings.6 Increasing …","url":["http://ceur-ws.org/Vol-2664/capitel_paper1.pdf"]} -{"year":"2020","title":"Combining Visual and Textual Features for Semantic Segmentation of Historical Newspapers","authors":["R Barman, M Ehrmann, S Clematide, SA Oliveira… - arXiv preprint arXiv …, 2020"],"snippet":"… First, four pre-trained embeddings of the Flair library14 are used with their default implementation settings, as follows: - fastText-fr, ie the French fastText embeddings of size 300 pre-trained on Common Crawl and Wikipedia; …","url":["https://arxiv.org/pdf/2002.06144"]} -{"year":"2020","title":"Commonsense Aesthetics","authors":["AK Roek - 2020"],"snippet":"Page 1. 
COMMONSENSE AESTHETICS Aaron Kurosu Roek A DISSERTATION PRESENTED TO THE FACULTY OF PRINCETON UNIVERSITY IN CANDIDACY FOR THE DEGREE OF DOCTOR OF …","url":["http://search.proquest.com/openview/ec2e04cd24fc776ff0cb09566fcf7621/1?pq-origsite=gscholar&cbl=18750&diss=y"]} -{"year":"2020","title":"Commonsense Learning: An Indispensable Path towards Human-centric Multimedia","authors":["B Huang, S Tang, G Shen, G Li, X Wang, W Zhu - … of the 1st International Workshop on …, 2020"],"snippet":"… BERT uses Bookcorpus [84] and English Wikipedia with a size of about 13GB, while XLNet uses more than 100GB of corpus. T5 uses Colossal Clean Crawled Corpus captured from the Common Crawl website with a size of 750GB …","url":["https://dl.acm.org/doi/abs/10.1145/3422852.3423484"]} -{"year":"2020","title":"Communication-Efficient String Sorting","authors":["T Bingmann, P Sanders, M Schimek - arXiv preprint arXiv:2001.08516, 2020"],"snippet":"Page 1. arXiv:2001.08516v1 [cs.DC] 23 Jan 2020 Communication-Efficient String Sorting Timo Bingmann, Peter Sanders, Matthias Schimek Karlsruhe Institute of Technology, Karlsruhe, Germany {bingmann,sanders}@kit.edu, matthias schimek@gmx.de …","url":["https://arxiv.org/pdf/2001.08516"]} -{"year":"2020","title":"Comparative Analysis of Deep Learning Models for Myanmar Text Classification","authors":["MS Phyu, KT Nwet - Asian Conference on Intelligent Information and …, 2020"],"snippet":"… Grave et al. [3] published pre-trained word vectors for two hundred forty-six languages trained on common crawl and Wikipedia. 
They proposed bag-of-character n-grams based on skip-gram that could capture sub-word information to enrich word vectors …","url":["https://link.springer.com/chapter/10.1007/978-3-030-41964-6_7"]} -{"year":"2020","title":"Comparative Analysis of Machine Learning Algorithms for Computer-Assisted Reporting Based on Fully Automated Cross-Lingual RadLex® Mappings","authors":["ME Maros, CG Cho, AG Junge, B Kämpgen, V Saase… - 2020"],"snippet":"… However, pre-trained word vector models for 157 languages, which were pre-trained on Common Crawl and Wikipedia by the fastText package authors are available for direct download (https://fasttext.cc/docs/en/crawl-vectors.html) [58] …","url":["https://www.preprints.org/manuscript/202004.0354/download/final_file"]} -{"year":"2020","title":"COMPARATIVE ANALYSIS OF SUBDOMAIN ENUMERATION TOOLS AND STATIC CODE ANALYSIS","authors":["GJ Kathrine, RT Baby, V Ebenzer"],"snippet":"… Certificates= censys,certspotter, Google CT APIs: AlienVault, BinaryEdge, BufferOver, CIRCL, CommonCrawl, DNSDB, GitHub, HackerTarget, NetworksDB, PassiveTotal, Pastebin.. Web Archives: ArchiveIt, ArchiveToday, Arquivo, Wayback and others …","url":["https://www.researchgate.net/profile/Ronnie_Joseph2/publication/342501456_COMPARATIVE_ANALYSIS_OF_SUBDOMAIN_ENUMERATION_TOOLS_AND_STATIC_CODE_ANALYSIS/links/5ef76e2d299bf18816eae517/COMPARATIVE-ANALYSIS-OF-SUBDOMAIN-ENUMERATION-TOOLS-AND-STATIC-CODE-ANALYSIS.pdf"]} -{"year":"2020","title":"Comparative Analysis of Word Embeddings for Capturing Word Similarities","authors":["M Toshevska, F Stojanovska, J Kalajdjieski - arXiv preprint arXiv:2005.03812, 2020"],"snippet":"… architectures [23]. In our experiments, we have used pre-trained models both trained with subword information on Wikipedia 2017 (16B tokens) and trained with subword information on Common Crawl (600B tokens)4. 
2 https …","url":["https://arxiv.org/pdf/2005.03812"]} -{"year":"2020","title":"Comparing Different Methods for Named Entity Recognition in Portuguese Neurology Text","authors":["F Lopes, C Teixeira, HG Oliveira - Journal of Medical Systems, 2020"],"snippet":"… In order to check which was preferable, two different WE models were used: A pre-trained general Portuguese FastText model, Footnote 3 based on billions of tokens from Wikipedia and Common Crawl [41], with a 5-character window (general language) …","url":["https://link.springer.com/article/10.1007/s10916-020-1542-8"]} -{"year":"2020","title":"Comparing High Dimensional Word Embeddings Trained on Medical Text to Bag-of-Words for Predicting Medical Codes","authors":["V Yogarajan, H Gouk, T Smith, M Mayo, B Pfahringer - Asian Conference on …, 2020"],"snippet":"… Our embeddings are trained to the exact same specifications as the Wikipedia and common crawl fastText models in [10] … For 300-dimensional embeddings, W300 are word embeddings that are trained by fastText on Wikipedia and other common crawl text …","url":["https://link.springer.com/chapter/10.1007/978-3-030-41964-6_9"]} -{"year":"2020","title":"Comparing Neural Network Parsers for a Less-resourced and Morphologically-rich Language: Amharic Dependency Parser","authors":["BE Seyoum, Y Miyao, BY Mekonnen - Proceedings of the first workshop on …, 2020"],"snippet":"… For this purpose, we used the trained model for Amharic using fasttext7. The data for training the model is from Wikipedia and Common Crawl8. The models were trained using continuous bag of words (CBOW) …","url":["https://www.aclweb.org/anthology/2020.rail-1.5.pdf"]} -{"year":"2020","title":"Comparing pre-trained language models for Spanish hate speech detection","authors":["FM Plaza-del-Arco, MD Molina-González… - Expert Systems with …, 2020"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. 
Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S095741742030868X"]} -{"year":"2020","title":"Comparing Probabilistic, Distributional and Transformer-Based Models on Logical Metonymy Interpretation","authors":["G Rambelli, E Chersoni, A Lenci, P Blache, CR Huang - … of the 1st Conference of the …, 2020"],"snippet":"… in random order. XLNet's training corpora were the same as BERT plus Giga5, ClueWeb 2012-B and Common Crawl, for a total of 32.89B subword piece. Also in this case, we used the large pre-trained model. GPT-2 (Radford …","url":["https://www.aclweb.org/anthology/2020.aacl-main.26.pdf"]} -{"year":"2020","title":"Comparing supervised learning algorithms for Spatial Nominal Entity recognition","authors":["A Medad, M Gaio, L Moncla, S Mustière, YL Nir - AGILE: GIScience Series, 2020"],"snippet":"… We have made the hypothesis that as FastText is a model of pretrained vectors (300 dimensions) on Wikipedia and Common Crawl, it provides a generic representation of words … CC BY 4.0 License. Page 12. 
Common Crawl and Wikipedia using the CBOW method …","url":["https://agile-giss.copernicus.org/articles/1/15/2020/agile-giss-1-15-2020.pdf"]} -{"year":"2020","title":"Comparison between machine learning and human learning from examples generated with machine teaching","authors":["GE Jaimovitch López - 2020"],"snippet":"… For instance, progress in areas like NLP (Natural Language Processing) has led to the development of outstanding deep neural networks such as GPT-3, a task-agnostic model trained using huge data repositories like …","url":["https://riunet.upv.es/bitstream/handle/10251/152771/Jaimovitch%20-%20Comparaci%C3%B3n%20entre%20el%20aprendizaje%20de%20machine%20learning%20y%20humanos%20desde%20ejemplos%20genera....pdf?sequence=1"]} -{"year":"2020","title":"Comparison of Named Entity Recognition Tools Applied to News Articles","authors":["S Vychegzhanin, E Kotelnikov - 2019 Ivannikov Ispras Open Conference (ISPRAS), 2019"],"snippet":"… spaCy Python MIT Bloom embeddings and a residual convolutional neural network en_core_web_sm OntoNotes en_core_web_md OntoNotes, Common Crawl en_core_web_lg OntoNotes, Common Crawl xx_ent_wiki_sm WikiNER ru2 …","url":["https://ieeexplore.ieee.org/abstract/document/8991165/"]} -{"year":"2020","title":"Comprehensive Stereotype Content Dictionaries Using a Semi‐Automated Method","authors":["G Nicolas, X Bai, ST Fiske - European Journal of Social Psychology"],"snippet":"… word embeddings used here are Word2Vec's model pretrained on Google News (Mikolov, Chen, Corrado, & Dean, 2013) and Glove' model pretrained on the Common Crawl (Pennington, Socher, & Manning, 2014; presented in Supplement) …","url":["https://onlinelibrary.wiley.com/doi/abs/10.1002/ejsp.2724"]} -{"year":"2020","title":"Computational Approaches for Identifying Sensational Soft News","authors":["V Indurthi - 2020"],"snippet":"Page 1. 
Computational Approaches for Identifying Sensational Soft News Thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science and Engineering by Research by Vijayasaradhi Indurthi 201450803 …","url":["http://web2py.iiit.ac.in/research_centres/publications/download/mastersthesis.pdf.a54f3e371a9ec80e.46696e616c5f5468657369735f4d535f42795f52657365617263685f56696a617961736172616468695f496e6475727468692e706466.pdf"]} -{"year":"2020","title":"Computational explorations of semantic cognition","authors":["AS Rotaru - 2020"],"snippet":"Page 1. 1 UNIVERSITY COLLEGE LONDON (UCL) Computational explorations of semantic cognition PHD THESIS Armand Stefan Rotaru Supervisors: Primary: Prof. Gabriella Vigliocco Secondary: Prof. Lewis Griffin Page 2. 2 …","url":["https://discovery.ucl.ac.uk/id/eprint/10106344/13/Rotaru_10106344_thesis.pdf"]} -{"year":"2020","title":"Computational Mechanisms of Language Understanding and Use in the Brain and Behaviour","authors":["I Kajic - 2020"],"snippet":"Page 1. Computational Mechanisms of Language Understanding and Use in the Brain and Behaviour by Ivana Kajić A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the degree of Doctor of Philosophy in Computer Science …","url":["https://uwspace.uwaterloo.ca/bitstream/handle/10012/16439/Kajic_Ivana.pdf?sequence=1"]} -{"year":"2020","title":"Connections and selections: Comparing multivariate predictions and parameter associations from latent variable models of picture naming","authors":["GM Walker, J Fridriksson, G Hickok - Cognitive Neuropsychology, 2020"],"snippet":"Connectionist simulation models and processing tree mathematical models of picture naming have complementary advantages and disadvantages. 
These model types were compared in terms of their predicti...","url":["https://www.tandfonline.com/doi/abs/10.1080/02643294.2020.1837092"]} -{"year":"2020","title":"Constraining the Transformer NMT Model with Heuristic Grid Beam Search","authors":["G Xie, A Way, J Du, L Wang"],"snippet":"… the training corpus consists of 4.4 Million segments from Europarl (Koehn, 2005) and CommonCrawl (Smith et al., 2013); … Smith, JR, Saint-Amand, H., Plamada, M., Koehn, P., Callison-Burch, C., and Lopez, A. (2013). Dirt cheap …","url":["http://www.computing.dcu.ie/~away/PUBS/2020/TransformerGridSearch.pdf"]} -{"year":"2020","title":"Contemporary Polish Language Model (Version 2) Using Big Data and Sub-Word Approach","authors":["K Wołk"],"snippet":"… In this paper, we present a set of 6-gram language models based on a big-data training of the contemporary Polish language, using the Common Crawl corpus (a compilation of over 3.25 billion webpages) and other resources …","url":["https://indico2.conference4me.psnc.pl/event/35/contributions/3915/attachments/957/996/Thu-3-8-7.pdf"]} -{"year":"2020","title":"Context-aware Feature Generation for Zero-shot Semantic Segmentation","authors":["Z Gu, S Zhou, L Niu, Z Zhao, L Zhang - arXiv preprint arXiv:2008.06893, 2020"],"snippet":"… in the supplementary. Following SPNet [43], we concatenate two different types of word embeddings (d = 600, 300 for each), ie, word2vec [30] trained on Google News and fast-Text [15] trained on Common Crawl. The word …","url":["https://arxiv.org/pdf/2008.06893"]} -{"year":"2020","title":"Contextual Question Answering with Improved Embedding Models","authors":["G He"],"snippet":"… In the GloVe word embedding based BiDAF++ model, we utilize pretrained GloVe(Pennington et al., 2014) embeddings with 300 output dimensions (840B.300d). 
These embeddings have been prertrained on a common crawl of 840 billion to- kens …","url":["https://georgehe.me/coqa.pdf"]} -{"year":"2020","title":"Contextualized Embeddings in Named-Entity Recognition: An Empirical Study on Generalization","authors":["B Taillé, V Guigue, P Gallinari - arXiv preprint arXiv:2001.08053, 2020"],"snippet":"… 3 Word Representations Word embeddings map each word to a single vector which results in a lexical representation. We take GloVe 840B embeddings [13] trained on Common Crawl as the pretrained word embeddings baseline …","url":["https://arxiv.org/pdf/2001.08053"]} -{"year":"2020","title":"Contextualized Emotion Recognition in Conversation as Sequence Tagging","authors":["Y Wang, J Zhang, J Ma, S Wang, J Xiao - Proceedings of the 21th Annual Meeting of …, 2020"],"snippet":"… GloVe vectors trained on Common Crawl 840B with 300 dimensions are used as fixed word em- beddings. We use a 12-layers 4-heads Transformer encoder of which the inner-layer dimensionality is 2048 and the hidden size is 100 …","url":["https://www.aclweb.org/anthology/2020.sigdial-1.23.pdf"]} -{"year":"2020","title":"Controllable Text Generation","authors":["S Prabhumoye - 2020"],"snippet":"Page 1. CARNEGIE MELLON UNIVERSITY Controllable Text Generation Should machines re ect the way humans interact in society? esis Proposal by Shrimai Prabhumoye esis proposal submi ed in partial ful llment for the degree of Doctor of Philosophy esis committee …","url":["https://www.cs.cmu.edu/~sprabhum/docs/proposal.pdf"]} -{"year":"2020","title":"ConvBERT: Improving BERT with Span-based Dynamic Convolution","authors":["Z Jiang, W Yu, D Zhou, Y Chen, J Feng, S Yan - arXiv preprint arXiv:2008.02496, 2020"],"snippet":"Page 1. 
ConvBERT: Improving BERT with Span-based Dynamic Convolution Zihang Jiang1∗, Weihao Yu1∗, Daquan Zhou1, Yunpeng Chen2, Jiashi Feng1, Shuicheng Yan2 1National University of Singapore, 2Yitu Technology …","url":["https://res.arxiv.org/pdf/2008.02496"]} -{"year":"2020","title":"Correcting the Autocorrect: Context-Aware Typographical Error Correction via Training Data Augmentation","authors":["K Shah, G de Melo - arXiv preprint arXiv:2005.01158, 2020"],"snippet":"… from 2We do not rely on embeddings trained on CommonCrawl, as Web data contains substantially more misspelling forms. 3Specifically, those with a character length three standard deviations above or below mean. Hence …","url":["https://arxiv.org/pdf/2005.01158"]} -{"year":"2020","title":"Crawling the German Health Web: Exploratory Study and Graph Analysis","authors":["R Zowalla, T Wetter, D Pfeifer - Journal of Medical Internet Research, 2020"],"snippet":"Journal of Medical Internet Research - International Scientific Journal for Medical Research, Information and Communication on the Internet.","url":["https://www.jmir.org/2020/7/e17853/"]} -{"year":"2020","title":"Creating semantic representations","authors":["FÅ Nielsen, LK Hansen - Statistical Semantics, 2020"],"snippet":"… Evaluations in 2017 with fastText trained for 3 days on either the very large Common Crawl data set or a combination of the English Wikipedia and news datasets set a new state-of-the-art on 88.5% for the accuracy on a …","url":["https://link.springer.com/chapter/10.1007/978-3-030-37250-7_2"]} -{"year":"2020","title":"Creation of a database based on artificial intelligence in order to understand the role played by biofilms on outbreaks","authors":["APP de Melo - 2020"],"snippet":"Page 1. 
FACULDADE DE ENGENHARIA DA UNIVERSIDADE DO PORTO Creation of a database based on artificial intelligence in order to understand the role played by biofilms on outbreaks Ana Patrícia Pinheiro de Melo Integrated Master in Bioengineering …","url":["https://repositorio-aberto.up.pt/bitstream/10216/130013/2/428683.pdf"]} -{"year":"2020","title":"Creative Natural Language Generation: Humor and Beyond","authors":["NT Hossain - 2020"],"snippet":"Page 1. Creative Natural Language Generation: Humor and Beyond by Nabil Tarique Hossain Submitted in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy Supervised by Professor Henry Kautz and Dr …","url":["https://urresearch.rochester.edu/fileDownloadForInstitutionalItem.action?itemId=36525&itemFileId=189977"]} -{"year":"2020","title":"Credibility Assessment of User Generated health information of the Bengali language in microblogging sites employing NLP techniques","authors":["A Benazir, S Sharmin"],"snippet":"… word. We use the pre-trained word vectors of Fasttext 5 over Word2Vec 6 and GloVe 7 since it has the best multi lingual word vectors amongst the three, supporting 157 languages, trained on Common Crawl and Wikipedia. 
We …","url":["https://www.researchgate.net/profile/Afsara_Benazir2/publication/347524581_Credibility_Assessment_of_User_Generated_health_information_of_the_Bengali_language_in_microblogging_sites_employing_NLP_techniques/links/5fe0f9eaa6fdccdcb8ef603e/Credibility-Assessment-of-User-Generated-health-information-of-the-Bengali-language-in-microblogging-sites-employing-NLP-techniques.pdf"]} -{"year":"2020","title":"Cross-Cultural Polarity and Emotion Detection Using Sentiment Analysis and Deep Learning--a Case Study on COVID-19","authors":["AS Imran, SM Doudpota, Z Kastrati, R Bhatra - arXiv preprint arXiv:2008.10031, 2020"],"snippet":"… corpora, a 2010 Wikipedia dump with 1 billion tokens; a 2014 Wikipedia dump with 1.6 billion tokens; Gigaword 5 which has 4.3 billion tokens; the combination Gigaword5 + Wikipedia2014, which has 6 billion tokens; and …","url":["https://arxiv.org/pdf/2008.10031"]} -{"year":"2020","title":"Cross-lingual Inductive Transfer to Detect Offensive Language","authors":["K Pant, T Dadu - arXiv preprint arXiv:2007.03771, 2020"],"snippet":"… We further finetune XLM-R, pretrained on 2.5 TB Common Crawl corpus spanning 100 languages … XLM-R is a transformer-based cross-lingual model pretrained using a multilingual masked language model objective on 2.5 …","url":["https://arxiv.org/pdf/2007.03771"]} -{"year":"2020","title":"Cross-Lingual Information Retrieval in the Medical Domain","authors":["S Saleh - 2020"],"snippet":"Page 1. DOCTORAL THESIS Shadi Saleh Cross-lingual Information Retrieval in the Medical Domain Institute of Formal and Applied Linguistics Supervisor of the doctoral thesis: doc. RNDr. Pavel Pecina, PhD. Study programme …","url":["https://dspace.cuni.cz/bitstream/handle/20.500.11956/123570/140087429.pdf?sequence=1"]} -{"year":"2020","title":"Cross-Lingual Relation Extraction with Transformers","authors":["J Ni, T Moon, P Awasthy, R Florian - arXiv preprint arXiv:2010.08652, 2020"],"snippet":"… Conneau et al., 2020). 
mBERT was pre-trained with Wikipedia text of 104 languages with the largest sizes, and XLM-R were pre-trained with Wikipedia text and CommonCrawl Corpus of 100 languages. Both models use no …","url":["https://arxiv.org/pdf/2010.08652"]} -{"year":"2020","title":"Cross-lingual Retrieval for Iterative Self-Supervised Training","authors":["C Tran, Y Tang, X Li, J Gu - arXiv preprint arXiv:2006.09526, 2020"],"snippet":"… 5 Experiment Evaluation We pretrained an mBART model with Common Crawl dataset constrained to the 25 languages as in [19] for which we have evaluation data … We subsample the resulting common crawl data to 100 million sentences in each language …","url":["https://arxiv.org/pdf/2006.09526"]} -{"year":"2020","title":"Cross-lingual Transfer Learning for Semantic Role Labeling in Russian","authors":["I Alimova, E Tutubalina, A Kirillovich"],"snippet":"… The model is also based on Transformer architecture (Vaswani et al., 2017). We applied the XLM-R Masked Language Model, which is pretrained on 2.5 TB of CommonCrawl data, in 100 languages, with 8 heads, 6 layers, 1024 hidden units per layer …","url":["https://www.researchgate.net/profile/Alexander_Kirillovich/publication/342734555_Cross-lingual_Transfer_Learning_for_Semantic_Role_Labeling_in_Russian/links/5f0410d0299bf1881607dae8/Cross-lingual-Transfer-Learning-for-Semantic-Role-Labeling-in-Russian.pdf"]} -{"year":"2020","title":"Cross-Lingual Word Embeddings for Turkic Languages","authors":["E Kuriyozov, Y Doval, C Gómez-Rodríguez - arXiv preprint arXiv:2005.08340, 2020"],"snippet":"… Cross-lingual embeddings used for both experiments were trained under the following conditions: • Monolingual word embeddings were obtained from available pre-trained word vectors (Grave et al., 2018) trained on …","url":["https://arxiv.org/pdf/2005.08340"]} -{"year":"2020","title":"Cross-Modal Transfer Learning for Multilingual Speech-to-Text Translation","authors":["C Tran, C Wang, Y Tang, Y Tang, J Pino, X Li - arXiv 
preprint arXiv:2010.12829, 2020"],"snippet":"… This model is pretrained using two types of noise in g — random span masking and order permutation — as described in [3]. We re-use the finetuned mBART50 models from [13] which are pretrained on …","url":["https://arxiv.org/pdf/2010.12829"]} -{"year":"2020","title":"CS-NLP team at SemEval-2020 Task 4: Evaluation of State-of-the-artNLP Deep Learning Architectures on Commonsense Reasoning Task","authors":["S Saeedi, A Panahi, S Saeedi, AC Fong - arXiv preprint arXiv:2006.01205, 2020"],"snippet":"… The architecture of RoBERTalarge is comprised of of 24-layer, 1024-hidden dimension, 16-self attention heads, 355M parameters and pretrained on book corpus plus English Wikipedia, English CommonCrawl News, and WebText corpus …","url":["https://arxiv.org/pdf/2006.01205"]} -{"year":"2020","title":"Culprit Analytics from Detective Novels","authors":["A Motwani - 2020"],"snippet":"Page 1. Culprit Analytics from Detective Novels Thesis submitted in partial fulfillment of the requirements for the degree of Masters of Science in Computer Science and Engineering by Research by Aditya Motwani …","url":["http://web2py.iiit.ac.in/research_centres/publications/download/mastersthesis.pdf.8981260fae86b1a4.4d535f5468657369735f46696e616c202833292e706466.pdf"]} -{"year":"2020","title":"Cultural Cartography with Word Embeddings","authors":["DS Stoltz, MA Taylor - arXiv preprint arXiv:2007.04508, 2020"],"snippet":"… fastText embeddings are trained on Wikipedia data dumps and the 25 billion web pages of the Common Crawl), and thus are not trained on the researcher's own corpus. Corpus-trained embeddings, by contrast, are word vectors trained exclusively on the …","url":["https://arxiv.org/pdf/2007.04508"]} -{"year":"2020","title":"Cultural Differences in Bias? 
Origin and Gender Bias in Pre-Trained German and French Word Embeddings","authors":["M Kurpicz-Briki"],"snippet":"… The validation experiments in English were executed on the same pre-trained word embeddings as in the original experiments (Caliskan et al., 2017): • GloVe pre-trained word embeddings using the ”Common …","url":["http://ceur-ws.org/Vol-2624/paper6.pdf"]} -{"year":"2020","title":"Cultural influences on word meanings revealed through large-scale semantic alignment","authors":["B Thompson, SG Roberts, G Lupyan - Nature Human Behaviour, 2020"],"snippet":"If the structure of language vocabularies mirrors the structure of natural divisions that are universally perceived, then the meanings of words in different languages should closely align. By contrast, if shared word meanings are …","url":["https://www.nature.com/articles/s41562-020-0924-8"]} -{"year":"2020","title":"Current Limitations of Language Models: What You Need is Retrieval","authors":["A Komatsuzaki - arXiv preprint arXiv:2009.06857, 2020"],"snippet":"… Most naturally available samples as well as the reasonable output of most tasks have rather limited length, though others (eg books) do not. For example, the average sample length of WebText is only about 1000 tokens …","url":["https://arxiv.org/pdf/2009.06857"]} -{"year":"2020","title":"Curriculum Pre-training for End-to-End Speech Translation","authors":["C Wang, Y Wu, S Liu, M Zhou, Z Yang - arXiv preprint arXiv:2004.10093, 2020"],"snippet":"… (2017) 6LibriSpeech En-Fr, IWSLT En-De and Fisher-CallHome Es-En 7https://wit3.fbk.eu/mt.php?release= 2017-01-trnted 8Europarl v7, Common Crawl, News Comentary v13 and Rapid corpus of EU press releases. using …","url":["https://arxiv.org/pdf/2004.10093"]} -{"year":"2020","title":"CX DB8: A queryable extractive summarizer and semantic search engine","authors":["A Roush - arXiv preprint arXiv:2012.03942, 2020"],"snippet":"… unsupervised models. 
Since unsupervised models are usually trained on massive corpuses, like Wikipedia or Common Crawl (Penninglon et al., 2014), they do not overfit as much to any particular topic or domain. Furthermore …","url":["https://arxiv.org/pdf/2012.03942"]} -{"year":"2020","title":"Dávid Márk Nemeskey Natural Language Processing Methods for Language Modeling","authors":["CV Erzsébet, Z Horváth, A Benczúr, A Kornai"],"snippet":"… 83 4.2.2 Common Crawl . . . . . 84 … Chapter 4 details our work of compiling Webcorpus 2.0, a new Hungarian gigaword corpus, from the Common Crawl and the Hungarian Wikipedia. Its main purpose being a …","url":["https://hlt.bme.hu/media/pdf/thesis.pdf"]} -{"year":"2020","title":"DAN+: Danish Nested Named Entities and Lexical Normalization","authors":["B Plank, KN Jensen, R van der Goot"],"snippet":"… Twitter data. Bert variants For Danish BERT we use the model trained by Botxo (https://github.com/ botxo/nordic_bert), which is pre-trained on Wikipedia, Common Crawl, Danish debate forums and Danish open subtitles. For …","url":["http://www.robvandergoot.com/doc/danP.pdf"]} -{"year":"2020","title":"Danish Clinical Event Extraction Developing a clinical event extraction system for electronic health records using deep learning and active learning","authors":["F WONSILD, MG MØLLER - 2020"],"snippet":"Page 1. 
Danish Clinical Event Extraction Developing a clinical event extraction system for electronic health records using deep learning and active learning FREDERIK WONSILD MATHIAS GIOVANNI MØLLER Master's thesis …","url":["https://www.derczynski.com/itu/docs/clin-events_frwo_mgmo.pdf"]} -{"year":"2020","title":"Data augmentation techniques for the Video Question Answering task","authors":["A Falcon, O Lanz, G Serra - arXiv preprint arXiv:2008.09849, 2020"],"snippet":"… To compute the word embeddings for the question and the answers, we consider GloVe [23], pretrained on the Common Crawl dataset3, which outputs a vector of size E = 300 for 3 The Common Crawl dataset is available …","url":["https://arxiv.org/pdf/2008.09849"]} -{"year":"2020","title":"Data selection for unsupervised translation of German–Upper Sorbian","authors":["L Edman, A Toral, G van Noord - Proceedings of the Fifth Conference on Machine …, 2020"],"snippet":"… gz. For German, we use monolingual data from News Crawl and Common Crawl … 19.57 Table 3: BLEU scores of models trained using 5 million sentences from News Crawl and various amounts of sentences from Common Crawl …","url":["https://www.aclweb.org/anthology/2020.wmt-1.130.pdf"]} -{"year":"2020","title":"Data-driven Crosslinguistic Modeling of Constituent Ordering Preferences","authors":["ZY Liu - 2020"],"snippet":"Page 1. 
Data-driven Crosslinguistic Modeling of Constituent Ordering Preferences By Zoey (Ying) Liu Dissertation Submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Linguistics in the Office of Graduate Studies of the …","url":["https://www.researchgate.net/profile/Zoey_Liu2/publication/343836788_Data-driven_Crosslinguistic_Modeling_of_Constituent_Ordering_Preferences/links/5f4408cb92851cd3022569fe/Data-driven-Crosslinguistic-Modeling-of-Constituent-Ordering-Preferences.pdf"]} -{"year":"2020","title":"Data-driven models and computational tools for neurolinguistics: a language technology perspective","authors":["E Artemova, A Bakarov, A Artemov, E Burnaev… - arXiv preprint arXiv …, 2020"],"snippet":"… The corpus size can be estimated by the number of tokens8: so, the size of English Wikipedia is 1M tokens, the size of Google news corpus is 1B tokens, and the size of CommonCrawl corpus is 600B tokens. Structured expert-based …","url":["https://arxiv.org/pdf/2003.10540"]} -{"year":"2020","title":"Dataless Short Text Classification Based on Biterm Topic Model and Word Embeddings","authors":["Y Yang, H Wang, J Zhu, Y Wu, K Jiang, W Guo, W Shi"],"snippet":"… We set the number of iterations to 50 as our models achieve competitive performance since then. For word embeddings, we employ the widely used GloVe Common Crawl as mentioned before. It contains 840B to- kens, 2.2M vocab and 300d vectors …","url":["https://www.ijcai.org/Proceedings/2020/0549.pdf"]} -{"year":"2020","title":"Dataset for Automatic Summarization of Russian News","authors":["I Gusev - arXiv preprint arXiv:2006.11063, 2020"],"snippet":"… 3.3 Abstractive methods All of the tested models are based on a sequence-to-sequence framework. 
Pointer-generator and CopyNet were trained only on our train dataset, and mBART was pretrained on texts of 25 languages extracted from the Common Crawl …","url":["https://arxiv.org/pdf/2006.11063"]} -{"year":"2020","title":"Dataset for Automatic Summarization of Russian","authors":["I Gusev - arXiv preprint arXiv:2006.11063, 2020"],"snippet":"… sequence-to-sequence framework. Pointer-generator and CopyNet were trained only on our training dataset, and mBART was pretrained on texts of 25 languages extracted from the Common Crawl. We performed no additional …","url":["https://www.researchgate.net/profile/Ilya_Gusev2/publication/342352344_Dataset_for_Automatic_Summarization_of_Russian_News/links/5f16164a92851c1eff22059b/Dataset-for-Automatic-Summarization-of-Russian-News.pdf"]} -{"year":"2020","title":"Datasets and Performance Metrics for Greek Named Entity Recognition","authors":["N Bartziokas, T Mavropoulos, C Kotropoulos - 11th Hellenic Conference on Artificial …, 2020"],"snippet":"… performance. Thus, most works omit such supplementary features. Established word embeddings, usually pre-trained on large corpora, such as Common Crawl's or Wikipedia's collections, are fine-tuned to a great extent. Google …","url":["https://dl.acm.org/doi/abs/10.1145/3411408.3411437"]} -{"year":"2020","title":"DeBERTa: Decoding-enhanced BERT with Disentangled Attention","authors":["P He, X Liu, J Gao, W Chen - arXiv preprint arXiv:2006.03654, 2020"],"snippet":"… the setting of BERT [4], except that we use the BPE vocabulary as [2, 5]. For training data, we use Wikipedia (English Wikipedia dump3; 12GB), BookCorpus [26] (6GB), OPENWEBTEXT (public Reddit content [27]; 38GB) …","url":["https://arxiv.org/pdf/2006.03654"]} -{"year":"2020","title":"DECAB-LSTM: Deep Contextualized Attentional Bidirectional LSTM for cancer hallmark classification","authors":["L Jiang, X Sun, F Mercaldo, A Santone - Knowledge-Based Systems, 2020"],"snippet":"JavaScript is disabled on your browser.
Please enable JavaScript to use all the features on this page. Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0950705120306158"]} -{"year":"2020","title":"Decoding individual identity from brain activity elicited in imagining common experiences","authors":["AJ Anderson, K McDermott, B Rooks, KL Heffner… - Nature Communications, 2020"],"snippet":"Everyone experiences common events differently. This leads to personal memories that presumably provide neural signatures of individual identity when events are reimagined. We present initial evidence that these signatures …","url":["https://www.nature.com/articles/s41467-020-19630-y"]} -{"year":"2020","title":"Deep Exogenous and Endogenous Influence Combination for Social Chatter Intensity Prediction","authors":["S Dutta, S Masud, S Chakrabarti, T Chakraborty - arXiv preprint arXiv:2006.07812, 2020"],"snippet":"… news on discussions. We report on extensive experiments using a two-month-long discussion corpus of Reddit, and a contemporaneous corpus of online news articles from the Common Crawl. ChatterNet shows considerable …","url":["https://arxiv.org/pdf/2006.07812"]} -{"year":"2020","title":"Deep Intelligent Contextual Embedding for Twitter Sentiment Analysis","authors":["K Musial-Gabrys, U Naseem - International Conference on Document Analysis and …, 2019"],"snippet":"… and GloVe. In our model we have used pre-trained GloVe embedding of 300 dimensions which are trained on 840 billion token from common crawl because it gives better results as compared to Word2Vec in our case. 
GloVe …","url":["https://opus.lib.uts.edu.au/bitstream/10453/137711/1/ICDAR_2019_paper_279.pdf"]} -{"year":"2020","title":"Deep Learning Based Multi-Label Text Classification of UNGA Resolutions","authors":["F Sovrano, M Palmirani, F Vitali - arXiv preprint arXiv:2004.03455, 2020"],"snippet":"… The pre-trained models we are going to use are: a GloVe model from Spacy [19] and pre-trained on data from Common Crawl [20], and the Universal Sentence Encoder (USE) model for document embedding coming from …","url":["https://arxiv.org/pdf/2004.03455"]} -{"year":"2020","title":"Deep Learning for Twitter Sentiment Analysis: The Effect of Pre-trained Word Embedding","authors":["A Krouska, C Troussas, M Virvou - Machine Learning Paradigms, 2020"],"snippet":"… The model contains 300-dimensional vectors for 3 million words and phrases. Crawl GloVe was trained on a Common Crawl dataset of 42 billion tokens (words), providing a vocabulary of 2 million words with an embedding vector …","url":["https://link.springer.com/chapter/10.1007/978-3-030-49724-8_5"]} -{"year":"2020","title":"Deep learning model for end-to-end approximation of COSMIC functional size based on use-case names","authors":["M Ochodek, S Kopczyńska, M Staron - Information and Software Technology, 2020"],"snippet":"… we investigate different pre-trained word embeddings to learn that using the embeddings trained on Wikipedia+Gigaworld (300d), Common Crawl 840B/42B (300d), and Stack Overflow (200d) give the best prediction accuracy. 
This paper is structured as follows …","url":["https://www.sciencedirect.com/science/article/pii/S0950584920300628"]} -{"year":"2020","title":"Deep N-ary Error Correcting Output Codes","authors":["H Zhang, JT Zhou, T Wang, IW Tsang, RSM Goh - arXiv preprint arXiv:2009.10465, 2020"],"snippet":"… For the Bi-LSTMs model of TREC and SST text datasets, we use the 300-dimensional publicly available pre-trained word embeddings as the word-level feature representation, which is trained by fastText4 package …","url":["https://arxiv.org/pdf/2009.10465"]} -{"year":"2020","title":"Deep Neural Attention-Based Model for the Evaluation of Italian Sentences Complexity","authors":["D Schicchi, G Pilato, GL Bosco - 2020 IEEE 14th International Conference on …, 2020"],"snippet":"… based algorithms. To the best of our knowledge, the most prominent sentence-based corpus for the Italian language is the PACCSS-IT corpus [19]. It has 1www.wikipedia. org 2www.commoncrawl.org 254 Page 3. been created …","url":["https://ieeexplore.ieee.org/abstract/document/9031472/"]} -{"year":"2020","title":"Deep Neural Networks Ensemble with Word Vector Representation Models to Resolve Coreference Resolution in Russian","authors":["A Sboev, R Rybka, A Gryaznov - Advanced Technologies in Robotics and Intelligent …, 2020"],"snippet":"… Among the context-insensitive vectorization models, the following were compared: Word2vec model, 3 trained on corpus of articles from the Russian part of Wikipedia (further referred to as RuWiki) and data from CommonCrawl 4 ; …","url":["https://link.springer.com/chapter/10.1007/978-3-030-33491-8_4"]} -{"year":"2020","title":"Deep Neural Networks for Sentiment Analysis in Tweets with Emoticons","authors":["M Narayanaperumal - 2020"],"snippet":"Page 1. 
DEEP NEURAL NETWORKS FOR SENTIMENT ANALYSIS IN TWEETS WITH EMOTICONS by Mutharasu Narayanaperumal A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Information Systems …","url":["http://search.proquest.com/openview/ce5f7af40a2bea968b30a3ab132f22bb/1?pq-origsite=gscholar&cbl=18750&diss=y"]} -{"year":"2020","title":"Deep Question Answering: A New Teacher For DistilBERT","authors":["F Tamburini, P Cimiano, S Preite"],"snippet":"Page 1. Alma Mater Studiorum · Universit`a di Bologna SCUOLA DI SCIENZE Corso di Laurea Magistrale in Informatica Deep Question Answering: A New Teacher For DistilBERT Relatore: Chiar.mo Prof. Fabio Tamburini Correlatore: Chiar.mo Prof. Philipp Cimiano …","url":["https://amslaurea.unibo.it/20384/1/MasterThesisBologna.pdf"]} -{"year":"2020","title":"DeepPhish: Automated Phishing Detection Using Recurrent Neural Network","authors":["M Arivukarasi, A Antonidoss - Advances in Smart System Technologies"],"snippet":"… At that point, we prepared an irregular timberland classifier with 100 choice trees. 6 Conclusions. 
To assess the methodologies, we utilized a database that included one million real URLs from the common crawl database and …","url":["https://link.springer.com/chapter/10.1007/978-981-15-5029-4_18"]} -{"year":"2020","title":"DeepSinger: Singing Voice Synthesis with Data Mined From the Web","authors":["Y Ren, X Tan, T Qin, J Luan, Z Zhao, TY Liu - arXiv preprint arXiv:2007.04590, 2020"],"snippet":"… A variety of tasks collect training data from the Web, such as the large-scale web-crawled text dataset ClueWeb [3] and Common Crawl3 for language modeling [40], LETOR [31] for search ranking [4], and WebVision [22] for image classification …","url":["https://arxiv.org/pdf/2007.04590"]} -{"year":"2020","title":"Definition of Phishing Sites Based on the Team Model of Fuzzy Neural Networks","authors":["II Ismagilov, AA Murtazin, DV Kataseva, AS Katasev… - Helix, 2020"],"snippet":"… To obtain a set of data on legitimate sites, two sources were used: Alexa Internet and Common Crawl … Common Crawl is a non-profit organization; it crawls monthly the Internet and makes its archives and datasets available …","url":["https://helixscientific.pub/index.php/home/article/download/237/190"]} -{"year":"2020","title":"DeL-haTE: A Deep Learning Tunable Ensemble for Hate Speech Detection","authors":["J Melton, A Bagavathi, S Krishnan - arXiv preprint arXiv:2011.01861, 2020"],"snippet":"… We compare the following five word embedding methods: Word2Vec vectors trained on Google News corpus [17], GloVe vectors trained on CommonCrawl (GLoVe-CC) and Twitter (GLoVe-Twitter) corpora [18], and FastText vectors …","url":["https://arxiv.org/pdf/2011.01861"]} -{"year":"2020","title":"Delay Mitigation for Backchannel Prediction in Spoken Dialog System","authors":["AI Adiba, T Homma, D Bertero, T Sumiyoshi… - Conversational Dialogue Systems …"],"snippet":"… individual word. The word embedding is then used as input for our model architecture. 
We found that our dataset in the fastText model trained with the Common Crawl dataset 2 had the smallest number of unknown words. Thus, the …","url":["https://link.springer.com/chapter/10.1007/978-981-15-8395-7_10"]} -{"year":"2020","title":"DeLFT and entity-fishing: Tools for CLEF HIPE 2020 Shared Task","authors":["T Kristanti, L Romary - CLEF 2020-Conference and Labs of the Evaluation …, 2020"],"snippet":"… Word Embeddings We use various static word embeddings: Global Vectors for Word Representation (GloVe) [14], English fastText Common Crawl [1,11], and French Wikipedia fastText.5 We also use ELMo [16] contextualized …","url":["https://hal.inria.fr/hal-02974946/document"]} -{"year":"2020","title":"Depthwise Separable Convolutional Neural Network for Confidential Information Analysis","authors":["Y Lu, J Jiang, M Yu, C Liu, C Liu, W Huang, Z Lv - International Conference on …, 2020"],"snippet":"… Word2Vec. The Word2VecModified-Wikipedia are trained on Wikipedia through modified Word2vec. The GloVe-Crawl840B are trained on Common Crawl through GloVe. The GloVe-Wikipedia are trained on Wikipedia through GloVe …","url":["https://link.springer.com/chapter/10.1007/978-3-030-55393-7_40"]} -{"year":"2020","title":"Design2Struct: Generating website structures from design images using neural networks","authors":["MM Velzel - 2020"],"snippet":"… The second contribution is the release of a large CommonCrawl2 based dataset, filtered and transformed to be used in the field of GUI to structure conversion. The dataset is 1https://github.com/mvelzel/Design2Struct 2https://commoncrawl.org …","url":["http://essay.utwente.nl/81988/1/VELZEL_BA_EEMCS.pdf"]} -{"year":"2020","title":"Detecting Alzheimer's Disease by Exploiting Linguistic Information from Nepali Transcript","authors":["S Thapa, S Adhikari, U Naseem, P Singh, G Bharathy… - International Conference on …, 2020"],"snippet":"… For pre-trained Nepali Word2Vec model, the model created by Lamsal [15] is used in the study. 
Similarly, for the pre-trained fastText embeddings, the pre-trained word vectors trained on Common Crawl and Wikipedia using fastText were used [14] …","url":["https://link.springer.com/chapter/10.1007/978-3-030-63820-7_20"]} -{"year":"2020","title":"Detecting and Visualizing Hate Speech in Social Media: A Cyber Watchdog for Surveillance","authors":["S Modha, P Majumder, T Mandl, C Mandalia - Expert Systems with Applications, 2020"],"snippet":"… The word vectors were trained on 600 billion tokens of the Common Crawl corpus (Simonite, 2013). The Common Crawl is a nonprofit organization that crawls the web and freely provides its archives and datasets to the public …","url":["https://www.sciencedirect.com/science/article/pii/S0957417420305492"]} -{"year":"2020","title":"Detecting Deceptive Language in Crime Interrogation","authors":["YY Kao, PH Chen, CC Tzeng, ZY Chen, B Shmueli… - International Conference on …, 2020"],"snippet":"… fastText is a lightweight library for text representation. Its pre-trained model, trained on Common Crawl and Wikipedia corpus, has the ability to capture hidden information about a language such as word analogies or semantic …","url":["https://link.springer.com/chapter/10.1007/978-3-030-50341-3_7"]} -{"year":"2020","title":"Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases","authors":["W Guo, A Caliskan - arXiv preprint arXiv:2006.03955, 2020"],"snippet":"… intersectional group members. Caliskan et al. [1] have shown that social biases are embedded in linguistic regularities learned by GloVe. 
These embeddings are trained on the word co-occurrence statistics of the Common Crawl corpus …","url":["https://arxiv.org/pdf/2006.03955"]} -{"year":"2020","title":"Detecting Entailment in Code-Mixed Hindi-English Conversations","authors":["S Chakravarthy, A Umapathy, AW Black - Proceedings of the Sixth Workshop on …, 2020"],"snippet":"… XLM-RoBERTa (XLM-R) (Conneau et al., 2020) is trained on the CommonCrawl corpus, which in- cludes Romanized Hindi text, making this model the closest one to being pre-trained on Hinglish. 3 Task Definition Khanuja et al …","url":["https://www.aclweb.org/anthology/2020.wnut-1.22.pdf"]} -{"year":"2020","title":"Detecting Fine-Grained Cross-Lingual Semantic Divergences without Supervision by Learning to Rank","authors":["E Briakou, M Carpuat - arXiv preprint arXiv:2010.03662, 2020"],"snippet":"… (2018) work on subtitles and Common Crawl corpora where sentence alignment errors abound, and Pham et al … di- vergence English-French parallel sentences drawn from OpenSubtitles and CommonCrawl corpora by prior work (Vyas et al., 2018) …","url":["https://arxiv.org/pdf/2010.03662"]} -{"year":"2020","title":"Detecting Hallucinated Content in Conditional Neural Sequence Generation","authors":["C Zhou, J Gu, M Diab, P Guzman, L Zettlemoyer… - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. DETECTING HALLUCINATED CONTENT IN CONDITIONAL NEURAL SEQUENCE GENERATION Chunting Zhou1∗, Jiatao Gu2, Mona Diab2, Paco Guzman2, Luke Zettlemoyer2, Marjan Ghazvininejad2 Language …","url":["https://arxiv.org/pdf/2011.02593"]} -{"year":"2020","title":"Detecting Incivility and Impoliteness in Online Discussions.","authors":["AK Stoll, M Ziegele, O Quiring - Computational Communication Research, 2020"],"snippet":"Page 1. VOL. 2, NO. 
1, 2020 109 Detecting Impoliteness and Incivility in Online Discussions Classification Approaches for German User Comments Anke Stoll, Marc Ziegele, Oliver Quiring CCR 2 (1): 109–134 DOI: 10.5117/CCR2020.1.005.KATH …","url":["https://computationalcommunication.org/ccr/article/download/19/10"]} -{"year":"2020","title":"Detecting misogyny in Spanish Tweets. An approach based on linguistics features and word embeddings","authors":["JA García-Díaz, M Cánovas-García… - Future Generation …, 2020"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0167739X20301928"]} -{"year":"2020","title":"DETECTING OUT-OF-DISTRIBUTION TRANSLATIONS WITH VARIATIONAL TRANSFORMERS","authors":["WAT ZEI"],"snippet":"… The following datasets were used in our experiments: (1) WMT EN ↔ DE: The training set for translation tasks between English (EN) and German (DE) composed of news-commentary-v13 with 284k sentences pairs …","url":["https://openreview.net/pdf/e2667f2c5169fcbdca8e1d0596e67792da06d3a0.pdf"]} -{"year":"2020","title":"Detecting Word Sense Disambiguation Biases in Machine Translation for Model-Agnostic Adversarial Attacks","authors":["D Emelin, I Titov, R Sennrich - arXiv preprint arXiv:2011.01846, 2020"],"snippet":"Page 1. Detecting Word Sense Disambiguation Biases in Machine Translation for Model-Agnostic Adversarial Attacks Denis Emelin1, Ivan Titov1, 2, and Rico Sennrich3, 1 1University of Edinburgh, Scotland 2University of Amsterdam …","url":["https://arxiv.org/pdf/2011.01846"]} -{"year":"2020","title":"Detecting, Classifying, and Mapping Retail Storefronts Using Street-level Imagery","authors":["S Sharifi Noorian, S Qiu, A Psyllidis, A Bozzon… - Proceedings of the 2020 …, 2020","SS Noorian, S Qiu, A Psyllidis, A Bozzon, GJ Houben"],"snippet":"… a bag of character n-grams. 
As our evaluation will focus on the Manhattan Borough of New York City, the pre-trained (on Common Crawl and Wikipedia 3) word vectors for English are used. According to the desired language …","url":["https://dl.acm.org/doi/pdf/10.1145/3372278.3390706","https://qiusihang.github.io/files/publications/icmr2020detecting.pdf"]} -{"year":"2020","title":"Detection of Emerging Words in Portuguese Tweets","authors":["A Pinto, H Moniz, F Batista - 9th Symposium on Languages, Applications and …, 2020"],"snippet":"… We have used two pre-trained word vectors for Portuguese4, the first one trained on the Common Crawl5, and the second trained on the … www.nilc.icmc. usp.br/nilc/projects/unitex-pb/web/dicionarios.html 4 https://fasttext.cc/docs …","url":["https://drops.dagstuhl.de/opus/volltexte/2020/13016/pdf/OASIcs-SLATE-2020-3.pdf"]} -{"year":"2020","title":"Detection of Harassment on Twitter with Deep Learning Techniques","authors":["I Espinoza, F Weiss - Machine Learning and Knowledge Discovery in …, 2020"],"snippet":"… trained embedding model. We use the implementation off Spacy library for Python4 with the pre-trained model called 'en vectors web lg', which has 300 dimensions and it's trained over common crawl texts. With Spacy we have …","url":["https://link.springer.com/content/pdf/10.1007/978-3-030-43887-6_24.pdf"]} -{"year":"2020","title":"Determining Event Outcomes: The Case of# fail","authors":["S Murugan, D Chinnappa, E Blanco - Proceedings of the 2020 Conference on …, 2020"],"snippet":"… has variable length). Additionally, the word embeddings (GloVe embeddings pre-trained with CommonCrawl) allow us to leverage a distributional representation of tags, including those not seen during training. 
The second …","url":["https://www.aclweb.org/anthology/2020.findings-emnlp.359.pdf"]} -{"year":"2020","title":"Developing a Twitter bot that can join a discussion using state-of-the-art architectures","authors":["YM Çetinkaya, İH Toroslu, H Davulcu - Social Network Analysis and Mining, 2020"],"snippet":"… requests. Radford et al. (2019) construct an auto-regressive feed-forward model instead of seq2seq-RNN as a language model using Common Crawl as a dataset and generate sentences with predicting next word. Generative …","url":["https://link.springer.com/article/10.1007/s13278-020-00665-4"]} -{"year":"2020","title":"Developing an online hate classifier for multiple social media platforms","authors":["J Salminen, M Hopf, SA Chowdhury, S Jung… - Human-centric Computing …, 2020"],"snippet":"The proliferation of social media enables people to express their opinions widely online. However, at the same time, this has resulted in the emergence of conflict and hate, making online...","url":["https://link.springer.com/article/10.1186/s13673-019-0205-6"]} -{"year":"2020","title":"Development and evaluation of a Polish Automatic Speech Recognition system using the TLK toolkit","authors":["NU Roselló Beneitez - 2020"],"snippet":"… 12 2.8 Trellis representing the decoding step . . . . . 15 3.1 Examples of sentences extracted from the Common Crawl corpus . . . . . 21 4.1 Actions performed to create the final acoustic models . . . . . 24 4.2 Phonetic transcription of a Polish word …","url":["https://riunet.upv.es/bitstream/handle/10251/150495/Rosell%C3%B3%20-%20Desarrollo%20y%20evaluaci%C3%B3n%20de%20un%20sistema%20de%20Reconocimiento%20Autom%C3%A1tico%20del%20Habla%20en%20Polaco%20....pdf?sequence=1"]} -{"year":"2020","title":"Development of a Search Engine to Answer Comparative Queries","authors":["J Huck"],"snippet":"… extraction and tuning the retrieval model. Page 6. References 1. 
Bevendorff, J., Stein, B., Hagen, M., Potthast, M.: Elastic chatnoir: Search engine for the clueweb and the common crawl. In: ECIR (2018) 2. Bondarenko, A., Fröbe …","url":["http://www.dei.unipd.it/~ferro/CLEF-WN-Drafts/CLEF2020/paper_178.pdf"]} -{"year":"2020","title":"Development of Word Embeddings for Uzbek Language","authors":["B Mansurov, A Mansurov - arXiv preprint arXiv:2009.14384, 2020"],"snippet":"… variant of Uzbek. As far as we're aware, only fastText [5] word embeddings exist for the Latin variant. However, fastText was trained on the relatively low quality Uzbek Wikipedia and noisy Common Crawl corpus. In this paper …","url":["https://arxiv.org/pdf/2009.14384"]} -{"year":"2020","title":"Dialog Response Generation Using Adversarially Learned Latent Bag-of-Words","authors":["K Khan - 2020"],"snippet":"Page 1. Dialog Response Generation Using Adversarially Learned Latent Bag-of-Words by Kashif Khan A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the degree of Master of Mathematics in Computer Science …","url":["https://uwspace.uwaterloo.ca/bitstream/handle/10012/16188/Khan_Kashif.pdf?sequence=3&isAllowed=y"]} -{"year":"2020","title":"Dictionary for Computer-Assisted Text Analysis of Cyber Security (TACS)","authors":["A Levordashka, A Joinson, S Jones"],"snippet":"… algorithm [372] implemented via the Python package textacy [7]. 
We then grouped the terms by semantic similarity, with the help of a vector space model available via the Python library spacy ('en_core_web_lg'), with 685k …","url":["https://nbviewer.jupyter.org/github/anidroid/tacs/blob/master/tacs-soups.pdf"]} -{"year":"2020","title":"Dictionary-based Data Augmentation for Cross-Domain Neural Machine Translation","authors":["W Peng, C Huang, T Li, Y Chen, Q Liu - arXiv preprint arXiv:2004.02577, 2020"],"snippet":"… The OOD data used for pre-training for the baseline model are extracted from WMT 144 including Eu- roparl V7, New-commentary V9 and Common Crawl corpora … Train Dataset (OOD) Europarl,News-commentary, Common …","url":["https://arxiv.org/pdf/2004.02577"]} -{"year":"2020","title":"Differences Beyond Identity: Perceived Construal Distance and Interparty Animosity in the United States","authors":["A van Loon, A Goldberg, S Srivastava - SocArXiv. July, 2020"],"snippet":"Page 1. Differences Beyond Identity: Perceived Construal Distance and Interparty Animosity in the United States ∗ Austin van Loon Stanford University Amir Goldberg Stanford University Sameer B. Srivastava …","url":["https://osf.io/j2f6u/download"]} -{"year":"2020","title":"Dilated Convolution Networks for Classification of ICD-9 based Clinical Summaries","authors":["M Morisio, N Kanwal, I Tutor, DG Rizzo - 2020","N Kanwal - 2020"],"snippet":"… This architecture uses multiple dilation layers with a label-specific dot-based attention mechanism. We have extracted the embeddings from Common Crawl Glove (Global Vector). The architecture of the model is designed to calculate attention to words and their context …","url":["https://webthesis.biblio.polito.it/14400/","https://webthesis.biblio.polito.it/14400/1/tesi.pdf"]} -{"year":"2020","title":"Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures","authors":["J Launay, I Poli, F Boniface, F Krzakala - arXiv preprint arXiv:2006.12878, 2020"],"snippet":"Page 1. 
Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures Julien Launay1,2 Iacopo Poli1 François Boniface1 Florent Krzakala1,2 1LightOn 2École Normale Supérieure {julien, iacopo, francois, florent}@lighton.ai Abstract …","url":["https://arxiv.org/pdf/2006.12878"]} -{"year":"2020","title":"Discovering key topics from short, real-world medical inquiries via natural language processing and unsupervised learning","authors":["A Ziletti, C Berns, O Treichel, T Weber, J Liang… - arXiv preprint arXiv …, 2020"],"snippet":"… Table IIA1 presents a qualitative comparison of a standard embedding (en core web lg, trained on the Common Crawl) and a specialized biomedical embedding (scispaCy en core sci lg, trained also on PubMed). Specifically …","url":["https://arxiv.org/pdf/2012.04545"]} -{"year":"2020","title":"Discovering Relational Intelligence in Online Social Networks","authors":["L Tan, T Pham, HK Ho, TS Kok - International Conference on Database and Expert …, 2020"],"snippet":"… 100 mil tweets. 283. \\(^\\text {a}\\)https://archive.ics.uci.edu/ml/datasets /bag+of+words. \\(^\\text {b}\\)http://commoncrawl.org/2014/07/april-2014-crawldata-available/. \\(^text {c}\\)https://developer.twitter.com/en/docs.html. \\(^\\text …","url":["https://link.springer.com/chapter/10.1007/978-3-030-59003-1_22"]} -{"year":"2020","title":"Discovering web services in social web service repositories using deep variational autoencoders","authors":["I Lizarralde, C Mateos, A Zunino, TA Majchrzak… - Information Processing & …, 2020"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. 
Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0306457319310878"]} -{"year":"2020","title":"Disentangling semantic composition and semantic association in the left temporal lobe Abbreviated title: Semantic composition versus association","authors":["J Li, S Island, A Dhabi, L Pylkkänen"],"snippet":"… derived using the well-known GloVe word embeddings model (Pennington et al., 2014; freely available at https://nlp.stanford.edu/projects/glove/) trained on Common Crawl (https://commoncrawl.org/), which contains petabytes …","url":["https://www.biorxiv.org/content/10.1101/2020.08.17.254482v2.full.pdf"]} -{"year":"2020","title":"Distractor Analysis and Selection for Multiple-Choice Cloze Questions for Second-Language Learners","authors":["L Gao, K Gimpel, A Jensson"],"snippet":"… form pair. For features that require embedding words, we use the 300-dimensional GloVe word embedPage 4. dings (Pennington et al., 2014) pretrained on the 42 billion token Common Crawl corpus. The GloVe embeddings …","url":["https://ttic.uchicago.edu/~kgimpel/papers/gao+etal.bea2020.pdf"]} -{"year":"2020","title":"Distributed frameworks for approximate data analytics","authors":["G Hu - 2020"],"snippet":"… analytical queries can be expensive, eg, the Google book Ngrams dataset contains 2.2 TB of text data [17], and CommonCrawl corpus petabytes of web pages [18]. The above challenge is exacerbated when it is desirable to run different types of …","url":["https://rucore.libraries.rutgers.edu/rutgers-lib/65036/PDF/1/play/"]} -{"year":"2020","title":"Distributed Training of Graph Convolutional Networks using Subgraph Approximation","authors":["A Angerd, K Balasubramanian, M Annavaram - arXiv preprint arXiv:2012.04930, 2020"],"snippet":"… The vertex label is the community a post belongs to. The features consist of an embedding of post information, created using GloVe CommonCrawl (Pennington et al., 2014). 
The first 20 days of posts are used for training, while the rest are used for testing and validation …","url":["https://arxiv.org/pdf/2012.04930"]} -{"year":"2020","title":"Distributional and Lexical Exploration of Semantics of Derivational Morphology","authors":["UC Kunter, GN Özdemir, C Bozşahin"],"snippet":"… using Wikipedia datasets. The second one is presented in Grave et al. (2018), covering 157 languages including Turkish. Their models used Common Crawl and Wikipedia datasets and trained on fastText. The models were …","url":["http://www.academia.edu/download/63487208/Distributional_and_Lexical_Exploration_of_Semantics_of_DM20200531-40903-1x0rwv8.pdf"]} -{"year":"2020","title":"Distributional Models in the Task of Hypernym Discovery","authors":["V Yadrintsev, A Ryzhova, I Sochenkov - Russian Conference on Artificial Intelligence, 2020"],"snippet":"… Most likely, the largest text corpus was used for the first model, which includes Wikipedia and Common Crawl (we do not know the exact volume of crawl-data for the Russian, but roughly 24 terabytes of plain text was used …","url":["https://link.springer.com/chapter/10.1007/978-3-030-59535-7_25"]} -{"year":"2020","title":"Distributional semantic modeling: a revised technique to train term/word vector space models applying the ontology-related approach","authors":["O Palagin, V Velychko, K Malakhov, O Shchurov - arXiv preprint arXiv:2003.03350, 2020"],"snippet":"… Ac- cessed: 2020-03-03. [42] Firefly documentation. https://rorodata.github. io/firefly/. Accessed: 2020-03-03. [43] Common crawl. http://commoncrawl org/. Accessed: 2020-03-03. [44] Google dataset search. https://datasetsearch …","url":["https://arxiv.org/pdf/2003.03350"]} -{"year":"2020","title":"Do Neural Language Models Show Preferences for Syntactic Formalisms?","authors":["A Kulmizev, V Ravishankar, M Abdou, J Nivre - arXiv preprint arXiv:2004.14096, 2020"],"snippet":"… Che et al. (2018). 
These models are trained on 20 million words randomly sampled from the concatenation of WikiDump and CommonCrawl datasets for 44 different languages, including our 13 languages. Each model features …","url":["https://arxiv.org/pdf/2004.14096"]} -{"year":"2020","title":"Document Representations for Fast and Accurate Retrieval of Mathematical Information","authors":["V Novotný"],"snippet":"Page 1. Masaryk University Faculty of Informatics Document Representations for Fast and Accurate Retrieval of Mathematical Information Rigorous Thesis Vít Novotný Advisor: Doc. RNDr. Petr Sojka, Ph. D. Brno, Fall 2019 Signature …","url":["https://is.muni.cz/th/x86jd/thesis-with-papers.pdf"]} -{"year":"2020","title":"Domain Name System Security and Privacy: A Contemporary Survey","authors":["A Khormali, J Park, H Alasmary, A Anwar, D Mohaisen - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. Domain Name System Security and Privacy: A Contemporary Survey Aminollah Khormali, Jeman Park, Hisham Alasmary, Afsah Anwar, David Mohaisen University of Central Florida Abstract The domain name system …","url":["https://arxiv.org/pdf/2006.15277"]} -{"year":"2020","title":"Domain Specific Complex Sentence (DCSC) Semantic Similarity Dataset","authors":["D Chandrasekaran, V Mago - arXiv preprint arXiv:2010.12637, 2020"],"snippet":"… trained. 
While BERT was trained on Book Corpus and Wikipedia corpus, RoBERTa model was trained on four different corpora namely Book Corpus, Common Crawl News dataset, OpenWebText dataset and the Stories dataset …","url":["https://arxiv.org/pdf/2010.12637"]} -{"year":"2020","title":"Domain-Specific Meta-Embedding with Latent Semantic Structures","authors":["Q Liu, J Lu, G Zhang, T Shen, Z Zhang, H Huang - Information Sciences, 2020"],"snippet":"… For example, GloVe is trained on aggregated global word-word co-occurrence statistics from a corpus of over 840B tokens and fastText [2] pre-trained word representations for 157 languages on Common Crawl and the Wikipedia Corpora …","url":["https://www.sciencedirect.com/science/article/pii/S002002552031029X"]} -{"year":"2020","title":"DOMINANCE STYLE AND VOCAL COMMUNICATION IN NON-HUMAN PRIMATES","authors":["ZC CHEN-KRAUS, C COYE10, M EMERY… - LANGUAGE of, 2020"],"snippet":"Page 441. DOMINANCE STYLE AND VOCAL COMMUNICATION IN NON-HUMAN PRIMATES K KATIE SLOCOMBE* 1, EITHNE KAVANAGH11,, SALLY STREET2, FELIX O. ANGWELA3, THORE J. BERGMAN4, MARYJKA …","url":["https://pure.mpg.de/rest/items/item_3190925/component/file_3219601/content#page=441"]} -{"year":"2020","title":"Don't Stop Pretraining: Adapt Language Models to Domains and Tasks","authors":["S Gururangan, A Marasović, S Swayamdipta, K Lo… - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. Don't Stop Pretraining: Adapt Language Models to Domains and Tasks Suchin Gururangan† Ana Marasovic†♦ Swabha Swayamdipta†♦ Kyle Lo† Iz Beltagy† Doug Downey† Noah A. Smith†♦ †Allen Institute for Artificial …","url":["https://arxiv.org/pdf/2004.10964"]} -{"year":"2020","title":"DRIVING INTENT EXPANSION VIA ANOMALY DETECTION IN A MODULAR CONVERSATIONAL SYSTEM","authors":["NR Mallinar, TK HO - US Patent App. 
16/180,613, 2020"],"snippet":"… In various embodiments, the dataset builder 365 uses one or more of word2vec, the enwiki 2015 document vectorizer, ppdb paragram sentence embeddings, common-crawl uncased GloVe word embeddings, and enwiki …","url":["http://www.freepatentsonline.com/y2020/0142959.html"]} -{"year":"2020","title":"Drug-Drug Interaction Classification Using Attention Based Neural Networks","authors":["D Zaikis, I Vlahavas - 11th Hellenic Conference on Artificial Intelligence, 2020"],"snippet":"… word in a given sentence. The large English statistical model was used, which is trained with GloVe vectors on the OntoNotes 5 Common Crawl corpus and has a POS syntax accuracy of 97.22 percent. The unique POS tags …","url":["https://dl.acm.org/doi/abs/10.1145/3411408.3411461"]} -{"year":"2020","title":"Dual Conditional Cross Entropy Scores and LASER Similarity Scores for the WMT20 Parallel Corpus Filtering Shared Task","authors":["F Koerner, P Koehn"],"snippet":"… sentences. The Pashto language model was trained on a concatenation of the CommonCrawl and Wikipedia corpora, with the CommonCrawl oversampled by a factor of 64 to produce a dataset of 9,273,763 sentences. The …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.109.pdf"]} -{"year":"2020","title":"Dynamic Data Selection and Weighting for Iterative Back-Translation","authors":["ZY Dou, A Anastasopoulos, G Neubig - arXiv preprint arXiv:2004.03672, 2020"],"snippet":"Page 1. Dynamic Data Selection and Weighting for Iterative Back-Translation Zi-Yi Dou, Antonios Anastasopoulos, Graham Neubig Language Technologies Institute, Carnegie Mellon University {zdou, aanastas, gneubig}@cs.cmu.edu Abstract …","url":["https://arxiv.org/pdf/2004.03672"]} -{"year":"2020","title":"DynE: Dynamic Ensemble Decoding for Multi-Document Summarization","authors":["C Hokamp, DG Ghalandari, NT Pham, J Glover - arXiv preprint arXiv:2006.08748, 2020"],"snippet":"… Ghalandari et al. 
(2020) presented the WCEP dataset, which is a large-scale collection of clusters of news articles with a corresponding summary, constructed using the Wikipedia Current Events Portal, with additional articles gathered from CommonCrawl …","url":["https://arxiv.org/pdf/2006.08748"]} -{"year":"2020","title":"EEGdenoiseNet: A benchmark dataset for deep learning solutions of EEG denoising","authors":["H Zhang, M Zhao, C Wei, D Mantini, Z Li, Q Liu - arXiv preprint arXiv:2009.11662, 2020"],"snippet":"Page 1. EEGdenoiseNet: A benchmark dataset for deep learning solutions of EEG denoising Haoming Zhang1,†, Mingqi Zhao1,2,†, Chen Wei1, Dante Mantini2,3, Zherui Li1, Quanying Liu1,∗ September 25, 2020 1 Department …","url":["https://arxiv.org/pdf/2009.11662"]} -{"year":"2020","title":"Effect of Character and Word Features in Bidirectional LSTM-CRF for NER","authors":["C Ronran, S Lee - 2020 IEEE International Conference on Big Data and …, 2020"],"snippet":"… CRF), respectively, using public word embedding, character features and word features [6,4]. We explore existing word embeddings that is distinct from the previous studies including (1) Glove 300 embedding of 42B, 840B …","url":["https://ieeexplore.ieee.org/abstract/document/9070329/"]} -{"year":"2020","title":"Efficient and High-Quality Neural Machine Translation with OpenNMT","authors":["G Klein, D Zhang, C Chouteau, JM Crego, J Senellart - Proceedings of the Fourth …, 2020"],"snippet":"… Europarl v9 1,838,568 Common Crawl corpus 2,399,123 News Commentary v14 338,285 Wiki Titles v1 1,305,141 Document-split Rapid 1,531,261 ParaCrawl v3 31,358,551 Total 38,770,929 news-crawl 2007-2018 …","url":["https://www.aclweb.org/anthology/2020.ngt-1.25.pdf"]} -{"year":"2020","title":"Efficient strategies for hierarchical text classification: External knowledge and auxiliary tasks","authors":["KR Rojas, G Bustamante, MAS Cabezudo, A Oncevay - arXiv preprint arXiv …, 2020"],"snippet":"… of tokens in the input document. 
We use pre-trained word embeddings from Common Crawl (Grave et al., 2018) for the weights of this layer, and we do not fine-tune them during training time. Encoder: It is a bidirectional GRU …","url":["https://arxiv.org/pdf/2005.02473"]} -{"year":"2020","title":"Efficient Transfer Learning for Quality Estimation with Bottleneck Adapter Layer","authors":["H Yang, M Wang, N Xie, Y Qin, Y Deng - Proceedings of the 22nd Annual Conference …, 2020"],"snippet":"… BPE is used for tokenizing, where 32000 tokens are reserved. We use UN corpus and Common Crawl parallel corpus with the size of 1https://github.com/pytorch/fairseq Page 5. Total Params Training Params …","url":["https://www.aclweb.org/anthology/2020.eamt-1.4.pdf"]} -{"year":"2020","title":"Efficiently Reusing Old Models Across Languages via Transfer Learning","authors":["T Kocmi, O Bojar - Proceedings of the 22nd Annual Conference of the …, 2020"],"snippet":"… EN - Russian 12.6M News Commentary, Yandex, and UN Corpus WMT 2012 WMT 2018 EN - French 34.3M Commoncrawl, Europarl, Giga FREN, News commentary, UN corpus WMT 2013 WMT dis. 2015 Table 2: Corpora used for each language pair …","url":["https://www.aclweb.org/anthology/2020.eamt-1.3.pdf"]} -{"year":"2020","title":"Embedding Compression with Isotropic Iterative Quantization","authors":["S Liao, J Chen, Y Wang, Q Qiu, B Yuan - arXiv preprint arXiv:2001.05314, 2020"],"snippet":"… We perform experiments with the GloVe embedding (Pennington et al. 2014) and the HDC embedding (Sun et al. 2015). The GloVe embedding is trained from 42B tokens of Common Crawl data. The HDC Table 1: Experiment …","url":["https://arxiv.org/pdf/2001.05314"]} -{"year":"2020","title":"Embedding Compression with Right Triangle Similarity Transformations","authors":["H Song, D Zou, L Hu, J Yuan - International Conference on Artificial Neural Networks, 2020"],"snippet":"… model. 4.1 Experimental Setup. Pre-trained Continuous Embeddings. We conduct experiments on GloVe [16] and fasttext [1]. 
GloVe embeddings have been trained on 42B tokens of Common Crawl data with 400k words. Fasttext …","url":["https://link.springer.com/chapter/10.1007/978-3-030-61616-8_62"]} -{"year":"2020","title":"EmoDet2: Emotion Detection in English Textual Dialogue using BERT and BiLSTM Models","authors":["H Al-Omari, MA Abdullah, S Shaikh - 2020 11th International Conference on …, 2020"],"snippet":"… Moreover, we have encoded the words in the conversation using Word2vec, Glove Wiki, and Glove Common Crawl packages … The hyperparameters as follow: Dropout = 0.4, the text in Word2Vec and Glove Wiki are lowered …","url":["https://ieeexplore.ieee.org/abstract/document/9078946/"]} -{"year":"2020","title":"Emotion Aided Dialogue Act Classification for Task-Independent Conversations in a Multi-modal Framework","authors":["T Saha, D Gupta, S Saha, P Bhattacharyya - Cognitive Computation"],"snippet":"… To extract textual features, a convolutional neural network (CNN) [48]–based approach is used. Pretrained GloVe [49] embeddings trained on the CommonCrawl corpus of dimension 300 have been used to represent words as word vectors …","url":["https://link.springer.com/article/10.1007/s12559-019-09704-5"]} -{"year":"2020","title":"Employing distributional semantics to organize task-focused vocabulary learning","authors":["HS Ponnusamy, D Meurers - arXiv preprint arXiv:2011.11115, 2020"],"snippet":"… graph, we start with a distributional semantic vector representation of each word, which we obtain from the pre-trained model of GloVe (Pennington et al., 2014) based on the co-occurrence statistics of the the words form a large …","url":["https://arxiv.org/pdf/2011.11115"]} -{"year":"2020","title":"Empowering Architects and Designers: A Classification of What Functions to Accelerate in Storage","authors":["C Zou, AA Chien"],"snippet":"Page 1. 
Empowering Architects and Designers: A Classification of What Functions to Accelerate in Storage Chen Zou chenzou@uchicago.edu University of Chicago Andrew A. Chien achien@cs.uchicago.edu University of Chicago …","url":["https://newtraell.cs.uchicago.edu/files/tr_authentic/TR-2020-02.pdf"]} -{"year":"2020","title":"End to end approach for i2b2 2012 challenge based on Cross-lingual models","authors":["EA Santamaría - 2020"],"snippet":"… Joshi et al., 2019). Unlike mBERT who has been trained on Wikipedia, XLM-RoBERT uses the CommonCrawl(Conneau et al., 2019a) corpus for its training. In this section we explain step by step our approach. First we adapt …","url":["https://addi.ehu.es/bitstream/handle/10810/48623/MAL-Edgar_Andres.pdf?sequence=1"]} -{"year":"2020","title":"End-to-End Simultaneous Translation System for IWSLT2020 Using Modality Agnostic Meta-Learning","authors":["HJ Han, MA Zaidi, SR Indurthi, NK Lakumarapu, B Lee… - Proceedings of the 17th …, 2020"],"snippet":"… We evaluate our system on the MuST-C Dev set. Our parallel corpus of WMT19 consists of Europarl v9, ParaCrawl v3, Common Crawl, News Commentary v14, Wiki Titles v1 and Documentsplit Rapid for the German-English language pair …","url":["https://www.aclweb.org/anthology/2020.iwslt-1.5.pdf"]} -{"year":"2020","title":"Energy-Based Models for Text","authors":["A Bakhtin, Y Deng, S Gross, M Ott, MA Ranzato… - arXiv preprint arXiv …, 2020"],"snippet":"… (2015); Kiros et al. (2015), which consists of fiction books in 16 different genres, totaling about half a billion words. 
• CCNews: We collect a de-duplicated subset of the English portion of the CommonCrawl news …","url":["https://arxiv.org/pdf/2004.10188"]} -{"year":"2020","title":"English Intermediate-Task Training Improves Zero-Shot Cross-Lingual Transfer Too","authors":["J Phang, PM Htut, Y Pruksachatkun, H Liu, C Vania… - arXiv preprint arXiv …, 2020"],"snippet":"… 5XLM-R Large (Conneau et al., 2019) is a 550m-parameter variant of the RoBERTa masked language model (Liu et al., 2019b) trained on a cleaned version of CommonCrawl on 100 languages. 6Excluded in this draft due to implementation issues. Page 5 …","url":["https://arxiv.org/pdf/2005.13013"]} -{"year":"2020","title":"Enhanced-RCNN: An Efficient Method for Learning Sentence Similarity","authors":["S Peng, H Cui, N Xie, S Li, J Zhang, X Li - Proceedings of The Web Conference 2020, 2020"],"snippet":"… Base model. Model parameter size time (s/batch) BERT-Base 102.2M 0.23 ± 0.20 Enhanced-RCNN 7.7M 0.02 ± 0.01 from the 840B Common Crawl corpus [21]. We set the hidden size as 192 for all BiGRU layers. Ant Financial …","url":["https://dl.acm.org/doi/pdf/10.1145/3366423.3379998"]} -{"year":"2020","title":"Enhancing Word Embeddings with Knowledge Extracted from Lexical Resources","authors":["M Biesialska, B Rafieian, MR Costa-jussà - arXiv preprint arXiv:2005.10048, 2020"],"snippet":"… Dinu et al., 2015; Artetxe et al., 2017, 2018). Moreover, GloVe vectors for English were trained on Common Crawl (Pennington et al., 2014). Linguistic Constraints. To perform semantic specialization of word vector spaces, we …","url":["https://arxiv.org/pdf/2005.10048"]} -{"year":"2020","title":"Entity-Switched Datasets: An Approach to Auditing the In-Domain Robustness of Named Entity Recognition Models","authors":["O Agarwal, Y Yang, BC Wallace, A Nenkova - arXiv preprint arXiv:2004.04123, 2020"],"snippet":"… al., 2019). 
For the first two, we used 300-d cased GloVe (Pennington et al., 2014) vectors trained on Common Crawl.7 For BERT, we use the public large8 uncased9 model and apply the default fine-tuning strategy. We use …","url":["https://arxiv.org/pdf/2004.04123"]} -{"year":"2020","title":"Entrepreneurial Organizations and the Use of Strategic Silence","authors":["W Shi, M Weber - Proceedings of the 54th Hawaii International …"],"snippet":"… number of competing apps for a particular keyword), chart rankings (current ranking position for a keyword), difficulty (the popularity of apps including reviews and ratings) and traffic (eg, autosuggestion when typing in the store …","url":["https://scholarspace.manoa.hawaii.edu/bitstream/10125/71247/0506.pdf"]} -{"year":"2020","title":"Establishing a New State-of-the-Art for French Named Entity Recognition","authors":["PJO Suárez, Y Dupont, B Muller, L Romary, B Sagot - LREC 2020-12th Language …, 2020"],"snippet":"… They use zero to three of the following vector representations: FastText non-contextual embeddings (Bojanowski et al., 2017), the FrELMo contextual language model ob- tained by training the ELMo architecture on the OSCAR …","url":["https://hal.inria.fr/hal-02617950/document"]} -{"year":"2020","title":"Estimating educational outcomes from students' short texts on social media","authors":["I Smirnov - EPJ Data Science, 2020"],"snippet":"… We obtained significantly better results with a model that used word-embeddings (see Methods). We also find that embeddings trained on the VK corpus outperform models trained on the Wikipedia and Common Crawl corpora (Table 1). 
Page 6 …","url":["https://link.springer.com/content/pdf/10.1140/epjds/s13688-020-00245-8.pdf"]} -{"year":"2020","title":"Estimating Mutual Information Between Dense Word Embeddings","authors":["V Zhelezniak, A Savkov, N Hammerla - Proceedings of the 58th Annual Meeting of …, 2020"],"snippet":"… Our focus here is on fastText vectors (Bojanowski et al., 2017) trained on Common Crawl (600B tokens), as previous literature suggests that among unsupervised vectors fastText yields the best performance for all tasks and …","url":["https://www.aclweb.org/anthology/2020.acl-main.741.pdf"]} -{"year":"2020","title":"Estimating the influence of auxiliary tasks for multi-task learning of sequence tagging tasks","authors":["F Schröder, C Biemann"],"snippet":"Page 1. Estimating the influence of auxiliary tasks for multi-task learning of sequence tagging tasks Fynn Schröder Language Technology Group Universität Hamburg Hamburg, Germany fschroeder@informatik.uni-hamburg.de …","url":["https://www.inf.uni-hamburg.de/en/inst/ab/lt/publications/2020-schroeder-biemann-acl20-dataset-similarity.pdf"]} -{"year":"2020","title":"Evaluating cross-lingual textual similarity on dictionary alignment problem","authors":["Y Sever, G Ercan - Language Resources and Evaluation, 2020"],"snippet":"… 2018) provides embeddings for 157 different languages trained on Wikipedia and Common Crawl Footnote 3 . 
For efficiency concerns, each embedding set of the 7 languages are pruned to the the most frequent \\(500\\times 10^3\\) words in each language …","url":["https://link.springer.com/article/10.1007/s10579-020-09498-1"]} -{"year":"2020","title":"Evaluating German Transformer Language Models with Syntactic Agreement Tests","authors":["K Zaczynska, N Feldhus, R Schwarzenberg…"],"snippet":"… 1 The first model which we refer to as GBERTlarge is a community model provided by the Bavarian State Library.2 It was trained on multiple German corpora including a recent Wikipedia dump, EU Bookshop corpus …","url":["http://ceur-ws.org/Vol-2624/paper7.pdf"]} -{"year":"2020","title":"Evaluating Multilingual BERT for Estonian","authors":["C Kittask, K Milintsevich, K Sirts - arXiv preprint arXiv:2010.00454, 2020"],"snippet":"… 1Corresponding Author: Claudia Kittask; E-mail: claudiakittask@gmail.com Page 2. and cross-lingual RoBERTa (XLM-RoBERTa) [3], which was trained on much larger CommonCrawl corpora and also includes 100 languages …","url":["https://arxiv.org/pdf/2010.00454"]} -{"year":"2020","title":"Evaluating Sentence Representations for Biomedical Text: Methods and Experimental Results","authors":["NS Tawfik, MR Spruit - Journal of Biomedical Informatics, 2020"],"snippet":"… 3.2. Embedding Methods. GloVe We use the pre-trained embeddings consisting of 2.2 million vocabulary words available at https://nlp.stanford.edu/projects/glove/ which were trained on the Common Crawl (840B tokens) dataset …","url":["https://www.sciencedirect.com/science/article/pii/S1532046420300253"]} -{"year":"2020","title":"Evaluating Word Embeddings on Low-Resource Languages","authors":["N Stringham, M Izbicki - Proceedings of the First Workshop on Evaluation and …, 2020"],"snippet":"… Grave et al. (2018) trained FastText embeddings on 157 languages using data from the Common Crawl project. 
But they were only able to explicitly evaluate 10 of these language models using the analogy task due to the …","url":["https://www.aclweb.org/anthology/2020.eval4nlp-1.17.pdf"]} -{"year":"2020","title":"Evaluation of related news recommendations using document similarity methods","authors":["M Pranjic, V Podpecan, M Robnik-Šikonja, S Pollak"],"snippet":"… RoBERTa (Liu et al., 2019). It uses the sentence piece tokenizer and it is trained with the masked language model objective (MLM) on the CommonCrawl data in 100 languages, including Croatian. Similar to the mBERT, all …","url":["http://nl.ijs.si/jtdh20/pdf/JT-DH_2020_Pranjic-et-al_Evaluation-of-related-news-recommendations-using-document-similarity-methods.pdf"]} -{"year":"2020","title":"Event Detection on Literature by Utilizing Word Embedding","authors":["J Chun, C Kim - International Conference on Database Systems for …, 2020"],"snippet":"… On the contrary, Neural-based methods have the limitation that they ignore semantic relationships in a text. 2.3 Facebook Pre-trained Word Vectors. We chose pre-trained word vectors published by Facebook, trained on …","url":["https://link.springer.com/chapter/10.1007/978-3-030-59413-8_21"]} -{"year":"2020","title":"Evidence Integration for Multi-hop Reading Comprehension with Graph Neural Networks","authors":["L Song, Z Wang, M Yu, Y Zhang, R Florian, D Gildea - IEEE Transactions on …, 2020"],"snippet":"Page 1. 1041-4347 (c) 2020 IEEE. Personal use is permitted, but republication/ redistribution requires IEEE permission. See http://www.ieee.org/ publications_standards/publications/rights/index.html for more information. 
This …","url":["https://ieeexplore.ieee.org/abstract/document/9051845/"]} -{"year":"2020","title":"Examining the rhetorical capacities of neural language models","authors":["Z Zhu, C Pan, M Abdalla, F Rudzicz - arXiv preprint arXiv:2010.00153, 2020"],"snippet":"… Non-contextualized word embeddings We consider two popular word embeddings here: • GloVe (Pennington et al., 2014) contains 2.2M vocabulary items and produces 300dimensional word vectors. The GloVe embedding …","url":["https://arxiv.org/pdf/2010.00153"]} -{"year":"2020","title":"Experience Grounds Language","authors":["Y Bisk, A Holtzman, J Thomason, J Andreas, Y Bengio… - arXiv preprint arXiv …, 2020"],"snippet":"… (2013) trained on 1.6 billion tokens, while Pennington et al. (2014) scaled up to 840 billion tokens from Common Crawl. Recent approaches have made progress by substantially increasing the number of model parameters …","url":["https://arxiv.org/pdf/2004.10151"]} -{"year":"2020","title":"Experiencers, Stimuli, or Targets: Which Semantic Roles Enable Machine Learning to Infer the Emotions?","authors":["L Oberländer, K Reich, R Klinger - arXiv preprint arXiv:2011.01599, 2020"],"snippet":"… 1For ET, 90% of the annotated experiencers are the authors of the tweets without corresponding span annotation. 2We use 42B tokens, pretrained on CommonCrawl (Pennington et al., 2014), https://nlp.stanford.edu …","url":["https://arxiv.org/pdf/2011.01599"]} -{"year":"2020","title":"Experiments on Paraphrase Identification Using Quora Question Pairs Dataset","authors":["A Chandra, R Stefanus - arXiv preprint arXiv:2006.02648, 2020"],"snippet":"… matching result into a fix-length matching vector and continued to last layer of the model which is a fully connected layer. 
The paper use GloVe as a pretrained word vector from 840B Common Crawl corpus and apply it to Quora …","url":["https://arxiv.org/pdf/2006.02648"]} -{"year":"2020","title":"Explicit Alignment Objectives for Multilingual Bidirectional Encoders","authors":["J Hu, M Johnson, O Firat, A Siddhant, G Neubig - arXiv preprint arXiv:2010.07972, 2020"],"snippet":"… Sentence Alignment Our first proposed objective encourages cross-lingual alignment of sentence 1 AMBER is trained on 26GB parallel data and 80GB monolingual Wikipedia data, while XLM-R-large is trained on 2.5TB …","url":["https://arxiv.org/pdf/2010.07972"]} -{"year":"2020","title":"Explicit Sentence Compression for Neural Machine Translation","authors":["Z Li, R Wang, K Chen, M Utiyama, E Sumita, Z Zhang… - arXiv preprint arXiv …, 2019"],"snippet":"… for NMT evaluation. For the EN-DE translation task, 4.43 M bilingual sentence pairs from the WMT14 dataset were used as training data, including Common Crawl, News Commentary, and Europarl v7. The newstest2013 and …","url":["https://arxiv.org/pdf/1912.11980"]} -{"year":"2020","title":"Exploiting Categorization of Online News for Profiling City Areas","authors":["A Bondielli, P Ducange, F Marcelloni - 2020 IEEE Conference on Evolving and …, 2020"],"snippet":"… FastText is created by Facebook and is based on Neural Networks. Pre-trained word vectors are available for 157 languages, trained on Common Crawl and Wikipedia. 
More specifically, we chose to use the pre-trained Italian model …","url":["https://ieeexplore.ieee.org/abstract/document/9122777/"]} -{"year":"2020","title":"Exploiting Class Labels to Boost Performance on Embedding-based Text Classification","authors":["A Zubiaga - arXiv preprint arXiv:2006.02104, 2020"],"snippet":"… 4.2 Word Embedding Models & Classifiers We tested four word embedding models: (1) Google's Word2Vec model (gw2v), (2) a Twitter Word2Vec model5 (tw2v) [10], (3) GloVe embeddings trained from Common Crawl (cglove) …","url":["https://arxiv.org/pdf/2006.02104"]} -{"year":"2020","title":"Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning","authors":["T Shen, Y Mao, P He, G Long, A Trischler, W Chen - arXiv preprint arXiv:2004.14224, 2020"],"snippet":"Page 1. EXPLOITING STRUCTURED KNOWLEDGE IN TEXT VIA GRAPH-GUIDED REPRESENTATION LEARNING Tao Shen∗ University of Technology Sydney tao.shen@student.uts.edu.au Yi Mao, Pengcheng …","url":["https://arxiv.org/pdf/2004.14224"]} -{"year":"2020","title":"Exploring Different Methods for Solving Analogies with Portuguese Word Embeddings","authors":["T Sousa, H Gonçalo Oliveira, A Alves - 9th Symposium on Languages, Applications …, 2020"],"snippet":"… from several sources. Such sources include raw text (ie, an ensemble of Google News word2vec, Common Crawl GloVe, Open Subtitles fastText) combined with the ConceptNet semantic network with retrofitting. 1 https://github …","url":["https://drops.dagstuhl.de/opus/volltexte/2020/13022/pdf/OASIcs-SLATE-2020-9.pdf"]} -{"year":"2020","title":"Exploring Event Extraction Across Languages","authors":["S Prabhu - 2020"],"snippet":"Page 1. 
Exploring Event Extraction Across Languages Thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Computational Linguistics by Research by Suhan Prabhu 201525118 suhan.prabhuk@research.iiit.ac.in …","url":["http://web2py.iiit.ac.in/research_centres/publications/download/mastersthesis.pdf.b481c850852a12a8.737568616e5f66696e616c5f7468657369732e706466.pdf"]} -{"year":"2020","title":"Exploring Neural Network Approaches in Automatic Personality Recognition of Filipino Twitter Users","authors":["E Tighe, O Aran, C Cheng"],"snippet":"… FastText differs by learning character-grams, as opposed to word-grams. We utilize the embeddings trained on Common Crawl and Wikipedia data2 – specifically, the embeddings for English and Tagalog (both 300 dimensions) …","url":["https://www.researchgate.net/profile/Edward_Tighe/publication/343189230_Exploring_Neural_Network_Approaches_in_Automatic_Personality_Recognition_of_Filipino_Twitter_Users/links/5f1ae2aea6fdcc9626ad4c4d/Exploring-Neural-Network-Approaches-in-Automatic-Personality-Recognition-of-Filipino-Twitter-Users.pdf"]} -{"year":"2020","title":"Exploring Swedish & English fastText Embeddings with the Transformer","authors":["TP Adewumi, F Liwicki, M Liwicki - arXiv preprint arXiv:2007.16007, 2020"],"snippet":"… We obtain better performance in both languages on the downstream task with far smaller training data, compared to recently released, common crawl versions and character n-grams appear useful for Swedish, a morphologically rich language …","url":["https://arxiv.org/pdf/2007.16007"]} -{"year":"2020","title":"Exploring the Dominance of the English Language on the Websites of EU Countries","authors":["A Giannakoulopoulos, M Pergantis, N Konstantinou… - Future Internet, 2020"],"snippet":"… For this purpose, we used information obtained from Common Crawl, a “repository of web crawl data that is universally accessible and analyzable” [34]. 
Among the data Common Crawl offers is an index of every available webpage …","url":["https://www.mdpi.com/1999-5903/12/4/76/pdf"]} -{"year":"2020","title":"Extended Overview of CLEF HIPE 2020: Named Entity Processing on Historical Newspapers","authors":["A Flückiger, S Clematide"],"snippet":"Page 1. Extended Overview of CLEF HIPE 2020: Named Entity Processing on Historical Newspapers Maud Ehrmann1[0000−0001−9900−2193], Matteo Romanello1[0000−0002− 1890−2577], Alex Flückiger2, and Simon Clematide2[0000−0003−1365−0662] …","url":["http://ceur-ws.org/Vol-2696/paper_255.pdf"]} -{"year":"2020","title":"Extended study on using pretrained language models and YiSi-1 for machine translation evaluation","authors":["C Lo - Proceedings of the Fifth Conference on Machine …, 2020"],"snippet":"… The differencesbetweenXLM-RandBERTare1)XLM-Ris trained on the CommonCrawl corpus which is significantly larger than the Wikipedia training data used by BERT; 2) instead of a uniform data sampling rate used in BERT …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.99.pdf"]} -{"year":"2020","title":"Extending Tables using a Web Table Corpus","authors":["S Sarabchi - 2020"],"snippet":"… Lehmberg et al. [12] gathered a web table corpus containing 233 million tables from the 2015 version of the CommonCrawl2, using a table extraction method similar to that of [11]. 
The table corpus … and 2https://commoncrawl.org/about …","url":["https://era.library.ualberta.ca/items/4f9f40b8-69ba-4c24-85b4-41f17517cc59/view/dfd3938b-a7d1-4b4c-85e0-4087c36d5713/Sarabchi_Saeed_202002_MSc.pdf"]} -{"year":"2020","title":"Extracting Family History of Patients From Clinical Narratives: Exploring an End-to-End Solution With Deep Learning Models","authors":["X Yang, H Zhang, X He, J Bian, Y Wu - JMIR Medical Informatics, 2020"],"snippet":"… We screened 4 different word embeddings following a similar procedure reported in our previous study [46] and found that the Common Crawl embeddings—released by Facebook and trained using the fastText on the …","url":["https://medinform.jmir.org/2020/12/e22982/"]} -{"year":"2020","title":"Extracting Training Data from Large Language Models","authors":["N Carlini, F Tramer, E Wallace, M Jagielski… - arXiv preprint arXiv …, 2020"],"snippet":"… In particular, we select samples from a subset of Common Crawl6 to feed as context to the model.7 6http://commoncrawl.org/ 7It is possible there is some intersection between these two datasets, effectively allowing this strategy to “cheat” …","url":["https://arxiv.org/pdf/2012.07805"]} -{"year":"2020","title":"Extremely Low Bit Transformer Quantization for On-Device Neural Machine Translation","authors":["I Chung, B Kim, Y Choi, SJ Kwon, Y Jeon, B Park… - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. 
arXiv:2009.07453v1 [cs.LG] 16 Sep 2020 Extremely Low Bit Transformer Quantization for On-Device Neural Machine Translation Insoo Chung∗ Byeongwook Kim∗ Yoonjung Choi Se Jung Kwon Yongkweon …","url":["https://arxiv.org/pdf/2009.07453"]} -{"year":"2020","title":"Facebook AI's WMT20 News Translation Task Submission","authors":["PJ Chen, A Lee, C Wang, N Goyal, A Fan… - arXiv preprint arXiv …, 2020"],"snippet":"… we use all the available monolingual data, eg NewsCrawl + CommonCrawl + Wikipedia dumps for Tamil, and CommonCrawl for Inuktitut … unconstrained track, we use Tamil monolingual data and Tamil-English mined bitext data …","url":["https://arxiv.org/pdf/2011.08298"]} -{"year":"2020","title":"Factors affecting sentence similarity and paraphrasing identification","authors":["M Alian, A Awajan - International Journal of Speech Technology, 2020"],"snippet":"… Grave et al. (2018) contributed in a pre-trained word vector representation for 157 languages including Arabic. The word vectors have been trained on Wikipedia and the Common Crawl corpus using an extension of the FastText model with subword information …","url":["https://link.springer.com/article/10.1007/s10772-020-09753-4"]} -{"year":"2020","title":"Fairness in AI-based Recruitment and Career Pathway Optimization","authors":["DF Mujtaba - 2020"],"snippet":"Page 1. FAIRNESS IN AI-BASED RECRUITMENT AND CAREER PATHWAY OPTIMIZATION By Dena Freshta Mujtaba A THESIS Submitted to Michigan State University in partial fulfillment of the requirements for the degree of …","url":["http://search.proquest.com/openview/f2938ed72cda2c3b656a0db1b2be7320/1?pq-origsite=gscholar&cbl=18750&diss=y"]} -{"year":"2020","title":"Fake News Data Collection and Classification: Iterative Query Selection for Opaque Search Engines with Pseudo Relevance Feedback","authors":["A Elyashar, M Reuben, R Puzis - arXiv preprint arXiv:2012.12498, 2020"],"snippet":"Page 1. 
FAKE NEWS DATA COLLECTION AND CLASSIFICATION: ITERATIVE QUERY SELECTION FOR OPAQUE SEARCH ENGINES WITH PSEUDO RELEVANCE FEEDBACK APREPRINT Aviad Elyashar, Maor Reuben …","url":["https://arxiv.org/pdf/2012.12498"]} -{"year":"2020","title":"Fake News Detection","authors":["IP Marín, D Arroyo - Conference on Complex, Intelligent, and Software …, 2020"],"snippet":"… 4. https://spacy.io/ [Last accessed 26 Jan 2020]. 5. en_core_web_lg, pre-trained English statistical models. English multi-task CNN trained on OntoNotes, with GloVe vectors trained on Common Crawl. 6. https …","url":["https://link.springer.com/chapter/10.1007/978-3-030-57805-3_22"]} -{"year":"2020","title":"Fake news spreader detection using neural tweet aggregation","authors":["O Bakhteev, A Ogaltsov, P Ostroukhov"],"snippet":"… As a preprocessing step we lowercased tweets and removed stop-words and punctuation. We did not use any special preprocessing. For the word embeddings we used fastText [1] trained on Common Crawl and Wikipedia with dimension set to 100 …","url":["https://pan.webis.de/downloads/publications/papers/bakhteev_2020.pdf"]} -{"year":"2020","title":"Fake News Spreader Identification in Twitter using Ensemble Modeling","authors":["A Hashemi, MR Zarei, MR Moosavi, M Taheri"],"snippet":"… Page 4. 
The sources used in the English model for training data are OntoNotes 51 and GloVe Common Crawl2 and the Spanish model utilizes UD Spanish AnCora v2.53, WikiNER4, OSCAR (Common Crawl)5 and Wikipedia …","url":["https://pan.webis.de/downloads/publications/papers/hashemi_2020.pdf"]} -{"year":"2020","title":"Fashion-IQ 2020 Challenge 2nd Place Team's Solution","authors":["M Shin, Y Cho, S Hong - arXiv preprint arXiv:2007.06404, 2020"],"snippet":"… For training the LSTM and the GRU from scratch, we initialize the word embedding with the concatenation of three GloVe vectors2 learned from Wikipedia, Twitter, and Common Crawl that results in 900-dimensional input …","url":["https://arxiv.org/pdf/2007.06404"]} -{"year":"2020","title":"Fast entity linking in noisy text environments","authors":["SM Shah, MD Conover, PN Skomoroch, MT Hayes… - US Patent 10,733,383, 2020"],"snippet":"… entry). In some embodiments, the candidate dictionary is determined by using the hyperlinks on a Wikipedia page, or a Common Crawl page to identify surface forms (the hyperlink anchor text) that point to a specific page. Each …","url":["http://www.freepatentsonline.com/10733383.html"]} -{"year":"2020","title":"Fast Indexes for Gapped Pattern Matching","authors":["M Cáceres, SJ Puglisi, B Zhukova - International Conference on Current Trends in …, 2020"],"snippet":"… Open image in new window Fig. 2. Fig. 2. Time to search a 2GiB subset of the Common Crawl web collection (commoncrawl.org). 
for 20 VLG patterns (\\(k=2\\), \\(delta _i,\\varDelta _i = \\langle 100,110\\rangle \\)), composed of very …","url":["https://link.springer.com/chapter/10.1007/978-3-030-38919-2_40"]} -{"year":"2020","title":"FinBERT: A Pre-trained Financial Language Representation Model for Financial Text Mining","authors":["Z Liu, D Huang, K Huang, Z Li, J Zhao"],"snippet":"… sizes, totaling over 61 GB text: • English Wikipedia1 and BooksCorpus (Zhu et al., 2015), which are the original training data used to train BERT (totaling 13GB, 3.31B words); • FinancialWeb (totaling 24GB, 6.38B words), which …","url":["https://www.ijcai.org/Proceedings/2020/0622.pdf"]} -{"year":"2020","title":"Finding of asymmetric relation between words","authors":["M Muraoka, T Nasukawa, KMA Salam - US Patent App. 16/287,326, 2020"],"snippet":"US20200272696A1 - Finding of asymmetric relation between words - Google Patents. Finding of asymmetric relation between words. Download PDF Info. Publication number US20200272696A1. US20200272696A1 US16/287,326 …","url":["https://patents.google.com/patent/US20200272696A1/en"]} -{"year":"2020","title":"Finding the needle in the haystack: Fine-tuning transformers to classify protest events in a sea ofnews articles, with Bayesian uncertainty measures","authors":["C Ghai - 2020"],"snippet":"Page 1. Finding the needle in the haystack: Fine-tuning transformers to classify protest events in a sea of news articles, with Bayesian uncertainty measures Chris Ghai Master's Thesis, Spring 2020 Page 2. 
This master's thesis …","url":["https://www.duo.uio.no/bitstream/handle/10852/79984/1/chris_ghai_thesis.pdf"]} -{"year":"2020","title":"Findings of the 2020 conference on machine translation (wmt20)","authors":["L Barrault, M Biesialska, O Bojar, MR Costa-jussà… - Proceedings of the Fifth …, 2020"],"snippet":"… Distinct words – 76,013 – 6,165 178,453 85,189 Common Crawl Parallel Corpus German ↔ English Czech ↔ English Russian ↔ English French ↔ German … Common Crawl Language Model Data English German Czech Russian Polish Sent …","url":["https://www.aclweb.org/anthology/2020.wmt-1.1.pdf"]} -{"year":"2020","title":"Findings of the WMT 2020 Biomedical Translation Shared Task: Basque, Italian and Russian as New Additional Languages","authors":["R Bawden, G Di Nunzio, C Grozea, I Unanue, A Yepes… - 5th Conference on Machine …, 2020"],"snippet":"Page 1. HAL Id: hal-02986356 https://hal.inria.fr/hal-02986356 Submitted on 2 Nov 2020 HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not …","url":["https://hal.inria.fr/hal-02986356/document"]} -{"year":"2020","title":"Findings of the WMT 2020 shared task on parallel corpus filtering and alignment","authors":["P Koehn, V Chaudhary, A El-Kishky, N Goyal, PJ Chen… - Proceedings of the Fifth …, 2020"],"snippet":"… Noisy parallel documents and parallel sentences were sourced from the CCAligned2 dataset (El- Kishky et al., 2020a), a massive collection of cross-lingual web documents covering over 8k language pairs aligned from …","url":["https://www.aclweb.org/anthology/2020.wmt-1.78.pdf"]} -{"year":"2020","title":"Fine-Grained Argument Unit Recognition and Classification","authors":["D Trautmann, J Daxenberger, C Stab, H Schütze…"],"snippet":"… Table 1). This also increases comparability with related work. 
The topics are general enough to have good coverage in Common Crawl … 4http://commoncrawl.org/2016/02/ february-2016-crawlarchive-now-available/ 5https://www.elastic.co/products/elasticsearch …","url":["https://www.researchgate.net/profile/Dietrich_Trautmann/publication/332590723_Robust_Argument_Unit_Recognition_and_Classification/links/5e32b02ca6fdccd96576e059/Robust-Argument-Unit-Recognition-and-Classification.pdf"]} -{"year":"2020","title":"Fine-grained entity type classification using GRU with self-attention","authors":["K Dhrisya, G Remya, A Mohan - International Journal of Information Technology, 2020"],"snippet":"… The dictionary for the proposed model is built with GLoVe vectors of 300 dimensions. It is a pre-trained word embedding created by utilizing common Crawl 840B. These vectors on various corpora can be downloaded from Stanford GLoVe Website. Parameter settings …","url":["https://link.springer.com/article/10.1007/s41870-020-00499-5"]} -{"year":"2020","title":"FLERT: Document-Level Features for Named Entity Recognition","authors":["S Schweter, A Akbik - arXiv preprint arXiv:2011.06993, 2020"],"snippet":"… Conneau et al. (2019). We use the xlm-roberta-large model in our experiments, trained on 2.5TB of data from a cleaned Common Crawl corpus (Wenzek et al., 2020) for 100 different languages Embeddings (+WE). For each setup …","url":["https://arxiv.org/pdf/2011.06993"]} -{"year":"2020","title":"Forum Duplicate Question Detection by Domain Adaptive Semantic Matching","authors":["Z Xu, H Yuan - IEEE Access, 2020"],"snippet":"… B. MODEL IMPLEMENTATION The word embedding was initialized with 300-dimensional GloVe [33] vectors which are pretrained in the 840B Common Crawl corpus. The Embedding was set to be trainable. 
The …","url":["https://ieeexplore.ieee.org/iel7/6287639/8948470/09043551.pdf"]} -{"year":"2020","title":"Four dimensions characterizing comprehensive trait judgments of faces","authors":["C Lin, U Keles, R Adolphs - 2020"],"snippet":"… and text classi cation using a neural network provided within the FastText library40; this neural network had been trained on Common Crawl data of 600 billion words to predict the identity of a word given a context. We then applied …","url":["https://www.researchsquare.com/article/rs-41215/latest.pdf"]} -{"year":"2020","title":"French Contextualized Word-Embeddings with a sip of CaBeRnet: a New French Balanced Reference Corpus","authors":["M Fabre, PJO Suárez, B Sagot, ÉV de la Clergerie - CMLC-8-8th Workshop on the …, 2020","M Popa-Fabre, PJO Suárez, B Sagot… - Proceedings of the 8th …, 2020"],"snippet":"… al., 2019), we decided to include in our comparison a corpus of French text extracted from Common Crawl8. We … 8More information available at https://commoncrawl … OSCAR gathers a set of monolingual text extracted …","url":["https://hal.inria.fr/hal-02678358/document","https://www.aclweb.org/anthology/2020.cmlc-1.3.pdf"]} -{"year":"2020","title":"Frequency-dependent Regularization in Constituent Ordering Preferences","authors":["Z Liu, E Morgan"],"snippet":"… a total of around 9 billion tokens. This corpus consists of web page data from both Common Crawl and Wikipedia and is automatically parsed with UDPipe (Straka & Straková, 2017). 
Within this corpus, each token is represented …","url":["https://www.researchgate.net/profile/Zoey_Liu2/publication/341712949_Frequency-dependent_Regularization_in_Constituent_Ordering_Preferences/links/5ecffdb292851c9c5e65d021/Frequency-dependent-Regularization-in-Constituent-Ordering-Preferences.pdf"]} -{"year":"2020","title":"From Chest X-Rays to Radiology Reports: A Multimodal Machine Learning Approach","authors":["S Singh, S Karimi, K Ho-Shon, L Hamey - 2019 Digital Image Computing: Techniques …, 2019"],"snippet":"… Also, on the text side, we use the Glove [30] word embeddings having a 300-dimensional embedding vector for each word, and have been trained on a generic text corpus named Common Crawl having 42B tokens, 1.9M vocab …","url":["https://ieeexplore.ieee.org/abstract/document/8945819/"]} -{"year":"2020","title":"From Dataset Recycling to Multi-Property Extraction and Beyond","authors":["T Dwojak, M Pietruszka, Ł Borchmann, J Chłędowski… - arXiv preprint arXiv …, 2020"],"snippet":"… T5. Recently proposed T5 model (Raffel et al., 2020) is a Transformer model pretrained on a cleaned version of CommonCrawl. T5 is famous for achieving excellent performance on the SuperGLUE benchmark (Wang et al., 2019) …","url":["https://arxiv.org/pdf/2011.03228"]} -{"year":"2020","title":"From Hero to Z\\'eroe: A Benchmark of Low-Level Adversarial Attacks","authors":["S Eger, Y Benz - arXiv preprint arXiv:2010.05648, 2020"],"snippet":"… The reason may be that our noises are not always natural, in the sense of having high support in large datasets such as CommonCrawl or Wikipedia, but they are still within the limits of cognitive abilities of ordinary humans …","url":["https://arxiv.org/pdf/2010.05648"]} -{"year":"2020","title":"From Pixel to Patch: Synthesize Context-aware Features for Zero-shot Semantic Segmentation","authors":["Z Gu, S Zhou, L Niu, Z Zhao, L Zhang - arXiv preprint arXiv:2009.12232, 2020"],"snippet":"Page 1. 
1 From Pixel to Patch: Synthesize Context-aware Features for Zero-shot Semantic Segmentation Zhangxuan Gu, Siyuan Zhou, Li Niu*, Zihan Zhao, Liqing Zhang* Abstract—Zero-shot learning has been actively studied …","url":["https://arxiv.org/pdf/2009.12232"]} -{"year":"2020","title":"From Syntactic Structure to Semantic Relationship: Hypernym Extraction from Definitions by Recurrent Neural Networks Using the Part of Speech Information","authors":["Y Tan, X Wang, T Jia - International Semantic Web Conference, 2020"],"snippet":"… In recent years, much research pay attention to extracting hypernyms from larger data resources via the high precise of pattern-based methods. [25] extract hypernymy relations from the CommonCrawl web corpus using lexico-syntactic patterns …","url":["https://link.springer.com/chapter/10.1007/978-3-030-62419-4_30"]} -{"year":"2020","title":"From Web Crawl to Clean Register-Annotated Corpora","authors":["V Laippala, S Rönnqvist, S Hellström, J Luotolahti… - … of the 12th Web as Corpus …, 2020"],"snippet":"… crawl or extracting data from existing crawl-based datasets, such as Common Crawl1. 
As … CommonCrawl is a free and openly available web crawl maintained by the CommonCrawl … Lately the Common Crawl dataset has …","url":["https://www.aclweb.org/anthology/2020.wac-1.3.pdf"]} -{"year":"2020","title":"From Zero to Hero: On the Limitations of Zero-Shot Cross-Lingual Transfer with Multilingual Transformers","authors":["A Lauscher, V Ravishankar, I Vulić, G Glavaš - arXiv preprint arXiv:2005.00633, 2020"],"snippet":"… It is trained on the CommonCrawl-100 data (Wenzek et al., 2019) of 100 languages … Interestingly, for both high7For XLM-R, we take the reported sizes of languagespecific portions of CommonCrawl-100 from Conneau et al …","url":["https://arxiv.org/pdf/2005.00633"]} -{"year":"2020","title":"From Zero to Hero: On the Limitations of Zero-Shot Language Transfer with Multilingual Transformers","authors":["A Lauscher, V Ravishankar, I Vulić, G Glavaš - … of the 2020 Conference on Empirical …, 2020"],"snippet":"… sampling. XLM on RoBERTa (XLM-R). XLM-R (Conneau et al., 2020) is an instance of RoBERTa, robustly trained on a large multilingual CommonCrawl-100 (CC-100) corpus (Wenzek et al., 2019) covering 100 languages …","url":["https://www.aclweb.org/anthology/2020.emnlp-main.363.pdf"]} -{"year":"2020","title":"Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing","authors":["Z Dai, G Lai, Y Yang, QV Le - arXiv preprint arXiv:2006.03236, 2020"],"snippet":"… ablation studies. 5 Page 6. • Large scale: Pretraining models for 500K steps with batch size 8K on the five datasets used by XLNet [3] and ELECTRA [5] (Wikipedia + Book Corpus + ClueWeb + Gigaword + Common Crawl). 
We will …","url":["https://arxiv.org/pdf/2006.03236"]} -{"year":"2020","title":"Gated Semantic Difference Based Sentence Semantic Equivalence Identification","authors":["X Liu, Q Chen, X Wu, Y Hua, J Chen, D Li, B Tang… - IEEE/ACM Transactions on …, 2020"],"snippet":"… The word embeddings for the quora corpus are 300dimensional GloVe word vectors pre-trained from the 840B Common Crawl corpus [45] and for the LCQMC are 300-dimensional word vectors pre-trained from the Chinese 5[Online] …","url":["https://ieeexplore.ieee.org/abstract/document/9222233/"]} -{"year":"2020","title":"Gender Bias in Multilingual Embeddings and Cross-Lingual Transfer","authors":["J Zhao, S Mukherjee, S Hosseini, KW Chang… - arXiv preprint arXiv …, 2020"],"snippet":"… is an OCCUPATION-TITLE” where name is recognized in each language by using the corresponding Named Entity Recognition model from spaCy.5 To control for the same time period for datasets across languages …","url":["https://arxiv.org/pdf/2005.00699"]} -{"year":"2020","title":"Gender Bias in Multilingual Embeddings","authors":["J Zhao, S Mukherjee, S Hosseini, KW Chang…"],"snippet":"… an OCCUPATION-TITLE” where name is recognized in each language by using the corresponding Named Entity Recognition model from spaCy.5 To control for the same time period for datasets across languages …","url":["https://www.researchgate.net/profile/Subhabrata_Mukherjee/publication/340660062_Gender_Bias_in_Multilingual_Embeddings/links/5e97428692851c2f52a6200a/Gender-Bias-in-Multilingual-Embeddings.pdf"]} -{"year":"2020","title":"Gender Detection on Social Networks using Ensemble Deep Learning","authors":["K Kowsari, M Heidarysafa, T Odukoya, P Potter… - arXiv preprint arXiv …, 2020"],"snippet":"… 25d, 50d, 100d, and 200d vectors. This word embedding is trained over even bigger corpora, including Wikipedia and Common Crawl content. 
The objective function is as follows: f(wi − wj, ˜wk) = Pik Pjk (2) where wi is refer to …","url":["https://arxiv.org/pdf/2004.06518"]} -{"year":"2020","title":"Gender stereotype reinforcement: Measuring the gender bias conveyed by ranking algorithms","authors":["A Fabris, A Purpura, G Silvello, GA Susto - Information Processing & Management, 2020"],"snippet":"… Corrado, Dean, 2013). Most frequently, they are learnt from large text corpora available online (such as Wikipedia, Google News and Common Crawl, capturing semantic relationships of words based on their usage. Recent work …","url":["https://arxiv.org/pdf/2009.01334"]} -{"year":"2020","title":"Gender stereotypes are reflected in the distributional structure of 25 languages","authors":["M Lewis, G Lupyan - Nature Human Behaviour, 2020"],"snippet":"Cultural stereotypes such as the idea that men are more suited for paid work and women are more suited for taking care of the home and family, may contribute to gender imbalances in science, technology, engineering and …","url":["https://www.nature.com/articles/s41562-020-0918-6"]} -{"year":"2020","title":"Generalisation of Cyberbullying Detection","authors":["K Richard, L Marc-André - arXiv preprint arXiv:2009.01046, 2020","MA Larochelle, R Khoury"],"snippet":"… We use FastText pre-trained on Common Crawl data featuring 300 dimensions and 2 million word vectors with subword information6 to convert the words into vector representations, of which we concatenate a 60-dimensional binary …","url":["https://arxiv.org/pdf/2009.01046","https://web.ntpu.edu.tw/~myday/doc/ASONAM2020/ASONAM2020_Proceedings/pdf/papers/047_034_296.pdf"]} -{"year":"2020","title":"Generalize Sentence Representation with Self-Inference","authors":["KC Yang, HY Kao"],"snippet":"… Our model is trained with the phrases in the parse trees and tested on the whole sentence. Experimental Settings We initialize word embeddings using the pretrained FastText common-crawl vectors (Mikolov et al. 
2018) and freeze the weights during training …","url":["https://www.aaai.org/Papers/AAAI/2020GB/AAAI-YangKC.7098.pdf"]} -{"year":"2020","title":"Generating Categories for Sets of Entities","authors":["S Zhang, K Balog, J Callan - arXiv preprint arXiv:2008.08428, 2020"],"snippet":"… entity linking for tables and table schema to predicate matching. Ritze et al. [31] propose an iterative method for matching tables to DBpedia. They develop a manually annotated dataset for matching between a Web table corpus …","url":["https://arxiv.org/pdf/2008.08428"]} -{"year":"2020","title":"Generating Diverse Conversation Responses by Creating and Ranking Multiple Candidates","authors":["YP Ruan, ZH Ling, X Zhu, Q Liu, JC Gu - Computer Speech & Language, 2020"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0885230820300048"]} -{"year":"2020","title":"Generating Fact Checking Briefs","authors":["A Fan, A Piktus, F Petroni, G Wenzek, M Saeidi… - Proceedings of the 2020 …, 2020"],"snippet":"… We take the top search hit as the evidence and retrieve the text from CommonCrawl4. Finally, the generated question and retrieved evidence document is provided to the question answering model to generate an answer. 4.1 Question Generation …","url":["https://www.aclweb.org/anthology/2020.emnlp-main.580.pdf"]} -{"year":"2020","title":"Generating fake websites: WikiGen","authors":["M Longland - 2020"],"snippet":"… [2015]. This is a relatively small vector file. 
An alternative considered was the Common Crawl (840B tokens) vectors from Stanford NLP's GloVe [Pennington et al., 2014] but memory issues meant the smaller file was used instead …","url":["https://pdfs.semanticscholar.org/0776/aece84f01a6f593d1748657bf2ec4dec49b4.pdf"]} -{"year":"2020","title":"Generating Keyword Lists Related to Topics Represented by an Array of Topic Records, for Use in Targeting Online Advertisements and Other Uses","authors":["L Palaic, MH Gross, SA Schriber - US Patent App. 16/803,214, 2020"],"snippet":"… For data gathering purposes, a custom heuristic can be used that operates on a Common Crawl Corpus. Documents gathered from a Common Crawl process might be automatically annotated with appropriate topics tags so …","url":["https://patents.google.com/patent/US20200273069A1/en"]} -{"year":"2020","title":"Generating Personalized Product Descriptions from User Reviews","authors":["G Elad, K Radinsky, B Kimelfeld - 2019"],"snippet":"Page 1. Generating Personalized Product Descriptions from User Reviews Guy Elad Technion - Computer Science Department - M.Sc. Thesis MSC-2019-25 - 2019 Page 2. Technion - Computer Science Department …","url":["http://www.cs.technion.ac.il/users/wwwb/cgi-bin/tr-get.cgi/2019/MSC/MSC-2019-25.pdf"]} -{"year":"2020","title":"Generating Query Suggestions for Cross-language and Cross-terminology Health Information Retrieval","authors":["PM Santos, C Teixeira Lopes - Advances in Information Retrieval: 42nd European …, 2020"],"snippet":"… The English collection is provided by the Consumer Health Search Task in the 2018 edition of the CLEF eHealth Lab2. 
This task uses a set of 50 English queries and a document corpus with 5,535,120 web pages acquired from a CommonCrawl dump …","url":["https://link.springer.com/content/pdf/10.1007/978-3-030-45442-5_43.pdf"]} -{"year":"2020","title":"Generating Representative Headlines for News Stories","authors":["X Gu, Y Mao, J Han, J Liu, H Yu, Y Wu, C Yu, D Finnie… - arXiv preprint arXiv …, 2020"],"snippet":"… By fine-tuning the model on human-curated labels, we can combine the two sources of supervision and further improve performance. 6One can use CommonCrawl to fetch web articles. Page 5. Generating Representative Headlines for News Stories …","url":["https://arxiv.org/pdf/2001.09386"]} -{"year":"2020","title":"Generative Language Modeling for Automated Theorem Proving","authors":["S Polu, I Sutskever - arXiv preprint arXiv:2009.03393, 2020"],"snippet":"… We pre-train our models on both GPT-3's post-processed version of CommonCrawl as well as a more reasoning-focused mix of Github, arXiv and Math StackExchange. 7 … 5.3 Pre-training Models are pre-trained on …","url":["https://arxiv.org/pdf/2009.03393"]} -{"year":"2020","title":"Generative Models are Unsupervised Predictors of Page Quality: A Colossal-Scale Study","authors":["D Bahri, Y Tay, C Zheng, D Metzler, C Brunk… - arXiv preprint arXiv …, 2020"],"snippet":"… 3.1 Datasets This section describes the datasets used in our experiments. • Web500M. The core corpora used in our experiments consists of a random sample of 500 million English web documents obtained from the Common Crawl1. • GPT-2-Output …","url":["https://arxiv.org/pdf/2008.13533"]} -{"year":"2020","title":"Geographically-Balanced Gigaword Corpora for 50 Language Varieties","authors":["J Dunn, B Adams - Proceedings of The 12th Language Resources and …, 2020"],"snippet":"… 3. Collecting Geo-Referenced Documents The data for this paper comes from the Common Crawl,2 as processed in the Corpus of Global Language Use (henceforth, CGLU). 
This project includes the Common Crawl data from …","url":["https://www.aclweb.org/anthology/2020.lrec-1.308.pdf"]} -{"year":"2020","title":"Geoparsing the historical Gazetteers of Scotland: accurately computing location in mass digitised texts","authors":["R Filgueira, C Grover, M Terras, B Alex - Proceedings of the 8th Workshop on …, 2020"],"snippet":"… Small size model (11MB). • en core web md: English multi-task CNN trained on OntoNotes, with GloVe vectors trained on Common Crawl … en core web lg: English multi-task CNN trained on OntoNotes, with GloVe vectors trained on Common Crawl …","url":["https://www.aclweb.org/anthology/2020.cmlc-1.4.pdf"]} -{"year":"2020","title":"GeoVectors: a Linked Open Corpus of OpenStreetMap Embeddings","authors":["N Tempelmeier, S Gottschalk, E Demidova - 2020"],"snippet":"… As most of the OSM keys are in English, we chose the 300-dimensional English word vectors trained on the Common Crawl and Wikipedia [5]. Encoding: To encode an OSM entity o, we utilise the individual word em …","url":["https://openreview.net/pdf?id=EibPtOjZUn"]} -{"year":"2020","title":"German's Next Language Model","authors":["B Chan, S Schweter, T Möller - arXiv preprint arXiv:2010.10906, 2020"],"snippet":"… The XLM-RoBERTa model is trained on 2.5TB of data from a cleaned Common Crawl corpus (Wenzek et al., 2020) for 100 different languages … OSCAR (Ortiz Suárez et al., 2019) is a set of monolingual corpora extracted from Common Crawl …","url":["https://arxiv.org/pdf/2010.10906"]} -{"year":"2020","title":"Getting Closer to AI Complete Question Answering: A Set of Prerequisite Real Tasks","authors":["AROKM Downey, A Rumshisky"],"snippet":"… Similarity-based approaches. The first baseline represents a given text, a question, and each of the choices as the average of 300-dimensional CommonCrawl FastText word embeddings (Bojanowski et al. 
2016) of its constituent words …","url":["https://www.aaai.org/Papers/AAAI/2020GB/AAAI-RogersA.7778.pdf"]} -{"year":"2020","title":"Getting Passive Aggressive About False Positives: Patching Deployed Malware Detectors","authors":["E Raff, B Filar, J Holt - arXiv preprint arXiv:2010.12080, 2020"],"snippet":"… We base our testing and results on a representative sample of industry data. Our corpus consisted of 1,101,407 Microsoft Office documents that contained macros. Similar to [33] data was collected from Common Crawl (CC) [34] and VirusTotal [35] …","url":["https://arxiv.org/pdf/2010.12080"]} -{"year":"2020","title":"Getting Structured Data from the Internet","authors":["BDP Scale, JM Patel"],"snippet":"… WARC file format ..... 278 Common crawl index ..... 282 … 331 Processing parquet files for a common crawl index ..... 334 …","url":["https://link.springer.com/content/pdf/10.1007/978-1-4842-6576-5.pdf"]} -{"year":"2020","title":"Give your Text Representation Models some Love: the Case for Basque","authors":["R Agerri, IS Vicente, JA Campos, A Barrena, X Saralegi… - arXiv preprint arXiv …, 2020"],"snippet":"… Common Crawl word vectors (FastText-officialcommon-crawl) were trained on Common Crawl and Wikipedia using CBOW with position-weights … train our systems to perform the following comparisons: (i) FastText official models …","url":["https://arxiv.org/pdf/2004.00033"]} -{"year":"2020","title":"GLEAKE: Global and Local Embedding Automatic Keyphrase Extraction","authors":["JR Asl, JM Banda - arXiv preprint arXiv:2005.09740, 2020"],"snippet":"… doc2vec_news_dbow AP News glove.6B Wikipedia + Gigaword GloVe 50-300 [28] glove.twitter.27B Twitter 25-200 glove.840B Common Crawl 300 TABLE 1 DIFFERENT PRE-TRAINED EMBEDDINGS USED BY GLEAKE Page 5. 5 …","url":["https://arxiv.org/pdf/2005.09740"]} -{"year":"2020","title":"Global Under-Resourced MEedia Translation (GoURMET)","authors":["MAAS BBC, JW BBC, B Haddow, AM Barone…"],"snippet":"Page 1. 
GoURMET H2020–825299 D5.3 Initial Integration Report Global Under-Resourced MEedia Translation (GoURMET) H2020 Research and Innovation Action Number: 825299 D5.3 – Initial Integration Report Nature Report Work Package WP5 …","url":["https://gourmet-project.eu/wp-content/uploads/2020/07/GoURMET_D5_3_Initial_Integration_Report.pdf"]} -{"year":"2020","title":"Going Back in Time to Find What Existed on the Web and How much has been Preserved: How much of Palestinian Web has been Archived?","authors":["T Sammar, H Khalilia - مؤتمرات الآداب والعلوم الانسانية والطبيعية, 2020"],"snippet":"‎… References 1. Common crawl url index. url (http://index.commoncrawl.org/). 2. International internet preservation consortium (iipc).http://www.netpreserve.org. 3. Internet archive (https://archive.org/). 4. Internet archive wayback machine. url (https://archive.org/web/) …","url":["http://proceedings.sriweb.org/akn/index.php/art/article/viewFile/410/466"]} -{"year":"2020","title":"Goku's Participation in WAT 2020","authors":["D Wang, O Htun - Proceedings of the 7th Workshop on Asian Translation, 2020"],"snippet":"… Secondly, we fine-tuned on the JPO patent corpus using the mBART auto-encoder model (Liu et al., 2020), which has been pre-trained on largescale monolingual CommonCrawl (CC) corpus in 25 languages using the BART objective (Lewis et al., 2020) …","url":["https://www.aclweb.org/anthology/2020.wat-1.16.pdf"]} -{"year":"2020","title":"GoodReads Book Recommendation Service","authors":["Y Tian, V Bai, Z Doganata"],"snippet":"… Common Crawl: https://commoncrawl.org/ Models: ● Howard, Jeremy and Ruder, Sebastian. \"Universal Language Model Fine-tuning for Text Classification.\" Paper presented at the meeting of the ACL, 2018. 
● Conneau …","url":["http://tianyijun.com/files/GoodReads_Recommendation.pdf"]} -{"year":"2020","title":"GottBERT: a pure German Language Model","authors":["R Scheible, F Thomczyk, P Tippmann, V Jaravine… - arXiv preprint arXiv …, 2020"],"snippet":"… dbmz BERT used as source data a German Wikipedia dump, EU Bookshop corpus, Open Subtitles, CommonCrawl, ParaCrawl and News Crawl which … than mBERT, the multilingual XLM-RoBERTa (Conneau et al., 2019) was …","url":["https://arxiv.org/pdf/2012.02110"]} -{"year":"2020","title":"GPT-3 AI language tool calls for cautious optimism","authors":["Oxford Analytica - Emerald Expert Briefings"],"snippet":"… The training process leveraged this by exposing GPT-3 to historical sweeps of the internet, known as 'crawls'. One substantial component was the Common Crawl dataset, a multilingual capture of almost 1 trillion words …","url":["https://www.emerald.com/insight/content/doi/10.1108/OXAN-DB256373/full/html"]} -{"year":"2020","title":"GPT-3 Creative Fiction","authors":["G Branwen - 2020"],"snippet":"… For GPT-2, I saw finetuning as doing 2 things: Fixing ignorance: missing domain knowledge. GPT-2 didn't know many things about most things—it was just a handful (1.5 billion) of parameters trained briefly on …","url":["https://www.gwern.net/GPT-3"]} -{"year":"2020","title":"Grammatical Error Correction in Low Error Density Domains: A New Benchmark and Analyses","authors":["S Flachs, O Lacroix, H Yannakoudakis, M Rei… - arXiv preprint arXiv …, 2020"],"snippet":"… matical errors. The source texts are randomly se- lected from the first 18 dumps of the CommonCrawl4 dataset and represent a wide range of data seen online such as blogs, magazines, corporate or educational websites. 
These …","url":["https://arxiv.org/pdf/2010.07574"]} -{"year":"2020","title":"Graph Attention Network with Memory Fusion for Aspect-level Sentiment Analysis","authors":["L Yuan, J Wang, LC Yu, X Zhang - Proceedings of the 1st Conference of the Asia …, 2020"],"snippet":"Page 1. Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 27–36 December 4 - 7, 2020 …","url":["https://www.aclweb.org/anthology/2020.aacl-main.4.pdf"]} -{"year":"2020","title":"Graph Policy Network for Transferable Active Learning on Graphs","authors":["S Hu, Z Xiong, M Qu, X Yuan, MA Côté, Z Liu, J Tang - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. Graph Policy Network for Transferable Active Learning on Graphs Shengding Hu Tsinghua University hsd16@mails.tsinghua.edu.cn Zheng Xiong Tsinghua University harryfootball@163.com Meng Qu MILA meng.qu@umontreal.ca …","url":["https://arxiv.org/pdf/2006.13463"]} -{"year":"2020","title":"Graphical User Interface Auto-Completion with Element Constraints","authors":["L Brückner - 2020"],"snippet":"Page 1. Aalto University School of Science Master's Programme in ICT Innovation Lukas Brückner Graphical User Interface Auto-Completion with Element Constraints Master's Thesis Espoo, September 25, 2020 Supervisor: Prof …","url":["https://aaltodoc.aalto.fi/bitstream/handle/123456789/47385/master_Br%C3%BCckner_Lukas_2020.pdf?sequence=1"]} -{"year":"2020","title":"GraphWalker: An I/O-Efficient and Resource-Friendly Graph Analytic System for Fast and Scalable Random Walks","authors":["R Wang, Y Li, H Xie, Y Xu, JCS Lui"],"snippet":"Page 1. 
GraphWalker: An I/O-Efficient and Resource-Friendly Graph Analytic System for Fast and Scalable Random Walks Rui Wang1, Yongkun Li1, Hong Xie2, Yinlong Xu1, John CS Lui3 1University of Science and Technology …","url":["https://www.cse.cuhk.edu.hk/~cslui/PUBLICATION/ATC2020.pdf"]} -{"year":"2020","title":"GREEK-BERT: The Greeks visiting Sesame Street","authors":["J Koutsikakis, I Chalkidis, P Malakasiotis… - arXiv preprint arXiv …, 2020"],"snippet":"… and (c) the Greek part of OSCAR [25], a clean version of Common Crawl.5 Accents and other diacritics were removed, and all words were … 5https://commoncrawl.org 6https://github.com/google-research/bert 7The …","url":["https://arxiv.org/pdf/2008.12014"]} -{"year":"2020","title":"Grounded Compositional Outputs for Adaptive Language Modeling","authors":["N Pappas, P Mulcaire, NA Smith - arXiv preprint arXiv:2009.11523, 2020"],"snippet":"Page 1. Grounded Compositional Outputs for Adaptive Language Modeling Nikolaos Pappas♣ Phoebe Mulcaire♣ Noah A. Smith♣♦ ♣Paul G. Allen School of Computer Science & Engineering, University of Washington ♦Allen …","url":["https://arxiv.org/pdf/2009.11523"]} -{"year":"2020","title":"Guided Generation of Cause and Effect","authors":["Z Li, X Ding, T Liu, JE Hu, B Van Durme"],"snippet":"… 3629 Page 2. Processed Common Crawl Corpus Causal Patterns Based Matching and Filtering … Thus we harvest a large causal dataset from the preprocessed large-scale English Common Crawl corpus (5.14 TB) [Buck et al., 2014] …","url":["https://www.ijcai.org/Proceedings/2020/0502.pdf"]} -{"year":"2020","title":"Harbsafe-162. A Domain-Specific Data Set for the Intrinsic Evaluation of Semantic Representations for Terminological Data","authors":["S Arndt, D Schnäpp - arXiv preprint arXiv:2005.14576, 2020"],"snippet":"Page 1. 
Harbsafe-162 – A Domain-Specific Data Set for the Intrinsic Evaluation of Semantic Representations for Terminological Data Susanne Arndt, MA∗ Technische Universität Braunschweig Dieter Schnäpp, MA∗∗ Technische Universität Braunschweig …","url":["https://arxiv.org/pdf/2005.14576"]} -{"year":"2020","title":"Hard-Coded Gaussian Attention for Neural Machine Translation","authors":["W You, S Sun, M Iyyer - arXiv preprint arXiv:2005.00742, 2020"],"snippet":"… 10As the full WMT14 En→Fr is too large for us to feasibly train on, we instead follow Akoury et al. (2019) and train on just the Europarl / Common Crawl subset, while evaluating using the full dev/test sets. 11https://github.com/dojoteef/synst Page 5 …","url":["https://arxiv.org/pdf/2005.00742"]} -{"year":"2020","title":"Harnessing Multilinguality in Unsupervised Machine Translation for Rare Languages","authors":["X Garcia, A Siddhant, O Firat, AP Parikh - arXiv preprint arXiv:2009.11201, 2020"],"snippet":"… In contrast, for an actual low-resource language, Gujarati, WMT only provides 500 thousand lines of monolingual data (in news domain) and an additional 3.7 million lines of monolingual data from Common Crawl (noisy, generaldomain) …","url":["https://arxiv.org/pdf/2009.11201"]} -{"year":"2020","title":"HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection","authors":["B Mathew, P Saha, SM Yimam, C Biemann, P Goyal… - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection* Binny Mathew1†, Punyajoy Saha1†, Seid Muhie Yimam2 Chris Biemann2, Pawan Goyal1, Animesh Mukherjee1 1 Indian Institute of Technology …","url":["https://arxiv.org/pdf/2012.10289"]} -{"year":"2020","title":"HCA: Hierarchical Compare Aggregate model for question retrieval in community question answering","authors":["MS Zahedi, M Rahgozar, RA Zoroofi - Information Processing & Management, 2020"],"snippet":"JavaScript is disabled on your browser. 
Please enable JavaScript to use all the features on this page. Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S030645732030813X"]} -{"year":"2020","title":"Hidden in Plain Sight: Building a Global Sustainable Development Data Catalogue","authors":["J Hodson, A Spezzatti - ICT Analysis and Applications, 2020"],"snippet":"… We iteratively re-train our model using 300-dimensional word-embedding features trained on the CommonCrawl web-scale data set 7 with the GLobal VEctors for Word Representation (GloVe) procedure (see [7]). Table 2 shows …","url":["https://link.springer.com/content/pdf/10.1007/978-981-15-8354-4.pdf#page=795"]} -{"year":"2020","title":"Hierarchical models vs. transfer learning for document-level sentiment classification","authors":["J Barnes, V Ravishankar, L Øvrelid, E Velldal - arXiv preprint arXiv:2002.08131, 2020"],"snippet":"… Universal Language Model Fine-Tuning (ULMFIT): We use the AWD-LSTM architecture (Merity et al., 2018) and pretrain on Wikipedia data (or Common Crawl in the case of Norwegian) taken from the CONLL 2017 shared task (Zeman et al., 2017) …","url":["https://arxiv.org/pdf/2002.08131"]} -{"year":"2020","title":"Hierarchical Multimodal Attention for End-to-End Audio-Visual Scene-Aware Dialogue Response Generation","authors":["H Le, D Sahoo, NF Chen, SCH Hoi - Computer Speech & Language, 2020"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0885230820300280"]} -{"year":"2020","title":"High Accuracy Phishing Detection Based on Convolutional Neural Networks","authors":["SY Yerima, MK Alzaylaee - 2020"],"snippet":"… Their approach first encodes the URL strings using one-hot encoding and then inputs each encoded character vector into the LSTM neurons for training and testing. 
Their method achieved an accuracy of 0.935 on the Common …","url":["https://dora.dmu.ac.uk/bitstream/handle/2086/19450/HAPD-CNN-paper-accepted-version.pdf?sequence=1&isAllowed=y"]} -{"year":"2020","title":"HitAnomaly: Hierarchical Transformers for Anomaly Detection in System Log","authors":["S Huang, Y Liu, C Fung, R He, Y Zhao, H Yang, Z Luan - IEEE Transactions on …, 2020"],"snippet":"… matrix. Finally, we obtain the word vector of 'terminating' as [16, 2, 11]. LogRobust [10] leverages off-the-shelf word vectors, which were pre-trained on Common Crawl Corpus dataset using the Fast-Text [20] algorithm. We initialize …","url":["https://ieeexplore.ieee.org/abstract/document/9244088/"]} -{"year":"2020","title":"How “BERTology” Changed the State-of-the-Art also for Italian NLP","authors":["F Tamburini - Proceedings of the Seventh Italian Conference on …, 2020"],"snippet":"… Page 2. billions of tokens. Also for GilBERTo it is available only the uncased model. • UmBERTo4: the more recent model de- veloped explicitly for Italian, as far as we know, is UmBERTo ('Musixmatch/umbertocommoncrawl-cased-v1' – umC) …","url":["http://ceur-ws.org/Vol-2769/paper_79.pdf"]} -{"year":"2020","title":"How Furiously Can Colourless Green Ideas Sleep? Sentence Acceptability in Context","authors":["JH Lau, CS Armendariz, S Lappin, M Purver, C Shu - arXiv preprint arXiv:2004.00881, 2020"],"snippet":"… BERTUCS Transformer Bidir. 340M Uncased 13GB WordPiece Wikipedia, BookCorpus XLNET Transformer Hybrid 340M Cased 126GB SentenceWikipedia, BookCorpus, Giga5 Piece ClueWeb, Common Crawl Table 1: Language models and their configurations …","url":["https://arxiv.org/pdf/2004.00881"]} -{"year":"2020","title":"How Human is Machine Translationese? Comparing Human and Machine Translations of Text and Speech","authors":["J van Genabith, E Teich","Y Bizzoni, TS Juzek, C España-Bonet, KD Chowdhury… - Proceedings of the 17th …, 2020"],"snippet":"… German translation and interpreting are both from English. 
lines de tokens en tokens Ct Cs CommonCrawl 2,212,292 49,870,179 54,140,396 MultiUN 108,387 4,494,608 4,924,596 NewsCommentary 324,388 8,316,081 46,222,416 …","url":["http://www.sfb1102.uni-saarland.de/wp/wp-content/uploads/2020/06/IWSLT-b1-B7-final2020.pdf","https://www.aclweb.org/anthology/2020.iwslt-1.34.pdf"]} -{"year":"2020","title":"How Language Shapes Prejudice Against Women: An Examination Across 45 World Languages","authors":["D DeFranza, H Mishra, A Mishra - 2020"],"snippet":"… context in which it occurs. Using text data from Wikipedia and the Common Crawl project … discussing gender issues. Wikipedia and a corpus of web crawl data from over five billion web pages, known as the Common Crawl, serve as our data source …","url":["https://psyarxiv.com/mrbcf/download?format=pdf"]} -{"year":"2020","title":"How Many Pages? Paper Length Prediction from the Metadata","authors":["E Çano, O Bojar - arXiv preprint arXiv:2010.15924, 2020"],"snippet":"… We used static word embeddings of 300 dimensions from three sources: the 6 billion tokens collection of Common Crawl4 trained with Glove [23], the 840 billion tokens collection of Common Crawl trained with Glove, and …","url":["https://arxiv.org/pdf/2010.15924"]} -{"year":"2020","title":"How Much Self-attention Do We Need? Trading Attention for Feed-forward Layers","authors":["K Irie, A Gerstenberger, R Schlüter, H Ney - ICASSP, Barcelona, Spain, 2020"],"snippet":"… to 6 (in contrast to what we typically observe eg, on LibriSpeech [28]): this is in fact due to some overlap between the common crawl training subset … Then we fine-tune the model on the TED-LIUM 2 transcriptions (2 M words) …","url":["https://www-i6.informatik.rwth-aachen.de/publications/download/1126/Irie-ICASSP-2020.pdf"]} -{"year":"2020","title":"How Should Markup Tags Be Translated?","authors":["G Hanneman, G Dinu, AI Amazon"],"snippet":"… especially large or noisy data sets. 
For EN–DE, we begin with the training data released by the WMT 2020 news task, ignoring the Common Crawl and Paracrawl corpora and heavily filtering WikiMatrix. Our EN–FR training data …","url":["https://assets.amazon.science/fa/f2/640de7fd483a8c385db7a0b5c7cd/how-should-markup-tags-be-translated.pdf"]} -{"year":"2020","title":"Human-in-the-Loop AI for Analysis of Free Response Facial Expression Label Sets","authors":["C Butler, H Oster, J Togelius - Proceedings of the 20th ACM International Conference …, 2020"],"snippet":"… 1. GloVe, 300-dimensional vectors trained on Common Crawl [33]: distributional model that learns word vectors by examining word co- occurrences within a text corpus with logbilinear regression, global matrix factorization and local context window methods …","url":["https://dl.acm.org/doi/abs/10.1145/3383652.3423892"]} -{"year":"2020","title":"Humpty Dumpty: Controlling Word Meanings via Corpus Poisoning","authors":["R Schuster, T Schuster, Y Meri, V Shmatikov - arXiv preprint arXiv:2001.04935, 2020"],"snippet":"Page 1. Humpty Dumpty: Controlling Word Meanings via Corpus Poisoning* Roei Schuster Tel Aviv University † roeischuster@mail.tau.ac.il Tal Schuster CSAIL, MIT tals@csail.mit.edu Yoav Meri † 111yoav@gmail.com Vitaly …","url":["https://arxiv.org/pdf/2001.04935"]} -{"year":"2020","title":"Hungarian layer: A novel interpretable neural layer for paraphrase identification","authors":["H Xiao - Neural Networks, 2020"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0893608020302653"]} -{"year":"2020","title":"HW-TSC's Participation in the WMT 2020 News Translation Shared Task","authors":["D Wei, H Shang, Z Wu, Z Yu, L Li, J Guo, M Wang…"],"snippet":"… monolingual text from Common Crawl and news crawl 2018 for Km and En, respectively. 
2.1.3 Ps/En Similar to Km/En, we also use the Para Crawl v5.1 (1M), Khmer and Pashto parallel data (0.03M) as bitext and select …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.31.pdf"]} -{"year":"2020","title":"Hybrid Feature Model for Emotion Recognition in Arabic Text","authors":["N Alswaidan, MEB Menai - IEEE Access, 2020"],"snippet":"… 5https://github.com/minimaxir/char-embeddings 6https://unicode.org/emoji/ charts/full-emoji-list.html • FastText [22]: 300-dimensional word vectors trained on Common Crawl7 using CBOW with position weights … 7https://commoncrawl …","url":["https://ieeexplore.ieee.org/iel7/6287639/8948470/09007420.pdf"]} -{"year":"2020","title":"HyCoNN: Hybrid Cooperative Neural Networks for Personalized News Discussion Recommendation","authors":["J Risch, V Künstler, R Krestel"],"snippet":"… For DeepCoNN and also HyCoNN, we use 300-dimensional fastText word embeddings, which were pre-trained on the English-language Common Crawl dataset [25]. The resulting embeddings function as input to a convolutional layer that consists of n neurons …","url":["https://hpi.de/fileadmin/user_upload/fachgebiete/naumann/people/risch/risch2020hyconn.pdf"]} -{"year":"2020","title":"Identifying Cognates in English-Dutch and French-Dutch by means of Orthographic Information and Cross-lingual Word Embeddings","authors":["E Lefever, S Labat, P Singh - the 12th Conference on Language Resources and …, 2020"],"snippet":"… The former approach was improved in the following way. Firstly, standard fastText word embeddings, which were pretrained on Common Crawl and Wikipedia and generated with the standard skip-gram model as proposed by Bo- janowski et al …","url":["https://biblio.ugent.be/publication/8662200/file/8662201"]} -{"year":"2020","title":"Identifying Phished Website Using Multilayer Perceptron","authors":["A Dev, V Jain - Advances in Distributed Computing and Machine …"],"snippet":"… Phishing Webpage Source: PhishTank, OpenPhish. 
Legitimate Webpage Source: Alexa, Common Crawl. The main process in the phishing webpage is to work on its features and how effectively it is handling the dataset. Each …","url":["https://link.springer.com/chapter/10.1007/978-981-15-4218-3_37"]} -{"year":"2020","title":"Identifying Sensitive URLs at Web-Scale","authors":["M Srdjan, I Costas, S Georgios, N Laoutaris - 2020","S Matic, C Iordanou, G Smaragdakis, N Laoutaris - studies"],"snippet":"… We then use our classifier to search for sensitive URLs in a corpus of 1 Billion URLs collected by the Common Crawl project. We identify more than 155 millions sensitive URLs in more than 4 million domains … Automated …","url":["http://eprints.networks.imdea.org/2187/1/imc20.pdf","http://laoutaris.info/wp-content/uploads/2020/09/imc2020.pdf"]} -{"year":"2020","title":"Identifying Tasks from Mobile App Usage Patterns","authors":["Y Tian, K Zhou, M Lalmas, D Pelleg - Proceedings of the 43rd International ACM …, 2020"],"snippet":"Page 1. Identifying Tasks from Mobile App Usage Patterns Yuan Tian University of Nottingham Nottingham, UK yuan.tian@nottingham.ac.uk Ke Zhou University of Nottingham Nottingham, UK ke.zhou@nottingham.ac …","url":["https://dl.acm.org/doi/abs/10.1145/3397271.3401441"]} -{"year":"2020","title":"Igbo-English Machine Translation: An Evaluation Benchmark","authors":["I Ezeani, P Rayson, I Onyenwe, C Uchechukwu… - arXiv preprint arXiv …, 2020"],"snippet":"… Page 2. Published as a conference paper at ICLR 2020 texts (eg Wikipedia, CommonCrawl, local government materials, local TV/Radio stations etc). Phase 2: Translation and correction In this phase, the 10,000 sentence pairs …","url":["https://arxiv.org/pdf/2004.00648"]} -{"year":"2020","title":"IIU: Specialized Architecture for Inverted Index Search","authors":["J Heo, J Won, Y Lee, S Bharuka, J Jang, TJ Ham…"],"snippet":"Page 1. 
IIU: Specialized Architecture for Inverted Index Search Jun Heo∗ Jaeyeon Won∗ Yejin Lee∗ Shivam Bharuka†§ Jaeyoung Jang‡ Tae Jun Ham∗ Jae W. Lee∗ ∗Seoul National University, †Facebook, Inc., ‡Sungkyunkwan University …","url":["https://www.cs.princeton.edu/~tae/iiu_asplos2020.pdf"]} -{"year":"2020","title":"Imitation Attacks and Defenses for Black-box Machine Translation Systems","authors":["E Wallace, M Stern, D Song - arXiv preprint arXiv:2004.15015, 2020"],"snippet":"… For English→German, we query the source side of the WMT14 training set (≈ 4.5M sentences).3 For Nepali→English, we query the Nepali Language Wikipedia (≈ 100,000 sentences) and approximately two million sentences from Nepali common crawl …","url":["https://arxiv.org/pdf/2004.15015"]} -{"year":"2020","title":"Impact of News on the Commodity Market: Dataset and Results","authors":["A Sinha, T Khandait - arXiv preprint arXiv:2009.04202, 2020"],"snippet":"… The GloVe pre-trained word-embeddings are known to capture the meaning of a word through a high dimensional vector [22]. For this research, we used the 300-dimensional vectors which were trained on 840 billion tokens through the common crawl …","url":["https://arxiv.org/pdf/2009.04202"]} -{"year":"2020","title":"Impact of sentence length on the readability of web for screen reader users","authors":["BB Kadayat, E Eika - International Conference on Human-Computer …, 2020"],"snippet":"… They used MapReduce for real-time calculation of the readability of more than a billion webpages. 
The datasets called Common Crawl included 61 million domain-names, 92 million PDF documents, and seven million Word documents …","url":["https://link.springer.com/chapter/10.1007/978-3-030-49282-3_18"]} -{"year":"2020","title":"Improved method of word embedding for efficient analysis of human sentiments","authors":["S Sagnika, BSP Mishra, SK Meher - Multimedia Tools and Applications, 2020"],"snippet":"… Designed by Pennington [23] in 2014, it creates a word vector space by training on word-word co-occurrence counts. The models are trained on Wikipedia dumps, Gigaword 5 and Common Crawl texts, and apply …","url":["https://link.springer.com/article/10.1007/s11042-020-09632-9"]} -{"year":"2020","title":"Improving Indonesian Text Classification Using Multilingual Language Model","authors":["IF Putra, A Purwarianti - arXiv preprint arXiv:2009.05713, 2020"],"snippet":"… The XLM-R Large also has substantially more parameters and was trained on a larger balanced dataset from the CommonCrawl corpus that contains 100 languages. The XLM-R variant that we use in the experiment is …","url":["https://arxiv.org/pdf/2009.05713"]} -{"year":"2020","title":"IMPROVING KNOWLEDGE ACCESSIBILITY ON THE WEB","authors":["R Yu"],"snippet":"Page 1. IMPROVING KNOWLEDGE ACCESSIBILITY ON THE WEB – from Knowledge Base Augmentation to Search as Learning Inaugural dissertation for the attainment of the title of doctor in the Faculty of Mathematics and …","url":["https://docserv.uni-duesseldorf.de/servlets/DerivateServlet/Derivate-56199"]} -{"year":"2020","title":"Improving Low Compute Language Modeling with In-Domain Embedding Initialisation","authors":["C Welch, R Mihalcea, JK Kummerfeld - arXiv preprint arXiv:2009.14109, 2020"],"snippet":"… We see that the value of additional data depends on the domain. Gigaword is also news text and is able to improve performance. 
The larger GloVe datasets use Wikipedia and CommonCrawl data, which is a poorer match and so does not improve performance …","url":["https://arxiv.org/pdf/2009.14109"]} -{"year":"2020","title":"Improving Network Security through Collaborative Sharing","authors":["CS Ardi - 2020"],"snippet":"Page 1. IMPROVING NETWORK SECURITY THROUGH COLLABORATIVE SHARING by Calvin Satiawan Ardi A Dissertation Presented to the FACULTY OF THE USC GRADUATE SCHOOL UNIVERSITY OF SOUTHERN CALIFORNIA …","url":["http://search.proquest.com/openview/9de70dfceaa27a03d073c17f5f071579/1?pq-origsite=gscholar&cbl=18750&diss=y"]} -{"year":"2020","title":"Improving Personal Health Mention Detection on Twitter Using Permutation Based Word Representation Learning","authors":["PI Khan, I Razzak, A Dengel, S Ahmed - International Conference on Neural …, 2020"],"snippet":"… In: Invited Talk at the SIGIR 2012 Workshop on Open-Source Information Retrieval (2012)Google Scholar. 24. Common CrawlCommon crawl corpus (2019). http://commoncrawl.org. Copyright information. © Springer Nature …","url":["https://link.springer.com/chapter/10.1007/978-3-030-63830-6_65"]} -{"year":"2020","title":"Improving Ranking in Document based Search Systems","authors":["RRK Menon, J Kaartik, ETK Nambiar, AK TK, A Kumar - 2020 4th International …, 2020"],"snippet":"… The pre-trained word embedding models used were: 1. Fasttext i. – Wiki News 1M 16B Tokens ii. – Common Crawl 2M 600B Tokens 2. GloVe 2.2M 840B Tokens B. Evaluation Metrics The standard metrics include Precision, Recall, and F1 score …","url":["https://ieeexplore.ieee.org/abstract/document/9143047/"]} -{"year":"2020","title":"Increasing Accessibility of Electronic Theses and Dissertations (ETDs) Through Chapter-level Classification","authors":["PM Jude - 2020"],"snippet":"Page 1. 
Increasing Accessibility of Electronic Theses and Dissertations (ETDs) Through Chapter-level Classification Palakh Mignonne Jude Thesis submitted to the Faculty of the Virginia Polytechnic Institute and State University …","url":["https://vtechworks.lib.vt.edu/bitstream/handle/10919/99294/Jude_P_T_2020.pdf?sequence=1"]} -{"year":"2020","title":"INDEXING OF BIG TEXT DATA AND SEARCHING IN THE INDEXED DATA","authors":["BD KOZÁK"],"snippet":"… enhanced documents. Input data In [14, 8], the input data was CommonCrawl and Wikipedia. The English wikipedia … 8 Page 15. CommonCrawl3is a project that maintains an open repository of web crawl data. For the practical part of …","url":["https://dspace.vutbr.cz/bitstream/handle/11012/192492/final-thesis.pdf?sequence=3"]} -{"year":"2020","title":"Indic-Transformers: An Analysis of Transformer Language Models for Indian Languages","authors":["K Jain, A Deshpande, K Shridhar, F Laumann, A Dash - arXiv preprint arXiv …, 2020"],"snippet":"… The Open Super-large Crawled ALMAnaCH coRpus (OSCAR) dataset [48] is a filtered version of the CommonCrawl dataset and has monolingual corpora for 166 languages. Prior to training, we normalize the OSCAR dataset for …","url":["https://arxiv.org/pdf/2011.02323"]} -{"year":"2020","title":"IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages","authors":["D Kakwani, A Kunchukuttan, S Golla, A Bhattacharyya…"],"snippet":"… The OSCAR project (Ortiz Suarez et al., 2019), a recent processing of CommonCrawl, also contains much less data for most Indian languages than our crawls. 
The CCNet () and C4 () projects also provide tools to …","url":["https://indicnlp.ai4bharat.org/papers/arxiv2020_indicnlp_corpus.pdf"]} -{"year":"2020","title":"IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding","authors":["B Wilie, K Vincentio, GI Winata, S Cahyawijaya, X Li… - arXiv preprint arXiv …, 2020"],"snippet":"… Page 5. Dataset # Words # Sentences Size Style Source OSCAR (Ortiz Suárez et al., 2019) 2,279,761,186 148,698,472 14.9 GB mixed OSCAR CoNLLu Common Crawl (Ginter et al., 2017) 905,920,488 77,715,412 6.1 GB mixed LINDAT/CLARIAH-CZ …","url":["https://arxiv.org/pdf/2009.05387"]} -{"year":"2020","title":"Inducing Language-Agnostic Multilingual Representations","authors":["W Zhao, S Eger, J Bjerva, I Augenstein - arXiv preprint arXiv:2008.09112, 2020"],"snippet":"… XLM-R Contextualized word embeddings (Conneau et al., 2019) are pre-trained on the CommonCrawl corpora of 100 languages, which contain more monolingual data than Wikipedia corpora, with 1) a vocabulary size …","url":["https://arxiv.org/pdf/2008.09112"]} -{"year":"2020","title":"Inductive Learning on Commonsense Knowledge Graph Completion","authors":["B Wang, G Wang, J Huang, J You, J Leskovec… - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. Inductive Learning on Commonsense Knowledge Graph Completion Bin Wang1, Guangtao Wang2, Jing Huang2, Jiaxuan You3, Jure Leskovec3, C.-C. Jay Kuo1 1 University of Southern California 2 JD AI Research …","url":["https://arxiv.org/pdf/2009.09263"]} -{"year":"2020","title":"Inexpensive Domain Adaptation of Pretrained Language Models: A Case Study on Biomedical Named Entity Recognition","authors":["N Poerner, U Waltinger, H Schütze - arXiv preprint arXiv:2004.03354, 2020"],"snippet":"… 1 Introduction Pretrained Language Models such as BERT (De- vlin et al., 2019) have spearheaded advances on many NLP tasks. 
Usually, PTLMs are pretrained on unlabeled general-domain and/or mixed-domain text, such …","url":["https://arxiv.org/pdf/2004.03354"]} -{"year":"2020","title":"Inexpensive Domain Adaptation of Pretrained Language Models: Case Studies on Biomedical NER and Covid-19 QA","authors":["N Poerner, U Waltinger, H Schütze"],"snippet":"… Pretrained Language Models (PTLMs) such as BERT (Devlin et al., 2019) have spearheaded ad- vances on many NLP tasks. Usually, PTLMs are pretrained on unlabeled general-domain and/or mixed-domain text, such …","url":["https://web.iiit.ac.in/~rizwan.ali/papers/828.pdf"]} -{"year":"2020","title":"Infosys Machine Translation System for WMT20 Similar Language Translation Task","authors":["K Rathinasamy, A Singh, B Sivasambagupta… - Proceedings of WMT, 2020"],"snippet":"… data. 2.2.2 Synthetic data CommonCrawl n-grams raw monolingual files are processed1 to remove sentences with invalid characters, strip leading and trailing whitespaces, and remove duplicate sentences. 3 System Overview …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.52.pdf"]} -{"year":"2020","title":"iNLTK: Natural Language Toolkit for Indic Languages","authors":["G Arora - arXiv preprint arXiv:2009.12534, 2020"],"snippet":"… iNLTK results were compared against results reported in (Kunchukuttan et al., 2020) for pre-trained embeddings released by the FastText project trained on Wikipedia (FT-W) (Bo- janowski et al., 2016), Wiki+CommonCrawl …","url":["https://arxiv.org/pdf/2009.12534"]} -{"year":"2020","title":"INSET: Sentence Infilling with INter-SEntential Transformer","authors":["Y Huang, Y Zhang, O Elachqar, Y Cheng - Proceedings of the 58th Annual Meeting of …, 2020"],"snippet":"… For the effectiveness of human evaluation, we use the simplest strategy to mask sentences. 
The Recipe dataset is obtained from (https: //commoncrawl.org), where the metadata is formatted according to Schema.org (https:// schema.org/Recipe) …","url":["https://www.aclweb.org/anthology/2020.acl-main.226.pdf"]} -{"year":"2020","title":"Integrating Geospatial Data and Social Media in Bidirectional Long-Short Term Memory Models to Capture Human Nature Interactions","authors":["A Larkin, P Hystad - The Computer Journal, 2020"],"snippet":"… language processing [32]. Tweet texts were transformed into word vector arrays using the Stanford GloVe (Global Vectors for Word Representation) Common Crawl dictionary (https://nlp.stanford.edu/projects/glove/). The GloVe …","url":["https://academic.oup.com/comjnl/advance-article-abstract/doi/10.1093/comjnl/bxaa094/5893915"]} -{"year":"2020","title":"Intelligent phishing detection scheme using deep learning algorithms","authors":["MA Adebowale, KT Lwin, MA Hossain - Journal of Enterprise Information …, 2020"],"snippet":"… Half of the data set consisted of phishing sites from PhishTank, which is a site that is used as phishing URL depository, and half of the data set was comprised of legitimate sites from Common Crawl, a corpus of web crawl data …","url":["https://www.emerald.com/insight/content/doi/10.1108/JEIM-01-2020-0036/full/html"]} -{"year":"2020","title":"Intermediate Training of BERT for Product Matching","authors":["R Peeters, C Bizer, G Glavaš - small"],"snippet":"… uct Corpus for Large-Scale Product Matching [26]. These datasets are derived from schema.org annotations from thousands of webshops extracted from the Common Crawl. 
Relying on schema.org annotations of product identifiers …","url":["http://data.dws.informatik.uni-mannheim.de/largescaleproductcorpus/data/v2/papers/DI2KG2020_Peeters.pdf"]} -{"year":"2020","title":"Interpretability Analysis for Named Entity Recognition to Understand System Predictions and How They Can Improve","authors":["O Agarwal, Y Yang, BC Wallace, A Nenkova - arXiv preprint arXiv:2004.04564, 2020"],"snippet":"… representations are concatenated. We use 300 dimensional cased GloVe (Pennington et al., 2014) vectors trained on Common Crawl.2 We use the IO labeling scheme and evaluate the systems via micro-F1, at the token level. We use …","url":["https://arxiv.org/pdf/2004.04564"]} -{"year":"2020","title":"Interpretable & Time-Budget-Constrained Contextualization for Re-Ranking","authors":["S Hofstätter, M Zlabinger, A Hanbury - arXiv preprint arXiv:2002.01854, 2020"],"snippet":"… The first section contains the traditional baselines; the second contains the neural re-ranking baselines; in the third section we report the results of our TK model with three 6 42B CommonCrawl lower-cased: https://nlp.stanford.edu/projects/glove/ Page 5 …","url":["https://arxiv.org/pdf/2002.01854"]} -{"year":"2020","title":"Introduction to Cloud Computing and Amazon Web Services (AWS)","authors":["JM Patel - Getting Structured Data from the Internet, 2020"],"snippet":"… 5 examples. IAM and S3 sections are necessary for Chapters 6 and 7 since we will be using data compiled by a nonprofit called common crawl which is only publicly available on S3 through AWS open registry. You will have …","url":["https://link.springer.com/chapter/10.1007/978-1-4842-6576-5_3"]} -{"year":"2020","title":"Introduction to Common Crawl Datasets","authors":["JM Patel - Getting Structured Data from the Internet, 2020"],"snippet":"The Common Crawl Foundation (https://commoncrawl.org/) is a 501(c)(3) nonprofit involved in providing open access web crawl data going back to over eight years. 
They perform monthly web crawls which cover over 25 billion pages for each month. This …","url":["https://link.springer.com/chapter/10.1007/978-1-4842-6576-5_6"]} -{"year":"2020","title":"Introduction to Web Scraping","authors":["JM Patel - Getting Structured Data from the Internet, 2020"],"snippet":"… We will introduce natural language processing algorithms in Chapter 4, and we will put them into action in Chapters 6 and 7 on a Common Crawl dataset. The next step is loading the cleaned data from the preceding step into an appropriate database …","url":["https://link.springer.com/chapter/10.1007/978-1-4842-6576-5_1"]} -{"year":"2020","title":"Is Everything Fine, Grandma? Acoustic and Linguistic Modeling for Robust Elderly Speech Emotion Recognition","authors":["G Sogancıoglu, O Verkholyak, H Kaya, D Fedotov… - INTERSPEECH, Shanghai …, 2020","G Soğancıoğlu, O Verkholyak, H Kaya, D Fedotov… - arXiv preprint arXiv …, 2020"],"snippet":"… We use pre-trained 100-dimensional English and German word embeddings [23], which are trained on Common Crawl2, and finetune the pretrained model on our dataset … 1https://cloud.google.com/translate 2http://commoncrawl.org/ Page 3. cording to its POS …","url":["https://arxiv.org/pdf/2009.03432","https://indico2.conference4me.psnc.pl/event/35/contributions/3140/attachments/1218/1261/Wed-SS-1-4-12.pdf"]} -{"year":"2020","title":"Is language modeling enough? Evaluating effective embedding combinations","authors":["R Schneider, T Oberhauser, P Grundmann, FA Gers… - 2020"],"snippet":"… 2.1. Universal Text Embeddings Recently, researchers explore universal text embeddings trained on extensive Web corpora, such as the Common Crawl6 (Mikolov et al., 2018; Radford et al., 2019), the billion … 5https …","url":["https://eprints.soton.ac.uk/438613/1/LREC20_LM_TM_27_1_.pdf"]} -{"year":"2020","title":"Is MAP Decoding All You Need? 
The Inadequacy of the Mode in Neural Machine Translation","authors":["B Eikema, W Aziz - arXiv preprint arXiv:2005.10283, 2020"],"snippet":"… For English-Nepali we also use a translated version of the Penn Treebank4 and for English-Sinhala we additionally use Open Subtitles (Lison et al., 2018). We use a filtered crawl of Wikipedia and Common Crawl released in Guzmán et al …","url":["https://arxiv.org/pdf/2005.10283"]} -{"year":"2020","title":"Is Wikipedia succeeding in reducing gender bias? Assessing changes in gender bias in Wikipedia using word embeddings","authors":["KG Schmahl, TJ Viering, S Makrodimitris, AN Jahfari… - Proceedings of the Fourth …, 2020"],"snippet":"… These categories have shown significant bias towards male or female words in embeddings from Google News corpora [Mikolov et al., 2013a], Google Books [Jones et al., 2020], as well as a 'Common Crawl' corpus [Caliskan et al., 2017] …","url":["https://www.aclweb.org/anthology/2020.nlpcss-1.11.pdf"]} -{"year":"2020","title":"It's the Best Only When It Fits You Most: Finding Related Models for Serving Based on Dynamic Locality Sensitive Hashing","authors":["L Zhou, Z Wang, A Das, J Zou - arXiv preprint arXiv:2010.09474, 2020"],"snippet":"… BAIR CORD-19 LSUN Bedroom iNaturalist (iNat) 2017 ImageNet OpenImagesV4 Wikipedia 1 Billion Word Benchmark CommonCrawl Multillingual Wikipedia Natural Questions 3 15 3 3 8 10 9 7 2 5 58 3 5 2 5 2 CelebA HQ iMet Collection 2019 …","url":["https://arxiv.org/pdf/2010.09474"]} -{"year":"2020","title":"Italian Transformers Under the Linguistic Lens","authors":["A Miaschip, G Sartim, D Brunato, F Dell'Orletta… - Proceedings of the Seventh …, 2020"],"snippet":"… For instance, we can notice that, for both the probing models, features related to the distribution of syntactic relations (SyntacticDep) are better predicted by GePpeTto, while GilBERTo and UmBERTo-Commoncrawl are the best …","url":["http://ceur-ws.org/Vol-2769/paper_56.pdf"]} -{"year":"2020","title":"JASS: 
Japanese-specific Sequence to Sequence Pre-training for Neural Machine Translation","authors":["Z Mao, F Cromieres, R Dabre, H Song, S Kurohashi - arXiv preprint arXiv:2005.03361, 2020"],"snippet":"… Mono Ja Common Crawl 22M En News Crawl 22M Ru News Crawl 22M … 5.1.2. Monolingual data We use monolingual data containing 22M Japanese, 22M English and 22M Russian sentences randomly sub-sampled from Common Crawl dataset and News crawl4 dataset …","url":["https://arxiv.org/pdf/2005.03361"]} -{"year":"2020","title":"Joint Multiclass Debiasing of Word Embeddings","authors":["R Popović, F Lemmerich, M Strohmaier - arXiv preprint arXiv:2003.11520, 2020"],"snippet":"… As in previous studies [7], evaluation was done on three pretrained Word Embedding models with vector dimension of 300: FastText2(English webcrawl and Wikipedia, 2 million words), GloVe3(Common Crawl, Wikipedia …","url":["https://arxiv.org/pdf/2003.11520"]} -{"year":"2020","title":"Joint translation and unit conversion for end-to-end localization","authors":["G Dinu, P Mathur, M Federico, S Lauly, Y Al-Onaizan - arXiv preprint arXiv …, 2020","GDPMMFSL YaserAl-Onaizan, AWS Amazon"],"snippet":"… Europarl (Koehn, 2005) and news commentary data from WMT En→De shared task 2019 totalling 2.2 million sentences.2 Standard translation test sets do not have, however, enough examples of unit conversions and in fact corpora …","url":["https://arxiv.org/pdf/2004.05219","https://assets.amazon.science/b2/a7/e1ada6104b3587401b30ccc8637a/joint-translation-and-unit-conversion-for-end-to-end-localization.pdf"]} -{"year":"2020","title":"KBPearl: a knowledge base population system supported by joint entity and relation linking","authors":["X Lin, H Li, H Xin, Z Li, L Chen - Proceedings of the VLDB Endowment, 2020"],"snippet":"Page 1. 
KBPearl: A Knowledge Base Population System Supported by Joint Entity and Relation Linking Xueling Lin, Haoyang Li, Hao Xin, Zijian Li, Lei Chen Department of Computer Science and Engineering The Hong Kong …","url":["https://dl.acm.org/doi/pdf/10.14778/3384345.3384352"]} -{"year":"2020","title":"Keeping Models Consistent between Pretraining and Translation for Low-Resource Neural Machine Translation","authors":["W Zhang, X Li, Y Yang, R Dong, G Luo - Future Internet, 2020"],"snippet":"Recently, the pretraining of models has been successfully applied to unsupervised and semi-supervised neural machine translation. A cross-lingual language model uses a pretrained masked language model to initialize the …","url":["https://www.mdpi.com/1999-5903/12/12/215/pdf"]} -{"year":"2020","title":"Kernel compositional embedding and its application in linguistic structured data classification","authors":["H Ganji, MM Ebadzadeh, S Khadivi - Knowledge-Based Systems, 2020"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0950705120300460"]} -{"year":"2020","title":"Key Phrase Classification in Complex Assignments","authors":["M Ravikiran - arXiv preprint arXiv:2003.07019, 2020"],"snippet":"… Corpus and English Wikipedia used in BERT was found to be useful for training. The additional data included Common Crawl News dataset (76 GB), Web text corpus (38 GB) and Stories from Common Crawl (31 GB). 
This coupled …","url":["https://arxiv.org/pdf/2003.07019"]} -{"year":"2020","title":"Keynote speaker","authors":["M Benjamin"],"snippet":"Skip to content …","url":["https://asling.org/tc42/"]} -{"year":"2020","title":"Keyphrase Extraction as Sequence Labeling Using Contextualized Embeddings","authors":["D Sahrawat, D Mahata, H Zhang, M Kulkarni, A Sharma… - Advances in Information …, 2020"],"snippet":"… We also use 300 dimensional fixed embeddings from Glove [20], Word2Vec [19], and FastText [13] (common-crawl, wiki-news). We also compare the proposed architecture against four popular baselines …","url":["https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7148038/"]} -{"year":"2020","title":"KGvec2go--Knowledge Graph Embeddings as a Service","authors":["J Portisch, M Hladik, H Paulheim - arXiv preprint arXiv:2003.05809, 2020"],"snippet":"… knowledge graphs. 4.3. WebIsALOD The WebIsA database (Seitner et al., 2016) is a data set which consists of hypernymy relations extracted from the Common Crawl8, a downloadable copy of the Web. The extraction was …","url":["https://arxiv.org/pdf/2003.05809"]} -{"year":"2020","title":"KIT's IWSLT 2020 SLT Translation System","authors":["NQ Pham, F Schneider, TN Nguyen, TL Ha, TS Nguyen… - Proceedings of the 17th …, 2020"],"snippet":"… Table 2: Text Training Data Dataset Sentences TED Talks (TED) 220K Europarl (EPPS) 2.2MK CommonCrawl 2.1M Rapid 1.21M ParaCrawl 25.1M OpenSubtitles 12.6M WikiTitle 423K Back-translated News 26M Page 2. 56 3 Simultaneous Speech Translation …","url":["https://www.aclweb.org/anthology/2020.iwslt-1.4.pdf"]} -{"year":"2020","title":"KLEJ: Comprehensive Benchmark for Polish Language Understanding","authors":["P Rybak, R Mroczkowski, J Tracz, I Gawlik - arXiv preprint arXiv:2005.00630, 2020"],"snippet":"… word vectors. To evaluate their impact on KLEJ tasks, we initialize word embeddings with fastText (Bojanowski et al., 2016) trained on Common Crawl and Wikipedia for Polish language (Grave et al., 2018). 
4.1.3 ELMo ELMo …","url":["https://arxiv.org/pdf/2005.00630"]} -{"year":"2020","title":"KLUMSy@ KIPoS: Experiments on Part-of-Speech Tagging of Spoken Italian","authors":["T Proisl, G Lapesa"],"snippet":"… The PAISÀ corpus of Italian texts from the web (Lyding et al., 2014),5 the text of the Italian Wikimedia dumps,6 ie Wiki(pedia|books|news|versity|voyage), as extracted by Wikipedia Extractor,7 and the Italian subset of OSCAR …","url":["http://ceur-ws.org/Vol-2765/paper140.pdf"]} -{"year":"2020","title":"Knowledge Augmented Aspect Category Detection for Aspect-based Sentiment Analysis","authors":["K Martinen - 2019"],"snippet":"Page 1. MASTERTHESIS Knowledge Augmented Aspect Category Detection for Aspect-based Sentiment Analysis Kai Martinen 01.12.2019 University of Hamburg MIN-Faculty Department of Computer Science Language Technologies …","url":["https://www.inf.uni-hamburg.de/en/inst/ab/lt/teaching/theses/completed-theses/2019-ma-martinen.pdf"]} -{"year":"2020","title":"Knowledge Efficient Deep Learning for Natural Language Processing","authors":["H Wang - arXiv preprint arXiv:2008.12878, 2020"],"snippet":"Page 1. Knowledge Efficient Deep Learning for Natural Language Processing by Hai Wang A thesis submitted in partial fulfillment for the degree of Doctor of Philosophy in Computer Science at the Toyota Technological Institute …","url":["https://arxiv.org/pdf/2008.12878"]} -{"year":"2020","title":"Knowledge Graphs Evolution and Preservation--A Technical Report from ISWS 2019","authors":["N Abbas, K Alghamdi, M Alinam, F Alloatti, G Amaral… - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. Knowledge Graphs Evolution and Preservation A Technical Report from ISWS 2019 December 23, 2020 Bertinoro, Italy arXiv:2012.11936v1 [cs.AI] 22 Dec 2020 Page 2. 
Authors Main Editors Valentina Anita Carriero …","url":["https://arxiv.org/pdf/2012.11936"]} -{"year":"2020","title":"KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding","authors":["J Ham, YJ Choe, K Park, I Choi, H Soh - arXiv preprint arXiv:2004.03289, 2020"],"snippet":"… 20 days). We also use XLM-R (Conneau and Lample, 2019), a publicly available cross-lingual language model that was pre-trained on 2.5TB of Common Crawl corpora in 100 languages including Korean (54GB). Note that …","url":["https://arxiv.org/pdf/2004.03289"]} -{"year":"2020","title":"LAMBERT: Layout-Aware language Modeling using BERT for information extraction","authors":["Ł Garncarek, R Powalski, T Stanisławek, B Topolski… - arXiv preprint arXiv …, 2020"],"snippet":"… Dataset pages EDGAR 119 088 RVL-CDIP 90 054 Common Crawl 389 469 cTDaR 782 private 151 074 Total 750 467 Table 1: Sizes of training datasets … Common Crawl PDFs This is a dataset produced by downloading PDF …","url":["https://arxiv.org/pdf/2002.08087"]} -{"year":"2020","title":"Language model domain adaptation for automatic speech recognition","authors":["A Prasad, P Motlicek, A Nanchen - 2020"],"snippet":"… By exploring and exploiting various datasets like Common Crawl, Europarl, news and TEDLIUM and by experimenting different techniques in training a model, we achieve the goal of adapting a general purpose LM to a domain like talks …","url":["https://infoscience.epfl.ch/record/275402"]} -{"year":"2020","title":"Language Models and Word Sense Disambiguation: An Overview and Analysis","authors":["D Loureiro, K Rezaee, MT Pilehvar… - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. 
Language Models and Word Sense Disambiguation: An Overview and Analysis Daniel Loureiro∗ LIAAD - INESC TEC Department of Computer Science - FCUP University of Porto, Portugal Kiamehr Rezaee∗ Department …","url":["https://arxiv.org/pdf/2008.11608"]} -{"year":"2020","title":"Language Models are Few-Shot Learners","authors":["TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan… - arXiv preprint arXiv …, 2020"],"snippet":"… of our held-out validation set as an accurate measure of overfitting, and (3) we also added known high-quality reference corpora to the training mix to augment CommonCrawl and increase its diversity. Details of the first two …","url":["https://arxiv.org/pdf/2005.14165"]} -{"year":"2020","title":"Language Models are Open Knowledge Graphs","authors":["C Wang, X Liu, D Song - arXiv preprint arXiv:2010.11967, 2020"],"snippet":"… In fact, these pre-trained LMs automatically acquire factual knowledge from large-scale corpora (eg, BookCorpus (Zhu et al., 2015), Common Crawl (Brown et al., 2020)) via pre-training. The learned knowledge in pre-trained LMs is the key to the current success …","url":["https://arxiv.org/pdf/2010.11967"]} -{"year":"2020","title":"Language Models not just for Pre-training: Fast Online Neural Noisy Channel Modeling","authors":["S Bhosale, K Yee, S Edunov, M Auli - arXiv preprint arXiv:2011.07164, 2020"],"snippet":"… Big architecture. The model is trained on de-duplicated Romanian CommonCrawl data consisting of 623M sentences or 21.7B words after normalization and tokenization (Conneau et al., 2019; Wenzek et al., 2020). 
The German …","url":["https://arxiv.org/pdf/2011.07164"]} -{"year":"2020","title":"Language-agnostic BERT Sentence Embedding","authors":["F Feng, Y Yang, D Cer, N Arivazhagan, W Wang - arXiv preprint arXiv:2007.01852, 2020"],"snippet":"… The sentences are filtered using a sentence 1https://commoncrawl.org/ 2https://www.wikipedia.org/ 3Long lines are usually JavaScript or attempts at SEO … Finally, we pretrain on common crawl which is much larger, albeit …","url":["https://arxiv.org/pdf/2007.01852"]} -{"year":"2020","title":"Large Scale Knowledge Graph Based Synthetic Corpus Generation for Knowledge-Enhanced Language Model Pre-training","authors":["O Agarwal, H Ge, S Shakeri, R Al-Rfou - arXiv preprint arXiv:2010.12688, 2020"],"snippet":"… 3We use only one annotator at the moment but are expanding this evaluation to multiple annotators. Page 7. such as Wikipedia or common crawl. KGs are a rich source of factual information that can serve as additional succinct information …","url":["https://arxiv.org/pdf/2010.12688"]} -{"year":"2020","title":"Large-Scale Analysis of HTTP Response Headers","authors":["C Leyers, J Paytosh, N Worthy - 2020"],"snippet":"… The data come from the Common Crawl's monthly web crawls that collect responses from what we can consider to be the entire internet … The data come from the Common Crawl's monthly web crawls that collect responses …","url":["https://digitalcommons.winthrop.edu/source/SOURCE_2020/allpresentationsandperformances/101/"]} -{"year":"2020","title":"Latte-Mix: Measuring Sentence Semantic Similarity with Latent Categorical Mixtures","authors":["M Li, H Bai, L Tan, K Xiong, J Lin - arXiv preprint arXiv:2010.11351, 2020"],"snippet":"Page 1. Latte-Mix: Measuring Sentence Semantic Similarity with Latent Categorical Mixtures Minghan Li*1, 2 , He Bai1, 3 , Luchen Tan1 , Kun Xiong1 , Ming Li1, 3 , Jimmy Lin1, 3 1RSVP.ai, 2University of Toronto, 3David R. 
Cheriton …","url":["https://arxiv.org/pdf/2010.11351"]} -{"year":"2020","title":"LEAPME: Learning-based Property Matching with Embeddings","authors":["D Ayala Hernández, IC Hernández Salmerón… - ArXiv. org, arXiv …, 2020","D Ayala, I Hernández, D Ruiz, E Rahm - arXiv preprint arXiv:2010.01951, 2020"],"snippet":"… To compute embeddings, we use the pre-trained GloVe approach [43]1, specifically for the uncased Common Crawl corpus that includes 300-dimensional vectors for 1.9 million words, promising a good coverage …","url":["https://arxiv.org/pdf/2010.01951","https://idus.us.es/bitstream/handle/11441/105071/1/LEAPME%20Learning%20based%20Property%20Matching%20with%20Embeddings.pdf?sequence=1"]} -{"year":"2020","title":"Learning Accurate Integer Transformer Machine-Translation Models","authors":["E Wu - arXiv preprint arXiv:2001.00926, 2020"],"snippet":"… sor2Tensor v1.12 English-to-German translation task (translate_ende_wmt32k_packed). This dataset has 4.6 million sentence pairs drawn from three WMT18 [Bojar et al., 2018a] parallel corpora: News Commentary V13, Europarl V7, and Common Crawl …","url":["https://arxiv.org/pdf/2001.00926"]} -{"year":"2020","title":"Learning and Evaluating Emotion Lexicons for 91 Languages","authors":["S Buechel, S Rücker, U Hahn - arXiv preprint arXiv:2005.05672, 2020"],"snippet":"… We use the fastText embedding models from Grave et al. (2018) trained for 157 languages on the respective WIKIPEDIA and the respective part of COMMONCRAWL. These resources not only greatly facilitate our work …","url":["https://arxiv.org/pdf/2005.05672"]} -{"year":"2020","title":"Learning Dynamic Knowledge Graphs to Generalize on Text-Based Games","authors":["A Adhikari, X Yuan, MA Côté, M Zelinka, MA Rondeau… - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. 
Learning Dynamic Knowledge Graphs to Generalize on Text-Based Games Ashutosh Adhikari * 1 Xingdi Yuan * 2 Marc-Alexandre Côté * 2 Mikuláš Zelinka 3 Marc-Antoine Rondeau 2 Romain Laroche 2 Pascal Poupart …","url":["https://arxiv.org/pdf/2002.09127"]} -{"year":"2020","title":"Learning Engineering Properties with Bag-of-Tricks. For the Automated Evaluation of a Piping Design","authors":["WC Tan, KH Chua, CB Yan, IM Chen - 2020 IEEE 16th International Conference on …, 2020"],"snippet":"… and T. Mikolov, “Learning word vectors for 157 languages,” in Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018), 2018. [28] [Online]. Available: https://commoncrawl.org/ 1280","url":["https://ieeexplore.ieee.org/abstract/document/9217001/"]} -{"year":"2020","title":"LEARNING FROM MULTIMODAL WEB DATA","authors":["JM Hessel - 2020"],"snippet":"… (2018); Roemmele et al. (2011); De Marneffe et al. (2019); Clark et al. (2019). 7https://commoncrawl.org/ 5 Page 22. model; they achieve high performance on several video understanding tasks (eg, retrieval), and …","url":["https://jmhessel.com/files/2020/phd_thesis.pdf"]} -{"year":"2020","title":"Learning Geometric Word Meta-Embeddings","authors":["P Jawanpuria, NTV Dev, A Kunchukuttan, B Mishra - arXiv preprint arXiv:2004.09219, 2020"],"snippet":"… GloVe (Pennington et al., 2014): has 1 917 494 word embeddings trained on 42B tokens of web data from the common crawl. • fastText (Bojanowski et al., 2017): has 2 000 000 word embeddings trained on common crawl …","url":["https://arxiv.org/pdf/2004.09219"]} -{"year":"2020","title":"Learning hierarchical relationships for object-goal navigation","authors":["Y Qiu, A Pal, HI Christensen"],"snippet":"Page 1. Learning hierarchical relationships for object-goal navigation Yiding Qiu ∗ UC San Diego yiqiu@eng.ucsd.edu Anwesan Pal ∗ UC San Diego a2pal@eng.ucsd.edu Henrik I. 
Christensen UC San Diego hichristensen@eng.ucsd.edu …","url":["https://www.researchgate.net/profile/Anwesan_Pal/publication/346061932_Learning_hierarchical_relationships_for_object-goal_navigation/links/5fb9b05fa6fdcc6cc659d1b2/Learning-hierarchical-relationships-for-object-goal-navigation.pdf"]} -{"year":"2020","title":"Learning to Evaluate Translation Beyond English: BLEURT Submissions to the WMT Metrics 2020 Shared Task","authors":["T Sellam, A Pu, HW Chung, S Gehrmann, Q Tan… - arXiv preprint arXiv …, 2020"],"snippet":"… Details of MBERT-WMT pre-training We trained MBERT-WMT model with an MLM loss (Devlin et al., 2019), using a combination of public datasets: Wikipedia, the WMT 2019 News Crawl (Barrault et al.), the C4 variant of Com …","url":["https://arxiv.org/pdf/2010.04297"]} -{"year":"2020","title":"Learning to Segment Actions from Observation and Narration","authors":["D Fried, JB Alayrac, P Blunsom, C Dyer, S Clark… - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. Learning to Segment Actions from Observation and Narration Daniel Fried‡ Jean-Baptiste Alayrac† Phil Blunsom† Chris Dyer† Stephen Clark† Aida Nematzadeh† †DeepMind, London, UK ‡Computer Science Division, UC Berkeley …","url":["https://arxiv.org/pdf/2005.03684"]} -{"year":"2020","title":"Learning to summarize from human feedback","authors":["N Stiennon, L Ouyang, J Wu, DM Ziegler, R Lowe… - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. Learning to summarize from human feedback Nisan Stiennon∗ Long Ouyang∗ Jeff Wu∗ Daniel M. Ziegler∗ Ryan Lowe∗ Chelsea Voss∗ Alec Radford Dario Amodei Paul Christiano∗ OpenAI Abstract As language …","url":["https://arxiv.org/pdf/2009.01325"]} -{"year":"2020","title":"Learning unbiased zero-shot semantic segmentation networks via transductive transfer","authors":["H Liu, Y Wang, J Zhao, G Yang, F Lv - arXiv preprint arXiv:2007.00515, 2020"],"snippet":"… IV. 
EXPERIMENTS Following [7], we use the concatenation of two different word vectors, ie word2vec trained on Google News [12] and fastText trained on Common Crawl [12], to construct the semantic space shared by source and target classes …","url":["https://arxiv.org/pdf/2007.00515"]} -{"year":"2020","title":"Learning User Representations for Open Vocabulary Image Hashtag Prediction","authors":["T Durand - Proceedings of the IEEE/CVF Conference on Computer …, 2020"],"snippet":"… We train our model using ADAM [26] during 20 epochs with a start learning rate 5e-5. We use ResNet-50 [22] as the ConvNet and GloVe embeddings [34] as pre-trained word embeddings. GloVe was trained on Common …","url":["http://openaccess.thecvf.com/content_CVPR_2020/papers/Durand_Learning_User_Representations_for_Open_Vocabulary_Image_Hashtag_Prediction_CVPR_2020_paper.pdf"]} -{"year":"2020","title":"Learning Word and Sub-word Vectors for Amharic (Less Resourced Language)","authors":["A Eshetu, G Teshome, T Abebe"],"snippet":"… The large text collection from Wikipedia and common crawl are commonly used data source to train and learn word vectors (Al-Rfou et al., 2013; Bojanowski et al … (2014) released GloVe models trained on Wikipedia …","url":["https://www.academia.edu/download/64390049/39IJAERS-08202035-Learning.pdf"]} -{"year":"2020","title":"Leveraging ParsBERT and Pretrained mT5 for Persian Abstractive Text Summarization","authors":["M Farahani, M Gharachorloo, M Manthouri - arXiv preprint arXiv:2012.11204, 2020"],"snippet":"… T5, on the other hand, is a unified Seq2Seq framework that employs Text-to-Text format to address NLP text-based problems. 
A multilingual variation of the T5 model is called mT5 [16] that covers 101 different …","url":["https://arxiv.org/pdf/2012.11204"]} -{"year":"2020","title":"Leveraging Structured Metadata for Improving Question Answering on the Web","authors":["X Du, A Hassan, A Fourney, R Sim, P Bennett… - … of the 1st Conference of the …, 2020"],"snippet":"… website content. The Web Data Commons project (Mühleisen and Bizer, 2012) estimates that 0.9 billion HTML pages out of the 2.5 billion pages (37.1%) in the Common Crawl web corpus1 contain structured metadata. Figure …","url":["https://www.aclweb.org/anthology/2020.aacl-main.55.pdf"]} -{"year":"2020","title":"LIG-Health at Adhoc and Spoken IR Consumer Health Search: expanding queries using UMLS and FastText.","authors":["P Mulhem, GG Saez, A Mannion, D Schwab, J Frej - Conference and Labs of the …, 2020"],"snippet":"… The FastText embedding vector of a word is the sum of the vectors of its component ngrams. We used the pre-trained word vectors for English language, trained on Common Crawl and Wikipedia using FastText. The features of the model used are as follows; …","url":["http://www.dei.unipd.it/~ferro/CLEF-WN-Drafts/CLEF2020/paper_129.pdf"]} -{"year":"2020","title":"LIMSI@ WMT 2020","authors":["SA Rauf, JC Rosales, I Paris, PM Quang, S Paris…"],"snippet":"… Domain Corpus sents. words words (en) (de) web Paracrawl 50,875 978 919 economy Tilde EESC 2,858 61 58 news Commoncrawl 2,399 51 47 Tilde rapid 940 20 19 News commentary 361 8 8 tourism Tilde tourism 7 …","url":["http://statmt.org/wmt20/pdf/2020.wmt-1.86.pdf"]} -{"year":"2020","title":"Linguistic Structure Guided Context Modeling for Referring Image Segmentation","authors":["F Zhang, J Han","T Hui, S Liu, S Huang, G Li, S Yu, F Zhang, J Han"],"snippet":"… rate. CNN is fixed during training. We use batch size 1 and stop training after 700K iterations. GloVe word embeddings [30] pretrained on Common Crawl with 840B tokens are used to replace randomly initialized ones. 
For fair …","url":["http://colalab.org/media/paper/Linguistic_Structure_Guided_Context_Modeling_for_Referring_Image_Segmentation.pdf","https://link.springer.com/content/pdf/10.1007/978-3-030-58607-2_4.pdf"]} -{"year":"2020","title":"Linguistically-aware Attention for Reducing the Semantic-Gap in Vision-Language Tasks","authors":["G KV, A Nambiar, KS Srinivas, A Mittal - arXiv preprint arXiv:2008.08012, 2020"],"snippet":"… The pre-trained word-to-vector networks such as Glove [29] and Bert [30] are inexpensive and rich in making linguistic correlations (since they are already trained on a large textual corpus such as Common Crawl and Wikipedia2014) …","url":["https://arxiv.org/pdf/2008.08012"]} -{"year":"2020","title":"LNMap: Departures from Isomorphic Assumption in Bilingual Lexicon Induction Through Non-Linear Mapping in Latent Space","authors":["T Mohiuddin, MS Bari, S Joty - arXiv preprint arXiv:2004.13889, 2020"],"snippet":"… English, Italian, and German embeddings were trained on WacKy crawling corpora using CBOW (Mikolov et al., 2013b), while Spanish and Finnish embeddings were trained on WMT News Crawl and Common Crawl, respectively. 4.2 Baseline Methods …","url":["https://arxiv.org/pdf/2004.13889"]} -{"year":"2020","title":"Localizing Open-Ontology QA Semantic Parsers in a Day Using Machine Translation","authors":["M Moradshahi, G Campagna, SJ Semnani, S Xu… - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. Localizing Open-Ontology QA Semantic Parsers in a Day Using Machine Translation Mehrad Moradshahi Giovanni Campagna Sina J. Semnani Silei Xu Monica S. Lam Computer Science Department Stanford University …","url":["https://arxiv.org/pdf/2010.05106"]} -{"year":"2020","title":"Localizing Q&A Semantic Parsers for Any Language in a Day","authors":["M Moradshahi, G Campagna, S Semnani, S Xu, M Lam - Proceedings of the 2020 …, 2020"],"snippet":"Page 1. 
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pages 5970–5983, November 16–20, 2020. c 2020 Association for Computational Linguistics 5970 Localizing Open-Ontology …","url":["https://www.aclweb.org/anthology/2020.emnlp-main.481.pdf"]} -{"year":"2020","title":"Locally Constructing Product Taxonomies from Scratch Using Representation Learning","authors":["M Kejriwal, RK Selvam, CC Ni, N Torzec"],"snippet":"… The WDC schema.org project, which relies on the webpages in the Common Crawl, is able to automatically extract schema.org data from webpages due to its unique syntax and make it available as a dataset in Resource Description Framework (RDF) …","url":["https://web.ntpu.edu.tw/~myday/doc/ASONAM2020/ASONAM2020_Proceedings/pdf/papers/080_098_507.pdf"]} -{"year":"2020","title":"LOREM: Language-consistent Open Relation Extraction from Unstructured Text","authors":["T Harting, S Mesbah, C Lofi"],"snippet":"… Since these sentences are automatically tagged, we do expect a higher noise level than in the manually tagged test sets. For the language-individual model, we use FastText word embeddings [11] which are trained on Common Crawl and Wikipedia dataset …","url":["https://pure.tudelft.nl/portal/files/69221720/2020_WWW_LOREM.pdf"]} -{"year":"2020","title":"Lost in Embedding Space: Explaining Cross-Lingual Task Performance with Eigenvalue Divergence","authors":["H Dubossarsky, I Vulić, R Reichart, A Korhonen - arXiv preprint arXiv:2001.11136, 2020"],"snippet":"… We prefer Wikipedia as the main embedding training corpus over the larger Common Crawl corpus, because the text in Wikipedia is much cleaner or even hand-curated, and adheres to the rules of standard language (Grave et al., 2018) …","url":["https://arxiv.org/pdf/2001.11136"]} -{"year":"2020","title":"Low-Resource Knowledge-Grounded Dialogue Generation","authors":["X Zhao, W Wu, C Tao, C Xu, D Zhao, R Yan - arXiv preprint arXiv:2002.10348, 2020"],"snippet":"Page 1. 
Published as a conference paper at ICLR 2020 LOW-RESOURCE KNOWLEDGE-GROUNDED DIALOGUE GENERATION Xueliang Zhao1,2, Wei Wu3, Chongyang Tao1, Can Xu3, Dongyan Zhao1,2, Rui Yan1,2,4∗ 1Wangxuan …","url":["https://arxiv.org/pdf/2002.10348"]} -{"year":"2020","title":"Low-Resource Text Classification via Cross-lingual Language Model Fine-tuning","authors":["X Li, Z Li, J Sheng, W Slamu"],"snippet":"… XLM - R shows the possibility of training one model for many languages while not sacrificing per-language performance. It is trained on 2.5TB of CommonCrawl data, in 100 languages and uses a large vocabulary size of CCL 2020 …","url":["http://www.cips-cl.org/static/anthology/CCL-2020/CCL-20-092.pdf"]} -{"year":"2020","title":"LREC 2020 Workshop Language Resources and Evaluation Conference 11–16 May 2020","authors":["M Kupietz, H Lungen, I Pisetta"],"snippet":"Page 1. LREC 2020 Workshop Language Resources and Evaluation Conference 11–16 May 2020 8th Workshop on Challenges in the Management of Large Corpora (CMLC-8) PROCEEDINGS Editors: Piotr Ba´nski, Adrien Barbaresi, Simon Clematide …","url":["https://ids-pub.bsz-bw.de/files/9811/Banski_Barbaresi_Clematide_Kupietz_Luengen_Pisetta_Proceedings_LREC_2020.pdf"]} -{"year":"2020","title":"Machine Bias and Fundamental Rights","authors":["D Amilevičius - Smart Technologies and Fundamental Rights, 2020"],"snippet":"Jump to Content Jump to Main Navigation. English; 中文; français; Deutsch. Access via: Google Googlebot - Web Crawler SEO. Login to my Brill account Create Brill Account. Publications. Subjects. 
African Studies American Studies …","url":["https://brill.com/view/book/edcoll/9789004437876/BP000019.xml"]} -{"year":"2020","title":"Machine Translation for English–Inuktitut with Segmentation, Data Acquisition and Pre-Training","authors":["C Roest, L Edman, G Minnema, K Kelly, J Spenader… - Proceedings of the Fifth …, 2020"],"snippet":"… 2), we train XLM models with the News Crawl data for English and Common Crawl data for Inuktitut, as specified in Table 2. We also use Hansards and Newsdevtrain oversampled 5 times for parallel data. We try both tagging …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.29.pdf"]} -{"year":"2020","title":"Machine Translation in Natural Language Processing by Implementing Artificial Neural Network Modelling Techniques: An Analysis","authors":["FA Khan, A Abubakar - International Journal on Perceptive and Cognitive …, 2020"],"snippet":"… The experiment for the proposed model has followed with BERT, to BookCorpus [35] and Wikipedia for English as an initializing point for pretraining. 
Similarly, other text includes, Giga5 (16Gb) ClueWeb 2012-B and Common Crawl respectively …","url":["https://journals.iium.edu.my/kict/index.php/IJPCC/article/download/134/96"]} -{"year":"2020","title":"Machine Translation Reference-less Evaluation using YiSi-2 with Bilingual Mappings of Massive Multilingual Language Model","authors":["C Lo, S Larkin - Proceedings of the Fifth Conference on Machine …, 2020"],"snippet":"… The differences between XLM-R and BERT are 1) XLM-R is trained on the CommonCrawl corpus which is significantly larger than the Wikipedia training data used by BERT; 2) instead of a uniform data sampling rate used in BERT …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.100.pdf"]} -{"year":"2020","title":"Machine Translation System Selection from Bandit Feedback","authors":["J Naradowsky, X Zhang, K Duh - arXiv preprint arXiv:2002.09646, 2020"],"snippet":"… Specifically, we include OpenSubtitles2018 [Lison and Tiedemann, 2016] and WMT 2017 [Bojar et al., 2017], which contains data from eg parliamentary proceedings (Europarl, UN), political/economic news, and web-crawled parallel corpus (Common Crawl) …","url":["https://arxiv.org/pdf/2002.09646"]} -{"year":"2020","title":"MAD-X: An Adapter-based Framework for Multi-task Cross-lingual Transfer","authors":["J Pfeiffer, I Vulić, I Gurevych, S Ruder - arXiv preprint arXiv:2005.00052, 2020"],"snippet":"… It is a Transformer-based model pretrained for one hundred languages on large cleaned Common Crawl corpora (Wenzek et al., 2019). For efficiency purposes, we use the XLM-R Base configuration as the basis for all of our experiments …","url":["https://arxiv.org/pdf/2005.00052"]} -{"year":"2020","title":"Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation","authors":["N Reimers, I Gurevych - arXiv preprint arXiv:2004.09813, 2020"],"snippet":"Page 1. 
Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation Nils Reimers and Iryna Gurevych Ubiquitous Knowledge Processing Lab (UKP-TUDA) Department of Computer Science …","url":["https://arxiv.org/pdf/2004.09813"]} -{"year":"2020","title":"MantisTable SE: an Efficient Approach for the Semantic Table Interpretation","authors":["M Cremaschi, R Avogadro, A Barazzetti, D Chieregato - Semantic Web Challenge on …, 2020"],"snippet":"… Web Tables: The WebTables system [1] extracts 14.1 billion HTML tables and finds 154 million are high-quality tables (1.1%); – Web Tables: Lehmberg et al. [5] extract 233 million content tables from Common Crawl 2015 (2.25 …","url":["http://ceur-ws.org/Vol-2775/paper8.pdf"]} -{"year":"2020","title":"Mapping crime descriptions to law articles using deep learning","authors":["M Vink, N Netten, MS Bargh, S van den Braak… - Proceedings of the 13th …, 2020"],"snippet":"… This is a popular word embedding created by Facebook and available in many languages. The FastText embeddings are trained on Wikipedia texts and the data from the common crawl project. 
The word embeddings have a vector size of 300 …","url":["https://dl.acm.org/doi/abs/10.1145/3428502.3428507"]} -{"year":"2020","title":"Mapping Languages: The Corpus of Global Language Use","authors":["J Dunn"],"snippet":"… language) and 156 countries (again with over 1 million words from each country), all distilled from Common Crawl web data … region: (i) the number of sites indexed by the Common Crawl; (ii) the population's degree of access …","url":["https://publicdata.canterbury.ac.nz/Research/Geocorpus/Documentation/!Paper.Corpus_of_Global_Language_Use.pdf"]} -{"year":"2020","title":"Mapping the market for remanufacturing: An application of “Big Data” analytics","authors":["JQF Netoa, M Dutordoira - International Journal of Production Economics, 2020"],"snippet":"… The vectors are created with Global Vectors for Word Representation (GloVe), one of the most well-known word embedding methods (Pennington et al., 2014), and are based on a data set obtained from Common Crawl, a nonprofit …","url":["https://www.sciencedirect.com/science/article/pii/S092552732030181X"]} -{"year":"2020","title":"MASK: A flexible framework to facilitate de-identification of clinical texts","authors":["N Milosevic, G Kalappa, H Dadafarin, M Azimaee… - arXiv preprint arXiv …, 2020"],"snippet":"… The first of these approaches, used GLoVe (Global Vector) word embeddings [14]. We used GLoVe embeddings trained on common crawl data containing 840 billion tokens, 2.2 million unique tokens in vocabulary, and 300-dimensional vectors …","url":["https://arxiv.org/pdf/2005.11687"]} -{"year":"2020","title":"Masked ELMo: An evolution of ELMo towards fully contextual RNN language models","authors":["G Senay, E Salin - arXiv preprint arXiv:2010.04302, 2020"],"snippet":"… It should be noted that ELMo 5.5B is trained on a larger corpus than ELMo and Masked ELMo (Wikipedia: 1.9B and the common crawl from WMT 2008-2012: 3.6B). 
Moreover, a BERT baseline (BERT*) trained on the same …","url":["https://arxiv.org/pdf/2010.04302"]} -{"year":"2020","title":"Massive vs. Curated Embeddings for Low-Resourced Languages: the Case of Yorùbá and Twi","authors":["J Alabi, K Amponsah-Kaakyire, D Adelani… - Proceedings of The 12th …, 2020"],"snippet":"… The resource par excellence is Wikipedia2, an online encyclopedia currently available in 307 languages3. Other initiatives such as Common Crawl4 or the Jehovahs Witnesses site5 are also repositories for multilingual …","url":["https://www.aclweb.org/anthology/2020.lrec-1.335.pdf"]} -{"year":"2020","title":"Massively Multilingual Document Alignment with Cross-lingual Sentence-Mover's Distance","authors":["A El-Kishky, F Guzmán - arXiv preprint arXiv:2002.00761, 2020"],"snippet":"… selected for evaluation. Baseline Methods. For comparison, we implemented two existing and intuitive document scoring baselines previously evaluated on this URL-Aligned CommonCrawl dataset [11]. The first method dubbed …","url":["https://arxiv.org/pdf/2002.00761"]} -{"year":"2020","title":"Master Thesis: Developing a Cross-Lingual Named Entity Recognition Model","authors":["J Podolak, P Zeinert - 2020"],"snippet":"Page 1. Master Thesis: Developing a Cross-Lingual Named Entity Recognition Model Jowita Podolak1, Philine Zeinert2 1jopo@itu.dk 2phze@itu.dk June 1, 2020 Course Code: KISPECI1SE Page 2. Abstract To build a Cross …","url":["https://www.derczynski.com/itu/docs/xling-ner_jopo_phze.pdf"]} -{"year":"2020","title":"Matching Job Applicants to Free Text Job Ads Using Semantic Networks and Natural Language Inference","authors":["A Thun - 2020"],"snippet":"Page 1. 
IN DEGREE PROJECT COMPUTER SCIENCE AND ENGINEERING, SECOND CYCLE, 30 CREDITS , STOCKHOLM SWEDEN 2020 Matching Job Applicants to Free Text Job Ads Using Semantic Networks and Natural Language Inference ANTON THUN …","url":["https://www.diva-portal.org/smash/get/diva2:1467916/FULLTEXT01.pdf"]} -{"year":"2020","title":"MDD@ AMI: Vanilla Classifiers for Misogyny Identification","authors":["S El Abassi, S Nisioi - Proceedings of Sixth Evaluation Campaign of Natural …, 2020"],"snippet":"… CBOW embeddings pre-trained on Wikipedia and OSCAR (Common Crawl)3. The second run is trained on English glove em- beddings that surprisingly contain the representation of more than half of our Italian …","url":["http://ceur-ws.org/Vol-2765/paper149.pdf"]} -{"year":"2020","title":"MEASURING DIVERGENT THINKING ORIGINALITY WITH HUMAN RATERS AND TEXT-MINING MODELS: A PSYCHOMETRIC COMPARISON OF METHODS","authors":["D Dumasa, P Organisciaka, M Dohertyb"],"snippet":"Page 1. Running Head: MEASURING ORIGINALITY 1 MEASURING DIVERGENT THINKING ORIGINALITY WITH HUMAN RATERS AND TEXT-MINING MODELS: A PSYCHOMETRIC COMPARISON OF METHODS …","url":["https://www.researchgate.net/profile/Denis_Dumas/publication/339364072_Measuring_Divergent_Thinking_Originality_with_Human_Raters_and_Text-Mining_Models_A_Psychometric_Comparison_of_Methods/links/5e4d686892851c7f7f46b607/Measuring-Divergent-Thinking-Originality-with-Human-Raters-and-Text-Mining-Models-A-Psychometric-Comparison-of-Methods.pdf"]} -{"year":"2020","title":"Measuring prominence of scientific work in online news as a proxy for impact","authors":["J Ravenscroft, A Clare, M Liakata - arXiv preprint arXiv:2007.14454, 2020"],"snippet":"… In our task we employ pre-trained GloVe4 feature embeddings trained on the Common Crawl dataset5, a multi-petabyte archive of content scraped from the world wide web containing 42 billion tokens and a vocabulary 1.9 million words …","url":["https://arxiv.org/pdf/2007.14454"]} -{"year":"2020","title":"Media-Analytics. 
org: A Resource to Research Language Usage by News Media Outlets","authors":["D Rozado - ITM Web of Conferences, 2020"],"snippet":"… news media outlets. News articles textual content are available in outlet-specific domains and Internet cache repositories such as the Internet Archive Wayback Machine, Google Cache and Common Crawl. Articles' headlines …","url":["https://www.itm-conferences.org/articles/itmconf/pdf/2020/03/itmconf_ictessh2020_03004.pdf"]} -{"year":"2020","title":"Membership Inference Attacks on Sequence-to-Sequence Models: Is My Data In Your Machine Translation System?","authors":["S Hisamoto, M Post, K Duh"],"snippet":"… Page 3. CommonCrawl subcorpus … We now describe how Carol prepares the data for Alice and Bob. First, Carol se- lects 4 subcorpora for the training data of Al- ice, namely CommonCrawl, Europarl v7, News Commentary v13, and Rapid 2016 …","url":["http://www.cs.jhu.edu/~kevinduh/t/membership-inference.pdf"]} -{"year":"2020","title":"Meta-Learning for Few-Shot NMT Adaptation","authors":["A Sharaf, H Hassan, H Daumé III - arXiv preprint arXiv:2004.02745, 2020"],"snippet":"… In all cases, the baseline machine translation system is a neural English to German (En-De) transformer model (Vaswani et al., 2017), initially trained on 5.2M sentences filtered from the the standard parallel data …","url":["https://arxiv.org/pdf/2004.02745"]} -{"year":"2020","title":"Method and apparatus for improved automatic subtitle segmentation using an artificial neural network model","authors":["P WILKEN, E Matusov - US Patent App. 
16/876,780, 2020"],"snippet":"… These data included all other publicly available training data, including ParaCrawl, CommonCrawl, EUbookshop, JRCAcquis, EMEA, and other corpora from the OPUS collection … This may be done to avoid oversampling …","url":["https://patentimages.storage.googleapis.com/d1/56/2b/7b6e0c087c851d/US20200364402A1.pdf"]} -{"year":"2020","title":"Method and system for interactive keyword optimization for opaque search engines","authors":["R Puzis, A ELYASHAR, M REUBEN - US Patent App. 16/840,538, 2020"],"snippet":"… 2018]. The model was trained on Common Crawl (http://commoncrawl.org/) and Wikipedia (https://www.wikipedia.org/) using fastText library (https://fasttext.cc/). For the distance measure, the simple Euclidean distance was used …","url":["https://patents.google.com/patent/US20200327120A1/en"]} -{"year":"2020","title":"Method for automatically generating a wrapper for extracting web data, and a computer system","authors":["G Gottlob, E SALLINGER, R FAYZRAKHMANOV… - US Patent App. 16/630,485, 2020"],"snippet":"US20200167393A1 - Method for automatically generating a wrapper for extracting web data, and a computer system - Google Patents. Method for automatically generating a wrapper for extracting web data, and a computer system. Download PDF Info …","url":["https://patents.google.com/patent/US20200167393A1/en"]} -{"year":"2020","title":"Methods for morphology learning in low (er)-resource scenarios","authors":["T Bergmanis - 2020"],"snippet":"Page 1. This thesis has been submitted in fulfilment of the requirements for a postgraduate degree (eg PhD, MPhil, DClinPsychol) at the University of Edinburgh. Please note the following terms and conditions of use: This work …","url":["https://era.ed.ac.uk/bitstream/handle/1842/37115/Bergmanis2020_Redacted.pdf?sequence=3&isAllowed=y"]} -{"year":"2020","title":"Metrics and tools for exploring toxicity in social media","authors":["PMFN da Silva - 2020"],"snippet":"Page 1. 
FACULDADE DE ENGENHARIA DA UNIVERSIDADE DO PORTO Metrics and tools for exploring toxicity in social media Pedro Silva Mestrado Integrado em Engenharia Informática e Computação Supervisor: Sérgio …","url":["https://repositorio-aberto.up.pt/bitstream/10216/128545/2/412412.pdf"]} -{"year":"2020","title":"Mis-shapes, Mistakes, Misfits: An Analysis of Domain Classification Services","authors":["P Vallina, V Le Pochat, Á Feal, M Paraschiv, J Gamba…"],"snippet":"… service. Their popularity is further reflected by the fact that 47% of the 4.4M domains are indexed in the Chrome User Experience Report [71] and 0.5% by Common Crawl [72], both generated between August and October 2019 …","url":["https://lepoch.at/files/domain-classification-imc20.pdf"]} -{"year":"2020","title":"Mitigating Bias in Deep Nets with Knowledge Bases: the Case of Natural Language Understanding for Robots","authors":["M Mensio, E Bastianelli, I Tiddi, G Rizzo"],"snippet":"… The connections in green represent highway connections be- tween the first and the third layer. over the Common Crawl resource3 … For this reason, it can intrinsically provide an ex- planation for the model behavior, as it summarizes a much 3http://commoncrawl.org …","url":["http://ceur-ws.org/Vol-2600/paper20.pdf"]} -{"year":"2020","title":"Mitigating Gender Bias in Machine Learning Data Sets","authors":["S Leavy, G Meaney, K Wade, D Greene - arXiv preprint arXiv:2005.06898, 2020"],"snippet":"… evaluation of system accuracy and learned associations in machine learning technologies that underlie many search and recommendation systems [9]. Implicit Association Tests (IATs) were found to be effective in uncovering …","url":["https://arxiv.org/pdf/2005.06898"]} -{"year":"2020","title":"Modeling Recurring Concepts in Single-label and Multi-label Streams","authors":["Z Ahmadi"],"snippet":"Page 1. 
Modeling Recurring Concepts in Single-label and Multi-label Streams A thesis submitted for the degree of DN at the Department of Physics, Mathematics and Computer Science at the Johannes …","url":["https://publications.ub.uni-mainz.de/theses/volltexte/2019/100003220/pdf/100003220.pdf"]} -{"year":"2020","title":"Modeling remotely collected speech data: Applications for psychiatry","authors":["TB Holmlund - 2020"],"snippet":"… To base the analysis on a corpus with a wide variety of animal-word sources, we used a set of pretrained word vectors calculated from approximately 42 billion tokens from the entire internet, courtesy of the Common Crawl project (Pennington et al., 2014) …","url":["https://munin.uit.no/bitstream/handle/10037/17098/paper_III.pdf?sequence=8"]} -{"year":"2020","title":"Modeling the Music Genre Perception across Language-Bound Cultures","authors":["EV Epure, G Salha, M Moussallam, R Hennequin - arXiv preprint arXiv:2010.06325, 2020"],"snippet":"… representations as described next. Multilingual Static Word Embeddings. The classical word embeddings we study are the multilingual fastText word vectors trained on Wikipedia and Common Crawl (Grave et al., 2018). The model is an …","url":["https://arxiv.org/pdf/2010.06325"]} -{"year":"2020","title":"Modeling Word Formation in English–German Neural Machine Translation","authors":["M Weller-Di Marco, A Fraser - Proceedings of the 58th Annual Meeting of the …, 2020"],"snippet":"… We compare four training settings: small (248,730 sentences: newscommentary), large2M (1,956,444 sentences: Europarl + news-commentary), large4M (4,116,215 sentences: Europarl + news-commentary + …","url":["https://www.aclweb.org/anthology/2020.acl-main.389.pdf"]} -{"year":"2020","title":"Moral Concerns are Differentially Observable in Language","authors":["B Kennedy, M Atari, AM Davani, J Hoover, A Omrani… - 2020"],"snippet":"… lexical semantic relatedness. 
We compute text representations by averaging GloVe word embedding vectors (Pennington et al., 2014) which were trained on text from the Common Crawl3. In 2 https://github.com/lda-project/lda …","url":["https://psyarxiv.com/uqmty/download?format=pdf"]} -{"year":"2020","title":"Moral Framing and Ideological Bias of News","authors":["K Lerman - … : 12th International Conference, SocInfo 2020, Pisa …","N Mokhberian, A Abeliuk, P Cummings, K Lerman - arXiv preprint arXiv:2009.12979, 2020"],"snippet":"… then and V the m− denote semantic to axis set corresponding to this MF dimension is: Am= mean (V m+)− mean (V m−)(1) For the computations of this part, the embeddings of words are obtained from the pretrained GloVe …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=6tIBEAAAQBAJ&oi=fnd&pg=PA206&dq=commoncrawl&ots=3yMGfdMsr5&sig=MyXXhdRNotNI087l9_nvaNHNjhI","https://arxiv.org/pdf/2009.12979"]} -{"year":"2020","title":"Morphological and pseudomorphological effects in English visual word processing: How much can we attribute the statistical structure of the language?","authors":["P Stevens, D Plaut"],"snippet":"… Real-valued 300dimensional semantic vectors generated from the Common Crawl internet text corpus were converted to 200-dimensional binary vectors using a binary multidimensional scaling algorithm (Rohde, 2002) …","url":["https://cognitivesciencesociety.org/cogsci20/papers/0399/0399.pdf"]} -{"year":"2020","title":"Morphological Skip-Gram: Using morphological knowledge to improve word representation","authors":["F Santos, H Macedo, T Bispo, C Zanchetting - arXiv preprint arXiv:2007.10055, 2020"],"snippet":"… Keeping the quality of word embeddings and decreasing training time is very important because usually, a corpus to training embeddings is composed of 1B tokens. 
For example, the Common Crawl corpora contain 820B tokens …","url":["https://arxiv.org/pdf/2007.10055"]} -{"year":"2020","title":"mT5: A massively multilingual pre-trained text-to-text transformer","authors":["A Roberts, A Barua, A Siddhant, C Raffel, L Xue… - 2021","L Xue, N Constant, A Roberts, M Kale, R Al-Rfou… - arXiv preprint arXiv …, 2020"],"snippet":"… In this paper, we in- troduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages … mC4 comprises natural text in 101 languages drawn from the public Common Crawl web scrape …","url":["https://arxiv.org/pdf/2010.11934","https://research.google/pubs/pub50316/"]} -{"year":"2020","title":"Multi-Label Sentiment Analysis on 100 Languages with Dynamic Weighting for Label Imbalance","authors":["SF Yilmaz, EB Kaynak, A Koç, H Dibeklioğlu, SS Kozat - arXiv preprint arXiv …, 2020"],"snippet":"… 2, we use XLM-RoBERTa pretrained tokenizer and pretrained model [17]. XLM-RoBERTa is pretrained on CommonCrawl corpora of 100 different languages. We first tokenize the input sentence si into subword units via …","url":["https://arxiv.org/pdf/2008.11573"]} -{"year":"2020","title":"Multi-model transfer and optimization for cloze task","authors":["J Tang, L Ling, C Ma, H Zhang, J Huang - … on Artificial Intelligence and Robotics 2020, 2020"],"snippet":"… Transformer Enc. PLM ≈ BERT WikiEn+BookCorpus+Giga5 +ClueWeb+Common Crawl RoBERTa Transformer Enc. MLM 355M BookCorpus+CCNews +OpenWebText+STORIES XLM-R Transformer Enc. 
MLM 550M CommonCrawl ALBERT Transformer Enc …","url":["https://www.spiedigitallibrary.org/conference-proceedings-of-spie/11574/115740T/Multi-model-transfer-and-optimization-for-cloze-task/10.1117/12.2579412.short"]} -{"year":"2020","title":"Multi-SimLex: A Large-Scale Evaluation of Multilingual and Cross-Lingual Lexical Semantic Similarity","authors":["I Vulić, S Baker, EM Ponti, U Petti, I Leviant, K Wing… - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. Multi-SimLex: A Large-Scale Evaluation of Multilingual and Cross-Lingual Lexical Semantic Similarity https://multisimlex.com/ Ivan Vulic ∗♠ LTL, University of Cambridge Simon Baker ∗♠ LTL, University of Cambridge …","url":["https://arxiv.org/pdf/2003.04866"]} -{"year":"2020","title":"Multilingual AMR-to-Text Generation","authors":["A Fan, C Gardent - Proceedings of the 2020 Conference on Empirical …, 2020"],"snippet":"… 4.1 Data Pretraining For encoder pretraining on silver AMR, we take thirty million sentences from the English portion of CCNET 2 (Wenzek et al., 2019), a cleaned version of Common Crawl (an open source version of the web) …","url":["https://www.aclweb.org/anthology/2020.emnlp-main.231.pdf"]} -{"year":"2020","title":"Multilingual Denoising Pre-training for Neural Machine Translation","authors":["Y Liu, J Gu, N Goyal, X Li, S Edunov, M Ghazvininejad… - arXiv preprint arXiv …, 2020"],"snippet":"… 2 Multilingual Denoising Pre-training We use a large-scale common crawl (CC) corpus (§2.1) to pre-train BART models (§2.2). Our ex- periments in the later sections involve finetuning a range of models pre-trained on different subsets of the CC languages §2.3) …","url":["https://arxiv.org/pdf/2001.08210"]} -{"year":"2020","title":"Multilingual Dependency Parsing from Universal Dependencies to Sesame Street","authors":["J Nivre - International Conference on Text, Speech, and …, 2020"],"snippet":"… pre-trained models provided by Che et al. 
[3], who train ELMo on 20 million words randomly sampled from raw WikiDump and Common Crawl datasets for 44 languages. For BERT, we employ the pretrained multilingual cased …","url":["https://link.springer.com/chapter/10.1007/978-3-030-58323-1_2"]} -{"year":"2020","title":"Multilingual Factual Knowledge Retrieval from Pretrained Language Models","authors":["Z Jiang, A Anastasopoulos, J Araki, H Ding, G Neubig - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. X-FACTR: Multilingual Factual Knowledge Retrieval from Pretrained Language Models Zhengbao Jiang†, Antonios Anastasopoulos♣,∗, Jun Araki‡, Haibo Ding‡, Graham Neubig† †Languages Technologies Institute …","url":["https://arxiv.org/pdf/2010.06189"]} -{"year":"2020","title":"Multilingual Legal Information Retrieval System for Mapping Recitals and Normative Provisions","authors":["AK JOHN - Legal Knowledge and Information Systems: JURIX …, 2020"],"snippet":"… cc/docs/en/crawl-vectors. html): pre-trained on Common Crawl and Wikipedia. We used the word-average method, which divides the sum of the word embeddings in a legal norm by the norm length. The embedding di- mension size was set to 128 …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=fy4NEAAAQBAJ&oi=fnd&pg=PA123&dq=commoncrawl&ots=BUOuNY0sH2&sig=nWyRhj_URXdbdS30qkO5Z2kfHcY"]} -{"year":"2020","title":"Multilingual Probing Tasks for Word Representations","authors":["G Gül Şahin, C Vania, I Kuznetsov, I Gurevych - Computational Linguistics"],"snippet":"Page 1. Computational Linguistics Just Accepted MS. 
https://doi.org/10.1162/ COLI_a_00376 © Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license LINSPECTOR Multilingual Probing Tasks for Word Representations …","url":["https://www.mitpressjournals.org/doi/pdf/10.1162/COLI_a_00376"]} -{"year":"2020","title":"Multilingual Stance Detection in Tweets: The Catalonia Independence Corpus","authors":["E Zotova, R Agerri, M Nuñez, G Rigau - … of The 12th Language Resources and …, 2020"],"snippet":"… Initial experimentation showed that the Common Crawl5 models performed better for our particular task. The Common Crawl models are trained using a Continuous Bag-of-Words (CBOW) architecture with position-weights and …","url":["https://www.aclweb.org/anthology/2020.lrec-1.171.pdf"]} -{"year":"2020","title":"Multilingual Stance Detection: The Catalonia Independence Corpus","authors":["E Zotova, R Agerri, M Nuñez, G Rigau - arXiv preprint arXiv:2004.00050, 2020"],"snippet":"… task. The Common Crawl models are trained using a Continuous Bag-of-Words (CBOW) architecture with position-weights and 300 di- mensions on a vocabulary of 2M words … information. 5http://commoncrawl.org/ Page 4. 3.5 …","url":["https://arxiv.org/pdf/2004.00050"]} -{"year":"2020","title":"Multilingual Translation with Extensible Multilingual Pretraining and Finetuning","authors":["Y Tang, C Tran, X Li, PJ Chen, N Goyal, V Chaudhary… - arXiv preprint arXiv …, 2020"],"snippet":"… challenging to train from scratch. In contrast, monolingual data exists even for low resource languages, particularly in resources such as Wikipedia or Commoncrawl, a version of the web. 
Thus, leveraging this monolingual …","url":["https://arxiv.org/pdf/2008.00401"]} -{"year":"2020","title":"Multilingual Unsupervised Sentence Simplification","authors":["L Martin, A Fan, É de la Clergerie, A Bordes, B Sagot - arXiv preprint arXiv …, 2020"],"snippet":"… We mine a large quantity of paraphrases from the Common Crawl using libraries such as LASER (Artetxe et al., 2018) and faiss (Johnson et al … CCNET is an extraction of Common Crawl,2 an open source snapshot of the web …","url":["https://arxiv.org/pdf/2005.00352"]} -{"year":"2020","title":"Multilingual Zero-shot Constituency Parsing","authors":["T Kim, S Lee - arXiv preprint arXiv:2004.13805, 2020"],"snippet":"Page 1. Multilingual Zero-shot Constituency Parsing Taeuk Kim and Sang-goo Lee Department of Computer Science and Engineering Seoul National University, Seoul, Korea {taeuk,sglee}@europa.snu.ac.kr Abstract Zero-shot …","url":["https://arxiv.org/pdf/2004.13805"]} -{"year":"2020","title":"MultiMix: A Robust Data Augmentation Strategy for Cross-Lingual NLP","authors":["MS Bari, MT Mohiuddin, S Joty - arXiv preprint arXiv:2004.13240, 2020"],"snippet":"… samples around each selected sample. XLM-R is a multilingual LM that is trained on massive multilingual corpora (2.5 TB of refined Common-Crawl data in 100 languages) with a masked LM (MLM) objective. We chose XLM-R …","url":["https://arxiv.org/pdf/2004.13240"]} -{"year":"2020","title":"Multiple Knowledge GraphDB (MKGDB)","authors":["S Faralli, P Velardi, F Yusifli - Proceedings of The 12th Language Resources and …, 2020"],"snippet":"… Crawl11 Web corpus. 
7https://www.objectivity.com/products/ thingspan/ thingspanfeatures/ 8https://titan.thinkaurelius.com/ 9https://neo4j.com/ 10the Neo4j platform provides an interface for the development and inclusion of …","url":["https://www.aclweb.org/anthology/2020.lrec-1.283.pdf"]} -{"year":"2020","title":"Multiscale System for Alzheimer's Dementia Recognition through Spontaneous Speech","authors":["E Edwards, C Dognin, B Bollepalli, M Singh…"],"snippet":"… Deep Random Forest Setting: We extract features using three pre-trained embeddings: Word2Vec (CBOW) with subword information [29] (pre-trained on Common Crawl), GloVe [30] pre-trained on Common Crawl and Sent2Vec …","url":["https://indico2.conference4me.psnc.pl/event/35/contributions/3302/attachments/1227/1271/Wed-SS-1-6-9.pdf"]} -{"year":"2020","title":"MuSe 2020 Challenge and Workshop: Multimodal Sentiment Analysis, Emotion-target Engagement and Trustworthiness Detection in Real-life Media: Emotional Car …","authors":["L Stappen, A Baird, G Rizos, P Tzirakis, X Du, F Hafner… - Proceedings of the 1st …, 2020"],"snippet":"… 4.3 Language FastText [5] is a library for efficient learning of word embeddings. It is based on the skipgram model where a vector representation is associated to each character n-gram. The model is trained on the English Common Crawl corpus (600B tokens) …","url":["https://dl.acm.org/doi/abs/10.1145/3423327.3423673"]} -{"year":"2020","title":"MuSe 2020--The First International Multimodal Sentiment Analysis in Real-life Media Challenge and Workshop","authors":["L Stappen, A Baird, G Rizos, P Tzirakis, X Du, F Hafner… - arXiv preprint arXiv …, 2020"],"snippet":"… 4.3 Language FastText [5] is a library for efficient learning of word embeddings. It is based on the skipgram model where a vector representation is associated to each character n-gram. 
The model is trained on the English Common Crawl corpus (600B tokens) …","url":["https://arxiv.org/pdf/2004.14858"]} -{"year":"2020","title":"MWPD2020: Semantic Web Challenge on Mining the Web of HTML-embedded Product Data","authors":["Z Zhang, C Bizer, R Peeters, A Primpeli"],"snippet":"Page 1. MWPD2020: Semantic Web Challenge on Mining the Web of HTML-embedded Product Data Ziqi Zhang1[0000−0002−8587−8618], Christian Bizer2[0000−0003−2367−0237], Ralph Peeters2[0000−0003 …","url":["http://ceur-ws.org/Vol-2720/paper1.pdf"]} -{"year":"2020","title":"Névszói kötőhangzók variabilitásának korpuszalapú vizsgálata Corpus-based analysis of the variability of linking vowels in nouns and adjectives","authors":["R Péter, L Dániel"],"snippet":"… 3.1 Corpus The corpus on which we conducted our measurements is the prepublished version of the Webcorpus 2 (Nemeskey, 2020). It is based on the Common Crawl webcorpus, which is a collection of pages …","url":["https://hlt.bme.hu/media/pdf/thesis_levai_ma.pdf"]} -{"year":"2020","title":"Named Entity Recognition for Code-Mixed Indian Corpus using Meta Embedding","authors":["R Priyadharshini, BR Chakravarthi, M Vegupatti… - 2020 6th International …, 2020"],"snippet":"… IV. EXPERIMENT We use FastText word embedding trained from Common Crawl and Wikipedia [30] for English and Hindi-Devanagari script (native script for Hindi). We also add the English Twitter GloVe word embeddings since the NER data is from Twitter …","url":["https://ieeexplore.ieee.org/abstract/document/9074379/"]} -{"year":"2020","title":"Naming unrelated words reliably predicts creativity","authors":["JA Olson, J Nahas, D Chmoulevitch, ME Webb - PsyArXiv. 
December, 2020"],"snippet":"… We trained the GloVe model with the Common Crawl corpus, which contains the text of billions of web pages … We chose the GloVe algorithm and the Common Crawl corpus; this combination correlates best with …","url":["https://psyarxiv.com/qvg8b/download/?format=pdf"]} -{"year":"2020","title":"Narrative Origin Classification of Israeli-Palestinian Conflict Texts","authors":["J Wei, E Santos Jr - The Thirty-Third International Flairs Conference, 2020"],"snippet":"… For training and testing, we converted text inputs into nu- merical representations using 300-dimensional distributed embeddings pre-trained on the Common Crawl database with the GloVe method (Pennington, Socher, and Manning 2014) …","url":["https://www.aaai.org/ocs/index.php/FLAIRS/FLAIRS20/paper/download/18443/17596"]} -{"year":"2020","title":"Natural Language Correction-Thesis Proposal","authors":["J Náplava"],"snippet":"… considered. This table displays top 15 languages with worst results on dictionary baseline. a new pipeline utilizing both clean data from Wikipedia and also not that clean data from general web (utilizing CommonCrawl corpus). The …","url":["http://ufal.mff.cuni.cz/~zabokrtsky/pgs/thesis_proposal/jakub-naplava-proposal.pdf"]} -{"year":"2020","title":"Natural Language Generation using Transformer Network in an Open-domain Setting","authors":["AAM Gopinath, P Bhattacharyya","D Varshney, A Ekbal, GP Nagaraja, M Tiwari… - International Conference on …, 2020"],"snippet":"… The embeddings used in our model are trained on Common Crawl dataset with 840B tokens and 2.2M vocab. We use 300-dimensional sized vectors. 
3.3 Baseline Models We formulate our task of response generation as a machine translation problem …","url":["https://link.springer.com/chapter/10.1007/978-3-030-51310-8_8","https://www.researchgate.net/profile/Deeksha-Varshney/publication/342238636_Natural_Language_Generation_Using_Transformer_Network_in_an_Open-Domain_Setting/links/60432b74299bf1e0785aff2f/Natural-Language-Generation-Using-Transformer-Network-in-an-Open-Domain-Setting.pdf"]} -{"year":"2020","title":"Natural Language Processing (NLP) and Text Analytics","authors":["JM Patel - Getting Structured Data from the Internet, 2020"],"snippet":"In the preceding chapters, we have solely relied on the structure of the HTML documents themselves to scrape information from them, and that is a powerful method to extract information.","url":["https://link.springer.com/chapter/10.1007/978-1-4842-6576-5_4"]} -{"year":"2020","title":"Natural Language Processing Model for Managing Maintenance Requests in Buildings","authors":["Y Bouabdallaoui, Z Lafhaj, P Yim, L Ducoulombier… - Buildings, 2020"],"snippet":"… In order to overcome the limited corpus of vocabulary in our dataset, a pre-trained word embedding was used. 
It is based on the FastText model [44] and trained on a large corpus of French vocabulary from Wikipedia and Common Crawl [45] …","url":["https://www.mdpi.com/2075-5309/10/9/160/pdf"]} -{"year":"2020","title":"Natural Language Transfer Learning for Physiological Textual Similarity","authors":["V Awatramani, P Gupta - 2020 10th International Conference on Cloud …, 2020"],"snippet":"… Moreover, RoBERTa is trained over 160 GB of text that includes English Wikipedia and BooksCorporus used earlier in BERT and additionally, CommonCrawl News (CC-NEWS) dataset consisting of 63 million articles …","url":["https://ieeexplore.ieee.org/abstract/document/9058216/"]} -{"year":"2020","title":"Naver Labs Europe's Participation in the Robustness, Chat, and Biomedical Tasks at WMT 2020","authors":["A Bérard, V Nikoulina, I Calapodescu, J Philip - … of the Fifth Conference on Machine …, 2020"],"snippet":"… Corpus Sents Docs Paracrawl 33.9M – Rapid2019 965k 48.3k Europarl 1.75M 6.7k Commoncrawl 1.97M – Wikimatrix 5.68M – Wikititles … of large monolingual English and German datasets (100M lines in total per language …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.57.pdf"]} -{"year":"2020","title":"Nearest Neighbor Machine Translation","authors":["U Khandelwal, A Fan, D Jurafsky, L Zettlemoyer… - arXiv preprint arXiv …, 2020"],"snippet":"… The parallel sentences are mined from cleaned monolingual commoncrawl data created using the ccNet pipeline (Wenzek et al., 2019) … with only the kNN distribution (λ = 1) with beam size 1, retrieving k = 8 neighbors from …","url":["https://arxiv.org/pdf/2010.00710"]} -{"year":"2020","title":"NestMSA: a new multiple sequence alignment algorithm","authors":["M Kayed, AA Elngar - The Journal of Supercomputing, 2020"],"snippet":"Multiple sequence alignment (MSA) is a core problem in many applications. 
Various optimization algorithms such as genetic algorithm and particle swarm opti.","url":["https://link.springer.com/article/10.1007/s11227-020-03206-0"]} -{"year":"2020","title":"Neural Aspect-based Text Generation","authors":["H Hayashi - 2020"],"snippet":"Page 1. November 25, 2020 DRAFT Thesis Proposal Neural Aspect-based Text Generation Hiroaki Hayashi November 25, 2020 Language Technologies Institute School of Computer Science Carnegie Mellon University Pittsburgh, PA 15123 Thesis Committee …","url":["https://hiroakih.me/thesis_proposal.pdf"]} -{"year":"2020","title":"Neural Databases","authors":["J Thorne, M Yazdani, M Saeidi, F Silvestri, S Riedel… - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. Neural Databases James Thorne University of Cambridge Facebook AI jt719@cam.ac.uk Majid Yazdani Facebook AI myazdani@fb.com Marzieh Saeidi Facebook AI marzieh@fb.com Fabrizio Silvestri Facebook AI fsilvestri@fb.com …","url":["https://arxiv.org/pdf/2010.06973"]} -{"year":"2020","title":"NEURAL MACHINE TRANSLATION WITH UNIVERSAL VISUAL REPRESENTATION","authors":["Z Li, H Zhao"],"snippet":"… We used newsdev2016 as the dev set and newstest2016 as the test set. 2) For the EN-DE translation task, 4.43M bilingual sentence pairs of the WMT14 dataset were used as training data, including Common Crawl, News Commentary, and Europarl v7 …","url":["https://www.researchgate.net/profile/Zhuosheng_Zhang4/publication/339375656_Neural_Machine_Translation_with_Universal_Visual_Representation/links/5e4e27a4299bf1cdb938db20/Neural-Machine-Translation-with-Universal-Visual-Representation.pdf"]} -{"year":"2020","title":"Neural Simultaneous Speech Translation Using Alignment-Based Chunking","authors":["P Wilken, T Alkhouli, E Matusov, P Golik - arXiv preprint arXiv:2005.14489, 2020"],"snippet":"Page 1. 
arXiv:2005.14489v1 [cs.CL] 29 May 2020 Neural Simultaneous Speech Translation Using Alignment-Based Chunking Patrick Wilken, Tamer Alkhouli, Evgeny Matusov, Pavel Golik Applications Technology (AppTek), Aachen …","url":["https://arxiv.org/pdf/2005.14489"]} -{"year":"2020","title":"Neural Text Segmentation and Its Application to Sentiment Analysis","authors":["J Li, B Chiu, S Shang, L Shao - IEEE Transactions on Knowledge and Data …, 2020"],"snippet":"Page 1. 1041-4347 (c) 2020 IEEE. Personal use is permitted, but republication/ redistribution requires IEEE permission. See http://www.ieee.org/ publications_standards/publications/rights/index.html for more information. This …","url":["https://ieeexplore.ieee.org/abstract/document/9051834/"]} -{"year":"2020","title":"Neural Word Embeddings for Sentiment Analysis","authors":["B Naderalvojoud - 2020"],"snippet":"Page 1. 1 Page 2. NEURAL WORD EMBEDDINGS FOR SENTIMENT ANALYSIS DUYGU ANAL˙IZ˙I˙IC¸˙IN S˙IN˙IRSEL S¨OZC¨UK¨OZ YERLES¸˙IKLER˙I BEHZAD NADERALVOJOUD PROF. DR. EBRU SEZER Supervisor Submitted to …","url":["http://www.openaccess.hacettepe.edu.tr:8080/xmlui/bitstream/handle/11655/22784/10350958.pdf?sequence=1"]} -{"year":"2020","title":"NewB: 200,000+ Sentences for Political Bias Detection","authors":["J Wei - arXiv preprint arXiv:2006.03051, 2020"],"snippet":"… When inputting sentences into the model, we tokenize each sentence at the word-level and convert each word into a vector using 300dimensional distributed embeddings trained on the Common Crawl database with the GloVe method (Pennington et al., 2014) …","url":["https://arxiv.org/pdf/2006.03051"]} -{"year":"2020","title":"News topic classification as a first step towards diverse news recommendation","authors":["O De Clercq, L De Bruyne, V Hoste - Computational Linguistics in the Netherlands …, 2020"],"snippet":"… The RobBERT model was trained on the Dutch part of the 39GB OSCAR corpus, a part of the Common Crawl corpus (Suárez et al. 2019). 
As sub-word token input, BERTje uses WordPiece, whereas RobBERT uses byte-level Byte Pair Encoding (BPE) …","url":["https://www.clinjournal.org/clinj/article/download/103/92"]} -{"year":"2020","title":"NLNDE at CANTEMIST: Neural Sequence Labeling and Parsing Approaches for Clinical Concept Extraction","authors":["L Lange, X Dai, H Adel, J Strötgen - arXiv preprint arXiv:2010.12322, 2020"],"snippet":"… In particular, we use pre-trained fastText embeddings [22] that were trained on articles from Wikipedia and the Common Crawl, as well as domain-speci c fastText embeddings [23] that were pretrained on articles of the Spanish …","url":["https://arxiv.org/pdf/2010.12322"]} -{"year":"2020","title":"NLP North at WNUT-2020 Task 2: Pre-training versus Ensembling for Detection of Informative COVID-19 English Tweets","authors":["AG Møller, R van der Goot, B Plank - Proceedings of the 6th Workshop on Noisy …, 2020"],"snippet":"… twitter. ArXiv, abs/2005.07503. Sebastian Nagel. 2016. https://commoncrawl org/2016/10/news-dataset-available/. Dat Quoc Nguyen, Thanh Vu, Afshin Rahimi, Mai Hoang Dao, Linh The Nguyen, and Long Doan. 2020. WNUT …","url":["http://www.robvandergoot.com/doc/wnut2020.pdf"]} -{"year":"2020","title":"No computation without representation: Avoiding data and algorithm biases through diversity","authors":["C Kuhlman, L Jackson, R Chunara - arXiv preprint arXiv:2002.11836, 2020"],"snippet":"… Page 3. Dataset Description Sensitive Attribute race gender age other Adult: US Census income data [36]. [43, 57, 72, 77] [2, 3, 25, 27, 43, 57, 64, 77] [57] [2, 57] Common Crawl: Occupation biographies [42]. 
[34] Comm …","url":["https://arxiv.org/pdf/2002.11836"]} -{"year":"2020","title":"Noise Pollution in Hospital Readmission Prediction: Long Document Classification with Reinforcement Learning","authors":["L Xu, J Hogan, RE Patzer, JD Choi - arXiv preprint arXiv:2005.01259, 2020"],"snippet":"… Averaged Word Embedding For the averaged word embedding encoder (AWE; Section 4.2), em- beddings generated by FastText trained on the Common Crawl and the English Wikipedia with the 300 dimension is …","url":["https://arxiv.org/pdf/2005.01259"]} -{"year":"2020","title":"Not All Swear Words Are Used Equal: Attention over Word n-grams for Abusive Language Identification","authors":["HJ Jarquín-Vásquez, M Montes-y-Gómez… - Mexican Conference on …, 2020","L Villasenor-Pineda - Pattern Recognition: 12th Mexican Conference, MCPR …"],"snippet":"… On the other hand, for word representation we used pre-trained fastText embeddings [3], trained with subword information on Common Crawl. Table 2. Proposed attention-based deep neural network hyperparameters. Layer …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=UO_rDwAAQBAJ&oi=fnd&pg=PA282&dq=commoncrawl&ots=4BGirDzhwW&sig=0HRX8HVtoXlgmfN76pD6wgHh1_c","https://link.springer.com/chapter/10.1007/978-3-030-49076-8_27"]} -{"year":"2020","title":"NOTICE TO BORROWERS","authors":["C Li"],"snippet":"Page 1. DISTRIBUTION AGREEMENT In presenting this thesis as a partial fulfillment of the requirements for an advanced degree from Emory University, I agree that the Library of the University shall make it available for inspection …","url":["https://franklicm.github.io/files/thesis_final.pdf"]} -{"year":"2020","title":"Novel Entity Discovery from Web Tables","authors":["S Zhang, E Meij, K Balog, R Reinanda - arXiv preprint arXiv:2002.00206, 2020"],"snippet":"Page 1. 
Novel Entity Discovery from Web Tables Shuo Zhang Bloomberg London, United Kingdom szhang611@bloomberg.net Edgar Meij Bloomberg London, United Kingdom emeij@bloomberg.net Krisztian Balog University …","url":["https://arxiv.org/pdf/2002.00206"]} -{"year":"2020","title":"Novel Opinion mining System for Movie Reviews","authors":["AH AbdulHafiz - International Journal of Intelligent Systems and …, 2020"],"snippet":"… We have adopted the Word2Vec feature representation, the CSG in particular, in our work. It has a pre-trained word vector for the English language trained on 1 million common crawl and Wikipedia documents. Word2vec is a two-layer neural net …","url":["https://151.80.211.128/IJISAE/article/download/1090/621"]} -{"year":"2020","title":"NRC Systems for the 2020 Inuktitut–English News Translation Task","authors":["R Knowles, D Stewart, S Larkin, P Littell - Proceedings of the Fifth Conference on …, 2020"],"snippet":"… org/wmt20/translation-task.html Page 3. 157 Wiki Titles or Common Crawl Inuktitut data.9 We incorporated the news portion of the development data in training our models to alleviate the domain mismatch issue (Section 5.1). 4 Preprocessing and Postprocessing …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.13.pdf"]} -{"year":"2020","title":"NUIG-DSI at the WebNLG+ challenge: Leveraging Transfer Learning for RDF-to-text generation","authors":["N Pasricha, M Arcan, P Buitelaar - Proceedings of the 3rd WebNLG Workshop on …, 2020"],"snippet":"… middle) and reference lexicalisation (bottom). (Vaswani et al., 2017) and is pre-trained using un- supervised learning on a large corpus of unlabeled data obtained from the Web using the Common Crawl project. 
It is trained using …","url":["https://webnlg-challenge.loria.fr/files/2020.webnlg-papers.15.pdf"]} -{"year":"2020","title":"ODArchive–Creating an Archive for Structured Data from Open Data Portals","authors":["T Weber, J Mitöhner"],"snippet":"… We deem this project particularly useful as a resource for experiments on real-world structured data: to name an example, while large corpora of tabular data from Web tables have been made available via CommonCrawl [6], the …","url":["https://aic.ai.wu.ac.at/~polleres/publications/webe-etal-2020ISWC.pdf"]} -{"year":"2020","title":"On Finding Similar Verses from the Holy Quran using Word Embeddings","authors":["S Saeed, S Haider, Q Rajput - 2020 International Conference on Emerging Trends in …, 2020"],"snippet":"… It is an English multitask Convolution Neural Network (CNN)[4] trained on OntoNotes[14], with GloVe[11] vectors trained on Common Crawl[3]. It contains 685,000 keys, 20,000 unique words and each word …","url":["https://ieeexplore.ieee.org/abstract/document/9080691/"]} -{"year":"2020","title":"On Multilingual Word Embeddings & their applications in machine translation","authors":["N Jain"],"snippet":"… crosslingual signals. The hard/challenging dataset comprise of English-Italian, English-German, English-Finnish and English-Spanish pairs. These embeddings are trained on Wacky crawling corpora/common crawl corpora. We notice …","url":["https://naman-ntc.github.io/data/Seminar.pdf"]} -{"year":"2020","title":"On revealing shared conceptualization among open datasets","authors":["M Bogdanović, N Veljković, MF Gligorijević, D Puflović… - Journal of Web Semantics, 2020"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. 
Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S1570826820300573"]} -{"year":"2020","title":"On the comparability of Pre-trained Language Models","authors":["M Aßenmacher, C Heumann - arXiv preprint arXiv:2001.00781, 2020"],"snippet":"… Wikipedia data sets are available in the Tensorflow Datasets-module3. CommonCrawl Among other resources, Yang et al. (2019) used data from CommonCrawl … CommonCrawl https://commoncrawl.org/ Unclear Fully available XLNet ClueWeb 2012-B Callan et al …","url":["https://arxiv.org/pdf/2001.00781"]} -{"year":"2020","title":"On the diminishing return of labeling clinical reports","authors":["JB Lamare, T Olatunji, L Yao - arXiv preprint arXiv:2010.14587, 2020"],"snippet":"… linear classifiers. Recent NLP ad- vance pushes the envelop much further by leveraging web-scale data – for instance, the Common Crawl project 1 that produces 20TB of textual data from the Internet each month. To cope …","url":["https://arxiv.org/pdf/2010.14587"]} -{"year":"2020","title":"On the Effectiveness of Behavior-Based Ransomware Detection","authors":["J Han, Z Lin, DE Porter - International Conference on Security and Privacy in …, 2020"],"snippet":"… To measure whether partial encryption is effective at withholding user data, we collected 200 different PDF documents from the web using Common Crawl Document Download [4]. 
We choose PDF documents with a minimum of 10 pages …","url":["https://link.springer.com/chapter/10.1007/978-3-030-63095-9_7"]} -{"year":"2020","title":"On the evaluation of retrofitting for supervised short-text classification","authors":["K GHAZI, A TCHECHMEDJIEV, S HARISPE…"],"snippet":"… we considered the 300-dimensional word vectors: (i) Paragram [16], learned from the text content in the paraphrase database PPDB, (ii) Glove [22] learned from Wikipedia and Common Crawl data, (iii) MUSE, a fastText embedding …","url":["http://ceur-ws.org/Vol-2708/donlp2.pdf"]} -{"year":"2020","title":"On the impact of publicly available news and information transfer to financial markets","authors":["M Jazbec, B Pásztor, F Faltings, N Antulov-Fantulin… - arXiv preprint arXiv …, 2020"],"snippet":"… To address ihttps://commoncrawl.org iiDetailed statistics about the Common Crawl can found here: https://commoncrawl.github.io/cc-crawl-statistics iiiWe omitted the domain www.nbonews.com. While the most frequently occurring …","url":["https://arxiv.org/pdf/2010.12002"]} -{"year":"2020","title":"On the importance of pre-training data volume for compact language models","authors":["V Micheli, M D'Hoffschmidt, F Fleuret - arXiv preprint arXiv:2010.03813, 2020"],"snippet":"… OSCAR 2 (Ortiz Suárez et al., 2019) is a large-scale multilingual open source collection of corpora ob- tained by language classification and filtering of the Common Crawl corpus 3. The whole French part amounts to 138 GB …","url":["https://arxiv.org/pdf/2010.03813"]} -{"year":"2020","title":"On the Language Neutrality of Pre-trained Multilingual Representations","authors":["J Libovický, R Rosa, A Fraser - arXiv preprint arXiv:2004.05160, 2020"],"snippet":"… XLM-RoBERTa. Conneau et al. 
(2019) claim that the original mBERT is under-trained and train a similar model on a larger dataset that consists of two terabytes of plain text extracted from CommonCrawl (Wenzek et al., 2019) …","url":["https://arxiv.org/pdf/2004.05160"]} -{"year":"2020","title":"On the Persistence of Persistent Identifiers of the Scholarly Web","authors":["M Klein, L Balakireva - arXiv preprint arXiv:2004.03011, 2020"],"snippet":"… These findings were confirmed in a large scale study by Thompson and Jian [16] based on two samples of the web taken from Common Crawl6 datasets … Thompson, HS, Tong, J.: Can common crawl reliably track persistent identifier (PID) use over time …","url":["https://arxiv.org/pdf/2004.03011"]} -{"year":"2020","title":"On the synthesis of metadata tags for HTML files","authors":["P Jiménez, JC Roldán, FO Gallego, R Corchuelo - Software: Practice and Experience"],"snippet":"… Recently, an analysis of the 32.04 million domains in the November 2019 Common Crawl has revealed that only 11.92 million domains provide metadata tags,1 which clearly argues for a method that helps software agents deal with the documents provided by the remaining …","url":["https://onlinelibrary.wiley.com/doi/abs/10.1002/spe.2886"]} -{"year":"2020","title":"On using Product-Specific Schema. org from Web Data Commons: An Empirical Set of Best Practices","authors":["R Kiran Selvam, M Kejriwal - arXiv e-prints, 2020","RK Selvam, M Kejriwal - arXiv preprint arXiv:2007.13829, 2020"],"snippet":"… on e-commerce websites. The Web Data Commons (WDC) project has extracted schema.org data at scale from webpages in the Common Crawl and made it available as an RDF 'knowledge graph' at scale. 
The portion of this …","url":["https://arxiv.org/pdf/2007.13829","https://ui.adsabs.harvard.edu/abs/2020arXiv200713829K/abstract"]} -{"year":"2020","title":"On-The-Fly Information Retrieval Augmentation for Language Models","authors":["H Wang, D McAllester - Proceedings of the First Joint Workshop on Narrative …, 2020"],"snippet":"… News etc. For language modelling we use the NY Times portion because it is written by native English speakers. Since GPT 2.0 is trained on Common Crawl which contains news collections started from 2008. To avoid testing …","url":["https://www.aclweb.org/anthology/2020.nuse-1.14.pdf"]} -{"year":"2020","title":"One Belt, One Road, One Sentiment? A Hybrid Approach to Gauging Public Opinions on the New Silk Road Initiative","authors":["JK Chandra, E Cambria, A Nanetti"],"snippet":"… ABSA. We used the Common Crawl GloVe version [44], a pre-trained 300-dimension vector representation database of 840 billion tokens and 2.2 million vocabulary, to convert our preprocessed tweets into word embeddings …","url":["https://sentic.net/one-belt-one-road-one-sentiment.pdf"]} -{"year":"2020","title":"Open Information Extraction as Additional Source for Kazakh Ontology Generation","authors":["N Khairova, S Petrasova, O Mamyrbayev, K Mukhsina - Asian Conference on …, 2020"],"snippet":"… also for many others. For example, an experiment was conducted in [19] for assessing the adequacy of measuring the factual density of 50 randomly selected Spanish documents in the CommonCrawl corpus. In a recent study …","url":["https://link.springer.com/chapter/10.1007/978-3-030-41964-6_8"]} -{"year":"2020","title":"Open Intent Extraction from Natural Language Interactions","authors":["N Vedula, N Lipka, P Maneriker, S Parthasarathy - Proceedings of The Web …, 2020"],"snippet":"… 2A commercial Customer Relationship Management (CRM) software. 
Implementation: We use the 300-dimensional GloVe embeddings [50] pre-trained on the Common Crawl dataset3, and character embeddings as per Ma et al [42] …","url":["https://dl.acm.org/doi/pdf/10.1145/3366423.3380268"]} -{"year":"2020","title":"Open science-based framework to reveal open data publishing: an experience from using Common Crawl","authors":["A Correa, I Fernandes - ELPUB 24rd edition of the International Conference on …, 2020"],"snippet":"The publishing of open data is considered a key element for civic participation paving the way to the 'public value', a term which underpins the social contribution. A result of that can be seen through the popularity of data portals published all around …","url":["https://hal.archives-ouvertes.fr/hal-02544245/document"]} -{"year":"2020","title":"Open source speech recognition on edge devices","authors":["R Peinl, B Rizk, R Szabad - 2020 10th International Conference on Advanced …, 2020"],"snippet":"… To make the comparison as fair as possible we used the KenLM 6-gram language model for all ASR models except DS2, which came with its own word-level language model trained on CommonCrawl (en-00) from the Common Crawl Corpus7 …","url":["https://ieeexplore.ieee.org/abstract/document/9208978/"]} -{"year":"2020","title":"Open-Domain Question Answering Goes Conversational via Question Rewriting","authors":["R Anantha, S Vakulenko, Z Tu, S Longpre, S Pulman… - arXiv preprint arXiv …, 2020"],"snippet":"… relevant pages with randomly sampled web pages that constitute 1% of the Common Crawl dataset identified … Wayback Machine and 9.9M random web pages from the Common Crawl dataset … 3 https://commoncrawl.org/2019 …","url":["https://arxiv.org/pdf/2010.04898"]} -{"year":"2020","title":"Optimal Subarchitecture Extraction For BERT","authors":["A de Wynter, DJ Perry - arXiv preprint arXiv:2010.10499, 2020"],"snippet":"… In order to have a sufficiently diverse dataset to pre-train Bort, we combined corpora obtained from Wikipedia7, 
Wiktionary8, OpenWebText (Gokaslan and Cohen, 2019), UrbanDictionary9, One Billion Words (Chelba et al., 2014) …","url":["https://arxiv.org/pdf/2010.10499"]} -{"year":"2020","title":"Optimizing Distributed Computing Systems via Machine Learning","authors":["H Wang - 2020"],"snippet":"Page 1. Optimizing Distributed Computing Systems via Machine Learning by Hao Wang A thesis submitted in conformity with the requirements for the degree of Doctor of Philosophy Graduate Department of Electrical …","url":["https://tspace.library.utoronto.ca/bitstream/1807/103710/1/Wang_Hao_202011_PhD_thesis.pdf"]} -{"year":"2020","title":"OSCaR: Orthogonal Subspace Correction and Rectification of Biases in Word Embeddings","authors":["S Dev, T Li, JM Phillips, V Srikumar - arXiv preprint arXiv:2007.00049, 2020"],"snippet":"… Our code for reproducing experiments will be released upon publication. Debiasing contextualized embeddings. The operations above are described for a noncontextualized embedding; we use one of the largest such …","url":["https://arxiv.org/pdf/2007.00049"]} -{"year":"2020","title":"Out-of-the-Box and into the Ditch? Multilingual Evaluation of Generic Text Extraction Tools","authors":["A Barbaresi, G Lejeune - Proceedings of the 12th Web as Corpus Workshop, 2020"],"snippet":"… Recently, approaches using the CommonCrawl1 have flourished as they allow for faster download and processing by skipping (or more precisely outsourcing) the crawling phase (Habernal et al., 2016; Schäfer, 2016) … 1https://commoncrawl.org …","url":["https://www.aclweb.org/anthology/2020.wac-1.2.pdf"]} -{"year":"2020","title":"Overview of the CLEF eHealth 2020 task 2: consumer health search with ad hoc and spoken queries","authors":["L Goeuriot, Z Liu, G Pasi, GG Saez, M Viviani, C Xu - … Notes of Conference and Labs of …, 2020"],"snippet":"… 2.1 Documents The 2018 CLEF eHealth Consumer Health Search document collection was used in this year's IR challenge. 
As detailed in [17], this collection consists of web pages acquired from the CommonCrawl. An …","url":["http://www.dei.unipd.it/~ferro/CLEF-WN-Drafts/CLEF2020/paper_260.pdf"]} -{"year":"2020","title":"Overview of the CLEF eHealth Evaluation Lab 2020","authors":["C Xu - … IR Meets Multilinguality, Multimodality, and Interaction …"],"snippet":"… Task 2. The 2018 CLEF eHealth Consumer Health Search document collection was used in this year's IR challenge. As detailed in [14], this collection consists of web pages acquired from the CommonCrawl. An initial list of websites was identified for acquisition …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=IxP9DwAAQBAJ&oi=fnd&pg=PA255&dq=commoncrawl&ots=BCbV87DfTS&sig=y0JIuiOfa-DKv4aFVa2ZXlCYBic"]} -{"year":"2020","title":"Overview of the seventh Dialog System Technology Challenge: DSTC7","authors":["LF D'Haro, K Yoshino, C Hori, TK Marks… - Computer Speech & …, 2020"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0885230820300012"]} -{"year":"2020","title":"Overview of the Transformer-based Models for NLP Tasks","authors":["A Gillioz, J Casas, E Mugellini, O Abou Khaled"],"snippet":"… they contain. The tokenization is done with SentencePiece [15]. In a few cases, for example, in [16], the authors only used a subset of those datasets (eg Stories [17] is a subset of CommonCrawl dataset). IV. 
BENCHMARKS …","url":["https://annals-csis.org/Volume_21/drp/pdf/20.pdf"]} -{"year":"2020","title":"Overview of Touché 2020: Argument Retrieval","authors":["A Bondarenko, M Fröbe, M Beloucif, L Gienapp… - Working Notes Papers of the …, 2020","H Wachsmuth, M Potthast, M Hagen - … IR Meets Multilinguality, Multimodality, and Interaction …","Y Ajjour, A Panchenko, C Biemann, B Stein… - Experimental IR Meets …"],"snippet":"… Stab et al.[26] retrieve documents from the Common Crawl 3 and then use a topic-dependent neural network to extract arguments from the retrieved documents … This method is shown to outperform several 3 http://commoncrawl. org. Page 398 …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=IxP9DwAAQBAJ&oi=fnd&pg=PA384&dq=commoncrawl&ots=BCbV87DfTS&sig=590FLiaMF0QksqxDhusBxoZQuK4","https://link.springer.com/content/pdf/10.1007/978-3-030-58219-7.pdf#page=395","https://webis.de/downloads/publications/papers/stein_2020v.pdf"]} -{"year":"2020","title":"ParaCrawl: Web-Scale Acquisition of Parallel Corpora","authors":["M Banón, P Chen, B Haddow, K Heafield, H Hoang…"],"snippet":"… In an ex- ploratory study, only 5% of a collection of web pages with useful content were found in CommonCrawl. This may have improved with recent more extensive crawls by CommonCrawl but there is still a strong argument for targeted crawling. 4 Crawling …","url":["https://www.neural.mt/papers/edinburgh/paracrawl.pdf"]} -{"year":"2020","title":"Parallelograms revisited: Exploring the limitations of vector space models for simple analogies","authors":["JC Peterson, D Chen, TL Griffiths - Cognition, 2020"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. 
Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0010027720302596"]} -{"year":"2020","title":"Pedro Javier Ortiz Suárez1, 3 [0000− 0003− 0343− 8852], Yoann Dupont2","authors":["G Lejeune, T Tian"],"snippet":"… For the fixed word embeddings we used the Common Crawl-based FastText embeddings [10] originally trained by Facebook as opposed to the embeddings provided by the HIPE shared task, as we obtained better dev …","url":["http://ceur-ws.org/Vol-2696/paper_203.pdf"]} -{"year":"2020","title":"Persistent metadata catalog","authors":["GS Mcpherson, Y Mikhaylyuta, TD Baker, RJ Cole - US Patent 10,853,356, 2020"],"snippet":"… sources. In one embodiment metadata catalog service 120 may host metadata for a collection of data sets that are available in the public domain, such as the 1000 genome, NASA NEX, and the Common Crawl Corpus data sets …","url":["https://www.freepatentsonline.com/10853356.html"]} -{"year":"2020","title":"Phishing Detection Using Machine Learning Technique","authors":["J Rashid, T Mahmood, MW Nisar, T Nazir - 2020 First International Conference of …, 2020"],"snippet":"… and June 2017. In particular, we selected 5000 phishing web pages, and all web pages are more stable, especially based on URLs. The fish tank is based entirely on the Alexa URL and Common Crawl archives. B. Step 2 Vocabulary …","url":["https://ieeexplore.ieee.org/abstract/document/9283771/"]} -{"year":"2020","title":"PhishingLine: Hybrid Phishing Classifier with Logo Detection","authors":["K Vohra - 2020"],"snippet":"… 29 5.5 Web Capture . . . . . 30 5.5.1 WARC . . . . . 30 5.5.2 Common Crawl . . . . . 30 5.6 Client Server Architecture . . . . . 
30 5.6.1 TCP …","url":["https://www.ka.beer/pdf/project.pdf"]} -{"year":"2020","title":"Phonemer at WNUT-2020 Task 2: Sequence Classification Using COVID Twitter BERT and Bagging Ensemble Technique based on Plurality Voting","authors":["A Wadhawan - arXiv preprint arXiv:2010.00294, 2020"],"snippet":"… Table 2. 4.2 System Settings For training the CNN, LSTM and BiLSTMs, word vectors for english language pre-trained on Common Crawl2 and Wikipedia3 are downloaded4 and used using FastText5 library. These word vectors …","url":["https://arxiv.org/pdf/2010.00294"]} -{"year":"2020","title":"Photo Stream Question Answer","authors":["W Zhang, S Tang, Y Cao, J Xiao, S Pu, F Wu, Y Zhuang - Proceedings of the 28th …, 2020"],"snippet":"Page 1. Photo Stream Question Answer Wenqiao Zhang Zhejiang University wenqiaozhang@zju.edu.cn Siliang Tang* Zhejiang Universety siliang@zju.edu.cn Yanpeng Cao,Jun Xiao Zhejiang University caoyp,junx@zju.edu.cn …","url":["https://dl.acm.org/doi/abs/10.1145/3394171.3413745"]} -{"year":"2020","title":"PMap: Ensemble Pre-training Models for Product Matching","authors":["N Kertkeidkachorn, R Ichise"],"snippet":"… In this section, we explain the pre-train models and how to fine-tune them. 5 http://webdatacommons.org/largescaleproductcorpus/v2/index. html 6 http://webdatacommons.org/structureddata/ 7 https://commoncrawl …","url":["http://ceur-ws.org/Vol-2720/paper2.pdf"]} -{"year":"2020","title":"PoKED: A Semi-Supervised System for Word Sense Disambiguation","authors":["F Wei"],"snippet":"Page 1. PoKED: A Semi-Supervised System for Word Sense Disambiguation Feng Wei 1 Abstract In this paper, we propose a semi-supervised neural system, named Position-wise Orthogonal Knowledge-Enhanced Disambiguator …","url":["https://proceedings.icml.cc/static/paper_files/icml/2020/1929-Paper.pdf"]} -{"year":"2020","title":"Practical Data Science for Information Professionals","authors":["D Stuart - 2020"],"snippet":"Page 1. 
Practical Data Science for Information Professionals Page 2. Every purchase of a Facet book helps to fund CILIP's advocacy, awareness and accreditation programmes for information professionals. Page 3. Practical Data …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=2TjzDwAAQBAJ&oi=fnd&pg=PP3&dq=commoncrawl&ots=kN5yF0ztQ5&sig=n_c9XhBzAQWnpO6G9vyAev9qRJY"]} -{"year":"2020","title":"Pragmatic Aspects of Discourse Production for the Automatic Identification of Alzheimer's Disease","authors":["A Pompili, A Abad, DM de Matos, IP Martins - IEEE Journal of Selected Topics in …, 2020"],"snippet":"… For this purpose, we rely on a pre-trained model of word vector representations containing 2 million word vectors, in 300 dimensions, trained with fastText on Common Crawl [42]. In the process of converting a sentence into its vector …","url":["https://ieeexplore.ieee.org/abstract/document/8963723/"]} -{"year":"2020","title":"Pre-indexing Pruning Strategies","authors":["S Altin, R Baeza-Yates, BB Cambazoglu - International Symposium on String …, 2020"],"snippet":"… 4 Experimental Setup. 4.1 Document Collection. As web document collection, we mostly use the open source web collection provided by Common Crawl, CC, in November 2017 … 5 Experimental Results. 5.1 Common Crawl and BM25 …","url":["https://link.springer.com/chapter/10.1007/978-3-030-59212-7_13"]} -{"year":"2020","title":"Pre-trained Models for Natural Language Processing: A Survey","authors":["X Qiu, T Sun, Y Xu, Y Shao, N Dai, X Huang - arXiv preprint arXiv:2003.08271, 2020"],"snippet":"Page 1. .Invited Review . 
Pre-trained Models for Natural Language Processing: A Survey Xipeng Qiu*, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai & Xuanjing Huang School of Computer Science, Fudan University, Shanghai …","url":["https://arxiv.org/pdf/2003.08271"]} -{"year":"2020","title":"Pre-training Polish Transformer-based Language Models at Scale","authors":["S Dadas, M Perełkiewicz, R Poświata - arXiv preprint arXiv:2006.04229, 2020"],"snippet":"… the size of the training corpus. 2) We proposed a method for collecting and pre-processing the data from the Common Crawl database to obtain clean, high-quality text corpora. 3) We conducted a comprehensive evaluation …","url":["https://arxiv.org/pdf/2006.04229"]} -{"year":"2020","title":"Pre-training via Leveraging Assisting Languages and Data Selection for Neural Machine Translation","authors":["H Song, R Dabre, Z Mao, F Cheng, S Kurohashi… - arXiv preprint arXiv …, 2020"],"snippet":"… Additionally, we used Common Crawl4 monolingual corpora for pre-training … We filled the CurrentDistribution line by line in Common Crawl file if the ratio of the length of current line had been less than the ratio of this length in target length distribution …","url":["https://arxiv.org/pdf/2001.08353"]} -{"year":"2020","title":"Pre-training via Leveraging Assisting Languages for Neural Machine Translation","authors":["H Song, R Dabre, Z Mao, F Cheng, S Kurohashi…"],"snippet":"… We used Common Crawl3 monolingual corpora for pre-training … We created a shared sub-word vocabulary us- ing Japanese and English data from ASPEC mixing with Japanese, English, Chinese and French data from Common Crawl …","url":["https://shyyhs.github.io/files/ACL2020SRW_Song_paper.pdf"]} -{"year":"2020","title":"Predicting Consumers' Brand Sentiment Using Text Analysis on Reddit","authors":["P Cen"],"snippet":"… with a F-score of 86.25 for NER tasks. 
The en_core_web_md model is an English multi-task CNN trained on OntoNotes, with GloVe vectors trained on Common Crawl For NER tasks, it assigns various labels to identified entities, such as “CARDINAL,” “ORG,” …","url":["https://repository.upenn.edu/cgi/viewcontent.cgi?article=1097&context=joseph_wharton_scholars"]} -{"year":"2020","title":"Predicting Themes within Complex Unstructured Texts: A Case Study on Safeguarding Reports","authors":["A Edwards, D Rogers, J Camacho-Collados… - arXiv preprint arXiv …, 2020"],"snippet":"… Pre-trained word embeddings We leverage two pre-trained 300-dimensional word embedding models: Word2vec (Mikolov et al., 2013) trained on Google news dataset and fastText (Bojanowski et al., 2017) trained with subword information on Common Crawl …","url":["https://arxiv.org/pdf/2010.14584"]} -{"year":"2020","title":"Predicting Twitter Engagement With Deep Language Models","authors":["M Volkovs, Z Cheng, M Ravaut, H Yang, K Shen…"],"snippet":"… text comprehension tasks. The majority of published language models are pre-trained on large text corpora such as Wikipedia or CommonCrawl, that typically contain longer and properly worded pieces of text. 
Tweets on the …","url":["http://www.cs.toronto.edu/~mvolkovs/recsys2020_challenge.pdf"]} -{"year":"2020","title":"Prior Guided Feature Enrichment Network for Few-Shot Segmentation","authors":["Z Tian, H Zhao, M Shu, Z Yang, R Li, J Jia - IEEE Annals of the History of Computing, 2020"],"snippet":"… Therefore the prior is not,applicable and we only verify FEM on the baseline with,VGG-16 backbone in the zero-shot setting.,Structural Change,Embeddings of Word2Vec [24] and,FastText [22] are trained …","url":["https://www.computer.org/csdl/journal/tp/5555/01/09154595/1lZzPRFhQqY"]} -{"year":"2020","title":"Privacy at Scale: Introducing the PrivaSeer Corpus of Web Privacy Policies","authors":["M Srinath, S Wilson, CL Giles - arXiv preprint arXiv:2004.11131, 2020"],"snippet":"… As a consequence, 2https://commoncrawl.org/ Page 3 … Thus, we selected those URLs which had the word “privacy” or the words “data” and “protection” from the Common Crawl URL archive. We were able to extract 3.9 million URLs that fit this selection criterion …","url":["https://arxiv.org/pdf/2004.11131"]} -{"year":"2020","title":"Privacy Policies over Time: Curation andAnalysis of a Million-Document Dataset","authors":["R Amos, G Acar, E Lucherini, M Kshirsagar… - arXiv preprint arXiv …, 2020"],"snippet":"… app privacy policy URLs [23]. Concurrent to this work, Srinath et al. contribute PrivaSeer, a dataset of over 1 million English privacy policies extracted from May 2019 Common Crawl data [14]. Our work advances this area of …","url":["https://arxiv.org/pdf/2008.09159"]} -{"year":"2020","title":"Privacy-Preserving Passive DNS","authors":["P Papadopoulos, N Pitropakis, WJ Buchanan, O Lo… - Computers, 2020"],"snippet":"… These sources include but are not limited to Public Blacklists, the Alexa ranking, the Common Crawl project, and various Top Level Domain (TLD) zone files. 
This system's output is a refined dataset that can be …","url":["https://www.mdpi.com/2073-431X/9/3/64/pdf"]} -{"year":"2020","title":"Privacy-Preserving Visual Content Tagging using Graph Transformer Networks","authors":["XS Vu, DT Le, C Edlund, L Jiang, HD Nguyen"],"snippet":"… that local knowledge can be derived from data observations including label semantics or multimedia content se- mantics (eg, optical character recognition); whereas, global knowledge can be drawn from publicly available corpora …","url":["https://people.cs.umu.se/sonvx/files/ACMMM2020_SGTN_CAMREADY_1.pdf"]} -{"year":"2020","title":"Probing Task-Oriented Dialogue Representation from Language Models","authors":["CS Wu, C Xiong - arXiv preprint arXiv:2010.13912, 2020"],"snippet":"… is to maximize left-to-right generation likelihood. To ensure diverse and nearly unlimited text sources, they use Common Crawl to obtain 8M documents as its training data. Budzianowski and Vulic (2019) trained GPT2 on task …","url":["https://arxiv.org/pdf/2010.13912"]} -{"year":"2020","title":"Probing Tasks for Noised Back-Translation","authors":["N Spring - 2020"],"snippet":"Page 1. Bachelor's thesis presented to the Faculty of Arts and Social Sciences of the University of Zurich for the degree of Bachelor of Arts UZH Probing Tasks for Noised Back-Translation Author: Nicolas Spring Student …","url":["https://www.cl.uzh.ch/dam/jcr:34ea0877-26f8-405b-88a5-1191536986db/spring_ba_probing_tasks.pdf"]} -{"year":"2020","title":"Probing Text Models for Common Ground with Visual Representations","authors":["G Ilharco, R Zellers, A Farhadi, H Hajishirzi - arXiv preprint arXiv:2005.00619, 2020"],"snippet":"… GloVe embeddings (Pennington et al., 2014). For such, we use embeddings trained on 840 billion tokens of web data from Common Crawl, with dL = 300 and a vocabulary size of 2.2 million2. 
Models trained on text and images …","url":["https://arxiv.org/pdf/2005.00619"]} -{"year":"2020","title":"Programming in Natural Language with fuSE: Synthesizing Methods from Spoken Utterances Using Deep Natural Language Understanding","authors":["S Weigelt, V Steurer, T Hey, WF Tichy - Proceedings of the 58th Annual Meeting of …, 2020"],"snippet":"… on the Common Crawl dataset4 by Facebook Research (Mikolov et al., 3Note that we do not discuss the influence of varying epoch numbers, since we used early stopping, ie the training stops when the validation loss stops …","url":["https://www.aclweb.org/anthology/2020.acl-main.395.pdf"]} -{"year":"2020","title":"Projecting Heterogeneous Annotations for Named Entity Recognition","authors":["R Agerri, G Rigau","R Agerri, G Rigau - Proceedings of the Iberian Languages Evaluation …, 2020"],"snippet":"… Page 3. of Common Crawl text … The biggest update that XLM-Roberta offers is a significantly increased amount of training data, 2.5TB of Common Crawl clean data [6]. As for BERT, in this paper we use the base version of XLM-RoBERTa …","url":["http://ceur-ws.org/Vol-2664/capitel_paper2.pdf","https://ragerri.github.io/files/ixaera-capitel2020.pdf"]} -{"year":"2020","title":"Projecting named entity recognizers from resource-rich to resource-poor languages without annotated or parallel corpora","authors":["J Hou"],"snippet":"Page 1. 
Projecting named entity recognizers from resource-rich to resource-poor languages without annotated or parallel corpora Hou, Jue Helsinki October 20, 2019 UNIVERSITY OF HELSINKI Department of Computer …","url":["https://helda.helsinki.fi/bitstream/handle/10138/310012/Jue_Hou-Master_s_Thesis-v2.1.pdf?sequence=2&isAllowed=y"]} -{"year":"2020","title":"PTT5: Pretraining and validating the T5 model on Brazilian Portuguese data","authors":["D Carmo, M Piau, I Campiotti, R Nogueira, R Lotufo - arXiv preprint arXiv:2008.09144, 2020"],"snippet":"… The original T5 vocabulary uses the SentencePiece library [7] using English, German, French, and Romanian web pages from Common Crawl.1 We use a similar procedure to create our Portuguese vocabulary: we train …","url":["https://arxiv.org/pdf/2008.09144"]} -{"year":"2020","title":"PublishInCovid19 at the FinSBD-2 Task: Sentence and List Extraction in Noisy PDF Text Using a Hybrid Deep Learning and Rule-Based Approach","authors":["J Singh - Proceedings of the Second Workshop on Financial …, 2020"],"snippet":"… For the pretrained word embeddings, we use GloVE 6 which are trained on large Common Crawl dataset and can effectively represent 84 billion cased tokens.
To train the BERT for our task, we fine-tune a pre-trained model, namely bert-base-cased …","url":["https://www.aclweb.org/anthology/2020.finnlp-1.pdf#page=63"]} -{"year":"2020","title":"Punctuation Prediction in Spontaneous Conversations: Can We Mitigate ASR Errors with Retrofitted Word Embeddings?","authors":["Ł Augustyniak, P Szymanski, M Morzy, P Zelasko… - arXiv preprint arXiv …, 2020"],"snippet":"… In this work, we use pre-trained GloVe embeddings trained on the Common Crawl dataset consisting of 2.6 billion textual documents … In our case, the punctuation in conversational transcripts is substantially different from the …","url":["https://arxiv.org/pdf/2004.05985"]} -{"year":"2020","title":"PyChain: A Fully Parallelized PyTorch Implementation of LF-MMI for End-to-End ASR","authors":["Y Shao, Y Wang, D Povey, S Khudanpur - arXiv preprint arXiv:2005.09824, 2020"],"snippet":"… We hope that our experience with PYCHAIN will inspire other efforts to build next-generation hybrid ASR tools. 812k hours AM train set and common crawl LM. 9Data augmentation and pre-trained on LibriSpeech …","url":["https://arxiv.org/pdf/2005.09824"]} -{"year":"2020","title":"Quality and Relevance Metrics for Selection of Multimodal Pretraining Data","authors":["R Rao, S Rao, E Nouri, D Dey, A Celikyilmaz, B Dolan - Proceedings of the IEEE/CVF …, 2020"],"snippet":"… The GloVe vectors used are pretrained on 840 billion tokens from Common Crawl. 
Let o ∈ Oi be the set of objects detected by the RCNN for a given image i, w ∈ d be the set ConceptualCaptions Ngram …","url":["http://openaccess.thecvf.com/content_CVPRW_2020/papers/w56/Rao_Quality_and_Relevance_Metrics_for_Selection_of_Multimodal_Pretraining_Data_CVPRW_2020_paper.pdf"]} -{"year":"2020","title":"Quality Estimation for Machine Translation with Multi-granularity Interaction⋆","authors":["K Tian, J Zhang"],"snippet":"… (7) 4 Experiments 4.1 Dataset The bilingual parallel corpus that we use for pre-trained multilingual BERT is officially released by the WMT17 Shared Task: Machine Translation of News1, including Europarl v7, Common Crawl …","url":["http://sc.cipsc.org.cn/mt/conference/2020/papers/T20-1005.pdf"]} -{"year":"2020","title":"Quality Evaluation","authors":["JM Gomez-Perez, R Denaux, A Garcia-Silva - A Practical Guide to Hybrid Natural …, 2020"],"snippet":"… Besides the embeddings trained by us, we also include, as part of our study, several pre-trained embeddings, notably the GloVe embeddings for CommonCrawl— code glove_840B provided by Stanford11 —fastText …","url":["https://link.springer.com/chapter/10.1007/978-3-030-44830-1_7"]} -{"year":"2020","title":"Query focused abstractive summarization using BERTSUM model","authors":["DM Abdullah - 2020"],"snippet":"… Conneau et al. (2019) have introduced a multilingual masked language model from Facebook AI. This model has been trained on 2.5 TB of newly created clean CommonCrawl (Wenzek et al., 2019) data in 100 languages. The model has shown state-of-the-art results …","url":["https://opus.uleth.ca/bitstream/handle/10133/5760/ABDULLAH_DEEN_MOHAMMAD_MSC_2020.pdf?sequence=1"]} -{"year":"2020","title":"Question Answering for Comparative Questions with GPT-2","authors":["B Sievers"],"snippet":"… https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-languagemodel-by-microsoft/, accessed: 6.3.2020 3.
Bevendorff, J., Stein, B., Hagen, M., Potthast, M.: Elastic chatnoir: search engine for the clueweb and the common crawl …","url":["http://ceur-ws.org/Vol-2696/paper_213.pdf"]} -{"year":"2020","title":"Question Answering When Knowledge Bases are Incomplete","authors":["C Pradel, D Sileo, Á Rodrigo, A Peñas, E Agirre - International Conference of the …, 2020","E Agirre - … IR Meets Multilinguality, Multimodality, and Interaction …"],"snippet":"… with bag of word embeddings. We use FastText CommonCrawl word embeddings [10] 4 and a max pooling to produce the continuous bag of word representations of table columns and the question text. The column bag of words …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=IxP9DwAAQBAJ&oi=fnd&pg=PA43&dq=commoncrawl&ots=BCbV87DfTS&sig=kVIo_AYLn9xgMPpxB-rDuk1jzEg","https://link.springer.com/chapter/10.1007/978-3-030-58219-7_4"]} -{"year":"2020","title":"Question Type Classification Methods Comparison","authors":["T Seidakhmetov - arXiv preprint arXiv:2001.00571, 2020"],"snippet":"… The GLoVe vectors were pre-trained using 840 billion tokens from Common Crawl, and each token is mapped into a 300-dimensional vector [3]. Xembeddings = GloveEmbedding( Xword) ∈ RNxDword where Dword is a number of dimensions of a word vector …","url":["https://arxiv.org/pdf/2001.00571"]} -{"year":"2020","title":"Questioning the Use of Bilingual Lexicon Induction as an Evaluation Task for Bilingual Word Embeddings","authors":["B Marie, A Fujita"],"snippet":"… gual word embeddings. In fact, this corpus was significantly smaller than the Wikipedia corpora for all the other languages, and than the Finnish Common Crawl corpus used to train Finnish Vecmap-emb. 
Another finding is …","url":["https://www.anlp.jp/proceedings/annual_meeting/2020/pdf_dir/P5-14.pdf"]} -{"year":"2020","title":"REALTOXICITYPROMPTS: Evaluating Neural Toxic Degeneration in Language Models","authors":["S Gehman, S Gururangan, M Sap, Y Choi, NA Smith - arXiv preprint arXiv …, 2020","SGS Gururangan, MSY Choi, NA Smith"],"snippet":"… GPT-2 (specifically, GPT-2-small; Radford et al., 2019), is a similarly sized model pretrained on OPENAI-WT, which contains 40GB of English web text and is described in §6.7 GPT-3 (Brown et al., 2020) is pretrained on a mix …","url":["https://arxiv.org/pdf/2009.11462","https://homes.cs.washington.edu/~msap/pdfs/gehman2020realtoxicityprompts.pdf"]} -{"year":"2020","title":"Recent Trends in the Use of Deep Learning Models for Grammar Error Handling","authors":["M Naghshnejad, T Joshi, VN Nair - arXiv preprint arXiv:2009.02358, 2020"],"snippet":"Page 1. 1 Recent Trends in the Use of Deep Learning Models for Grammar Error Handling Mina Naghshnejad1, Tarun Joshi, and Vijayan N. Nair Corporate Model Risk, Wells Fargo2 Abstract Grammar error handling (GEH) is …","url":["https://arxiv.org/pdf/2009.02358"]} -{"year":"2020","title":"Recipes for Adapting Pre-trained Monolingual and Multilingual Models to Machine Translation","authors":["AC Stickland, X Li, M Ghazvininejad - arXiv preprint arXiv:2004.14911, 2020"],"snippet":"Page 1. Recipes for Adapting Pre-trained Monolingual and Multilingual Models to Machine Translation Asa Cooper Stickland♣ Xian Li♠ ♣ University of Edinburgh, ♠ Facebook AI a.cooper.stickland@ed.ac.uk, {xianl,ghazvini}@fb.com Marjan Ghazvininejad♠ Abstract …","url":["https://arxiv.org/pdf/2004.14911"]} -{"year":"2020","title":"ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning","authors":["W Yu, Z Jiang, Y Dong, J Feng - arXiv preprint arXiv:2002.04326, 2020"],"snippet":"Page 1. 
Published as a conference paper at ICLR 2020 RECLOR: A READING COMPREHENSION DATASET REQUIRING LOGICAL REASONING Weihao Yu∗, Zihang Jiang∗, Yanfei Dong & Jiashi Feng National University …","url":["https://arxiv.org/pdf/2002.04326"]} -{"year":"2020","title":"Recognai's Working Notes for CANTEMIST-NER Track","authors":["DC Fidalgo, D Vila-Suero, FA Montes - … of the Iberian Languages Evaluation Forum …, 2020"],"snippet":"… light-weight solution regarding the pretrained component. For this reason we chose the pretrained Spanish word vectors provided by FastText[4]. These vectors encompass 2 million words that were trained on Common Crawl4 …","url":["http://ceur-ws.org/Vol-2664/cantemist_paper4.pdf"]} -{"year":"2020","title":"Reducing Language Biases in Visual Question Answering with Visually-Grounded Question Encoder","authors":["G KV, A Mittal - arXiv preprint arXiv:2007.06198, 2020"],"snippet":"Page 1. Reducing Language Biases in Visual Question Answering with Visually-Grounded Question Encoder Gouthaman KV and Anurag Mittal Indian Institute of Technology Madras, India {gkv,amittal}@cse.iitm.ac.in Abstract …","url":["https://arxiv.org/pdf/2007.06198"]} -{"year":"2020","title":"Referring Image Segmentation via Cross-Modal Progressive Comprehension","authors":["S Huang, T Hui, S Liu, G Li, Y Wei, J Han, L Liu, B Li - … of the IEEE/CVF Conference on …, 2020"],"snippet":"Page 1. Referring Image Segmentation via Cross-Modal Progressive Comprehension Shaofei Huang1,2∗ Tianrui Hui1,2∗ Si Liu3† Guanbin Li4 Yunchao Wei5 Jizhong Han1,2 Luoqi Liu6 Bo Li3 1 Institute of Information Engineering …","url":["http://openaccess.thecvf.com/content_CVPR_2020/papers/Huang_Referring_Image_Segmentation_via_Cross-Modal_Progressive_Comprehension_CVPR_2020_paper.pdf"]} -{"year":"2020","title":"Refinement of Unsupervised Cross-Lingual Word Embeddings","authors":["M Biesialska, MR Costa-jussà - arXiv preprint arXiv:2002.09213, 2020"],"snippet":"… Finnish.
Monolingual embeddings of 300 dimensions were created using Word2Vec3 [18] and were trained on WMT News Crawl (Spanish), WacKy crawling corpora (English, German), and Common Crawl (Finnish). To evaluate …","url":["https://arxiv.org/pdf/2002.09213"]} -{"year":"2020","title":"ReINTEL: A Multimodal Data Challenge for Responsible Information Identification on Social Network Sites","authors":["DT Le, XS Vu, ND To, HQ Nguyen, TT Nguyen, L Le… - arXiv preprint arXiv …, 2020"],"snippet":"… Word2VecVN (Vu, 2016) x Trained on 7GB texts of Vietnamese news FastText (Vietnamese version) (Joulin et al., 2016) x Trained on Vietnamese texts of the CommonCrawl corpus ETNLP (Vu et al., 2019) x Trained on 1GB texts of Vietnamese Wikipedia …","url":["https://arxiv.org/pdf/2012.08895"]} -{"year":"2020","title":"Related Tasks can Share! A Multi-task Framework for Affective language","authors":["KS Deep, MS Akhtar, A Ekbal, P Bhattacharyya - arXiv preprint arXiv:2002.02154, 2020"],"snippet":"… 3. Character-level embeddings2: Character-level embeddings are trained over common crawl glove corpus providing 300 dimensional vectors for each character (used in case if word is not present in other two embeddings) …","url":["https://arxiv.org/pdf/2002.02154"]} -{"year":"2020","title":"Relational and Fine-Grained Argument Mining","authors":["D Trautmann, M Fromm, V Tresp, T Seidl, H Schütze - Datenbank-Spektrum, 2020"],"snippet":"… Crowdworkers had the task of selecting argumentative spans for a given set of topics and topic related sentences. The sentences were from textual data extracted from Common Crawl Footnote 6 for a predefined list of eight …","url":["https://link.springer.com/article/10.1007/s13222-020-00341-z"]} -{"year":"2020","title":"Relational Databases and SQL Language","authors":["JM Patel - Getting Structured Data from the Internet, 2020"],"snippet":"… itself.
We are mainly discussing PostgreSQL here so that you can scale up to 64 TB if you decide to index large portions of common crawl datasets for creating a backlinks and news database in Chapters 6 and 7, respectively …","url":["https://link.springer.com/chapter/10.1007/978-1-4842-6576-5_5"]} -{"year":"2020","title":"Representation learning for input classification via topic sparse autoencoder and entity embedding","authors":["D Li, J Zhang, P Li - US Patent App. 16/691,554, 2020"],"snippet":"US20200184339A1 - Representation learning for input classification via topic sparse autoencoder and entity embedding - Google Patents. Representation learning for input classification via topic sparse autoencoder and entity embedding. Download PDF Info …","url":["https://patents.google.com/patent/US20200184339A1/en"]} -{"year":"2020","title":"Reproducible Extraction of Cross-lingual Topics (rectr)","authors":["CH Chan, J Zeng, H Wessler, M Jungblut, K Welbers… - … Methods and Measures, 2020"],"snippet":"… 17 Researchers at Facebook Research created these aligned word embeddings by first training fastText word embeddings (Bojanowski et al., 2017) using Wikipedia and Common Crawl articles of each individual …","url":["https://www.tandfonline.com/doi/abs/10.1080/19312458.2020.1812555"]} -{"year":"2020","title":"Research Challenges in Designing Differentially Private Text Generation Mechanisms","authors":["O Feyisetan, A Aggarwal, Z Xu, N Teissier - arXiv preprint arXiv:2012.05403, 2020"],"snippet":"… A natural way to “bias” an exponential mechanism without changing its privacy properties is to modulate it with a public “prior” µ(z). For example, such a prior can be constructed over a publicly available corpus such as Wikipedia or Common Crawl …","url":["https://arxiv.org/pdf/2012.05403"]} -{"year":"2020","title":"Residual Energy-Based Models for Text Generation","authors":["Y Deng, A Bakhtin, M Ott, A Szlam, MA Ranzato - arXiv preprint arXiv:2004.11714, 2020"],"snippet":"… Page 6. 
Published as a conference paper at ICLR 2020 genres, totaling about half a billion words. The latter is a de-duplicated subset of the English portion of the CommonCrawl news dataset (Nagel, 2016), which totals around 16 Billion words …","url":["https://arxiv.org/pdf/2004.11714"]} -{"year":"2020","title":"Rethinking embedding coupling in pre-trained language models","authors":["HW Chung, T Févry, H Tsai, M Johnson, S Ruder - arXiv preprint arXiv:2010.12821, 2020"],"snippet":"Page 1. Preprint. Under review. RETHINKING EMBEDDING COUPLING IN PRE-TRAINED LANGUAGE MODELS Hyung Won Chung∗† Google Research hwchung@google.com Thibault Févry∗† thibaultfevry@gmail …","url":["https://arxiv.org/pdf/2010.12821"]} -{"year":"2020","title":"Rethinking Evaluation in ASR: Are Our Models Robust Enough?","authors":["T Likhomanenko, Q Xu, V Pratap, P Tomasello, J Kahn… - arXiv preprint arXiv …, 2020"],"snippet":"… only; for TL – both train transcriptions and provided LM data. We also train a 4-gram LM on Common Crawl (CC) data with 200k top words and pruning of all 3,4-grams appearing once. Perplexity of all LMs is shown in Table 2 …","url":["https://arxiv.org/pdf/2010.11745"]} -{"year":"2020","title":"Retrieving Comparative Arguments using Deep Pre-trained Language Models and NLU","authors":["V Chekalina, A Panchenko"],"snippet":"… ChatNoir is an Elasticsearch-based5 engine providing access to nearly 3 billion web pages from ClueWeb and Common Crawl corpora … ACM. 2. J. Bevendorff, B. Stein, M. Hagen, and M. Potthast. 
Elastic ChatNoir: Search …","url":["http://ceur-ws.org/Vol-2696/paper_210.pdf"]} -{"year":"2020","title":"Reusing a Pretrained Language Model on Languages with Limited Corpora for Unsupervised NMT","authors":["A Chronopoulou, D Stojanovski, A Fraser - arXiv preprint arXiv:2009.07610, 2020"],"snippet":"… We use 68M En sentences from NewsCrawl, 2.4M Mk and 4M Sq, both from CommonCrawl and Wikipedia … Second, in En- De, we use high-quality corpora for both languages (NewsCrawl), whereas Mk and Sq are trained on low-quality CommonCrawl data …","url":["https://arxiv.org/pdf/2009.07610"]} -{"year":"2020","title":"Review of the Recent Techniques for Learning Commonsense Knowledge applied to the Winograd Schema Challenge","authors":["A Koleva"],"snippet":"… 4 A Combined Approach Prakash et al. [2], to the best of our knowledge, are the first ones to combine methods from KRR with methods from ML such as language models. They propose a framework in which four different …","url":["http://ecai2020.eu/papers/1513_paper.pdf"]} -{"year":"2020","title":"Review rating prediction framework using deep learning","authors":["BH Ahmed, AS Ghabayen - Journal of Ambient Intelligence and Humanized …, 2020"],"snippet":"… word representation) embedding. There are several Glove embeddings from different sources, such as Twitter, Wikipedia or the common crawl. We utilized we utilize the Glove embedding trained by (Pennington et al. 2014) on …","url":["https://link.springer.com/article/10.1007/s12652-020-01807-4"]} -{"year":"2020","title":"Revisiting Round-Trip Translation for Quality Estimation","authors":["J Moon, H Cho, EL Park - arXiv preprint arXiv:2004.13937, 2020"],"snippet":"Page 1. Revisiting Round-Trip Translation for Quality Estimation Jihyung Moon Naver Papago Hyunchang Cho Naver Papago {jihyung.moon, hyunchang.cho, lucypark}@navercorp.com Eunjeong L. 
Park Naver Papago Abstract …","url":["https://arxiv.org/pdf/2004.13937"]} -{"year":"2020","title":"RobBERT: a Dutch RoBERTa-based Language Model","authors":["P Delobelle, T Winters, B Berendt - arXiv preprint arXiv:2001.06286, 2020"],"snippet":"… 3.1 Data We pre-trained our model on the Dutch section of the OSCAR corpus, a large multilingual corpus which was obtained by language classification in the Common Crawl corpus (Ortiz Suárez et al., 2019). This Dutch corpus …","url":["https://arxiv.org/pdf/2001.06286"]} -{"year":"2020","title":"Robust Cross-lingual Embeddings from Parallel Sentences","authors":["A Sabet, P Gupta, JB Cordonnier, R West, M Jaggi - arXiv preprint arXiv:1912.12481, 2019"],"snippet":"… MUSE 0.38 0.30 0.74 0.64 RCSLS 0.38 0.30 0.74 0.64 FASTTEXT Common Crawl 0.49 0.32 0.75 0.57 BIVEC 0.40 0.36 0.70 0.60 … We also include FASTTEXT monolingual vectors trained on CommonCrawl data (Grave et …","url":["https://arxiv.org/pdf/1912.12481"]} -{"year":"2020","title":"Robust Prediction of Punctuation and Truecasing for Medical ASR","authors":["M Sunkara, S Ronanki, K Dixit, S Bodapati, K Kirchhoff - Proceedings of the First …, 2020","MSSRK Dixit, SBK Kirchhoff - ACL 2020, 2020"],"snippet":"… But just like any other model, these Language Models are biased by their training data. In particular, they are typically trained on data that is easily available in large quantities on the internet eg Wikipedia, CommonCrawl etc …","url":["https://www.aclweb.org/anthology/2020.nlpmc-1.8.pdf","https://www.aclweb.org/anthology/2020.nlpmc-1.pdf#page=65"]} -{"year":"2020","title":"Robust Prediction of Punctuation and Truecasing for Medical ASR","authors":["M Sunkara, S Ronanki, K Dixit, S Bodapati, K Kirchhoff - arXiv preprint arXiv …, 2020"],"snippet":"… But just like any other model, these Language Models are biased by their training data.
In particular, they are typically trained on data that is easily available in large quantities on the internet eg Wikipedia, CommonCrawl etc …","url":["https://arxiv.org/pdf/2007.02025"]} -{"year":"2020","title":"Russian-English Bidirectional Machine Translation System","authors":["A Xv, W Chao - Ariel"],"snippet":"… For the monolingual data we use English and Russian Newscrawl as well as a filtered part of Commoncrawl in Russian … Russian is relatively smaller than that of English, we have to augment the Newscrawl data for Russian …","url":["http://statmt.org/wmt20/pdf/2020.wmt-1.35.pdf"]} -{"year":"2020","title":"Samsung R&D Institute Poland submission to WMT20 News Translation Task","authors":["M Krubinski, M Chochowski, B Boczek, M Koszowski… - Proceedings of the Fifth …, 2020"],"snippet":"… Pretraining a complete encoder-decoder model allows for later direct fine-tuning on the translation ob- jective, with parallel corpora. In our experiment, we sampled 250M sentences from CommonCrawl for Czech, English and Polish (ie 750M in total) …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.16.pdf"]} -{"year":"2020","title":"SandiDoc at CLEF 2020-Consumer Health Search: AdHoc IR Task","authors":["S Seneviratne, E Daskalaki, M Zakir - Conference and Labs of the Evaluation (CLEF) …, 2020"],"snippet":"… respectively. 2 Resources 2.1 Dataset The document collection used in the document retrieval task was acquired by the common crawl dump of 2018-19. This included web pages of the formats such as HTML, XHTML, XML. 
The …","url":["http://www.dei.unipd.it/~ferro/CLEF-WN-Drafts/CLEF2020/paper_160.pdf"]} -{"year":"2020","title":"SardiStance@EVALITA2020: Overview of the Task on Stance Detection in Italian Tweets","authors":["AT Cignarella, M Lai, C Bosco, V Patti, P Rosso - … of the 7th Evaluation Campaign of …, 2020"],"snippet":"… In particular, they trained three classifiers based respectively on SENTIPOLC 2016 (Barbieri et al., 2016) for sentiment analysis classification, on HaSpeeDe 2018 (Bosco et al., 2018) 4https://huggingface.co/Musixmatch/umberto-commoncrawl-cased-v1 …","url":["http://ceur-ws.org/Vol-2765/paper159.pdf"]} -{"year":"2020","title":"SberQuAD–Russian Reading Comprehension Dataset: Description and Analysis","authors":["P Braslavski - … IR Meets Multilinguality, Multimodality, and Interaction …"],"snippet":"… We tokenized text using spaCy. 12 To initialize the embedding layer for BiDAF, DocQA, DrQA, and R-Net we use Russian case-sensitive fastText embeddings trained on Common Crawl and Wikipedia. 13 This initialization is used for both questions and paragraphs …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=IxP9DwAAQBAJ&oi=fnd&pg=PA3&dq=commoncrawl&ots=BCbV87DfTS&sig=E3mWPymZbBAvDLn8azUpJimpaMs"]} -{"year":"2020","title":"Scalable Cross Lingual Pivots to Model Pronoun Gender for Translation","authors":["K Webster, E Pitler - arXiv preprint arXiv:2006.08881, 2020"],"snippet":"… glish was most recently contested at WMT'132 (Bojar et al., 2013), which offered participants 14,980,513 sentence pairs from Europarl3 (Habash et al., 2017), Common Crawl (Smith et al., 2013), the United Nations cor …","url":["https://arxiv.org/pdf/2006.08881"]} -{"year":"2020","title":"Scalable, Multi-Constraint, Complex-Objective Graph Partitioning","authors":["GM Slota, C Root, K Devine, K Madduri… - IEEE Transactions on …, 2020"],"snippet":"Page 1.
Scalable, Multi-Constraint, Complex-Objective Graph Partitioning George M. Slota, Cameron Root, Karen Devine, Kamesh Madduri, and Sivasankaran Rajamanickam Abstract—We introduce XTRAPULP, a distributed-memory …","url":["https://ieeexplore.ieee.org/abstract/document/9115834/"]} -{"year":"2020","title":"Scaling Laws for Neural Language Models","authors":["J Kaplan, S McCandlish, T Henighan, TB Brown… - arXiv preprint arXiv …, 2020","OEIAY Need"],"snippet":"01/23/20 - We study empirical scaling laws for language model performance on the cross-entropy loss. The loss scales as a power-law with mode...","url":["https://arxiv.org/pdf/2001.08361","https://deepai.org/publication/scaling-laws-for-neural-language-models"]} -{"year":"2020","title":"Scientific Question Answering with AAN","authors":["K Mueller"],"snippet":"… have easy to understand language as well as little background knowledge required. They then used web data provided by Common Crawl to find supporting web documents from which to draw information and construct …","url":["https://zoo.cs.yale.edu/classes/cs290/19-20a/mueller.keaton.kim6/Keaton_Mueller_CPSC290_Final_Report.pdf"]} -{"year":"2020","title":"Scones: Towards Conversational Authoring of Sketches","authors":["F Huang, D Ha, E Schoop, J Canny"],"snippet":"… detail in Section 4.1. For each text token t, we use a 300-dimensional GLoVe vector trained on 42B tokens from the Common Crawl dataset [22] to semantically represent these words in the instructions. To train the Transformer …","url":["http://people.eecs.berkeley.edu/~eschoop/docs/scones.pdf"]} -{"year":"2020","title":"Scoring Dimension-Level Job Performance From Narrative Comments: Validity and Generalizability When Using Natural Language Processing","authors":["AB Speer - Organizational Research Methods, 2020"],"snippet":"Performance appraisal narratives are qualitative descriptions of employee job performance.
This data source has seen increased research attention due to the ability to efficiently derive insights u...","url":["https://journals.sagepub.com/doi/abs/10.1177/1094428120930815"]} -{"year":"2020","title":"SDRS: A new lossless dimensionality reduction for text corpora","authors":["IV de Mendizabal, V Basto-Fernandes, E Ezpeleta… - Information Processing & …, 2020"],"snippet":"… Clueweb 12, Web (html) pages, English, unknown, 870M. Common Crawl Data, Web (html) pages, multilingual, 100% spam, 9 Billion in 2014 and increasing. YouTube Comments Dataset, Youtube comments, multilingual, 7% spam, 6M …","url":["https://www.sciencedirect.com/science/article/pii/S0306457319314694"]} -{"year":"2020","title":"Searching the Web for Cross-lingual Parallel Data","authors":["A El-Kishky, P Koehn, H Schwenk - Proceedings of the 43rd International ACM SIGIR …, 2020"],"snippet":"… and Tokenization – CommonCrawl Preprocessing Code • Open-source Code for Generating Cross-lingual Datasets – Code for Generating High-quality Monolingual Data from CommonCrawl – Code for Generating …","url":["https://dl.acm.org/doi/abs/10.1145/3397271.3401417"]} -{"year":"2020","title":"SEDAR: a Large Scale French-English Financial Domain Parallel Corpus","authors":["A Ghaddar, P Langlais - Proceedings of The 12th Language Resources and …, 2020"],"snippet":"… It contains 40.8M sentence pairs extracted from five datasets that cover various domains: EUROPARL V7 (Koehn, 2005), UNITED NATIONS CORPUS (Eisele and Chen, 2010), COMMON CRAWL CORPUS, NEWS COMMENTARY, and 109 FRENCH-ENGLISH corpus …","url":["https://www.aclweb.org/anthology/2020.lrec-1.442.pdf"]} -{"year":"2020","title":"Seeking Meaning: Examining a Cross-situational Solution to Learn Action Verbs Using Human Simulation Paradigm","authors":["Y Zhang, A Amatuni, E Cain, C Yu"],"snippet":"… of words in a given corpus. 
We used the GloVe model pretrained on 840B tokens of Common Crawl text to create semantic distance measures (Pennington, Socher & Manning, 2014). We discovered that both semantic knowledge …","url":["https://cognitivesciencesociety.org/cogsci20/papers/0705/0705.pdf"]} -{"year":"2020","title":"Self-training Improves Pre-training for Natural Language Understanding","authors":["J Du, E Grave, B Gunel, V Chaudhary, O Celebi, M Auli… - arXiv preprint arXiv …, 2020"],"snippet":"… As a large-scale external bank of unannotated sentences, we extract and filter text from CommonCrawl 1 (Wenzek et al., 2019) … CommonCrawl data contains a wide variety of domains and text styles which makes it a good general-purpose corpus …","url":["https://arxiv.org/pdf/2010.02194"]} -{"year":"2020","title":"Semantic image retrieval","authors":["T Berg, PN Belhumeur - US Patent 10,769,502, 2020","T Berg, PN Belhumeur - US Patent App. 16/999,616, 2020"],"snippet":"… GloVe. Common Crawl is a public repository of web crawl data (eg, blogs, news, and comments) available on the internet in the commoncrawl.org domain, the entire contents of which is hereby incorporated by reference. GloVe …","url":["https://patents.google.com/patent/US20200380320A1/en","https://www.freepatentsonline.com/10769502.html"]} -{"year":"2020","title":"Semantic Matching: Dynamic Composition of Matcher Ensembles for Ontology Alignment","authors":["A Vennesland - 2020"],"snippet":"Page 1. ISBN 978-82-326-4842-9 (printed ver.) ISBN 978-82-326-4843-6 (electronic ver.)
ISSN 1503-8181 Doctoral theses at NTNU, 2020:247 Audun Vennesland Semantic Matching Dynamic Composition of Matcher …","url":["https://ntnuopen.ntnu.no/ntnu-xmlui/bitstream/handle/11250/2674337/Audun%20Vennesland_PhD.pdf?sequence=1"]} -{"year":"2020","title":"Semantic Networks for Engineering Design: A Survey","authors":["J Han, S Sarica, F Shi, J Luo - arXiv preprint arXiv:2012.07060, 2020"],"snippet":"… relations No Pre-trained word2vec (Mikolov et al., 2013) Unsupervised Google News Cosine similarity No Pre-trained GloVe (Pennington et al., 2014) Unsupervised Wikipedia, Gigaword, Common Crawl Cosine similarity No B-Link …","url":["https://arxiv.org/pdf/2012.07060"]} -{"year":"2020","title":"Semantic Norm Extrapolation is a Missing Data Problem","authors":["B Snefjella, I Blank - 2020"],"snippet":"Page 1. DRAFT Running head: SEMANTIC NORM EXTRAPOLATION 1 Semantic Norm Extrapolation is a Missing Data Problem Bryor Snefjella & Idan Blank University of California, Los Angeles Department of Psychology Author Note …","url":["https://psyarxiv.com/y2gav/download?format=pdf"]} -{"year":"2020","title":"Semantic Recommendations of Books Using Recurrent Neural Networks","authors":["M Nitu, S Ruseti, M Dascalu, S Tomescu - Ludic, Co-design and Tools Supporting Smart …"],"snippet":"… We conducted the experiments using pre-trained FastText embeddings for Romanian language. The embedding model consists of 2 million word vectors trained on Common Crawl and Wikipedia (approx. 600 billion tokens). Page 239. 240 M. Nitu et al. Fig …","url":["https://link.springer.com/content/pdf/10.1007/978-981-15-7383-5.pdf#page=234"]} -{"year":"2020","title":"Semantic-Based Algorithm for Scoring Alternative Uses Tests (AUT)","authors":["C Stevenson"],"snippet":"… Page 6. Each response was mapped to a word vector in 300 dimensions using a fastText pretrained model for Dutch. 
This model was trained on Wikipedia and Common Crawl data, using CBOW model with character …","url":["http://modelingcreativity.org/blog/wp-content/uploads/2020/07/Tsai_Y_BDS_Thesis_report_11695986_PML.pdf"]} -{"year":"2020","title":"Semantical Search Term Clustering for Performance Prediction","authors":["R Coenders"],"snippet":"Page 1. Eindhoven University of Technology MASTER Semantical search term clustering for performance prediction Coenders, R. Award date: 2019 Link to publication Disclaimer This document contains a student thesis (bachelor's …","url":["https://research.tue.nl/files/139495213/Thesis_RikCoenders_Aug2019.pdf"]} -{"year":"2020","title":"Semi-autonomous methodology to validate and update customer needs database through text data analytics","authors":["AM Bigorra, O Isaksson, M Karlberg - International Journal of Information …, 2020"],"snippet":"… Two different pre-trained word vectors based on 1 and 2 million English words from Wikipedia and Common Crawl are considered and they are referred along the rest of the presented paper as emb1 and emb2, respectively 3 …","url":["https://www.sciencedirect.com/science/article/pii/S0268401219300817"]} -{"year":"2020","title":"SemSeq: A Regime for Training Widely-Applicable Word-Sequence Encoders","authors":["H Tsuyuki, TY Ogawa, HTB Kobayashi - … : 16th International Conference of the Pacific …, 2020"]} -{"year":"2020","title":"Sense Inventories for Arabic Texts","authors":["M Alian, A Awajan"],"snippet":"… E. Fasttext pre-trained embeddings Arabic Fasttext embeddings are provided by Grave et al. [16]. These embeddings are resulted from training on Wikipedia and Common Crawl corpus. 
They have used an extension of the Fasttext model with subword information …","url":["https://www.researchgate.net/profile/Marwah_Alian/publication/346785930_Sense_Inventories_for_Arabic_Texts/links/5fd0a6a745851568d14da099/Sense-Inventories-for-Arabic-Texts.pdf"]} -{"year":"2020","title":"Sentence Matching with Deep Self-attention and Co-attention Features","authors":["Z Wang, D Yan - 2020","Z Wang, D Yan - … Conference on Knowledge Science, Engineering and …, 2021"],"snippet":"… 4.1 Implementation Details. In our experiments, word embedding vectors are initialized with 300d GloVe vectors pre-trained from the 840B Common Crawl corpus. Embeddings of out of the vocabulary of GloVe is initialized …","url":["https://link.springer.com/chapter/10.1007/978-3-030-82147-0_45","https://openreview.net/pdf?id=EEV7-ruXM5H"]} -{"year":"2020","title":"Sentence-Embedding and Similarity via Hybrid Bidirectional-LSTM and CNN Utilizing Weighted-Pooling Attention","authors":["D HUANG, A AHMED, SY ARAFAT, KI RASHID… - IEICE TRANSACTIONS on …, 2020"],"snippet":"… bag-of-words architecture. • A GloVe is a 300-dimensional word embedding model learned on aggregated global word co-occurrence statistics from Common Crawl (840 billion to- kens) [32]. 
4.2 Datasets The datasets are concisely …","url":["https://www.jstage.jst.go.jp/article/transinf/E103.D/10/E103.D_2018EDP7410/_pdf"]} -{"year":"2020","title":"Sentiment Analysis Approach Based on Combination of Word Embedding Techniques","authors":["I Kaibi, H Satori - Embedded Systems and Artificial Intelligence, 2020"],"snippet":"… The fastText pre-trained word vectors is a high-quality word representation for 157 languages, two sources of data are used to train fastText pre-trained models: the free online encyclopedia Wikipedia and data from the common crawl project …","url":["https://link.springer.com/chapter/10.1007/978-981-15-0947-6_76"]} -{"year":"2020","title":"Sentiment Analysis based Multi-person Multi-criteria Decision Making Methodology: Using Natural Language Processing and Deep Learning for Decision Aid","authors":["C Zuheros, E Martínez-Cámara, E Herrera-Viedma… - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. arXiv:2008.00032v1 [cs.CL] 31 Jul 2020 Sentiment Analysis based Multi-person Multi-criteria Decision Making Methodology: Using Natural Language Processing and Deep Learning for Decision Aid Cristina Zuheros1 …","url":["https://arxiv.org/pdf/2008.00032"]} -{"year":"2020","title":"Sentiment analysis for customer relationship management: an incremental learning approach","authors":["N Capuano, L Greco, P Ritrovato, M Vento - Applied Intelligence, 2020"],"snippet":"… text corpora. In particular, Universal Dependencies and WikiNER corpora [31] were used for Italian, while OntoNotes [32] and Common Crawl Footnote 3 corpora were used for English. The classification model. 
Once the WEs …","url":["https://link.springer.com/article/10.1007/s10489-020-01984-x"]} -{"year":"2020","title":"Sentiment Analysis for Hinglish Code-mixed Tweets by means of Cross-lingual Word Embeddings","authors":["P Singh, E Lefever - LREC 2020–4th Workshop on Computational …, 2020"],"snippet":"… This can probably be attributed to the quality of the monolingual embeddings, since the English embeddings were trained on the vast Common Crawl data while the Code-Mixed embeddings were trained on a little more than 100,000 scraped tweets …","url":["https://biblio.ugent.be/publication/8662137/file/8662140"]} -{"year":"2020","title":"Sentiment Aware Word Embeddings Using Refinement and Senti-Contextualized Learning Approach","authors":["B Naderalvojoud, EA Sezer - Neurocomputing, 2020"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0925231220304811"]} -{"year":"2020","title":"Sentiment detection with FedMD: Federated Learning via Model Distillation","authors":["PTG Momcheva"],"snippet":"… value due to the nature of tweets - they are short, have little context and contain misspelled and shortened words, all of which stands in general contrast to the GloVe training data and logic, which was based on structured …","url":["http://ceur-ws.org/Vol-2656/paper24.pdf"]} -{"year":"2020","title":"Seq2Seq Models for Recommending Short Text Conversations","authors":["J Torres, C Vaca, L Terán, CL Abad - Expert Systems with Applications, 2020"],"snippet":"… to a lower-dimensional representation ( w ∈ R d ). 
For the initialization of the word embeddings, we use the pre-trained vectors provided by Mikolov, Grave, Bojanowski, Puhrsch, and Joulin (2018), which consist of 2 …","url":["https://www.sciencedirect.com/science/article/pii/S0957417420300956"]} -{"year":"2020","title":"Sequential Latent Knowledge Selection for Knowledge-Grounded Dialogue","authors":["B Kim, J Ahn, G Kim - arXiv preprint arXiv:2002.07510, 2020"],"snippet":"Page 1. Published as a conference paper at ICLR 2020 SEQUENTIAL LATENT KNOWLEDGE SELECTION FOR KNOWLEDGE-GROUNDED DIALOGUE Byeongchang Kim Jaewoo Ahn Gunhee Kim Department of Computer …","url":["https://arxiv.org/pdf/2002.07510"]} -{"year":"2020","title":"Sequential Neural Networks for Noetic End-to-End Response Selection","authors":["Q Chen, W Wang - Computer Speech & Language, 2020"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S088523082030005X"]} -{"year":"2020","title":"Sequential Transfer Learning for Event Detection and Key Sentence Extraction","authors":["A Ollagnier, H Williams"],"snippet":"… Training is performed on the base version trained on cased text. XLNet [22] 12-layer, 768-hidden, 12-heads 110M parameters Pre-trained models are based on English texts from Wikipedia, BooksCorpus, Giga5, ClueWeb, and Common Crawl …","url":["https://www.researchgate.net/profile/Ollagnier_Anais/publication/344440133_Sequential_Transfer_Learning_for_Event_Detection_and_Key_Sentence_Extraction/links/5f75ad1a299bf1b53e0397ce/Sequential-Transfer-Learning-for-Event-Detection-and-Key-Sentence-Extraction.pdf"]} -{"year":"2020","title":"Shopify in Germany: An analysis of a Canadian e-commerce platform's marketing strategy and activities in an international market.","authors":["K Howe-Patterson, I Schuiling"],"snippet":"Page 1. 
Available at: http://hdl.handle.net/2078.1/thesis:25661 [Downloaded 2020/09/25 at 09:14:22 ] \"Shopify in Germany: An analysis of a Canadian e-commerce platform's marketing strategy and activities in an international …","url":["https://dial.uclouvain.be/downloader/downloader.php?pid=thesis%3A25661&datastream=PDF_01&cover=cover-mem"]} -{"year":"2020","title":"Sidecar: Augmenting Word Embedding Models with Expert Knowledge","authors":["M Lemay, D Shapiro, MK MacPherson, K Yee… - Future of Information and …, 2020"],"snippet":"… spaCy's en_core_web_lg GloVe model 8 , trained on Common Crawl 9. Facebook's crawl-300d-2M fastText model 10 , also trained on Common Crawl. All three models produce vectors of size 300. To generate vectors for the …","url":["https://link.springer.com/chapter/10.1007/978-3-030-39442-4_39"]} -{"year":"2020","title":"Sights, titles and tags: mining a worldwide photo database for sightseeing","authors":["A Luberg, J Pindis, T Tammet"],"snippet":"… Explosion AI spaCy pretrained model [7]. We use a medium sized web code model. The model is based on Common Crawl and OntoNotes 5 [22] sources … Facebook fastText pretrained model [8]. The model is pretrained on Common Crawl and Wikipedia data …","url":["http://wims2020.sigappfr.org/wp-content/uploads/2020/06/WIMS'20/p149-Luberg.pdf"]} -{"year":"2020","title":"SimAlign: High Quality Word Alignments without Parallel Training Data using Static and Contextualized Embeddings","authors":["MJ Sabet, P Dufter, H Schütze - arXiv preprint arXiv:2004.08728, 2020"],"snippet":"… In addition, we use XLM-RoBERTa base (Conneau et al., 2019), which is pretrained on 100 languages on CommonCrawl data. 
We denote alignments obtained using the embeddings from the i-th layer by XLM-R[i] …","url":["https://arxiv.org/pdf/2004.08728"]} -{"year":"2020","title":"Similarity judgment within and across categories: A comprehensive model comparison","authors":["R Richie, S Bhatia - 2020"],"snippet":"… Google News 100B 300 None Magnitude Librarya fastText 600B Common Crawl FastText with Continuous Bag of Words (CBOW) … GloVe 840B Common Crawl GloVe Common Crawl 840B 300 None Magnitude Library Glove 840B Common Crawl, Paragram …","url":["https://psyarxiv.com/5pa9r/download"]} -{"year":"2020","title":"Simulation Induces Durable, Extensive Changes to Self-knowledge","authors":["J Rubin-McGregor, Z Zhao, D Tamir - PsyArXiv. December, 2020"],"snippet":"Page 1. SIMULATION CHANGES SELF-KNOWLEDGE 1 1 2 3 4 5 6 Simulation Induces Durable, Extensive Changes to Self-Knowledge 7 Jordan Rubin-McGregora, Zidong Zhaoa, and Diana Tamira 8 aDepartment of …","url":["https://psyarxiv.com/m2wgk/download/?format=pdf"]} -{"year":"2020","title":"SINAI at eHealth-KD Challenge 2020: Combining Word Embeddings for Named Entity Recognition in Spanish Medical Records","authors":["P López-Úbedaa, JM Perea-Ortegab…"],"snippet":"… we have used two specific pre-trained word embeddings: BETO [28], which follows a BERT model trained on a big Spanish corpus, and XLM-RoBERTa [29], which were generated by using a large multilingual language model …","url":["http://ceur-ws.org/Vol-2664/eHealth-KD_paper7.pdf"]} -{"year":"2020","title":"Siva at WNUT-2020 Task 2: Fine-tuning Transformer Neural Networks for Identification of Informative Covid-19 Tweets","authors":["S Sai - Proceedings of the Sixth Workshop on Noisy User …, 2020"],"snippet":"… The base version of RoBERTa has 125M parameters, and the large version has 355M parameters. XLM-RoBERTa XLM-RoBERTa(Conneau et al., 2019) is a multilingual model trained on 2.5 TB data from CommonCrawl. 
This …","url":["https://www.aclweb.org/anthology/2020.wnut-1.45.pdf"]} -{"year":"2020","title":"SJTU-NICT's Supervised and Unsupervised Neural Machine Translation Systems for the WMT20 News Translation Task","authors":["Z Li, H Zhao, R Wang, K Chen, M Utiyama, E Sumita - arXiv preprint arXiv …, 2020"],"snippet":"… In the supervised PL→EN translation direction, we based on the XLM framework to pre-train a Polish language model using common crawl and news crawl monolingual data, and proposed the XLM enhanced NMT model …","url":["https://arxiv.org/pdf/2010.05122"]} -{"year":"2020","title":"SMAN: Stacked Multi-Modal Attention Network for Cross-Modal Image-Text Retrieval","authors":["BR Loss"],"snippet":"Page 1. warwick.ac.uk/lib-publications Manuscript version: Author's Accepted Manuscript The version presented in WRAP is the author's accepted manuscript and may differ from the published version or Version of Record. Persistent …","url":["https://pdfs.semanticscholar.org/7588/90bef9a1a85a25a1f6831a58f00a462476af.pdf"]} -{"year":"2020","title":"SML: Semantic Meta-learning for Few-shot Semantic Segmentation","authors":["AK Pambala, T Dutta, S Biswas - arXiv preprint arXiv:2009.06680, 2020"],"snippet":"… Word2vec (Mikolov et al. 2013) is trained on Google News dataset (Wang, Ye, and Gupta 2018) which contains 3-million words; (2) FastText (Joulin et al. 2016) is trained on Common-Crawl dataset (Mikolov et al. 2018). We use these …","url":["https://arxiv.org/pdf/2009.06680"]} -{"year":"2020","title":"SNK@ DANKMEMES: Leveraging Pretrained Embeddings for Multimodal Meme Detection","authors":["S Fiorucci - Proceedings of Seventh Evaluation Campaign of …, 2020"],"snippet":"… et al., 2018). Word embeddings are trained on Common Crawl and Wikipedia, using CBOW with positionweights, in dimension 300, with character n-grams of length 5, a window of size 5 and 10 negatives. 
We calculated the …","url":["http://ceur-ws.org/Vol-2765/paper121.pdf"]} -{"year":"2020","title":"Social biases in word embeddings and their relation to human cognition","authors":["A Caliskan, M Lewis - 2020"],"snippet":"… state-of-the-art word embeddings is the vast amount of training data available. GloVe is trained on 840 billion tokens and more than 2 million unique words of Common Crawl data which is a crawl of the entire world wide web. Similarly, Word2vec is trained on a …","url":["https://psyarxiv.com/d84kg/download?format=pdf"]} -{"year":"2020","title":"Social Media Attributions in the Context of Water Crisis","authors":["R Sarkar, H Sarkar, S Mahinder, AR KhudaBukhsh - arXiv preprint arXiv:2001.01697, 2020"],"snippet":"… used embedding in this preprocessing step. We used the 300 dimensional GloVe model trained on 840 billion tokens of the CommonCrawl corpus, having a vocabulary size of 2.2 million. While calculating the embedding of …","url":["https://arxiv.org/pdf/2001.01697"]} -{"year":"2020","title":"Sociolinguistic Properties of Word Embeddings","authors":["A Arseniev-Koehler, JG Foster - SocArXiv. August, 2020"],"snippet":"… These studies use large, commonly available pre-trained embeddings or their training corpora, such as Google News, web data (Common Crawl), and Google Books … They replicated results using a pretrained model on Common Crawl data …","url":["https://osf.io/b8kud/download"]} -{"year":"2020","title":"Software for creating and analyzing semantic representations","authors":["FÅ Nielsen, LK Hansen - Statistical Semantics, 2020"],"snippet":"… This package provides models for the tagger, parser, named-entity recognizer and distributional semantic vectors trained on OntoNotes Release 5 and the Common Crawl dataset … 10 K–50 K. 300. 29 languages. GloVe. 
Common …","url":["https://link.springer.com/chapter/10.1007/978-3-030-37250-7_3"]} -{"year":"2020","title":"Spoken words as biomarkers: using machine learning to gain insight into communication as a predictor of anxiety","authors":["G Demiris, KL Corey Magan, D Parker Oliver… - Journal of the American …, 2020"],"snippet":"… The validity of using cosine distance in an embedding space to measure text similarity depends largely on how well the embedding space represents the semantic concepts present in the text. In our case, the word embeddings …","url":["https://academic.oup.com/jamia/advance-article-abstract/doi/10.1093/jamia/ocaa049/5831105"]} -{"year":"2020","title":"SPONTANEOUS STEREOTYPE CONTENT: MEASUREMENT AIMING TOWARD THEORETICAL INTEGRATION AND DISCOVERY","authors":["G Nicolas Ferreira - 2020","GN Ferreira - 2020"],"snippet":"Page 1. SPONTANEOUS STEREOTYPE CONTENT: MEASUREMENT AIMING TOWARD THEORETICAL INTEGRATION AND DISCOVERY GANDALF NICOLAS FERREIRA A DISSERTATION PRESENTED TO THE FACULTY OF PRINCETON UNIVERSITY IN …","url":["http://search.proquest.com/openview/41d33da8e87d459690442733f719668f/1?pq-origsite=gscholar&cbl=18750&diss=y","https://dataspace.princeton.edu/bitstream/88435/dsp01zp38wg55d/1/NicolasFerreira_princeton_0181D_13366.pdf"]} -{"year":"2020","title":"Stanza: A Python Natural Language Processing Toolkit for Many Human Languages","authors":["P Qi, Y Zhang, Y Zhang, J Bolton, CD Manning - arXiv preprint arXiv:2003.07082, 2020"],"snippet":"… For the character-level language models in the NER component, we pretrained them on a mix of the Common Crawl and Wikipedia dumps, and the news corpora released by the WMT19 Shared Task (Barrault et al., 2019), with …","url":["https://arxiv.org/pdf/2003.07082"]} -{"year":"2020","title":"STIL--Simultaneous Slot Filling, Translation, Intent Classification, and Language Identification: Initial Results using mBART on MultiATIS++","authors":["JGM FitzGerald - arXiv preprint arXiv:2010.00760, 
2020"],"snippet":"… The mBART.cc25 model was trained on 25 languages for 500k steps using a 1.4 TB corpus of scraped website data taken from Common Crawl (Wenzek et al., 2019). The model was trained to reconstruct masked tokens and to rearrange scrambled sentences …","url":["https://arxiv.org/pdf/2010.00760"]} -{"year":"2020","title":"STILTool: A Semantic Table Interpretation evaLuation Tool","authors":["E Jimenez-Ruiz, A Maurino - The Semantic Web: ESWC 2020 Satellite Events …","M Cremaschi, A Siano, R Avogadro, E Jimenez-Ruiz…"],"snippet":"… In order to size the spread of tabular data, 2.5 M tables have been identified within the Common Crawl repository1 [3]. The current snapshot of Wikipedia contains more than 3.23 M tables from more than 520k Wikipedia articles …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=C0UIEAAAQBAJ&oi=fnd&pg=PA61&dq=commoncrawl&ots=OcUKD8orbe&sig=5EUZjTQOLRGuwqaWXWRmrck1S50","https://preprints.2020.eswc-conferences.org/posters_demos/paper_293.pdf"]} -{"year":"2020","title":"Structured deep neural network with low complexity","authors":["S Liao - 2020"],"snippet":"Page 1. STRUCTURED DEEP NEURAL NETWORK WITH LOW COMPLEXITY By SIYU LIAO A dissertation submitted to the School of Graduate Studies Rutgers, The State University of New Jersey in partial fulfillment of the …","url":["https://rucore.libraries.rutgers.edu/rutgers-lib/64996/PDF/1/"]} -{"year":"2020","title":"Study and Creation of Datasets for Comparative Questions Classification","authors":["S Stahlhacke"],"snippet":"… The data used by the system is a preprocessed version of the Common Crawl Text Corpus8, which crawled from the world wide web … Which one is better suited for me, Xbox One or PS4? 8https://commoncrawl.org/ 4 Page 11. CHAPTER 1. 
INTRODUCTION …","url":["https://www.inf.uni-hamburg.de/en/inst/ab/lt/teaching/theses/completed-theses/2020-ma-stahlhacke.pdf"]} -{"year":"2020","title":"Studying the Evolution of Greek Words via Word Embeddings","authors":["V Barzokas, E Papagiannopoulou, G Tsoumakas - 11th Hellenic Conference on …, 2020"],"snippet":"… Despite the limited size of the Greek corpus compared to Common Crawl and Wikipedia used for the pre-trained fastText embeddings, we didn't detect any notable difference in the quality of our models in comparison with the pre-trained one …","url":["https://dl.acm.org/doi/abs/10.1145/3411408.3411425"]} -{"year":"2020","title":"Substance over Style: Document-Level Targeted Content Transfer","authors":["A Hegel, S Rao, A Celikyilmaz, B Dolan - arXiv preprint arXiv:2010.08618, 2020"],"snippet":"Page 1. Substance over Style: Document-Level Targeted Content Transfer Allison Hegel1∗ Sudha Rao2 Asli Celikyilmaz2 Bill Dolan2 1Lexion, Seattle, WA, USA 2Microsoft Research, Redmond, WA, USA allison@lexion.ai {sudhra,aslicel,billdol}@microsoft.com Abstract …","url":["https://arxiv.org/pdf/2010.08618"]} -{"year":"2020","title":"Subword Segmentation and a Single Bridge Language Affect Zero-Shot Neural Machine Translation","authors":["A Rios, M Müller, R Sennrich - arXiv preprint arXiv:2011.01703, 2020","AR Gonzales, M Müller, R Sennrich - Proceedings of the Fifth Conference on …, 2020"],"snippet":"… Page 3. 
530 corpora training dev test Language Pairs with English: de↔en Commoncrawl, Europarl-v9, Wikititles-v1 5M 250 2000 cs↔en Europarl-v9, CzEng1.7 5M 250 2000 fr↔en Commoncrawl, Europarl-v7 …","url":["https://arxiv.org/pdf/2011.01703","https://www.aclweb.org/anthology/2020.wmt-1.64.pdf"]} -{"year":"2020","title":"Suggesting Citations for Wikidata Claims based on Wikipedia's External References","authors":["P Curotto, A Hogan"],"snippet":"… Offline: Given that some Wikidata items do not have an associated Wikipedia article, that many Wikipedia articles have few references, etc., it would be interesting to develop a broader corpus with more documents from the Web, perhaps from the Common Crawl …","url":["http://aidanhogan.com/docs/wikidata-references.pdf"]} -{"year":"2020","title":"Supervised Understanding of Word Embeddings","authors":["HZ Yerebakan, P Bhatia, Y Shinagawa"],"snippet":"… In our experiments, we have used scikit-learn linear logistic regression model with a positive class weight of 2 to enhance the effect of positive words. We have used top 250k words of Fasttext Common Crawl word …","url":["https://rcqa-ws.github.io/papers/paper8.pdf"]} -{"year":"2020","title":"Surface pattern-enhanced relation extraction with global constraints","authors":["H Jiang, JT Liu, S Zhang, D Yang, Y Xiao, W Wang - Knowledge and Information …, 2020"],"snippet":"Relation extraction is one of the most important tasks in information extraction. The traditional works either use sentences or surface patterns (ie, the.","url":["https://link.springer.com/article/10.1007/s10115-020-01502-y"]} -{"year":"2020","title":"Survey on RNN and CRF models for de-identification of medical free text","authors":["JL Leevy, TM Khoshgoftaar, F Villanustre - Journal of Big Data, 2020"],"snippet":"The increasing reliance on electronic health record (EHR) in areas such as medical research should be addressed by using ample safeguards for patient privacy. 
These records often tend to be big data, and given that a significant …","url":["https://journalofbigdata.springeropen.com/articles/10.1186/s40537-020-00351-4"]} -{"year":"2020","title":"SYMPTOM EXTRACTION FROM ATRIAL FIBRILLATION PATIENT CLINICAL NOTES USING DEEP LEARNING","authors":["TET van Putten"],"snippet":"Page 1. Eindhoven University of Technology MASTER Symptom extraction from atrial fibrillation patient clinical notes using deep learning van Putten, TE Award date: 2020 Link to publication Disclaimer This document contains …","url":["https://pure.tue.nl/ws/portalfiles/portal/163432620/Master_Thesis_Tim_van_Putten.pdf"]} -{"year":"2020","title":"Syntax Role for Neural Semantic Role Labeling","authors":["Z Li, H Zhao, S He, J Cai - arXiv preprint arXiv:2009.05737, 2020"],"snippet":"Page 1. Syntax Role for Neural Semantic Role Labeling Zuchao Li Shanghai Jiao Tong University Department of Computer Science and Engineering charlee@sjtu. edu.cn Hai Zhao∗ Shanghai Jiao Tong University Department …","url":["https://arxiv.org/pdf/2009.05737"]} -{"year":"2020","title":"System and method for model derivation for entity prediction","authors":["FI Wyss, A Ganapathiraju, P Buduguppa - US Patent App. 16/677,989, 2020"],"snippet":"… The dense representation can be used to capture the contextual information. For an NER system, the information can be encoded in the form of “world knowledge' by using a corpus such as the Wikipedia corpus or Google's common crawl data …","url":["https://patents.google.com/patent/US20200151248A1/en"]} -{"year":"2020","title":"Systematic Mapping on Embedded Semantic Markup Validated with Data Mining Techniques","authors":["R Navarrete, C Montenegro, L Recalde - … Conference on Applied Human Factors and …, 2020"],"snippet":"… Markup format: microdata, rdfa, jsonld. 
Approach of the Research: ads, commerce, commoncrawl, crawl, deploy, education, egovernment, entity, error, extract, extraction, fix, government, learning, lod, mistake, owl, pld, plds, rdf, video, wdc, webdatacommons …","url":["https://link.springer.com/chapter/10.1007/978-3-030-51328-3_53"]} -{"year":"2020","title":"SYSTEMS AND METHODS FOR LEARNING USER REPRESENTATIONS FOR OPEN VOCABULARY DATA SETS","authors":["T Durand, G Mori - US Patent App. 16/826,215, 2020"],"snippet":"Systems and methods adapted for training a machine learning model to predict data labels are described. The approach includes receiving a first data set comprising first data objects and associated fi.","url":["https://www.freepatentsonline.com/y2020/0302340.html"]} -{"year":"2020","title":"TabEAno: Table to Knowledge Graph Entity Annotation","authors":["P Nguyen, N Kertkeidkachorn, R Ichise, H Takeda - arXiv preprint arXiv:2010.01829, 2020"],"snippet":"… Note that, tables in this study refer to relational vertical tables. A 3 Open Data Vision: https://opendatabarometer.org 4 Common Crawl: http://commoncrawl.org/ arXiv:2010.01829v1 [cs.AI] 5 Oct 2020 Page 2. 2 Phuc Nguyen et al …","url":["https://arxiv.org/pdf/2010.01829"]} -{"year":"2020","title":"TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data","authors":["P Yin, G Neubig, W Yih, S Riedel - arXiv preprint arXiv:2005.08314, 2020"],"snippet":"… Page 5. interesting avenue for future work. Specifically, we collect tables and their surrounding NL text from English Wikipedia and the WDC WebTable Corpus (Lehmberg et al., 2016), a large-scale table collection from CommonCrawl …","url":["https://arxiv.org/pdf/2005.08314"]} -{"year":"2020","title":"Tagging Reading Comprehension Materials with Document Extraction Attention Networks","authors":["B Sun, Y Zhu, R Xiao, Y Xiao, YG Wei - IEEE Transactions on Learning …, 2020"],"snippet":"Page 1. 1939-1382 (c) 2020 IEEE. 
Personal use is permitted, but republication/ redistribution requires IEEE permission. See http://www.ieee.org/ publications_standards/publications/rights/index.html for more information. This …","url":["https://ieeexplore.ieee.org/abstract/document/9079601/"]} -{"year":"2020","title":"Tagging with Weak Labels. Paper presented at AAAI Conference on","authors":["E Simpson, J Pfeiffer, I Gurevych"],"snippet":"… For FAMULUS, we use 300-dimensional German fastText embeddings (Grave et al. 2018), and for NER and PICO we use 300-dimensional English GloVe 3 embeddings trained on 840 billion tokens from Common Crawl. To …","url":["https://research-information.bris.ac.uk/files/225055068/AAAI_Low_Resource_Sequence_Tagging_with_Weak_Labels.pdf"]} -{"year":"2020","title":"Tailored retrieval of health information from the web for facilitating communication and empowerment of elderly people","authors":["M Alfano, B Lenzitti, D Taibi, M Helfert - 2020"],"snippet":"… contain all Microformat, Microdata and RDFa (Resource Description Framework in Attributes) data extracted from the open repository of Web crawl data named Common Crawl (CC)9 … 8 http://webdatacommons.org/ 9 …","url":["http://doras.dcu.ie/24469/2/ICT4AWE_2020_40_CR.pdf"]} -{"year":"2020","title":"Target driven visual navigation exploiting object relationships","authors":["Y Qiu, A Pal, HI Christensen - arXiv preprint arXiv:2003.06749, 2020"],"snippet":"… For the word embeddings, we used the 300-D GloVe vectors pretrained on 840 billion tokens of Common Crawl [49]. 
The A3C model is based on [50], and the model hyperparameters used were: learning rate …","url":["https://arxiv.org/pdf/2003.06749"]} -{"year":"2020","title":"Targeted Poisoning Attacks on Black-Box Neural Machine Translation","authors":["C Xu, J Wang, Y Tang, F Guzman, BIP Rubinstein… - arXiv preprint arXiv …, 2020"],"snippet":"… 3We find that the crawling services commonly used for parallel data collection, eg, Common Crawl (commoncrawl.org), are also fetching news articles from self-publishing sources like blogs (eg, with a subdomain of blogspot.com) …","url":["https://arxiv.org/pdf/2011.00675"]} -{"year":"2020","title":"Tell and guess: cooperative learning for natural image caption generation with hierarchical refined attention","authors":["W Zhang, S Tang, J Su, J Xiao, Y Zhuang - Multimedia Tools and Applications, 2020"],"snippet":"… Implementation details. We use Tensorflow to implement our model and its variants. Given a textual caption, we employ the word2vec model (ie, GloVe word embedding [22] ) which is pre-trained on the Common Crawl dataset [25] …","url":["https://link.springer.com/article/10.1007/s11042-020-08832-7"]} -{"year":"2020","title":"Tell Me Why You Feel That Way: Processing Compositional Dependency for Tree-LSTM Aspect Sentiment Triplet Extraction (TASTE)","authors":["A Sutherland, S Bensch, T Hellström, S Magg… - International Conference on …, 2020"],"snippet":"… aligned}$$. (6). $$\\begin{aligned}&{h}_{j} = o_j \\odot tanh(c_j). \\end{aligned}$$. (7). Words in a sentence are represented as Word Embeddings from the pre-trained Common-Crawl 840 B data 2 before they are fed to the DTLSTM. 
To …","url":["https://link.springer.com/chapter/10.1007/978-3-030-61609-0_52"]} -{"year":"2020","title":"Tencent AI Lab machine translation systems for the WMT20 chat translation task","authors":["L Wang, Z Tu, X Wang, L Ding, L Ding, S Shi - Proceedings of the Fifth Conference on …, 2020"],"snippet":"… Out-of-domain Parallel Data The participants are allowed to use all the training data in the News shared task.4 Thus, we combine six corpora including Euporal, ParaCrawl, CommonCrawl, TildeRapid, NewsCommentary and WikiMatrix …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.60.pdf"]} -{"year":"2020","title":"Testing pre-trained Transformer models for Lithuanian news clustering","authors":["L Stankevičius, M Lukoševičius - arXiv preprint arXiv:2004.03461, 2020"],"snippet":"… Specifically, we will use well known baselines – multilingual BERT and recently published XLM-R, trained on more than two terabytes of filtered CommonCrawl data. We chose clustering task to also try to advance the field of data mining …","url":["https://arxiv.org/pdf/2004.03461"]} -{"year":"2020","title":"Text as data: a machine learning-based approach to measuring uncertainty","authors":["R Nyman, P Ormerod - arXiv preprint arXiv:2006.06457, 2020"],"snippet":"… The authors assemble a very large corpus of words from various sources. We use the one described on the GloVe website as Common Crawl (glove.42B.300d.zip). A co-occurrence matrix is constructed, which describes …","url":["https://arxiv.org/pdf/2006.06457"]} -{"year":"2020","title":"Text Classification: Exploiting the Social Network","authors":["SBM Alkhereyf"],"snippet":"Page 1. 
Text Classification: Exploiting the Social Network Sakhar Alkhereyf Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy under the Executive Committee of the Graduate School of …","url":["https://academiccommons.columbia.edu/doi/10.7916/d8-t2jv-sb09/download"]} -{"year":"2020","title":"Text mining-based construction site accident classification using hybrid supervised machine learning","authors":["MY Cheng, D Kusoemo, RA Gosno - Automation in Construction, 2020"],"snippet":"… There are various pre-trained databases in the GloVe website that is open for public, such as Wikipedia database consists of 6 billion words and 100 dimension, Common Crawl database consists of 42 billion words and 300 …","url":["https://www.sciencedirect.com/science/article/pii/S092658051931341X"]} -{"year":"2020","title":"Text-based classification of interviews for mental health--juxtaposing the state of the art","authors":["JV Wouts - arXiv preprint arXiv:2008.01543, 2020"],"snippet":"… Model name Pretrain corpus Tokenizer type Acc Sentiment analysis belabBERT Common Crawl Dutch (non-shuffled) BytePairEncoding 95.92∗ % RobBERT Common Crawl Dutch (shuffled) BytePairEncoding 94.42 …","url":["https://arxiv.org/pdf/2008.01543"]} -{"year":"2020","title":"TextSETTR: Label-Free Text Style Extraction and Tunable Targeted Restyling","authors":["P Riley, N Constant, M Guo, G Kumar, D Uthus… - arXiv preprint arXiv …, 2020"],"snippet":"… Furthermore, we demonstrate that a single model trained on unlabeled Common Crawl data is capable of transferring along multiple dimensions including dialect, emotiveness, formality, politeness, and sentiment. 1 INTRODUCTION …","url":["https://arxiv.org/pdf/2010.03802"]} -{"year":"2020","title":"TF-CR: Weighting Embeddings for Text Classification","authors":["A Zubiaga - arXiv preprint arXiv:2012.06606, 2020"],"snippet":"… Page 6. • cglove: GloVe embeddings trained from Common Crawl. 
• wglove: GloVe embeddings trained from Wikipedia.6 We use two different classifiers for these experiments, SVM and Logistic Regression, which are known …","url":["https://arxiv.org/pdf/2012.06606"]} -{"year":"2020","title":"The 2019 BBN Cross-lingual Information Retrieval System","authors":["DK Le Zhang, W Hartmann, M Srivastava, L Tarlin… - LREC 2020 Language Resources …","L Zhang, D Karakos, W Hartmann, M Srivastava… - Proceedings of the …, 2020"],"snippet":"… 4.1. Training Data The primary data source for constructing MT models is parallel data from the build language pack, augmented with a variety of web data, such as CommonCrawl2 and open parallel corpus (Tiedemann …","url":["http://www.lrec-conf.org/proceedings/lrec2020/workshops/CLSSTS2020/CLSSTS-2020.pdf#page=49","https://www.aclweb.org/anthology/2020.clssts-1.8.pdf"]} -{"year":"2020","title":"The 2020 bilingual, bi-directional webnlg+ shared task overview and evaluation results (webnlg+ 2020)","authors":["TC Ferreira, C Gardent, C van der Lee, N Ilinykh… - Proceedings of the 3rd …, 2020"],"snippet":"… 3.3 Mono-task, Bilingual Approaches cuni-ufal. The mBART model (Liu et al., 2020) is pre-trained for multilingual denoising on the large-scale multilingual CC25 corpus extracted from Common Crawl, which contains …","url":["https://webnlg-challenge.loria.fr/files/2020.webnlg-papers.7.pdf"]} -{"year":"2020","title":"THE ABILITY OF WORD EMBEDDINGS TO CAPTURE WORD SIMILARITIES","authors":["M Toshevska, F Stojanovska, J Kalajdjieski"],"snippet":"… architectures [25]. In our experiments, we have used pre-trained models both trained with subword information on Wikipedia 2017 (16B tokens) and trained with subword information on Common Crawl (600B tokens)4. 
2https …","url":["http://www.academia.edu/download/63915170/120200714-10552-nn915u.pdf"]} -{"year":"2020","title":"The ADAPT Centre's neural MT systems for the WAT 2020 document-level translation task","authors":["W Jooste, R Haque, A Way - 2020"],"snippet":"… Finally, source-language monolingual data with n-grams similar to that of the documents in the test set was mined from the Common Crawl Corpus6 to be used as a source-side original synthetic corpus (SOSC) for fine-tuning the NMT model parameters …","url":["http://doras.dcu.ie/25205/1/WAT_2020.pdf"]} -{"year":"2020","title":"The afrl wmt20 news-translation systems","authors":["J Gwinnup, T Anderson - Proceedings of the Fifth Conference on Machine …, 2020"],"snippet":"… Page 2. 207 corpus unfiltered lines filtered lines percent remain commoncrawl 723,256 655,069 90.57% newscommentaryv15 319,242 286,947 89.88% yandex 1,000,000 901,318 90.13 … 2013. Dirt cheap webscale parallel text from the common crawl …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.20.pdf"]} -{"year":"2020","title":"The Art of Reproducible Machine Learning","authors":["V Novotný - RASLAN 2020 Recent Advances in Slavonic Natural …, 2020"],"snippet":"… We then use the initializations to reproduce 14 the results of Mikolov et al.(2018)[19, Table 2] us- ing the subword cbow model of Bojanowski et al.(2017)[2] and the 2017 English Wikipedia 15 training corpus (4% of the …","url":["https://nlp.fi.muni.cz/raslan/raslan20.pdf#page=63"]} -{"year":"2020","title":"The birth of Romanian BERT","authors":["SD Dumitrescu, AM Avram, S Pyysalo - arXiv preprint arXiv:2009.08712, 2020"],"snippet":"… In total, the OPUS corpus contains around 4GB of Romanian text. 
OSCAR OSCAR (Ortiz Suárez et al., 2019), or Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language …","url":["https://arxiv.org/pdf/2009.08712"]} -{"year":"2020","title":"The Case For Alternative Web Archival Formats To Expedite The Data-To-Insight Cycle","authors":["X Wang, Z Xie - arXiv preprint arXiv:2003.14046, 2020"],"snippet":"… Large-scale, comprehensive web archiving initiatives include the Internet Archive [8], the Common Crawl [16], and many programs at national libraries and archives. These … 5.2 Data We chose to use Common Crawl's web …","url":["https://arxiv.org/pdf/2003.14046"]} -{"year":"2020","title":"The Challenge of Diacritics in Yoruba Embeddings","authors":["TP Adewumi, F Liwicki, M Liwicki - arXiv preprint arXiv:2011.07605, 2020"],"snippet":"… (2018) are tabulated: Wiki, U_Wiki, C3 & CC, representing embeddings from the cleaned Wikipedia dump, its undiacritized (normalized) version, the diacritized data from Alabi et al. (2020) and the Common Crawl embedding by Grave et al. (2018), respectively …","url":["https://arxiv.org/pdf/2011.07605"]} -{"year":"2020","title":"The Danish Gigaword Project","authors":["L Strømberg-Derczynski, R Baglini, MH Christiansen… - arXiv preprint arXiv …, 2020"],"snippet":"… Similarly, other huge monolithic datasets such as the Common Crawl Danish data suffer from large amounts of non-Danish content, possibly due to the pervasive confusion between Danish and Norwegian Bokmål … Common …","url":["https://arxiv.org/pdf/2005.03521"]} -{"year":"2020","title":"The ELTE. 
DH Pilot Corpus–Creating a Handcrafted Gigaword Web Corpus with Metadata","authors":["B Indig, Á Knap, Z Sárközi-Lindner, M Timári, G Palkó - … of the 12th Web as Corpus …, 2020"],"snippet":"… Nowadays, large corpora are utilising the Common Crawl archive like the OSCAR corpus (Ortiz Suárez et al., 2019) with 5.16 billion (2.33 … All of these corpora – except the ones based on Common Crawl – have the same …","url":["https://www.aclweb.org/anthology/2020.wac-1.5.pdf"]} -{"year":"2020","title":"The Emergence, Advancement and Future of Textual Answer Triggering","authors":["KN Acheampong, W Tian, EB Sifah… - Science and Information …, 2020"],"snippet":"… A similar observation is realized (\\(A_1 \\approx 0.8986\\); \\(A_2 \\approx 0.7352\\); difference in margin \\(\\approx 0.1634 \\)) when the model the 300-dimensional word vectors trained on Common Crawl with GloVe from spaCy is used …","url":["https://link.springer.com/chapter/10.1007/978-3-030-52246-9_50"]} -{"year":"2020","title":"The Geometry of Distributed Representations for Better Alignment, Attenuated Bias, and Improved Interpretability","authors":["S Dev - arXiv preprint arXiv:2011.12465, 2020"],"snippet":"… 39 5 RMSE variation with word frequency in (a) GloVe Wiki to GloVe Common Crawl and (b) word2vec to GloVe evaluated for Wiki dataset. All words were used for tests in lower case as listed in the table. . . . . 40 …","url":["https://arxiv.org/pdf/2011.12465"]} -{"year":"2020","title":"The NiuTrans Machine Translation Systems for WMT20","authors":["Y Zhang, Z Wang, R Cao, B Wei, W Shan, S Zhou… - Proceedings of the Fifth …, 2020"],"snippet":"… mentary, Common Crawl , TED Talks 4 Japanese monolingual data corpus about 1.7 billion. 
After the data filter, 12 million parallel data was left and 11 million selected by the neural language model was used as training data …","url":["https://www.aclweb.org/anthology/2020.wmt-1.37.pdf"]} -{"year":"2020","title":"The POLAR Framework: Polar Opposites Enable Interpretability of Pre-Trained Word Embeddings","authors":["B Mathew, S Sikdar, F Lemmerich, M Strohmaier - arXiv preprint arXiv:2001.09876, 2020"],"snippet":"… (2) GloVe embeddings [27]3 trained on Web data from Common Crawl … 3We used the Common Crawl embeddings with 42B tokens: https://nlp. stanford.edu/ projects/glove/ 4The datasets are available here: https://github …","url":["https://arxiv.org/pdf/2001.09876"]} -{"year":"2020","title":"The POLUSA Dataset: 0.9 M Political News Articles Balanced by Time and Outlet Popularity","authors":["L Gebhard, F Hamborg - arXiv preprint arXiv:2005.14024, 2020"],"snippet":"… RoBERTa: A Robustly Optimized BERT Pretraining Ap- proach. arXiv: 1907.11692 [cs] [6] Sebastian Nagel. 2016. Common Crawl – News Dataset Available. Retrieved May 8, 2020 from https://commoncrawl.org/2016/10 …","url":["https://arxiv.org/pdf/2005.14024"]} -{"year":"2020","title":"The presence of occupational structure in online texts based on word embedding NLP models","authors":["Z Kmetty, J Koltai, T Rudas - arXiv preprint arXiv:2005.08612, 2020"],"snippet":"… We used three pre-trained vector spaces in the analysis. 
The first vector model we used was trained on the English language texts of the Common Crawl (CC) corpus1, a huge web archive … 2016) 1 http://commoncrawl.org 2 …","url":["https://arxiv.org/pdf/2005.08612"]} -{"year":"2020","title":"The role of affective meaning, semantic associates, and orthographic neighbours in modulating the N400 in single words","authors":["F Blomberg, M Roll, J Frid, M Lindgren, M Horne - The Mental Lexicon, 2020"],"snippet":"… of fastText compared to other popular implementations (such as Word2Vec and Glove) is that it already has a model for Swedish trained on millions of words taken from the Swedish version of the free online encyclopedia …","url":["https://www.jbe-platform.com/content/journals/10.1075/ml.19021.blo"]} -{"year":"2020","title":"The Two-Pass Softmax Algorithm","authors":["M Dukhan, A Ablavatski - arXiv preprint arXiv:2001.04438, 2020"],"snippet":"Page 1. The Two-Pass Softmax Algorithm Marat Dukhan ∗1,2 and Artsiom Ablavatski1 1Google Research 2Georgia Institute of Technology Abstract The softmax (also called softargmax) function is widely used in machine learning …","url":["https://arxiv.org/pdf/2001.04438"]} -{"year":"2020","title":"The University of Edinburgh's English-Tamil and English-Inuktitut Submissions to the WMT20 News Translation Task","authors":["R Bawden, A Birch, R Dobreva, A Oncevay… - 5th Conference on Machine …, 2020"],"snippet":"… The only additional monolingual Inuktitut data was 163k sentences of common-crawl data, which we backtranslated for the English→Inuktitut system … Synthetic (from en Europarl) en-iu 650k Synthetic (from en News …","url":["https://hal.archives-ouvertes.fr/hal-02981153/document"]} -{"year":"2020","title":"The University of Edinburgh's submission to the German-to-English and English-to-German Tracks in the WMT 2020 News Translation and Zero-shot Translation …","authors":["U Germann - 2020"],"snippet":"… High-quality parallel data Europarl ca. 1.79 M Rapid ca. 1.45 M News Commentary ca. 
0.35 M Crawled parallel data ParaCrawl 5.1 ca. 34.37 M CommonCrawl ca. 2.40 M WikiMatrix ca. 6.22 M WikiTitles ca. 1.38 M Monolingual crawled news data German ca …","url":["http://statmt.org/wmt20/pdf/2020.wmt-1.18.pdf"]} -{"year":"2020","title":"The University of Helsinki and Aalto University submissions to the WMT 2020 news and lowresource translation tasks","authors":["Y Scherrer, SA Grönroos, S Virpioja - the Fifth Conference on Machine Translation, 2020"],"snippet":"… NewsDiscuss 2019 2 000 000 1 000 000 CommonCrawl 80 244 80 244 … In terms of monolingual Inuktitut data, besides the unaligned NH data, the organizers only provided a CommonCrawl dump. This corpus was again backtranslated to English and filtered …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.134.pdf"]} -{"year":"2020","title":"The University of Helsinki submission to the IWSLT2020 Offline Speech Translation Task","authors":["R Vázquez, M Aulamo, U Sulubacak, J Tiedemann - The 17th International …, 2020"],"snippet":"… filter out noisy translations. OpenSubtitles2018, which consists of subtitle translations, and corpora gathered by crawling the internet, Common Crawl and ParaCrawl, are especially likely to contain noisy data. 
For filtering the …","url":["https://tuhat.helsinki.fi/ws/files/137272405/uni_helsinki_submission_iwslt_2020.pdf"]} -{"year":"2020","title":"The Unreasonable Effectiveness of Machine Learning in Moldavian versus Romanian Dialect Identification","authors":["M Găman, RT Ionescu - arXiv preprint arXiv:2007.15700, 2020"],"snippet":"… We note that these representations are learned from Romanian corpora, such as the corpus for contemporary Romanian language (CoRoLa) (Mititelu, Tufis, and Irimia 2018; Pais and Tufis 2018), Common Crawl (CC) and Wikipedia (Grave et al …","url":["https://arxiv.org/pdf/2007.15700"]} -{"year":"2020","title":"The Voice and Speech Processing within Language Technology Applications: Perspective of the Russian Data Protection Law","authors":["I Ilin"],"snippet":"… of collecting, systematizing and annotating language data are various language datasets such as Open Subtitles43, the Common Crawl da- taset44, the … 43 Available at: https://www.opensubtitles.org/ru (accessed: 18.05.2020) …","url":["https://www.researchgate.net/profile/Ilya_Ilin/publication/345237982_The_Voice_and_Speech_Processing_within_Language_Technology_Application_Perspective_of_the_Russian_Data_Protection_Law/links/5fa11750458515b7cfb5ce68/The-Voice-and-Speech-Processing-within-Language-Technology-Application-Perspective-of-the-Russian-Data-Protection-Law.pdf"]} -{"year":"2020","title":"The Volctrans Machine Translation System for WMT20","authors":["L Wu, X Pan, Z Lin, Y Zhu, M Wang, L Li - arXiv preprint arXiv:2010.14806, 2020"],"snippet":"… We use all parallel data available: Eu- roparl v10, ParaCrawl v5.1, Common Crawl corpus, News Commentary v15, Wiki Titles v2, Tilde Rapid corpus and WikiMatrix corpus … Each part contains 10M common crawl sentences and 3M Newscrawl sentences …","url":["https://arxiv.org/pdf/2010.14806"]} -{"year":"2020","title":"TheNorth@ HaSpeeDe 2: BERT-based Language Model Fine-tuning for Italian Hate Speech Detection","authors":["E Lavergne, R Saini, G 
Kovács, K Murphy"],"snippet":"… ERT. AlBERTo was pretrained on TWITA, that is a collection of Italian tweets (Polignano et al., 2019b). UmBERTo was pretrained on Commoncrawl ITA exploiting OSCAR Italian large corpus (Parisi et al., 2020). Finally, PoliBERT …","url":["http://ceur-ws.org/Vol-2765/paper135.pdf"]} -{"year":"2020","title":"This is a post-peer-review, pre-copyedit version of an article in press in Motivation and Emotion. The final authenticated version is available online at: http://dx. doi. org …","authors":["JS Pang, H Ring"],"snippet":"… datasets. Based on these experiments we decided to use Facebook's FastText subword embeddings of 300 dimensions trained on Common Crawl (600 billion tokens).5 This is the set of pre-trained vectors that we …","url":["http://www.academia.edu/download/63571037/Pang_Ring-2020-Automating_implicit_motive_coding-ME_AAM.pdf"]} -{"year":"2020","title":"This is a post-peer-review, pre-copyedit version of an article in press in Motivation and Emotion. The final authenticated version will be available online at: http://dx. doi …","authors":["JS Pang, H Ring"],"snippet":"… datasets. Based on these experiments we decided to use Facebook's FastText subword embeddings of 300 dimensions trained on Common Crawl (600 billion tokens).5 This is the set of pre-trained vectors that we …","url":["https://osf.io/b7d96/download"]} -{"year":"2020","title":"Tight Integrated End-to-End Training for Cascaded Speech Translation","authors":["P Bahar, T Bieschke, R Schlüter, H Ney - arXiv preprint arXiv:2011.12167, 2020"],"snippet":"… For MT training on En→De, we utilize the parallel data allowed for the IWSLT 2020. 
After filtering the noisy corpora, namely ParaCrawl, CommonCrawl, Rapid and OpenSubtitles2018, we end up with almost 27M bilingual text sentences …","url":["https://arxiv.org/pdf/2011.12167"]} -{"year":"2020","title":"Tilde at WMT 2020: News Task Systems","authors":["R Krišlauks, M Pinnis - arXiv preprint arXiv:2010.15423, 2020"],"snippet":"… translation. In order to make use of the Polish CommonCrawl corpus, we scored sentences using the in-domain language models and selected top-scoring sentences as additional monolingual data for back-translation. Many …","url":["https://arxiv.org/pdf/2010.15423"]} -{"year":"2020","title":"Tired of Topic Models? Clusters of Pretrained Word Embeddings Make for Fast and Good Topics too!","authors":["S Sia, A Dalmia, SJ Mielke - arXiv preprint arXiv:2004.14914, 2020"],"snippet":"… 0.177 FastText 2B (Wikipedia) -0.561 -0.657 -0.419 0.225 0.142 0.196 -0.382 -0.187 0.212 0.235 0.240 0.253 Glove 840B (Common Crawl) -0.436 -0.111 -0.299 0.182 0.213 0.155 -0.043 0.179 0.233 0.219 0.237 0.240 BERT …","url":["https://arxiv.org/pdf/2004.14914"]} -{"year":"2020","title":"To BERT or Not to BERT: Comparing Task-specific and Task-agnostic Semi-Supervised Approaches for Sequence Tagging","authors":["K Bhattacharjee, M Ballesteros, R Anubhai, S Muresan… - arXiv preprint arXiv …, 2020","KBMBR Anubhai, S Muresan, JMFLY Al, OA AI"],"snippet":"… Cloze (Baevski et al., 2019) and BERT-MRC+DSC (Li et al., 2019) are SOTA baselines for CONLL-2003 and CONLL-2012, respectively, for this task. Baevski et al. 
(2019) also use subsampled Common Crawl and News Crawl …","url":["https://arxiv.org/pdf/2010.14042","https://assets.amazon.science/79/37/7a3f91804693baaaadc5062a9821/to-bert-or-not-to-bert-comparing-task-specific-and-task-agnostic-semi-supervised-approaches-for-sequence-tagging.pdf"]} -{"year":"2020","title":"Tohoku-AIP-NTT at WMT 2020 News Translation Task","authors":["S Kiyono, T Ito, R Konno, M Morishita, J Suzuki - … of the Fifth Conference on Machine …, 2020"],"snippet":"… 2.2 Monolingual Corpus The origins of the monolingual corpus in our system are the Europarl, NewsCommentary, and en- tire NewsCrawl (2008-2019) corpora for English and German, and the Europarl …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.12.pdf"]} -{"year":"2020","title":"Topics in Sequence-to-Sequence Learning for Natural Language Processing","authors":["R Aharoni"],"snippet":"Page 1. Topics in Sequence-to-Sequence Learning for Natural Language Processing Roee Aharoni Ph.D. Thesis Submitted to the Senate of Bar-Ilan University Ramat Gan, Israel May 2020 Page 2. This work was …","url":["http://www.roeeaharoni.com/Phd_Thesis.pdf"]} -{"year":"2020","title":"Touché: First Shared Task on Argument Retrieval","authors":["A Bondarenko, M Hagen, M Potthast, H Wachsmuth…"],"snippet":"… Systems. pp. 44–52 (2012) 2. Bevendorff, J., Stein, B., Hagen, M., Potthast, M.: Elastic ChatNoir: Search Engine for the ClueWeb and the Common Crawl. In: Proceedings of the 40th European Conference on IR Research (ECIR). pp …","url":["https://www.inf.uni-hamburg.de/en/inst/ab/lt/publications/2020-bondarenkoetal-ecir-touche.pdf"]} -{"year":"2020","title":"Toward building recommender systems for the circular economy: Exploring the perils of the European Waste Catalogue","authors":["G van Capelleveen, C Amrit, H Zijm, DM Yazan, A Abdi - Journal of Environmental …"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. 
Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0301479720313554"]} -{"year":"2020","title":"Towards Context-Aware Opinion Summarization for Monitoring Social Impact of News","authors":["A Ramón-Hernández, A Simón-Cuevas, MMG Lorenzo… - Information, 2020"],"snippet":"… Specifically, those vectors are generated by using the word2vec pre-trained model included in the es_core_news_md model of the spaCy library, which includes 300-dimensional vectors trained using FastText CBOW on Wikipedia …","url":["https://www.mdpi.com/2078-2489/11/11/535/pdf"]} -{"year":"2020","title":"Towards countering hate speech against journalists on social media","authors":["P Charitidis, S Doropoulos, S Vologiannidis… - Online Social Networks and …, 2020"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S2468696420300124"]} -{"year":"2020","title":"Towards Effective Utilization of Pretrained Language Models—Knowledge Distillation from BERT","authors":["L Liu - 2020"],"snippet":"… For non-contextual embeddings, there are multiple pre-trained word vectors, such as word2vec trained on Google News, GloVe [60] trained on Wikipedia/Gigaword/Common Crawl, and fastText[67] trained on Wikipedia/Common Crawl …","url":["https://uwspace.uwaterloo.ca/bitstream/handle/10012/16225/Liu_Linqing.pdf?sequence=3"]} -{"year":"2020","title":"Towards Efficient and Reproducible Natural Language Processing","authors":["J Dodge - 2020"],"snippet":"… multiple epochs is standard). 
For example, the July 2019 Common Crawl contains 242 TB of uncompressed data,8 so even storing the data is expensive … 7https://opensource.google.com/projects/open-images-dataset …","url":["https://www.lti.cs.cmu.edu/sites/default/files/dodge%2C%20jesse%20-%20May%202020.pdf"]} -{"year":"2020","title":"Towards Generalized Neural Semantic Parsing","authors":["P Yin - 2020"],"snippet":"Page 1. April 27, 2020 DRAFT Thesis Proposal Towards Generalized Neural Semantic Parsing Pengcheng Yin April 27, 2020 Language Technologies Institute School of Computer Science Carnegie Mellon University Pittsburgh, PA 15123 Thesis Committee …","url":["http://pcyin.me/thesis_proposal.pdf"]} -{"year":"2020","title":"Towards IP-based Geolocation via Fine-grained and Stable Webcam Landmarks","authors":["Z Wang, Q Li, J Song, H Wang, L Sun - Proceedings of The Web Conference 2020, 2020"],"snippet":"Page 1. Towards IP-based Geolocation via Fine-grained and Stable Webcam Landmarks Zhihao Wang Institute of Information Engineering Chinese Academy of Sciences School of Cyber Security, University of Chinese Academy …","url":["https://dl.acm.org/doi/pdf/10.1145/3366423.3380216"]} -{"year":"2020","title":"Towards Orthographic and Grammatical Clinical Text Correction: a First Approach","authors":["S Lima López - 2020"],"snippet":"… Their application to GEC is based on the idea that correct sequences are bound to have a higher probability score than incorrect ones. They are very dependent on the data that is used to build them, and so large corpora …","url":["https://addi.ehu.es/bitstream/handle/10810/48624/MAL-Salvador_Lima.pdf?sequence=1"]} -{"year":"2020","title":"Towards Useful Word Embeddings","authors":["V Novotný, M Štefánik, D Lupták, P Sojka"],"snippet":"… The size of our dataset is only 4% of the Common Crawl dataset used by Mikolov et al … We will also train our word vector models using larger corpora such as Common Crawl to enable meaningful comparison to sota results. 
Acknowledgments …","url":["https://www.fi.muni.cz/usr/sojka/papers/raslan-2020-novotny-stefanik-luptak-sojka.pdf"]} -{"year":"2020","title":"Towards Visual Dialog for Radiology","authors":["O Kovaleva, C Shivade, S Kashyap, K Kanjaria, J Wu… - Proceedings of the 19th …, 2020"],"snippet":"… the models, (b) domain-independent GloVe Common Crawl embeddings (Pennington et al., 2014), and (c) domain-specific fastText embeddings trained by (Romanov and Shivade, 2018). The latter are initialized with GloVe …","url":["https://www.aclweb.org/anthology/2020.bionlp-1.6.pdf"]} -{"year":"2020","title":"Traceability Support for Multi-Lingual Software Projects","authors":["Y Liu, J Lin, J Cleland-Huang - arXiv preprint arXiv:2006.16940, 2020"],"snippet":"Page 1. Traceability Support for Multi-Lingual Software Projects Yalin Liu, Jinfeng Lin, Jane Cleland-Huang University of Notre Dame Notre Dame, IN yliu26@nd.edu, jlin6@nd.edu,JaneHuang@nd.edu ABSTRACT Software …","url":["https://arxiv.org/pdf/2006.16940"]} -{"year":"2020","title":"Tracing the emergence of gendered language in childhood","authors":["B Prystawski, E Grant, A Nematzadeh, SWS Lee…"],"snippet":"… We used three commonly-used sets of pre-trained word em- beddings: the word2vec embeddings trained on the Google News corpus (Mikolov et al., 2013a), the GloVe embeddings trained on the Common Crawl corpus, and …","url":["https://cognitivesciencesociety.org/cogsci20/papers/0190/0190.pdf"]} -{"year":"2020","title":"Train Hard, Finetune Easy: Multilingual Denoising for RDF-to-Text Generation","authors":["Z Kasner, O Dušek - Proceedings of the 3rd WebNLG Workshop on Natural …, 2020"],"snippet":"… al., 2019). Adopting BART's objective and architecture, mBART (Liu et al., 2020) is pre-trained on the large-scale CC25 corpus extracted from Common Crawl, which contains data in 25 languages (Wenzek et al., 2020). 
The …","url":["https://webnlg-challenge.loria.fr/files/2020.webnlg-papers.20.pdf"]} -{"year":"2020","title":"Transformer based Deep Intelligent Contextual Embedding for Twitter sentiment analysis","authors":["U Naseem, I Razzak, K Musial, M Imran - Future Generation Computer Systems, 2020"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0167739X2030306X"]} -{"year":"2020","title":"Transforming machine translation: a deep learning system reaches news translation quality comparable to human professionals","authors":["M Popel, M Tomkova, J Tomek, Ł Kaiser, J Uszkoreit… - Nature Communications, 2020"],"snippet":"The quality of human translation was long thought to be unattainable for computer translation systems. In this study, we present a deep-learning system, CUBBITT, which challenges this view. In a context-aware blind evaluation …","url":["https://www.nature.com/articles/s41467-020-18073-9"]} -{"year":"2020","title":"Translation Artifacts in Cross-lingual Transfer Learning","authors":["M Artetxe, G Labaka, E Agirre - arXiv preprint arXiv:2004.04721, 2020"],"snippet":"… We first collect the premises from a filtered version of CommonCrawl (Buck et al., 2014), taking a subset of 5 websites that represent a diverse set of genres: a newspaper, an economy forum, a celebrity magazine, a literature blog, and a consumer magazine …","url":["https://arxiv.org/pdf/2004.04721"]} -{"year":"2020","title":"Translation System and Method","authors":["N Bertoldi, D Caroselli, MA Farajian, M Federico… - US Patent App. 16/118,273, 2020"],"snippet":"US20200073947A1 - Translation System and Method - Google Patents. Translation System and Method. Download PDF Info. Publication number US20200073947A1. 
US20200073947A1 US16/118,273 US201816118273A US2020073947A1 …","url":["https://patents.google.com/patent/US20200073947A1/en"]} -{"year":"2020","title":"TransQuest: Translation Quality Estimation with Cross-lingual Transformers","authors":["T Ranasinghe, C Orasan, R Mitkov - arXiv preprint arXiv:2011.01536, 2020"],"snippet":"… to acquire. Instead, XLM-R trains RoBERTa(Liu et al., 2019) on a huge, multilingual dataset at an enormous scale: unlabelled text in 104 languages is extracted from CommonCrawl datasets, totalling 2.5TB of text. It is trained …","url":["https://arxiv.org/pdf/2011.01536"]} -{"year":"2020","title":"Triclustering in Big Data Setting","authors":["D Egurnov, DI Ignatov, D Tochilkin - arXiv preprint arXiv:2010.12933, 2020"],"snippet":"Page 1. Triclustering in Big Data Setting Dmitry Egurnov, Dmitry I. Ignatov, and Dmitry Tochilkin Abstract In this paper, we describe versions of triclustering algorithms adapted for efficient calculations in distributed environments …","url":["https://arxiv.org/pdf/2010.12933"]} -{"year":"2020","title":"Triple E-Effective Ensembling of Embeddings and Language Models for NER of Historical German.","authors":["S Schweter, L März"],"snippet":"… We use the FastText embeddings trained on Wikipedia (FastText Wiki) and Common Crawl (FastText CC) in a ”classic” word embeddings manner, that means we do not use subwords … BPE MultiBPEmb Wikipedia < 7000 …","url":["http://ceur-ws.org/Vol-2696/paper_173.pdf"]} -{"year":"2020","title":"TULIP: A Five-Star Table and List-from Machine-Readable to Machine-Understandable Systems","authors":["J Nandakwang, P Chongstitvatana - Linked Open Data-Applications, Trends and …, 2020"],"snippet":"Currently, Linked Data is increasing at a rapid rate as the growth of the Web. 
Aside from new information that has been created exclusively as Semantic Web-ready, part of them comes from the transformation of existing structural …","url":["https://www.intechopen.com/online-first/tulip-a-five-star-table-and-list-from-machine-readable-to-machine-understandable-systems"]} -{"year":"2020","title":"TweetBERT: A Pretrained Language Representation Model for Twitter Text Analysis","authors":["MMA Qudar, V Mago - arXiv preprint arXiv:2010.11091, 2020"],"snippet":"… It has been pre-trained on an extremely large, five different types of corpora: BookCorpus, English Wikipedia, CC-News (collected from CommonCrawl News) dataset, OpenWebText, a WebText corpus [23], and Stories, a dataset containing story-like content [23] …","url":["https://arxiv.org/pdf/2010.11091"]} -{"year":"2020","title":"Two-Level Transformer and Auxiliary Coherence Modeling for Improved Text Segmentation","authors":["G Glavaš, S Somasundaran - arXiv preprint arXiv:2001.00891, 2020"],"snippet":"… In all our experiments we use 300dimensional monolingual FASTTEXT word embeddings pretrained on the Common Crawl corpora of respective languages: EN, CS, FI, and TR.9 We induce a cross-lingual word embedding …","url":["https://arxiv.org/pdf/2001.00891"]} -{"year":"2020","title":"UHH-LT & LT2 at SemEval-2020 Task 12: Fine-Tuning of Pre-Trained Transformer Networks for Offensive Language Detection","authors":["G Wiedemann, SM Yimam, C Biemann - arXiv preprint arXiv:2004.11493, 2020"],"snippet":"… languages at once (Conneau et al., 2019). The model itself is equivalent to RoBERTa, but the training data consists of texts from more than 100 languages filtered from the CommonCrawl1 dataset. ALBERT – A Lite BERT for Self …","url":["https://arxiv.org/pdf/2004.11493"]} -{"year":"2020","title":"Uncertainty-Aware Machine Support for Paper Reviewing on the Interspeech 2019 Submission Corpus","authors":["L Stappen, G Rizos, M Hasan, T Hain, BW Schuller"],"snippet":"… compressed to 300 dimensions. 
FastText and GloVe are based on the Common Crawl (1.9 M unique words, 840 B tokens) and Word2Vec on the GoogleNews (3 M unique words, 100 B total) dataset. We additionally experimented …","url":["https://indico2.conference4me.psnc.pl/event/35/contributions/3133/attachments/305/328/Tue-1-9-1.pdf"]} -{"year":"2020","title":"Underlying Cause of Death Identification from Death Certificates using Reverse Coding to Text and a NLP Based Deep Learning Approach","authors":["V Della Mea, MH Popescu, K Roitero - Informatics in Medicine Unlocked, 2020"],"snippet":"… XLM-R (a variation of XLM), trained on one hundred languages using more than two terabytes of filtered CommonCrawl data, outperformed multilingual BERT (mBERT) on a variety of cross-lingual benchmarks [5]. 3.3.4. XLNet …","url":["https://www.sciencedirect.com/science/article/pii/S2352914820306067"]} -{"year":"2020","title":"Understanding phishers' strategies of mimicking uniform resource locators to leverage phishing attacks: A machine learning approach","authors":["JS Tharani, NAG Arachchilage - arXiv preprint arXiv:2007.00489, 2020"],"snippet":"… webpage data downloaded from PhishTank2, OpenPhish3 and Legitimate ones are downloaded from Alexa4 and Common Crawl5 According … (4) 4 https://www.alexa.com/topsites/category/Computers/Internet/OntheW eb/W …","url":["https://arxiv.org/pdf/2007.00489"]} -{"year":"2020","title":"Understanding Word Embeddings and Language Models","authors":["JM Gomez-Perez, R Denaux, A Garcia-Silva - A Practical Guide to Hybrid Natural …, 2020"],"snippet":"… 1) pre-trained contextualized word embeddings (ELMo), (2) pre-trained context-independent word embeddings learnt from Common Crawl (fastText), Twitter … Another version of this classifier using in addition fastText embeddings …","url":["https://link.springer.com/chapter/10.1007/978-3-030-44830-1_3"]} -{"year":"2020","title":"UniBO@ KIPoS: Fine-tuning the Italian “BERTology” for PoS-tagging Spoken Data","authors":["F 
Tamburini"],"snippet":"… project. Also for GilBERTo it is available only the uncased model. • UmBERTo4: the more recent model de- veloped explicitly for Italian, as far as we know, is UmBERTo ('Musixmatch/umbertocommoncrawl-cased-v1' – umC). As …","url":["http://ceur-ws.org/Vol-2765/paper94.pdf"]} -{"year":"2020","title":"UninaStudents@ SardiStance: Stance Detection in Italian Tweets-Task A","authors":["M Moraca, G Sabella, S Morra - Proceedings of the 7th Evaluation Campaign of …, 2020"],"snippet":"… As Master students, we approached these NLP topics for the first time. Therefore, we are aware 5https://huggingface.co/Musixmatch/umbertocommoncrawl-cased-v1 that our results are not at the state of the art in the field. However …","url":["http://ceur-ws.org/Vol-2765/paper146.pdf"]} -{"year":"2020","title":"Unit Test Case Generation with Transformers","authors":["M Tufano, D Drain, A Svyatkovskiy, SK Deng… - arXiv preprint arXiv …, 2020"],"snippet":"… It has been pre-trained on the Common Crawl dataset [32] constituting nearly a trillion words, an expanded version of the WebText [33] dataset, two internet-based books corpora (Books1 and Books2), and English-language Wikipedia …","url":["https://arxiv.org/pdf/2009.05617"]} -{"year":"2020","title":"Uniting Plain Language, Cognitive Fluency, and Believability","authors":["SI Johnson - 2020"],"snippet":"… considering at least four linguistic features (Romanyshyn, 2018). Similar to Randall (2019), Romanyshyn (2018) considers the frequency of a word using Common Crawl, a large corpus of web content. 
Taking this approach one step further, she lemmatizes the 1 …","url":["http://search.proquest.com/openview/9b0f9b3644e2372cabfc4aedb4849573/1?pq-origsite=gscholar&cbl=18750&diss=y"]} -{"year":"2020","title":"UNITOR@ Sardistance2020: Combining Transformer-based Architectures and Transfer Learning for Robust Stance Detection","authors":["S Giorgioni, M Politi, S Salman, D Croce, R Basili - … of the 7th Evaluation Campaign of …, 2020"],"snippet":"… with 3. 2https://huggingface.co/Musixmatch/ umberto-commoncrawlcased-v1 3We discarded the few available messages with mixed po- larity, to simplify the final classification task. Irony Detection. We speculate …","url":["http://ceur-ws.org/Vol-2765/paper99.pdf"]} -{"year":"2020","title":"Unsupervised Cross-Lingual Part-of-Speech Tagging for Truly Low-Resource Scenarios","authors":["R Eskander, S Muresan, M Collins - Proceedings of the 2020 Conference on …, 2020"],"snippet":"… essential when the domain of the training data is different from the one of the pre-trained em- beddings, which is the case in our learning setup, where we use the Bible data for training, while the XLM-R model is trained on text …","url":["https://www.aclweb.org/anthology/2020.emnlp-main.391.pdf"]} -{"year":"2020","title":"Unsupervised Cross-lingual Representation Learning for Speech Recognition","authors":["A Conneau, A Baevski, R Collobert, A Mohamed… - arXiv preprint arXiv …, 2020"],"snippet":"… For comparison with [29] only, we train 4-gram n-gram language models on CommonCrawl data [25, 50] for Assamese (140MiB of text data), Swahili (2GiB), Tamil (4.8GiB) and Lao (763MiB); for this experiment only we report word error rate (WER). 
3.2 Training details …","url":["https://arxiv.org/pdf/2006.13979"]} -{"year":"2020","title":"Unsupervised Domain Clusters in Pretrained Language Models","authors":["R Aharoni, Y Goldberg - arXiv preprint arXiv:2004.02105, 2020"],"snippet":"… exact requirements from such data with respect to all the aforementioned aspects. On top of that, domain labels are usually unavailable – eg in large-scale web-crawled data like Common Crawl1 which was recently used to …","url":["https://arxiv.org/pdf/2004.02105"]} -{"year":"2020","title":"Unsupervised Evaluation of Human Translation Quality","authors":["Y Zhou, D Bollegala - 2019"],"snippet":"… publicly available monolingual word embeddings. Specifically, we first use the monolingual word embeddings, which are trained on Wikipedia and Common Crawl using fastText (Grave et al., 2018). Because our dataset contains …","url":["https://pdfs.semanticscholar.org/2735/60e715ee0dfaae2dee75fbb7484f811816d2.pdf"]} -{"year":"2020","title":"Unsupervised Label Refinement Improves Dataless Text Classification","authors":["Z Chu, K Stratos, K Gimpel - arXiv preprint arXiv:2012.04194, 2020"],"snippet":"… We use the 300 dimensional GloVe vectors7 trained on Common Crawl.8 We experiment with two distance functions when using GloVe: cosine and L2 … 7http://nlp.stanford.edu/ data/glove.840B.300d.zip 8https://commoncrawl.org/ ROBERTA Dual Encoder …","url":["https://arxiv.org/pdf/2012.04194"]} -{"year":"2020","title":"Unsupervised Question Decomposition for Question Answering","authors":["E Perez, P Lewis, W Yih, K Cho, D Kiela"],"snippet":"… Specifically, by leveraging >10M questions from Common Crawl, we learn to map from the distribution of multi-hop questions to the distribution of single-hop subquestions … We retrieve candidates from a corpus of 10M simple …","url":["https://rcqa-ws.github.io/papers/paper9.pdf"]} -{"year":"2020","title":"UPB at GermEval-2020 Task 3: Assessing Summaries for German Texts using BERTScore and 
Sentence-BERT","authors":["A Paraschiv"],"snippet":"… bert-base-german-europeana-uc 2 Uncased Europeana newspapers bert-base-germanuc2 Uncased Wikipedia, Subtitles, News, Commoncrawl literary-german-bert3 Uncased German Fiction Literature bert-adapted-german-press4 Uncased Newspapers …","url":["http://ceur-ws.org/Vol-2624/germeval-task3-paper2.pdf"]} -{"year":"2020","title":"Upgrading the Newsroom: An Automated Image Selection System for News Articles","authors":["F Liu, R Lebret, D Orel, P Sordet, K Aberer - arXiv preprint arXiv:2004.11449, 2020"],"snippet":"Page 1. 1 Upgrading the Newsroom: An Automated Image Selection System for News Articles FANGYU LIU∗, Language Technology Lab (LTL), University of Cambridge, United Kingdom RÉMI LEBRET, Distributed Information …","url":["https://arxiv.org/pdf/2004.11449"]} -{"year":"2020","title":"UR NLP@ HaSpeeDe 2 at EVALITA 2020: Towards Robust Hate Speech Detection with Contextual Embeddings","authors":["J Hoffmann, U Kruschwitz"],"snippet":"… XLM-R is based on XLM and RoBERTa. It is trained on data covering 100 languages in a very large (2TB) CommonCrawl. Transformer document embeddings are obtained from (the large version of) XLM-R. In addition Page 3 …","url":["http://ceur-ws.org/Vol-2765/paper105.pdf"]} -{"year":"2020","title":"Urban Dictionary Embeddings for Slang NLP Applications","authors":["S Wilson, W Magdy, B McGillivray, K Garimella… - Proceedings of The 12th …, 2020"],"snippet":"… with the goal of producing generally applicable word embeddings, many popular pre-trained word embeddings have been fit to large and diverse corpora of text from the web such as the Common Crawl.3 In … 2 http://smash …","url":["https://www.aclweb.org/anthology/2020.lrec-1.586.pdf"]} -{"year":"2020","title":"URL-based Phishing Attack Detection by Convolutional Neural Networks","authors":["J Nowak, M Korytkowski, P Najgebauer, M Wozniak…"],"snippet":"… The database downloaded during the article writing contained 10,604 records. 
To obtain legitimate websites, the second part of the training dataset was downloaded from the Common Crawl Foundation (http://commoncrawl.org/) …","url":["http://ajiips.com.au/papers/V15.2/v15n2_64-71.pdf"]} -{"year":"2020","title":"Using Natural Language Preprocessing Architecture (NLPA) for Big Data Text Sources","authors":["M Novo-Lourés, R Pavón, R Laza, D Ruano-Ordas… - Scientific Programming, 2020"],"snippet":"Journals; Publish with us; Publishing partnerships; About us; Blog. Scientific Programming. +Journal Menu. PDF. Journal overview. For authorsFor reviewersFor editorsTable of Contents Special Issues.","url":["https://www.hindawi.com/journals/sp/2020/2390941/"]} -{"year":"2020","title":"Using Natural Language Processing to Identify Similar Patent Documents","authors":["J Navrozidis, H Jansson - LU-CS-EX, 2020"],"snippet":"Page 1. MASTER'S THESIS 2020 Using Natural Language Processing to Identify Similar Patent Documents Hannes Jansson, Jakob Navrozidis ISSN 1650-2884 LU-CS-EX: 2020-05 DEPARTMENT OF COMPUTER …","url":["https://lup.lub.lu.se/student-papers/record/9008699/file/9026407.pdf"]} -{"year":"2020","title":"Using Probabilistic Soft Logic to Improve Information Extraction in the Legal Domain","authors":["B Kirsch, S Giesselbach, T Schmude, M Völkening…"],"snippet":"… spaCy Classifier: This architecture is based on a CNN with mean pooling and a final feed-forward layer. The network is fed with pretrained word embeddings trained on the German Wikipedia and the German common crawl (Ortiz Suárez et al., 2019).9 …","url":["http://ceur-ws.org/Vol-2738/LWDA2020_paper_29.pdf"]} -{"year":"2020","title":"Using Publisher Partisanship for Partisan News Detection","authors":["CL Yeh"],"snippet":"Page 1. Using Publisher Partisanship for Partisan News Detection A Comparison of Performance between Annotation Levels Chia-Lun Yeh Page 2. 
Using Publisher Partisanship for Partisan News Detection A …","url":["https://pdfs.semanticscholar.org/604f/233a21249d44085e41e7415ed9741fc69d5e.pdf"]} -{"year":"2020","title":"Using Sentences as Semantic Representations in Large Scale Zero-Shot Learning","authors":["YL Cacheux, HL Borgne, M Crucianu - arXiv preprint arXiv:2010.02959, 2020"],"snippet":"… For the same reason, we used FastText and Glove models pre-trained on Common Crawl We used a 300-dimension version for all three … Fasttext: https://fasttext.cc/docs/en/english-vectors.html (version trained on Common Crawl with 600B tokens, no subword information) …","url":["https://arxiv.org/pdf/2010.02959"]} -{"year":"2020","title":"Using Word Embeddings to Learn a Better Food Ontology. Front","authors":["J Youn, T Naravane, I Tagkopoulos - Artif. Intell, 2020"],"snippet":"… Wikinews Wikipedia 2017 + UMBC webbase + statmt.org 0.313 2.98 Crawl Common Crawl 0.317 3.00 Word2vec (Mikolov …","url":["https://pdfs.semanticscholar.org/1c47/eb747f27eab42bc8e9e9ded83dd784eadf4c.pdf"]} -{"year":"2020","title":"ValNorm: A New Word Embedding Intrinsic Evaluation Method Reveals Valence Biases are Consistent Across Languages and Over Decades","authors":["A Toney, A Caliskan - arXiv preprint arXiv:2006.03950, 2020"],"snippet":"… We choose six widely used pre-trained word embedding sets, listed in Table 2, to compare ValNorm's performance on different algorithms (GloVe, fastText, word2vec) and training corpora (Common Crawl, Wikipedia, OpenSubtitles …","url":["https://arxiv.org/pdf/2006.03950"]} -{"year":"2020","title":"Vandalism Detection in Crowdsourced Knowledge Bases","authors":["S Heindorf - 2019"],"snippet":"… Manual OpenStreetMap, Uniprot, WordNet, MusicBrainz, IMDb Wikipedia, WikiHow, YouTube Wikia/FANDOM, StackExchange, Quora, Yahoo Answers DBpedia, YAGO, NELL dblp, BabelNet Internet Archive, Common Crawl NASA …","url":["https://pdfs.semanticscholar.org/e70f/b288ceb09fc244554a274f31cd1217663027.pdf"]} 
-{"year":"2020","title":"Variational Transformers for Diverse Response Generation","authors":["Z Lin, GI Winata, P Xu, Z Liu, P Fung - arXiv preprint arXiv:2003.12738, 2020"],"snippet":"… embeddings. The first is EMBFT (Liu et al., 2016) that calculates the average of word embeddings in a sentence using FastText (Mikolov et al., 2018) which is trained with Common Crawl and Wikipedia data. We use FastText …","url":["https://arxiv.org/pdf/2003.12738"]} -{"year":"2020","title":"VECO: Variable Encoder-decoder Pre-training for Cross-lingual Understanding and Generation","authors":["F Luo, W Wang, J Liu, Y Liu, B Bi, S Huang, F Huang… - arXiv preprint arXiv …, 2020"],"snippet":"… We adopt the same 250K vocabulary that is also used by XLM-R (Conneau et al., 2019) and mBART (Liu et al., 2020b). Pre-Training Datasets For monolingual training datasets, we reconstruct Common-Crawl Corpus used in XLM-R (Conneau et al., 2019) …","url":["https://arxiv.org/pdf/2010.16046"]} -{"year":"2020","title":"Video Question Answering on Screencast Tutorials","authors":["W Zhao, S Kim, N Xu, H Jin"],"snippet":"… visual cues, and graph embeddings. All the models have the word embeddings initialized with the 300-dimensional pretrained fastText [Bojanowski et al., 2017] vectors on Common Crawl dataset. The convolutional layer in …","url":["https://www.ijcai.org/Proceedings/2020/0148.pdf"]} -{"year":"2020","title":"Visual and Textual Deep Feature Fusion for Document Image Classification","authors":["S Bakkali, Z Ming, M Coustaty, M Rusinol - Proceedings of the IEEE/CVF Conference …, 2020"],"snippet":"… FastText algorithm we used was pretrained on 2 million word vectors trained on Common Crawl (600B tokens), and uses 1,999,996 word vectors. 
Bert: Bert [11] is a contextualized bidirectional word embedding based on the transformer architecture …","url":["http://openaccess.thecvf.com/content_CVPRW_2020/papers/w34/Bakkali_Visual_and_Textual_Deep_Feature_Fusion_for_Document_Image_Classification_CVPRW_2020_paper.pdf"]} -{"year":"2020","title":"Visual Relations Augmented Cross-modal Retrieval","authors":["Y Guo, J Chen, H Zhang, YG Jiang - … of the 2020 International Conference on …, 2020"],"snippet":"… vector with the corresponding visual feature. The label embedding vector is obtained with a learnable embedding layer initialized by GloVe [25] that pre-trained on the Common-Crawl dataset. Given a set of object categories …","url":["https://dl.acm.org/doi/pdf/10.1145/3372278.3390709"]} -{"year":"2020","title":"Visualizing and Interpreting RNN Models in URL-based Phishing Detection","authors":["T Feng, C Yue - Proceedings of the 25th ACM Symposium on Access …, 2020"],"snippet":"… The legitimate URLs came from the Common Crawl (www.commoncrawl.org) open web searching database, while the phishing URLs came from the popular PhishTank (www.phishtank.com) phishing website repository. In …","url":["https://dl.acm.org/doi/pdf/10.1145/3381991.3395602"]} -{"year":"2020","title":"Wat zei je? 
Detecting Out-of-Distribution Translations with Variational Transformers","authors":["TZ Xiao, AN Gomez, Y Gal - arXiv preprint arXiv:2006.08344, 2020"],"snippet":"… The following datasets were used in our experiments: • WMT EN ↔ DE: The training set for translation tasks between English (EN) and German (DE) composed of news-commentary-v13 with 284k sentences pairs, wmt13 …","url":["https://arxiv.org/pdf/2006.08344"]} -{"year":"2020","title":"Weakly-Supervised Multi-Level Attentional Reconstruction Network for Grounding Textual Queries in Videos","authors":["Y Song, J Wang, L Ma, Z Yu, J Yu - arXiv preprint arXiv:2003.07048, 2020"],"snippet":"… For each second we uniformly sample 16 frames as input to C3D, and obtain a 4096-dimentional visual feature from fc6 layer. Each word from the query is represented by GloVe [22] word embedding vector pre-trained on Common Crawl …","url":["https://arxiv.org/pdf/2003.07048"]} -{"year":"2020","title":"Web Crawl Processing on Big Data Scale","authors":["JM Patel - Getting Structured Data from the Internet, 2020"],"snippet":"… We got this domain ranks file and column names from the common crawl blog post (https://commoncrawl.org/2020/02/host-and-domain-level-webgraphs-novdecjan-2019-2020/); they publish new domain ranks about four …","url":["https://link.springer.com/chapter/10.1007/978-1-4842-6576-5_7"]} -{"year":"2020","title":"Web Table Extraction, Retrieval, and Augmentation: A Survey","authors":["S Zhang, K Balog - ACM Transactions on Intelligent Systems and …, 2020"],"snippet":"… Table corpora Type #tables Source WDC 2012 Web Table Corpus Web tables 147M Web crawl (Common Crawl) WDC 2015 Web Table Corpus Web tables 233M Web crawl (Common Crawl) Dresden Web Tables Corpus …","url":["https://dl.acm.org/doi/abs/10.1145/3372117"]} -{"year":"2020","title":"Webis at TREC 2019: Decision Track","authors":["A Bondarenko, M Fröbe, V Kasturia, M Völske, B Stein…"],"snippet":"… In Proceedings of SIGIR 2017. 1419–1420. 
[2] Janek Bevendorff, Benno Stein, Matthias Hagen, and Martin Potthast. 2018. Elastic ChatNoir: Search Engine for the ClueWeb and the Common Crawl. In Proceedings of ECIR 2018. 820–824 …","url":["https://trec.nist.gov/pubs/trec28/papers/Webis.D.pdf"]} -{"year":"2020","title":"Webis at TREC 2020: Health Misinformation Track","authors":["A Bondarenko, M Fröbe, S Günther, M Hagen… - 2020","J Bevendorff, A Bondarenko, M Fröbe, S Günther…"],"snippet":"… During retrieval, we used ChatNoir's existing weighting scheme for the two Common Crawl snapshots, which combines BM25 scores of multiple fields … we relax the precondition of documents' 1We have indexed a 2015 and …","url":["https://webis.de/downloads/publications/papers/stein_2020zb.pdf","https://webis.de/downloads/publications/slides/stein_2020zb.pdf"]} -{"year":"2020","title":"Webly Supervised Semantic Embeddings for Large Scale Zero-Shot Learning","authors":["YL Cacheux, A Popescu, HL Borgne - arXiv preprint arXiv:2008.02880, 2020"],"snippet":"… prototypes. These embeddings are extracted from generic large scale text collections such as Wikipedia [21,34] or Common Crawl [6,33] … use. For GloVe on ImageNet, the model pretrained on Common Crawl has the best performance …","url":["https://arxiv.org/pdf/2008.02880"]} -{"year":"2020","title":"WeChat Neural Machine Translation Systems for WMT20","authors":["F Meng, J Yan, Y Liu, Y Gao, X Zeng, Q Zeng, P Li… - arXiv preprint arXiv …, 2020"],"snippet":"… Commentary, Common Crawl and Gigaword corpus. The English monolingual data includes News crawl, News discussions, Europarl v10, News Commentary, Common Crawl, Wiki dumps and the Gigaword corpus. 
After …","url":["https://arxiv.org/pdf/2010.00247"]} -{"year":"2020","title":"WEFE: The Word Embeddings Fairness Evaluation Framework","authors":["P Badilla, F Bravo-Marquez, J Pérez"],"snippet":"… The following are the pre-trained embedding models that we consider: 1) conceptnet, 2) fasttext-wikipedia, 3) glove-twitter, 4) glove-wikipedia, 5) lexvec-commoncrawl, 6) word2vec-googlenews, and 7) word2vec-gender-hard …","url":["https://felipebravom.com/publications/ijcai2020.pdf"]} -{"year":"2020","title":"WEmbSim: A Simple yet Effective Metric for Image Captioning","authors":["N Sharif, L White, M Bennamoun, W Liu, SAA Shah - arXiv preprint arXiv:2012.13137, 2020"],"snippet":"… Page 5. TABLE I. Name Source Dims Corpus Corpus Size Vocabulary Size GloVE 840B [12] 300 Common Crawl 8.4 · 10^11 2 · 10^6 Word2vec [6] 300 Google News (100B) 1.0 · 10^11 3 · 10^6 FastText [13] 300 Wikipedia 4.0 · 10^9 3 · 10^6 …","url":["https://arxiv.org/pdf/2012.13137"]} -{"year":"2020","title":"What determines the order of adjectives in English? Comparing efficiency-based theories using dependency treebanks","authors":["R Futrell, W Dyer, G Scontras - Proceedings of the 58th Annual Meeting of the …, 2020"],"snippet":"… sklearn.cluster.KMeans applied to a pretrained set of 1.9 million 300-dimension GloVe vectors2 generated from the Common Crawl corpus … Table 1a shows the accuracies of our predictors in predicting held-out …","url":["https://www.aclweb.org/anthology/2020.acl-main.181.pdf"]} -{"year":"2020","title":"What Sparks Joy: The AffectVec Emotion Database","authors":["S Raji, G de Melo - Proceedings of The Web Conference 2020, 2020"],"snippet":"… We consider the cosine similarity of word–emotion pairs in word2vec trained on the Google News corpus [18], GloVe [26] trained on Twitter (200-dim.) and CommonCrawl (840B, 300-dim.), as well as the counterfitted vectors by Mrksic et al. [24]. 
Results …","url":["https://dl.acm.org/doi/pdf/10.1145/3366423.3380068"]} -{"year":"2020","title":"What the [MASK]? Making Sense of Language-Specific BERT Models","authors":["D Nozza, F Bianchi, D Hovy - arXiv preprint arXiv:2003.02912, 2020"],"snippet":"… OSCAR (Open Super-large Crawled Almanach coRpus) (Ortiz Suárez et al., 2019) is a huge multilingual corpus obtained by filtering the Common Crawl corpus, which is a parallel multilingual corpus comprised of crawled documents from the internet …","url":["https://arxiv.org/pdf/2003.02912"]} -{"year":"2020","title":"When and Why is Unsupervised Neural Machine Translation Useless?","authors":["Y Kim, M Graça, H Ney - arXiv preprint arXiv:2004.10581, 2020","YKM Graça, H Ney - 22nd Annual Conference of the European Association …"],"snippet":"… However, for low-resource language pairs, it is difficult to match the data domain of both sides on a large scale. For example, our monolingual data for Kazakh is mostly from Wikipedia and Common Crawl, while the English data is solely from News Crawl …","url":["https://arxiv.org/pdf/2004.10581","https://www.aclweb.org/anthology/2020.eamt-1.pdf#page=55"]} -{"year":"2020","title":"When Being Unseen from mBERT is just the Beginning: Handling New Languages With Multilingual Language Models","authors":["B Muller, A Anastasopoulos, B Sagot, D Seddah - arXiv preprint arXiv:2010.12858, 2020"],"snippet":"… al., 2019). OSCAR is a corpus extracted from a Common Crawl Web snapshot.3 It provides a significant 2Also see the discussion in Section §3.2 on the script distributions in mBERT. 3http://commoncrawl.org/ Language (iso …","url":["https://arxiv.org/pdf/2010.12858"]} -{"year":"2020","title":"When do Word Embeddings Accurately Reflect Surveys on our Beliefs About People?","authors":["K Joseph, JH Morgan - arXiv preprint arXiv:2004.12043, 2020"],"snippet":"Page 1. When do Word Embeddings Accurately Reflect Surveys on our Beliefs about People? 
Kenneth Joseph Computer Science and Engineering University at Buffalo Buffalo, NY, 14226 kjoseph@buffalo.edu Jonathan H. Morgan …","url":["https://arxiv.org/pdf/2004.12043"]} -{"year":"2020","title":"When Does Unsupervised Machine Translation Work?","authors":["K Marchisio, K Duh, P Koehn - arXiv preprint arXiv:2004.05516, 2020"],"snippet":"… “News crawl” (News) and “Common Crawl” (CC) settings determine whether the system can flexibly handle diverse datasets. Specifics of the datasets used are described in subsequent subsections … UN = United Nations …","url":["https://arxiv.org/pdf/2004.05516"]} -{"year":"2020","title":"Which* BERT? A Survey Organizing Contextualized Encoders","authors":["P Xia, S Wu, B Van Durme - arXiv preprint arXiv:2010.00854, 2020"],"snippet":"… Raffel et al. (2019) curate a 745GB subset of Common Crawl (CC),10 which starkly contrasts with the 13GB used in BERT … 9https://sites.google.com/ view/ sustainlp2020/shared-task 10https://commoncrawl.org/ scrapes publicly …","url":["https://arxiv.org/pdf/2010.00854"]} -{"year":"2020","title":"Who is asking? humans and machines experience","authors":["M Klein, L Balakireva, H Shankar"],"snippet":"… licenses/by/4.0/). Similarly, the motivation behind the recent study by Thompson and Jian [14] based on two Common Crawl samples of the web was to quantify the use of HTTP DOIs versus URLs of landing pages. They found …","url":["https://osf.io/pgxc3/download"]} -{"year":"2020","title":"Why are events important and how to compute them in geospatial research?","authors":["M Yuan"],"snippet":"… GPT-3 is a gigantic neural network with 175 billion input parameters and 96 layers of transformer decoders, each of which has 1.8 billion parameters, and is pre-trained with 45TB (499 billion tokens) compressed data from five …","url":["https://www.josis.org/index.php/josis/article/viewFile/723/300"]} -{"year":"2020","title":"Why Not Simply Translate? 
A First Swedish Evaluation Benchmark for Semantic Similarity","authors":["T Isbister, M Sahlgren - arXiv preprint arXiv:2009.03116, 2020"],"snippet":"… et al., 2018). We use the CBOW model that has been trained on Common Crawl and Wikipedia.6 As with Word2Vec, the vectors for sentences are obtained by averaging the embedding vector for each word. BERT: Deep Transformer …","url":["https://arxiv.org/pdf/2009.03116"]} -{"year":"2020","title":"Why Overfitting Isn't Always Bad: Retrofitting Cross-Lingual Word Embeddings to Dictionaries","authors":["M Zhang, Y Fujinuma, MJ Paul, J Boyd-Graber"],"snippet":"… We align English embeddings with six target languages: German (DE), Spanish (ES), French (FR), Italian (IT), Japanese (JA), and Chinese (ZH). We use 300-dimensional fastText vectors trained on Wikipedia and Common Crawl (Grave et al., 2018) …","url":["http://users.umiacs.umd.edu/~mozhi/pdf/retrofit.pdf"]} -{"year":"2020","title":"Wide range screening of algorithmic bias in word embedding models using large sentiment lexicons reveals underreported bias types","authors":["D Rozado - PLOS ONE, 2020"],"snippet":"… This work systematically analyzed 3 popular word embeddings methods: Word2vec (Skip-gram) [4], Glove [9] and FastText [10], externally pretrained on a wide array of corpora such as Google News, Wikipedia, Twitter or Common Crawl …","url":["https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0231189"]} -{"year":"2020","title":"WikiAsp: A Dataset for Multi-domain Aspect-based Summarization","authors":["H Hayashi, P Budania, P Wang, C Ackerson… - arXiv preprint arXiv …, 2020"],"snippet":"… sections of Wikipedia from referenced web pages. Following the WikiSum data generation script,3 we first crawled cited references covered by CommonCrawl for each Wikipedia article. 
We then recover all the sections4 of …","url":["https://arxiv.org/pdf/2011.07832"]} -{"year":"2020","title":"Will it Unblend?","authors":["Y Pinter, CL Jacobs, J Eisenstein - arXiv preprint arXiv:2009.09123, 2020"],"snippet":"Page 1. Will it Unblend? Yuval Pinter School of Interactive Computing Georgia Institute of Technology Atlanta, GA, USA uvp@gatech.edu Cassandra L. Jacobs Department of Psychology University of Wisconsin Madison, WI, USA cjacobs2@wisc.edu …","url":["https://arxiv.org/pdf/2009.09123"]} -{"year":"2020","title":"Word associations and the distance properties of context-aware word embeddings","authors":["MA Rodriguez, P Merlo - Proceedings of the 24th Conference on Computational …, 2020"],"snippet":"… However, in this work, we used the pre-trained FASTTEXT embeddings provided by the official site of FASTTEXT, that we expressly do not modify.3 The embeddings are trained on 600-billion tokens from …","url":["https://www.aclweb.org/anthology/2020.conll-1.30.pdf"]} -{"year":"2020","title":"Word Embedding Evaluation for Sinhala","authors":["D Lakmal, S Ranathunga, S Peramuna, I Herath - Proceedings of The 12th Language …, 2020"],"snippet":"… Common Crawl can be considered as a precious starting point for building a cleaned large corpus for … Common Crawl monthly dataset only contains 0.007% of content in Sinhala4, however, this amount is still … 4https …","url":["https://www.aclweb.org/anthology/2020.lrec-1.231.pdf"]} -{"year":"2020","title":"WORD EMBEDDINGS IN ROMANIAN FOR THE RETAIL BANKING DOMAIN","authors":["I RAICU, N BOITOUT, R BOLOGA, MG STURZA"],"snippet":"… In addition, Facebook released a year later another version of FastText pre-trained word embeddings, trained on Common Crawl and Wikipedia [4]. 
Another pre-trained word embeddings in Romanian can be found at …","url":["https://www.researchgate.net/profile/Irina_Raicu2/publication/341553193_WORD_EMBEDDINGS_IN_ROMANIAN_FOR_THE_RETAIL_BANKING_DOMAIN/links/5ec6d768a6fdcc90d68c8596/WORD-EMBEDDINGS-IN-ROMANIAN-FOR-THE-RETAIL-BANKING-DOMAIN.pdf"]} -{"year":"2020","title":"Word Embeddings Inherently Recover the Conceptual Organization of the Human Mind","authors":["V Swift - arXiv preprint arXiv:2002.10284, 2020"],"snippet":"… Sub-word information was incorporated on the basis of n-grams (length = 5), with a window size of 5 and 10 negatives, and a step size of .05. The English model was trained on a Common Crawl corpus comprised of English text from 2.96 billion webpages …","url":["https://arxiv.org/pdf/2002.10284"]} -{"year":"2020","title":"Word meaning in minds and machines","authors":["BM Lake, GL Murphy - arXiv preprint arXiv:2008.01766, 2020"],"snippet":"… is illustrated in Figure 1A. CBOW has been trained on tremendous corpora; for instance, in this article, we analyze a large-scale CBOW model trained on the Common Crawl corpus of 630 billion words. CBOW learns a word …","url":["https://arxiv.org/pdf/2008.01766"]} -{"year":"2020","title":"Word Representations for Named Entity Recognition","authors":["R Agerri"],"snippet":"… Transformers: Bertin (Gigaword+Wikipedia), XLM-RoBERTa (Common Crawl) and mBERT (Wikipedia + books) • Project annotations (various strategies) Page 54 … BETO (various sources) – XLM-RoBERTa (Common Crawl 2.5TB) – mBERT (Wikipedia + books) …","url":["https://cit-ai.net/archive/CitAI_Seminar_11Nov20_Agerri.pdf"]} -{"year":"2020","title":"Word Representations for Neural Network Based Myanmar Text-to-Speech System","authors":["AM Hlaing, WP Pa"],"snippet":"… In [21], the size of word vectors is small, and it contains about 55K entries for Myanmar language and can be downloaded from the link [23]. 
In [22], the word vectors are trained on Common Crawl and Wikipedia using fastText …","url":["http://www.inass.org/2020/2020043023.pdf"]} -{"year":"2020","title":"Word Rotator's Distance","authors":["S Yokoi, R Takahashi, R Akama, J Suzuki, K Inui - Proceedings of the 2020 …, 2020"],"snippet":"Page 1. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pages 2944–2960, November 16–20, 2020. c 2020 Association for Computational Linguistics 2944 Word Rotator's Distance …","url":["https://www.aclweb.org/anthology/2020.emnlp-main.236.pdf"]} -{"year":"2020","title":"Word Rotator's Distance: Decomposing Vectors Gives Better Representations","authors":["S Yokoi, R Takahashi, R Akama, J Suzuki, K Inui - arXiv preprint arXiv:2004.15003, 2020"],"snippet":"Page 1. Word Rotator's Distance: Decomposing Vectors Gives Better Representations Sho Yokoi1 Ryo Takahashi1,2 Reina Akama1,2 Jun Suzuki1,2 Kentaro Inui1,2 1 Tohoku University 2 RIKEN {yokoi, ryo.t, reina.a, jun.suzuki, inui}@ecei.tohoku.ac.jp Abstract …","url":["https://arxiv.org/pdf/2004.15003"]} -{"year":"2020","title":"Word Sense Disambiguation for 158 Languages using Word Embeddings Only","authors":["V Logacheva, D Teslenko, A Shelmanov, S Remus… - arXiv preprint arXiv …, 2020"],"snippet":"… The contributions of our work are the following: 1The full list languages is available at fasttext.cc and includes English and 157 other languages for which embeddings were trained on a combination of Wikipedia and CommonCrawl texts …","url":["https://arxiv.org/pdf/2003.06651"]} -{"year":"2020","title":"Word2Sent: A new learning sentiment‐embedding model with low dimension for sentence level sentiment classification","authors":["M Kasri, M Birjali, A Beni‐Hssane - Concurrency and Computation: Practice and …"],"snippet":"Abstract Word embedding models become an increasingly important method that embeds words into a high dimensional space. 
These models have been widely utilized to extract semantic and syntactic feat...","url":["https://onlinelibrary.wiley.com/doi/abs/10.1002/cpe.6149"]} -{"year":"2020","title":"Words Matter: Gender, Jobs and Applicant Behavior in India","authors":["S Chaturvedi, K Mahajan, Z Siddique - 2020"],"snippet":"… Pennington et al., 2014). The 300 dimensional pretrained word vectors have been obtained by training the algorithm on web data from common crawl, and comprise 2.2 million unique words. Cosine similarity between any …","url":["https://www.dse.univr.it/documenti/Seminario/documenti/documenti102498.pdf"]} -{"year":"2020","title":"Words, constructions and corpora: Network representations of constructional semantics for Mandarin space particles","authors":["ACH Chen - Corpus Linguistics and Linguistic Theory, 2020"],"snippet":"Jump to Content Jump to Main Navigation Publications. Subjects. Architecture and Design Arts Asian and Pacific Studies Business and Economics Chemistry Classical and Ancient Near Eastern Studies Computer Sciences Cultural …","url":["https://www.degruyter.com/view/journals/cllt/ahead-of-print/article-10.1515-cllt-2020-0012/article-10.1515-cllt-2020-0012.xml"]} -{"year":"2020","title":"Wrestling with Complexity in Computational Social Science: Theory, Estimation and Representation","authors":["S de Marchi - The SAGE Handbook of Research Methods in Political …, 2020"]} -{"year":"2020","title":"WT5?! Training Text-to-Text Models to Explain their Predictions","authors":["S Narang, C Raffel, K Lee, A Roberts, N Fiedel… - arXiv preprint arXiv …, 2020"],"snippet":"… 1997; Ruder, 2017). In Raffel et al. (2019), this framework was used to pre-train Transformer (Vaswani et al., 2017) models on a large collection of unlabeled text drawn from the Common Crawl web scrape. 
We use the result …","url":["https://arxiv.org/pdf/2004.14546"]} -{"year":"2020","title":"X-FACTR: Multilingual Factual Knowledge Retrieval from Pretrained Language Models","authors":["Z Jiang, A Anastasopoulos, J Araki, H Ding, G Neubig - Proceedings of the 2020 …, 2020"],"snippet":"Page 1. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pages 5943–5959, November 16–20, 2020. c 2020 Association for Computational Linguistics 5943 X-FACTR: Multilingual …","url":["https://www.aclweb.org/anthology/2020.emnlp-main.479.pdf"]} -{"year":"2020","title":"XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation","authors":["Y Liang, N Duan, Y Gong, N Wu, F Guo, W Qi, M Gong… - arXiv preprint arXiv …, 2020"],"snippet":"… 2.1.2 Large Corpus (LC) Multilingual Corpus Following Wenzek et al. (2019), we construct a clean version of Common Crawl (CC)3 as the multilingual corpus … 2https://github.com/attardi/wikiextractor. 3https://commoncrawl.org/. available in English …","url":["https://arxiv.org/pdf/2004.01401"]} -{"year":"2020","title":"Xiaomi's Submissions for IWSLT 2020 Open Domain Translation Task","authors":["Y Sun, M Guo, X Li, J Cui, B Wang - Proceedings of the 17th International Conference …, 2020"],"snippet":"… And for unconstrained submission, we choose the large-scale amounts of Commoncrawl Chinese10 and Japanese11 dataset as additional monolingual data for training LMs and executing BT to enhance our NMT systems …","url":["https://www.aclweb.org/anthology/2020.iwslt-1.18.pdf"]} -{"year":"2020","title":"YNU OXZ@ HaSpeeDe 2 and AMI: XLM-RoBERTa with Ordered Neurons LSTM for classification task at EVALITA 2020","authors":["X Ou, H Li - Proceedings of Sixth Evaluation Campaign of Natural …, 2020"],"snippet":"… scale multi-language pre-training model. It can be understood as a combination of XLM and RoBERTa. It is trained on 2.5 TB of newly created clean CommonCrawl data in 100 languages. 
Because the training of the model …","url":["http://ceur-ws.org/Vol-2765/paper93.pdf"]} -{"year":"2020","title":"Zero Shot Domain Generalization","authors":["U Maniyar, AA Deshmukh, U Dogan… - arXiv preprint arXiv …, 2020"],"snippet":"… Thus using semantic space helps us in the visual classification task. We use word embeddings of classes - in particular, simple GloVe embeddings [28] trained on Common Crawl corpus - as the semantic space in this work …","url":["https://arxiv.org/pdf/2008.07443"]} -{"year":"2020","title":"Zero-shot semantic segmentation using relation network","authors":["Y Zhang - 2020"],"snippet":"Page 1. University of Jyväskylä Faculty of Information Technology Yindong Zhang Zero-shot Semantic Segmentation using Relation Network Master's thesis of information technology May 28, 2020 Page 2. i Author: Yindong …","url":["https://jyx.jyu.fi/bitstream/handle/123456789/69720/1/URN%3ANBN%3Afi%3Ajyu-202006043976.pdf"]} -{"year":"2021","title":"'I'm just feeling like it'. On the relationship between the use of the progressive and sentiment polarity in Italian","authors":["L Viola"],"snippet":"… art transformer-based machine learning model for emotion and sentiment classification in Italian which employs the Italian BERT model UmBERTo trained on Commoncrawl ITA (Parisi, Francia, and Magnani [2020] 2021). For …","url":["https://www.uib.no/sites/w3.uib.no/files/attachments/viola.pdf"]} -{"year":"2021","title":"4. Unlocking value from AI in financial services: strategic and organizational tradeoffs vs. media narratives","authors":["G Lanzolla, S Santoni, C Tucci - Artificial Intelligence for Sustainable Value Creation, 2021"],"snippet":"Page 87. 4. Unlocking value from AI in financial services: strategic and organizational tradeoffs vs. media narratives Gianvito Lanzolla, Simone Santoni and Christopher Tucci 1. 
INTRODUCTION In 1955, McCarthy wrote that …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=_9BCEAAAQBAJ&oi=fnd&pg=PA70&dq=commoncrawl&ots=Z-4LjY9D6U&sig=BHpJ4i9Wq18ZWZDIWoGm5BnHqSY"]} -{"year":"2021","title":"6 Data Collection and Representation for Similar Languages, Varieties and Dialects","authors":["T Samardžic, N Ljubešic - Similar Languages, Varieties, and Dialects: A …, 2021"],"snippet":"… Page 146. Data Collection and Representation for Similar Languages 127 Another project that should be mentioned in this brief overview is the CommonCrawl, a project performing crawls over the whole internet for textual data since 2013 with regular data updates …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=hhA5EAAAQBAJ&oi=fnd&pg=PA121&dq=commoncrawl&ots=2XimIiF4W6&sig=XlIzLoiwAxuodhBmJeC_iS9BSeg"]} -{"year":"2021","title":"\" Short is the Road that Leads from Fear to Hate\": Fear Speech in Indian WhatsApp Groups","authors":["P Saha, B Mathew, K Garimella, A Mukherjee - arXiv preprint arXiv:2102.03870, 2021"],"snippet":"Page 1. “Short is the Road that Leads from Fear to Hate”: Fear Speech in Indian WhatsApp Groups Punyajoy Saha punyajoys@iitkgp.ac.in Indian Institute of Technology Kharagpur, West Bengal, India Binny Mathew binnymathew …","url":["https://arxiv.org/pdf/2102.03870"]} -{"year":"2021","title":"A common framework for quantifying the learnability of nouns and verbs","authors":["Y Zhou, D Yurovsky - Proceedings of the Annual Meeting of the Cognitive …, 2021"],"snippet":"… We used pre-trained 300-dimensional semantic vectors derived from the the Common Crawl corpus composed of 840 billion tokens and 2.2 million words. For our analysis, we considered only the words that corresponded to the relevant 434 images. 
Procedure …","url":["https://escholarship.org/content/qt8dn6k82j/qt8dn6k82j.pdf"]} -{"year":"2021","title":"A Comparative Study on Word Embeddings in Deep Learning for Text Classification","authors":["C Wang, P Nulty, D Lillis"],"snippet":"… 3https://nlp.stanford.edu/projects/glove/ 4https://commoncrawl.org/ 5https://fasttext.cc/ 6https://allennlp.org/elmo 7We additionally experimented from the fourth-to-last (-4) layer to the last layer … 300s refers to the GloVe …","url":["https://lill.is/pubs/Wang2020a.pdf"]} -{"year":"2021","title":"A Comparison Framework for Product Matching Algorithms","authors":["J Foxcroft - 2021"],"snippet":"Page 1. A Comparison Framework for Product Matching Algorithms by Jeremy Foxcroft A Thesis presented to The University of Guelph In partial fulfilment of requirements for the degree of Master of Science in Computer Science Guelph, Ontario, Canada …","url":["https://atrium.lib.uoguelph.ca/xmlui/bitstream/handle/10214/26375/Foxcroft_Jeremy_202109_Msc.pdf?sequence=3"]} -{"year":"2021","title":"A Comparison of Approaches to Document-level Machine Translation","authors":["Z Ma, S Edunov, M Auli - arXiv preprint arXiv:2101.11040, 2021"],"snippet":"… WMT17 English-German (en-de). For this benchmark, we follow the setup of Müller et al. 
(2018) whose training data includes the Europarl, Common Crawl, News Commentary and Rapid corpora, totaling nearly 6M sentence pairs …","url":["https://arxiv.org/pdf/2101.11040"]} -{"year":"2021","title":"A Comprehensive Assessment of Dialog Evaluation Metrics","authors":["YT Yeh, M Eskenazi, S Mehri - arXiv preprint arXiv:2106.03706, 2021"],"snippet":"… RoBERTa, which is employed in USR (Mehri and Eskenazi, 2020b), improves the training techniques in BERT and trains the model on a much larger corpus which includes the CommonCrawl News dataset (Mackenzie et al., 2020) and text ex- tracted from Reddit …","url":["https://arxiv.org/pdf/2106.03706"]} -{"year":"2021","title":"A Comprehensive Survey and Experimental Comparison of Graph-Based Approximate Nearest Neighbor Search","authors":["M Wang, X Xu, Q Yue, Y Wang - arXiv preprint arXiv:2101.12631, 2021"],"snippet":"Page 1. A Comprehensive Survey and Experimental Comparison of Graph-Based Approximate Nearest Neighbor Search Mengzhao Wang1, Xiaoliang Xu1, Qiang Yue1, Yuxiang Wang1,∗ 1Hangzhou Dianzi University …","url":["https://arxiv.org/pdf/2101.12631"]} -{"year":"2021","title":"A Comprehensive Survey of Grammatical Error Correction","authors":["Y Wang, Y Wang, K Dang, J Liu, Z Liu - ACM Transactions on Intelligent Systems and …, 2021"],"snippet":"Grammatical error correction (GEC) is an important application aspect of natural language processing techniques, and GEC system is a kind of very important intelligent system that has long been explored both in academic and industrial …","url":["https://dl.acm.org/doi/abs/10.1145/3474840"]} -{"year":"2021","title":"A Computational Framework for Slang Generation","authors":["Z Sun, R Zemel, Y Xu - arXiv preprint arXiv:2102.01826, 2021"],"snippet":"… To compare with and compute the baseline em- bedding methods M for definition sentences, we used 300-dimensional fastText embeddings (Bo- janowski et al., 2017) pre-trained with subword information on 600 billion 
…","url":["https://arxiv.org/pdf/2102.01826"]} -{"year":"2021","title":"A coral-reef approach to extract information from HTML tables","authors":["P Jiménez Aguirre, JC Roldán Salvador… - Applied Soft Computing …, 2022","P Jiménez, JC Roldán, R Corchuelo - Applied Soft Computing, 2021"],"snippet":"… Unfortunately, a recent analysis of the 32.04 million domains in the November 2019 Common Crawl has revealed that only 11.92 million domains provide such semantic hints [10], which argues for a method to deal with the remaining 20.12 …","url":["https://idus.us.es/bitstream/handle/11441/131990/1/1-s2.0-S1568494621009029-main.pdf?sequence=1","https://www.sciencedirect.com/science/article/pii/S1568494621009029"]} -{"year":"2021","title":"A COVID-19 news coverage mood map of Europe","authors":["F Robertson, J Lagus, K Kajava - Proceedings of the EACL Hackashop on News …, 2021"],"snippet":"… Newscrawl is a web crawl provided by the Common Crawl organisation which is updated more frequently and contains only data from news websites2. 
In order to keep the size of the corpus manageable and the extraction …","url":["https://www.aclweb.org/anthology/2021.hackashop-1.15.pdf"]} -{"year":"2021","title":"A data quality approach to the identification of discrimination risk in automated decision making systems","authors":["A Vetrò, M Torchiano, M Mecati - Government Information Quarterly, 2021"],"snippet":"… Similarly, a scientific experiment on the search engine Common Crawl (De-Arteaga et al., 2019) revealed an unequal treatment due to gender imbalance in the input data (almost 400,000 biographies): authors compared …","url":["https://www.sciencedirect.com/science/article/pii/S0740624X21000551"]} -{"year":"2021","title":"A data-centric review of deep transfer learning with applications to text data","authors":["S Bashath, N Perera, S Tripathi, K Manjang, M Dehmer… - Information Sciences, 2021"],"snippet":"Abstract In recent years, many applications are using various forms of deep learning models. Such methods are usually based on traditional learning paradigms requiring the consistency of properties among the feature spaces of the training and …","url":["https://www.sciencedirect.com/science/article/pii/S002002552101183X"]} -{"year":"2021","title":"A deep learning-based bilingual Hindi and Punjabi named entity recognition system using enhanced word embeddings","authors":["A Goyal, V Gupta, M Kumar - Knowledge-Based Systems, 2021"],"snippet":"… Initially, we collect Facebook’s pre-trained FastText embeddings which are trained on Wikipedia and common crawl data with 300 dimensions for our Hindi and Punjabi datasets. 
But after experiments, we find many of the words in our dataset are …","url":["https://www.sciencedirect.com/science/article/pii/S0950705121008637"]} -{"year":"2021","title":"A Framework for Generating Extractive Summary from Multiple Malayalam Documents","authors":["K Manju, S David Peter, SM Idicula - Information, 2021"],"snippet":"… Semantically similar words are mapped to nearby points in the vector space. In this work the vectorization of the terms in the document are performed using the pretrained word embedding model FastText for Malayalam, trained on Common Crawl and Wikipedia …","url":["https://www.mdpi.com/2078-2489/12/1/41/pdf"]} -{"year":"2021","title":"A Framework for Quality Assessment of Semantic Annotations of Tabular Data","authors":["R Avogadro, M Cremaschi, E Jiménez-Ruiz, A Rula - International Semantic Web …, 2021"],"snippet":"… 1 Introduction. Much information is conveyed within tables. A prominent example is the large set of relational databases or tabular data present on the Web. To size the spread of tabular data, 2.5M tables have been …","url":["https://link.springer.com/chapter/10.1007/978-3-030-88361-4_31"]} -{"year":"2021","title":"A Fusion Approach for Paper Submission Recommendation System","authors":["ST Huynh, N Dang, PT Huynh, DH Nguyen, BT Nguyen - International Conference on …, 2021"],"snippet":"… Finally, we use crawl-300d-2M 3 as the pre-train embedding matrix, which has 600 billion tokens and 2 million word vectors trained on Common Crawl. It can make using crawl-300d-2M more efficiently in vectorization. 
As depicted in Fig …","url":["https://link.springer.com/chapter/10.1007/978-3-030-79463-7_7"]} -{"year":"2021","title":"A General Language Assistant as a Laboratory for Alignment","authors":["A Askell, Y Bai, A Chen, D Drain, D Ganguli… - arXiv preprint arXiv …, 2021"],"snippet":"… For language model pre-training, these models are trained for 400B tokens on a distribution consisting mostly of filtered common crawl … The natural language dataset was composed of 55% heavily filtered common crawl data (220B tokens), 32 …","url":["https://arxiv.org/pdf/2112.00861"]} -{"year":"2021","title":"A Heuristic-driven Ensemble Framework for COVID-19 Fake News Detection","authors":["SD Das, A Basak, S Dutta - arXiv preprint arXiv:2101.03545, 2021"],"snippet":"… of model-specific special tokens. Each model also has its corresponding vocabulary associated with its tokenizer, trained on a large corpus data like GLUE, wikitext-103, CommonCrawl data etc. During training, each model …","url":["https://arxiv.org/pdf/2101.03545"]} -{"year":"2021","title":"A Heuristic-driven Uncertainty based Ensemble Framework for Fake News Detection in Tweets and News Articles","authors":["SD Das, A Basak, S Dutta - arXiv preprint arXiv:2104.01791, 2021"],"snippet":"… Each model also has its corresponding vocabulary associated with its tokenizer, trained on a large corpus data like GLUE, wikitext-103, CommonCrawl data etc. 
During training, each model applies the tokenization …","url":["https://arxiv.org/pdf/2104.01791"]} -{"year":"2021","title":"A Human Being Wrote This Law","authors":["AB Cyphert"],"snippet":"… GPT-3 had an impressively large data training set: it was trained on the Common Crawl dataset, a nearly trillion-word dataset,22 which includes everything from traditional news sites like the New York Times to sites like Reddit.The Common …","url":["https://lawreview.law.ucdavis.edu/issues/55/1/articles/files/55-1_Cyphert.pdf"]} -{"year":"2021","title":"A Literature Survey of Recent Advances in Chatbots","authors":["G Caldarini, S Jaf, K McGarry - 2021"],"snippet":"… This led to the development of pretrained systems such as BERT (Bidirectional Encoder Representations from transformers) [46] and GPT (Generative Pre-trained Transformer), which were trained with huge language datasets, such as Wikipedia …","url":["https://www.preprints.org/manuscript/202112.0265/download/final_file"]} -{"year":"2021","title":"A Mechanism for Producing Aligned Latent Spaces with Autoencoders","authors":["S Jain, A Radhakrishnan, C Uhler - arXiv preprint arXiv:2106.15456, 2021"],"snippet":"… 6.1 Alignment of GloVe Embeddings In this section, we apply our theory to align semantic/syntactic directions in GloVe word embeddings [21]. We use 300 dimensional GloVe vectors that were trained on Common Crawl with 840 billion tokens …","url":["https://arxiv.org/pdf/2106.15456"]} -{"year":"2021","title":"A Multi-Platform Analysis of Political News Discussion and Sharing on Web Communities","authors":["Y Wang, S Zannettou, J Blackburn, B Bradlyn… - arXiv preprint arXiv …, 2021"],"snippet":"… supported types of entities). The model re- lies on Convolutional Neural Networks (CNNs), trained on the OntoNotes dataset [90], as well as Glove vectors [62] trained on the Common Crawl dataset [17]. 
2.3 News Stories Identification …","url":["https://arxiv.org/pdf/2103.03631"]} -{"year":"2021","title":"A Multi-Task Learning Model for Multidimensional Relevance Assessment","authors":["DGP Putri, M Viviani, G Pasi - International Conference of the Cross-Language …, 2021"],"snippet":"… 6 In particular, we focused on the ad-hoc retrieval subtask. The data consist of Web pages crawled by means of CommonCrawl, 7 related to the health-related domain. The data collections consider 50 topics/queries and associated documents …","url":["https://link.springer.com/chapter/10.1007/978-3-030-85251-1_9"]} -{"year":"2021","title":"A Multifactorial Approach to Crosslinguistic Constituent Orderings","authors":["Z Liu"],"snippet":"… The data for training these LMs was taken from the raw data of the CoNLL 2017 Shared Task on multilingual parsing (Ginter et al. 2017), which contains texts from Common Crawl and Wikipedia. The architecture of the LM was the same for every language …","url":["https://www.researchgate.net/profile/Zoey-Liu/publication/354204297_A_Multifactorial_Approach_to_Crosslinguistic_Constituent_Orderings/links/612c0095c69a4e487967c628/A-Multifactorial-Approach-to-Crosslinguistic-Constituent-Orderings.pdf"]} -{"year":"2021","title":"A Multitask Framework to Detect Depression, Sentiment and Multi-label Emotion from Suicide Notes","authors":["S Ghosh, A Ekbal, P Bhattacharyya - Cognitive Computation, 2021"],"snippet":"The significant rise in suicides is a major cause of concern in public health domain. 
Depression plays a major role in increasing suicide ideation among th.","url":["https://link.springer.com/article/10.1007/s12559-021-09828-7"]} -{"year":"2021","title":"A Novel Corpus of Discourse Structure in Humans and Computers","authors":["B Hemmatian, S Feucht, R Avram, A Wey, M Garg… - arXiv preprint arXiv …, 2021"],"snippet":"We present a novel corpus of 445 humanand computer-generated documents, comprising about 27,000 clauses, annotated for semantic clause types and coherence relations that allow for nuanced comparison of artificial and natural …","url":["https://arxiv.org/pdf/2111.05940"]} -{"year":"2021","title":"A novel fusion-based deep learning model for sentiment analysis of COVID-19 tweets","authors":["ME Basiri, S Nemati, M Abdar, S Asadi, UR Acharrya - Knowledge-Based Systems, 2021"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0950705121005049"]} -{"year":"2021","title":"A NOVEL TRILINGUAL DATASET FOR CRISIS NEWS CATEGORIZATION ACROSS LANGUAGES","authors":["K Kajava - 2021"],"snippet":"… Use Common Crawl instead • “the goal of democratizing access to web information by producing and maintaining an open repository of web crawl data that is universally accessible and analyzable” (https://commoncrawl.org/about/, accessed June 1 2021) …","url":["https://blogs.helsinki.fi/language-technology/files/2021/06/LT-Seminar-2021-06-03-Kaisla-Kajava.pdf"]} -{"year":"2021","title":"A Primer on Pretrained Multilingual Language Models","authors":["S Doddapaneni, G Ramesh, A Kunchukuttan, P Kumar… - arXiv preprint arXiv …, 2021"],"snippet":"… and they differ in the architecture (eg, number of layers, parameters, etc), objective functions used for training (eg, monolingual masked language modeling objective, translation language modeling objective, etc), data used 
…","url":["https://arxiv.org/pdf/2107.00676"]} -{"year":"2021","title":"A Probing Task on Linguistic Properties of Korean Sentence Embedding","authors":["A Ahn, BI Ko, D Lee, G Han, M Shin, J Nam - Annual Conference on Human and …, 2021"],"snippet":"Abstract 본 연구는 한국어 문장 임베딩 (embedding) 에 담겨진 언어적 속성을 평가 하기 위한 프로빙 태스크 (Probing Task) 를 소개한다. 프로빙 태스크는 임베딩 으로부터 문장의 표층적, 통사적, 의미적 속성을 구분하는 문제로 영어, 폴란드어 …","url":["https://www.koreascience.or.kr/article/CFKO202130060614813.pdf"]} -{"year":"2021","title":"A Residual Network Architecture for Hindi NER using Fasttext and BERT embedding layers","authors":["R Shelke, S Vanjale"],"snippet":"… It provides word embedding for Hindi (and 157 other languages) and is based on the CBOW (Continuous Bag-of-Words) model. The CBOW model learns by predicting the current word based on its context, and it was trained on Common Crawl and Wikipedia …","url":["https://www.novyimir.net/gallery/nmrj%202867f.pdf"]} -{"year":"2021","title":"A Review of Bangla Natural Language Processing Tasks and the Utility of Transformer Models","authors":["F Alam, A Hasan, T Alam, A Khan, J Tajrin, N Khan… - arXiv preprint arXiv …, 2021"],"snippet":"Page 1. A Review of Bangla Natural Language Processing Tasks and the Utility of Transformer Models FIROJ ALAM, Qatar Computing Research Institute, HBKU, Qatar MD. ARID HASAN, Cognitive Insight Limited, Bangladesh …","url":["https://arxiv.org/pdf/2107.03844"]} -{"year":"2021","title":"A Review of Public Datasets in Question Answering Research","authors":["BB Cambazoglu, M Sanderson, F Scholer, B Croft"],"snippet":"… matching the question. The sentences are selected based on their tf-idf similarity to the question. The underlying web page collection contains pages from the July 2018 archive of the Common Crawl web repository. Task. 
Given a …","url":["http://www.sigir.org/wp-content/uploads/2020/12/p07.pdf"]} -{"year":"2021","title":"A Semi-supervised Multi-task Learning Approach to Classify Customer Contact Intents","authors":["L Dong, MC Spencer, A Biagi"],"snippet":"… We note that this ALBERT model is trained as a multiclass classification with only positive cases. 3.2.2 SS MT D/TAPT ALBERT The pretrained language models are mostly trained on well-known corpora, such as Wikipedia, Common Crawl, BookCorpus, Reddit, etc …","url":["https://assets.amazon.science/79/22/d9237534448293405083a73b896d/a-semi-supervised-multi-task-learning-approach-to-classify-customer-contact-intents.pdf"]} -{"year":"2021","title":"A Short Survey of LSTM Models for De-identification of Medical Free Text","authors":["JL Leevy, TM Khoshgoftaar - 2020 IEEE 6th International Conference on …, 2020"],"snippet":"… The training set was obtained from the 2014 i2b2 challenge, while the test set came from the University of Florida (UF) Health Integrated Data Repository 5. Word embeddings were sourced from GoogleNews [54] …","url":["https://ieeexplore.ieee.org/abstract/document/9319017/"]} -{"year":"2021","title":"A Simple Post-Processing Technique for Improving Readability Assessment of Texts using Word Mover's Distance","authors":["JM Imperial, E Ong - arXiv preprint arXiv:2103.07277, 2021"],"snippet":"… technique described in Section 4. For the word embeddings of English, German, and Filipino needed for the technique, we downloaded the resources from the fastText website5. The word embeddings in various …","url":["https://arxiv.org/pdf/2103.07277"]} -{"year":"2021","title":"A Simple Recipe for Multilingual Grammatical Error Correction","authors":["S Rothe, J Mallinson, E Malmi, S Krause, A Severyn - arXiv preprint arXiv:2106.03830, 2021"],"snippet":"… 2.1 mT5 Pre-training mT5 has been pre-trained on mC4 corpus, a subset of Common Crawl, covering 101 languages and composed of about 50 billion documents. 
For details on mC4, we refer the reader to the original paper (Xue et al., 2020) …","url":["https://arxiv.org/pdf/2106.03830"]} -{"year":"2021","title":"A Spontaneous Stereotype Content Model: Taxonomy, Properties, and Prediction.","authors":["G Nicolas, X Bai, ST Fiske"],"snippet":"… model trained on the Common Crawl (600 billion words obtained from various internet sources) … the Common Crawl (600 billion words), a Glove model trained using around 840 billion words from the Common Crawl (Pennington, Socher, & Manning, 2014; …","url":["https://www.nicolaslab.org/publication/sscm/SSCM.pdf"]} -{"year":"2021","title":"A Study of Analogical Density in Various Corpora at Various Granularity","authors":["R Fam, Y Lepage - Information, 2021"],"snippet":"In this paper, we inspect the theoretical problem of counting the number of analogies between sentences contained in a text. Based on this, we measure the analogical density of the text. We focus on analogy at the sentence level …","url":["https://www.mdpi.com/2078-2489/12/8/314/pdf"]} -{"year":"2021","title":"A Study of Analogical Density in Various Corpora at Various Granularity. Information 2021, 12, 314","authors":["R Fam, Y Lepage - 2021"],"snippet":"… Table 4 shows the statistics of Multi30K corpus. • CommonCrawl (available at: commoncrawl.org accessed on 20 September 2020) is a crawled web archive and dataset … Table 5 shows the statistics on the CommonCrawl corpus …","url":["https://search.proquest.com/openview/208b192bc36d7c71728c73989c304dea/1?pq-origsite=gscholar&cbl=2032384"]} -{"year":"2021","title":"A study on performance improvement considering the balance between corpus in Neural Machine Translation","authors":["C Park, K Park, H Moon, S Eo, H Lim - Journal of the Korea Convergence Society, 2021"],"snippet":"… 1. Concept of Corpus Weight Balance GPT3도 Common Crawl, WebText2, Books1, Books2, Wikipedia 등의 데이터를 합쳐 모델을 훈련하 게 된다. 
그러나 말뭉치 간의 속성 및 특징(어투, 문체, 도메인 등)이 다름에도 하나의 데이터로 …","url":["https://www.koreascience.or.kr/article/JAKO202116954598769.pdf"]} -{"year":"2021","title":"A Survey of COVID-19 Misinformation: Datasets, Detection Techniques and Open Issues","authors":["AR Ullah, A Das, A Das, MA Kabir, K Shu - arXiv preprint arXiv:2110.00737, 2021"],"snippet":"Page 1. A Survey of COVID-19 Misinformation: Datasets, Detection Techniques and Open Issues AR Sana Ullaha, Anupam Dasa, Anik Dasb, Muhammad Ashad Kabirc,∗, Kai Shud aDepartment of Computer Science and Engineering …","url":["https://arxiv.org/pdf/2110.00737"]} -{"year":"2021","title":"A Survey of Machine Learning-Based Solutions for Phishing Website Detection","authors":["L Tang, QH Mahmoud - Machine Learning and Knowledge Extraction, 2021"],"snippet":"With the development of the Internet, network security has aroused people's attention. It can be said that a secure network environment is a basis for the rapid and sound development of the Internet. Phishing is an essential class …","url":["https://www.mdpi.com/2504-4990/3/3/34/pdf"]} -{"year":"2021","title":"A Survey of Recent Abstract Summarization Techniques","authors":["D Puspitaningrum - Proceedings of Sixth International Congress on …, 2021"],"snippet":"… For C4, taken from Common Crawl scrape from April 2019 and applied some cleansing filters, it results in a very clean 750GB text dataset of large pre-training datasets, more extensive than other pre-training datasets.
3.2 Pegasus-XSum (Pegasus) …","url":["https://hal.archives-ouvertes.fr/hal-03216381/document"]} -{"year":"2021","title":"A Survey on Bias in Deep NLP","authors":["I Garrido-Muñoz, A Montejo-Ráez, F Martínez-Santiago… - 2021"],"snippet":"… 2016 [26] Gender Word2Vec, GloVe GoogleNews corpus (w2vNEWS), Common Crawl English Analogies/Cosine Similarity Vector Space Manipulation After - 2017 [10] Gender, Ethnicity GloVe, Word2Vec Common …","url":["https://www.preprints.org/manuscript/202103.0049/download/final_file"]} -{"year":"2021","title":"A Survey on Data Augmentation for Text Classification","authors":["M Bayer, MA Kaufhold, C Reuter - arXiv preprint arXiv:2107.03158, 2021"],"snippet":"… CNN+LSTM/GRU HON RSN-1 RSN-2 Word2Vec Hate Speech FastText Wikipedia GoogleNews W2V GloVe Common Crawl GloVe Common Crawl GloVe Common Crawl -22.7 (Macro F1) +1.0 -3.3 +0.3 -0.2 0 [44] 1. Method …","url":["https://arxiv.org/pdf/2107.03158"]} -{"year":"2021","title":"A Survey on Low-Resource Neural Machine Translation","authors":["R Wang, X Tan, R Luo, T Qin, TY Liu - arXiv preprint arXiv:2107.04239, 2021"],"snippet":"Page 1. 
A Survey on Low-Resource Neural Machine Translation Rui Wang, Xu Tan, Renqian Luo, Tao Qin and Tie-Yan Liu Microsoft Research Asia {ruiwa, xuta, t-reluo, taoqin, tyliu}@microsoft.com Abstract Neural approaches …","url":["https://arxiv.org/pdf/2107.04239"]} -{"year":"2021","title":"A Survey On Neural Word Embeddings","authors":["E Sezerer, S Tekir - arXiv preprint arXiv:2110.01804, 2021"],"snippet":"… ivLBL/vLBL [95] 2013 100-600 Wiki LBL Performance - NCE [47] GloVe [109] 2014 300 Wiki, Gigaword, Commoncrawl LBL+coocurence Matrix Training - - DEPS [69] 2014 300 Wiki CBOW Training Stanford tagger[129] …","url":["https://arxiv.org/pdf/2110.01804"]} -{"year":"2021","title":"A Survey on Statistical Approaches for Abstractive Summarization of Low Resource Language Documents","authors":["P Deshpande, S Jahirabadkar - Smart Trends in Computing and Communications, 2022"],"snippet":"… German Wiki data is used as real data and synthetic data is a common crawl data. Synthetic data is used to increase size of data. Three settings are considered for generation of summaries: (1) Transformer model using real data for training. …","url":["https://link.springer.com/chapter/10.1007/978-981-16-4016-2_69"]} -{"year":"2021","title":"A Synthetic FACS Framework for Expanding Facial Expression Lexicons","authors":["C Butler - 2021"],"snippet":"Page 1. A Synthetic FACS Framework for Expanding Facial Expression Lexicons DISSERTATION Submitted in Partial Fulfillment of the Requirements for the Degree of DOCTOR OF PHILOSOPHY (Computer Science) at the …","url":["https://search.proquest.com/openview/ebf5a87df8275e6fc6fac0b1c0b21b44/1?pq-origsite=gscholar&cbl=18750&diss=y"]} -{"year":"2021","title":"A system for proactive risk assessment of application changes in cloud operations","authors":["R Batta, L Shwartz, M Nidd, AP Azad, H Kumar - 2021 IEEE 14th International …, 2021"],"snippet":"Abstract Change is one of the biggest contributors to service outages. 
With more enterprises migrating their applications to cloud and using automated build and deployment the volume and rate of changes has significantly increased. Furthermore …","url":["https://www.computer.org/csdl/proceedings-article/cloud/2021/006000a112/1ymJ4TXNxUA"]} -{"year":"2021","title":"A Systematic Investigation of Commonsense Understanding in Large Language Models","authors":["XL Li, A Kuncoro, CM d'Autume, P Blunsom… - arXiv preprint arXiv …, 2021"],"snippet":"… 2019), we train our models using the cleaned version of Common Crawl corpus (C4), around 800 GB of data. Our largest model, with 32 transformer layers and 7 billion parameters, has a similar number of parameters to the open-sourced GPT-J model (Wang …","url":["https://arxiv.org/pdf/2111.00607"]} -{"year":"2021","title":"A systems-wide understanding of the human olfactory percept chemical space","authors":["J Kowalewski, B Huynh, A Ray - Chemical Senses, 2021"],"snippet":"… 2015; spaCy, 2016), and a convolutional neural network previously trained on GloVe Common Crawl (Pennington, Socher, & Manning, 2014) and OntoNotes 5. The training set is comprised of more than 1 million English …","url":["https://academic.oup.com/chemse/advance-article-abstract/doi/10.1093/chemse/bjab007/6153471"]} -{"year":"2021","title":"A Targeted Attack on Black-Box Neural Machine Translation with Parallel Data Poisoning","authors":["C Xu, J Wang, Y Tang, F Guzmán, BIP Rubinstein… - Proceedings of the Web …, 2021"],"snippet":"… 3https://commoncrawl.org/ 4We assume that these poisoned web pages are archived and to be used for parallel data extraction. This assumption is realistic as we find that the crawling services commonly used for parallel …","url":["https://dl.acm.org/doi/abs/10.1145/3442381.3450034"]} -{"year":"2021","title":"A unified approach to sentence segmentation of punctuated text in many languages","authors":["R Wicks, M Post"],"snippet":"Page 1. 
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, pages 3995–4007 August 1–6, 2021 …","url":["https://aclanthology.org/2021.acl-long.309.pdf"]} -{"year":"2021","title":"A word embedding-based approach to cross-lingual topic modeling","authors":["CH Chang, SY Hwang - Knowledge and Information Systems, 2021"],"snippet":"The cross-lingual topic analysis aims at extracting latent topics from corpora of different languages. Early approaches rely on high-cost multilingual reso.","url":["https://link.springer.com/article/10.1007/s10115-021-01555-7"]} -{"year":"2021","title":"ABC: Attention with Bounded-memory Control","authors":["H Peng, J Kasai, N Pappas, D Yogatama, Z Wu, L Kong… - arXiv preprint arXiv …, 2021"],"snippet":"Page 1. Under review as a conference paper at ICLR 2022 ABC: ATTENTION WITH BOUNDED-MEMORY CONTROL Hao Peng♠ Jungo Kasai♠ Nikolaos Pappas♠ Dani Yogatama♣ Zhaofeng Wu♦∗ Lingpeng Kong♦ Roy Schwartz …","url":["https://arxiv.org/pdf/2110.02488"]} -{"year":"2021","title":"Abuse is Contextual, What about NLP? The Role of Context in Abusive Language Annotation and Detection","authors":["S Menini, AP Aprosio, S Tonelli - arXiv preprint arXiv:2103.14916, 2021"],"snippet":"… In particular, all vectors are extracted starting from the pre-trained embeddings obtained from the Common Crawl corpus.5 Since SVM takes in input sentence embeddings, we convert the context and the current tweet …","url":["https://arxiv.org/pdf/2103.14916"]} -{"year":"2021","title":"Accelerated execution via eager-release of dependencies in task-based workflows","authors":["H Elshazly, F Lordan, J Ejarque, RM Badia - The International Journal of High …, 2021"],"snippet":"Task-based programming models offer a flexible way to express the unstructured parallelism patterns of nowadays complex applications. 
This expressive capability is required to achieve maximum possi...","url":["https://journals.sagepub.com/doi/abs/10.1177/1094342021997558"]} -{"year":"2021","title":"Accelerating Text Communication via Abbreviated Sentence Input","authors":["J Adhikary, J Berger, K Vertanen"],"snippet":"… For our out-of-domain training set, we used one billion words of web text from Common Crawl1. We only … As shown in Table 1, random sentences from Common Crawl averaged 30 words. The cross-entropy 1https://commoncrawl …","url":["https://aclanthology.org/2021.acl-long.514.pdf"]} -{"year":"2021","title":"Accurate Word Representations with Universal Visual Guidance","authors":["Z Zhang, H Yu, H Zhao, R Wang, M Utiyama - arXiv preprint arXiv:2012.15086, 2020"],"snippet":"… WMT'14 EN-DE 4.43M bilingual sentence pairs of the WMT14 dataset were used as training data, including Common Crawl, News Commentary, and Europarl v7. The newstest2013 and newstest2014 datasets were …","url":["https://arxiv.org/pdf/2012.15086"]} -{"year":"2021","title":"Acquiring and Harnessing Verb Knowledge for Multilingual Natural Language Processing","authors":["O Majewska - 2021"],"snippet":"Advances in representation learning have enabled natural language processing models to derive non-negligible linguistic information directly from text corpora in an unsupervised fashion. However, this signal is underused in downstream tasks …","url":["https://www.repository.cam.ac.uk/bitstream/handle/1810/329292/Majewska_PhDThesis_final.pdf?sequence=4"]} -{"year":"2021","title":"Active Learning for Argument Mining: A Practical Approach","authors":["N Solmsdorf, D Trautmann, H Schütze - arXiv preprint arXiv:2109.13611, 2021"],"snippet":"… easily discernible argumentative statements. The corpus contains 1,000 sentences per topic, ie, in total 8,000 instances, which were tapped from a Common Crawl snapshot and in- dexed with Elasticsearch. 
The time-consuming …","url":["https://arxiv.org/pdf/2109.13611"]} -{"year":"2021","title":"Adapting Neural Machine Translation for Automatic Post-Editing","authors":["A Sharma, P Gupta, A Nelakanti"],"snippet":"… reference as the output. 3.2 Pre-training on domain-specific data FAIR's WMT'19 NMT model was trained on Newscrawl and Commoncrawl datasets while the source of this year's APE data is Wikipedia. To fix the domain mismatch …","url":["https://assets.amazon.science/dc/df/5443c00541a9b6257f6110c5bb86/adapting-neural-machine-translation-for-automatic-post-editing.pdf"]} -{"year":"2021","title":"Adaptive Ranking Relevant Source Files for Bug Reports Using Genetic Algorithm","authors":["H Fujita, H Perez-Meana - 2021"],"snippet":"Abstract. Precisely locating buggy files for a given bug report is a cumbersome and time-consuming task, particularly in a large-scale project with thousands of source files and bug reports. An efficient bug localization module is desirable to improve the …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=GYxJEAAAQBAJ&oi=fnd&pg=PA430&dq=commoncrawl&ots=IRy65Bdasc&sig=WyEQAu3sL155gzwLW6pjIX4mCwQ"]} -{"year":"2021","title":"ADEPT: An Adjective-Dependent Plausibility Task","authors":["A Emami, I Porada, A Olteanu, K Suleman, A Trischler…"],"snippet":"… 4 Dataset To construct ADEPT, we scrape text samples from English Wikipedia and Common Crawl, extracting adjectival modifier-noun pairs that occur with high frequency … We extracted 10 million pairs from English …","url":["https://aclanthology.org/2021.acl-long.553.pdf"]} -{"year":"2021","title":"ADPBC: Arabic Dependency Parsing Based Corpora for Information Extraction","authors":["S Mohamed, M Hussien, HM Mousa - 2021"],"snippet":"… Sch. Econ. Res. Pap. No. WP BRP., 2018. [27] A. Panchenko, E. Ruppert, S. Faralli, SP Ponzetto, and C. Biemann, “Building a web-scale dependency-parsed corpus from common crawl,” Lr. 2018 - 11th Int. Conf. Lang. Resour. Eval., pp. 
1816–1823, 2019 …","url":["http://www.mecs-press.org/ijitcs/ijitcs-v13-n1/IJITCS-V13-N1-4.pdf"]} -{"year":"2021","title":"Advances and Trends in Artificial Intelligence. From Theory to Practice: 34th International Conference on Industrial, Engineering and Other Applications of Applied …","authors":["H Fujita"],"snippet":"Page 1. Hamido Fujita Ali Selamat Jerry Chun-Wei Lin Moonis Ali (Eds.) Advances and Trends in Artificial Intelligence From Theory to Practice 34th International Conference on Industrial, Engineering and Other Applications …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=ihg5EAAAQBAJ&oi=fnd&pg=PR5&dq=commoncrawl&ots=fSoeFLOj--&sig=NR8xQHWDWjlGUhvSgV52wEaaT6Y"]} -{"year":"2021","title":"Aggressive and Offensive Language Identification in Hindi, Bangla, and English: A Comparative Study","authors":["R Kumar, B Lahiri, AK Ojha - SN Computer Science, 2021"],"snippet":"In the present paper, we carry out a comparative study between offensive and aggressive language and attempt to understand their inter-relationship. To car.","url":["https://link.springer.com/article/10.1007/s42979-020-00414-6"]} -{"year":"2021","title":"AlephBERT: A Hebrew Large Pre-Trained Language Model to Start-off your Hebrew NLP Application With","authors":["A Seker, E Bandel, D Bareket, I Brusilovsky… - arXiv preprint arXiv …, 2021"],"snippet":"… Oscar: A deduplicated Hebrew portion of the OSCAR corpus, which is “extracted from Common Crawl via language classification, filtering and cleaning” (Ortiz Suárez et al., 2020). • Twitter: Texts of Hebrew tweets collected between 2014-09-28 and 2018-03-07 …","url":["https://arxiv.org/pdf/2104.04052"]} -{"year":"2021","title":"Alignment of Language Agents","authors":["Z Kenton, T Everitt, L Weidinger, I Gabriel, V Mikulik… - arXiv preprint arXiv …, 2021","ZKTEL Weidinger, IGVMG Irving"],"snippet":"… Large scale unlabeled datasets are collected from the web, such as the CommonCrawl dataset (Raffel et al., 2019). 
Input data and labels are created by chopping a sentence into … Brown et al. (2020) attempt to im- prove …","url":["https://ar5iv.labs.arxiv.org/html/2103.14659","https://arxiv.org/pdf/2103.14659"]} -{"year":"2021","title":"All Labels Are Not Created Equal: Enhancing Semi-supervision via Label Grouping and Co-training","authors":["I Nassar, S Herath, E Abbasnejad, W Buntine, G Haffari - arXiv preprint arXiv …, 2021"],"snippet":"… A detailed description of such relations and examples thereof can be found in the ConceptNet documentation8. On the other hand, GloVe and word2vec are two prominent sets of word embeddings, the former is trained on 840 …","url":["https://arxiv.org/pdf/2104.05248"]} -{"year":"2021","title":"All NLP Tasks Are Generation Tasks: A General Pretraining Framework","authors":["Z Du, Y Qian, X Liu, M Ding, J Qiu, Z Yang, J Tang - arXiv preprint arXiv:2103.10360, 2021"],"snippet":"Page 1. All NLP Tasks Are Generation Tasks: A General Pretraining Framework Zhengxiao Du *12 Yujie Qian * 3 Xiao Liu 1 2 Ming Ding 1 2 Jiezhong Qiu 1 2 Zhilin Yang 4 2 Jie Tang 1 2 Abstract There have been various types …","url":["https://arxiv.org/pdf/2103.10360"]} -{"year":"2021","title":"Allocating Large Vocabulary Capacity for Cross-lingual Language Model Pre-training","authors":["B Zheng, L Dong, S Huang, S Singhal, W Che, T Liu… - arXiv preprint arXiv …, 2021"],"snippet":"… are learned on the reconstructed CommonCrawl corpus (Chi et al., 2021b; Conneau et al., 2020) using SentencePiece (Kudo and Richardson, 2018) with the unigram language model (Kudo, 2018). The unigram distributions …","url":["https://arxiv.org/pdf/2109.07306"]} -{"year":"2021","title":"ALX: Large Scale Matrix Factorization on TPUs","authors":["H Mehta, S Rendle, W Krichene, L Zhang - arXiv preprint arXiv:2112.02194, 2021"],"snippet":"We present ALX, an open-source library for distributed matrix factorization using Alternating Least Squares, written in JAX. 
Our design allows for efficient use of the TPU architecture and scales well to matrix factorization problems of O(B) rows/columns …","url":["https://arxiv.org/pdf/2112.02194"]} -{"year":"2021","title":"AMMUS: A Survey of Transformer-based Pretrained Models in Natural Language Processing","authors":["KS Kalyan, A Rajasekharan, S Sangeetha - arXiv preprint arXiv:2108.05542, 2021"],"snippet":"… mT6 [91], XLM-E [89] CC-Aligned [108] Parallel corpus of 292 million non-English common crawl document pairs and 100 million English common crawl document pairs. XLM-E [89] Dakshina [109] Parallel corpus containing 10K sentences for 12 In- dian languages …","url":["https://arxiv.org/pdf/2108.05542"]} -{"year":"2021","title":"An Alignment-Based Approach to Semi-Supervised Bilingual Lexicon Induction with Small Parallel Corpora","authors":["K Marchisio, C Xiong, P Koehn"],"snippet":"… learning. 6 Experimental Settings Language Corpus # of words English WaCky, BNC, Wikipedia 2.8 B Italian itWac 1.6 B German SdeWaC 0.9 B Spanish News Crawl 2007-2012 386 M Finnish Common Crawl 2016 2.8 B Table …","url":["https://aclanthology.org/2021.mtsummit-research.24.pdf"]} -{"year":"2021","title":"An analysis of full-size Russian complexly NER labelled corpus of Internet user reviews on the drugs based on deep learning and language neuron nets","authors":["AG Sboeva, SG Sboevac, IA Moloshnikova…"],"snippet":"Page 1. An analysis of full-size Russian complexly NER labelled corpus of Internet user reviews on the drugs based on deep learning and language neuron nets AG Sboeva,b,, SG Sboevac, IA Moloshnikova, AV Gryaznova, RB …","url":["https://sagteam.ru/papers/med-corpus/4.pdf"]} -{"year":"2021","title":"An Effective Deep Learning Approach for Extractive Text Summarization","authors":["MT Luu, TH Le, MT Hoang"],"snippet":"Page 1. An Effective Deep Learning Approach for Extractive Text Summarization Minh-Tuan Luu PhD. 
Student, School of Information and Communication Technology, Hanoi University of Science and Technology, No.1 Dai Co …","url":["http://www.ijcse.com/docs/INDJCSE21-12-02-141.pdf"]} -{"year":"2021","title":"An embedding method for unseen words considering contextual information and morphological information","authors":["MS Won, YS Choi, S Kim, CW Na, JH Lee - Proceedings of the 36th Annual ACM …, 2021"],"snippet":"… Random embeddings are assigned for OOv words. Glove is implemented by using pre-trained 300-dimensional Glove embedding, which is trained on Common Crawl with 840B word tokens. Random embeddings are assigned for OOv words …","url":["https://dl.acm.org/doi/abs/10.1145/3412841.3441982"]} -{"year":"2021","title":"An empirical evaluation of text representation schemes to filter the social media stream","authors":["S Modha, P Majumder, T Mandl - Journal of Experimental & Theoretical Artificial …, 2021"],"snippet":"… Glove pre-trained model available with different embed size and trained on the common crawl, Twitter. We have used the Glove pre-trained model with a vocabulary size of 2.2 million and trained on the common crawl. fastText …","url":["https://www.tandfonline.com/doi/full/10.1080/0952813X.2021.1907792"]} -{"year":"2021","title":"An Empirical Exploration in Quality Filtering of Text Data","authors":["L Gao - arXiv preprint arXiv:2109.00698, 2021"],"snippet":"… (2020), with a Paretodistribution thresholded filtering method and a shallow CommonCrawl-WebText classfier … (2020) has been made public, we instead use the same type of fasttext (Joulin et al., 2017) classifier between unfiltered …","url":["https://arxiv.org/pdf/2109.00698"]} -{"year":"2021","title":"An Empirical Study on Task-Oriented Dialogue Translation","authors":["S Liu - ICASSP 2021-2021 IEEE International Conference on …, 2021"],"snippet":"… consistent). We valid them with SENT-BASE model on En⇒De task. data in WMT20 news domain, which consists of CommonCrawl and NewsCommentary.
We conduct data selection to select similar amount of sentences …","url":["https://ieeexplore.ieee.org/abstract/document/9413521/"]} -{"year":"2021","title":"An End-to-end Point of Interest (POI) Conflation Framework","authors":["R Low, ZD Tekler, L Cheah - arXiv preprint arXiv:2109.06073, 2021"],"snippet":"… words that did not appear in the training data [63]. For this study, the fastText model was pre-trained on 2 million word vectors with subword information from commoncrawl.org. The second advantage of using the fastText library to …","url":["https://arxiv.org/pdf/2109.06073"]} -{"year":"2021","title":"An evaluation dataset for depression detection in Arabic social media","authors":["S Elimam, M Bougeussa - International Journal of Knowledge Engineering and …, 2021"],"snippet":"Studying depression in Arabic social media has been neglected compared to other languages and the traditional way of dealing with depression (face-to-face medical diagnose) is not enough as the number of people that suffer from depression in …","url":["https://www.inderscienceonline.com/doi/abs/10.1504/IJKEDM.2021.119888"]} -{"year":"2021","title":"An Explainable Multi-Modal Hierarchical Attention Model for Developing Phishing Threat Intelligence","authors":["Y Chai, Y Zhou, W Li, Y Jiang - IEEE Transactions on Dependable and Secure …, 2021"],"snippet":"Phishing website attack, as one of the most persistent forms of cyber threats, evolves and remains a major cyber threat. Various detection methods (eg, lookup systems, fraud cue-based methods) have been proposed to identify phishing websites. The …","url":["https://ieeexplore.ieee.org/abstract/document/9568704/"]} -{"year":"2021","title":"An Exploration of Alignment Concepts to Bridge the Gap between Phrase-based and Neural Machine Translation","authors":["JT Peter"],"snippet":"Page 1. 
An Exploration of Alignment Concepts to Bridge the Gap between Phrase-based and Neural Machine Translation Von der Fakultät für Mathematik, Informatik und Naturwissenschaften der RWTH Aachen University zur …","url":["https://www-i6.informatik.rwth-aachen.de/publications/download/1175/PeterJan-Thorsten--ExplorationofAlignmentConceptstoBridgetheGapbetweenPhrase-basedNeuralMachineTranslation--2020.pdf"]} -{"year":"2021","title":"An Exploratory Analysis of Multilingual Word-Level Quality Estimation with Cross-Lingual Transformers","authors":["T Ranasinghe, C Orasan, R Mitkov - arXiv preprint arXiv:2106.00143, 2021"],"snippet":"… Our architecture relies on the XLM-R transformer model (Conneau et al., 2020) to derive the representations of the input sentences. XLM-R has been trained on a large-scale multilingual dataset in 104 languages, totalling …","url":["https://arxiv.org/pdf/2106.00143"]} -{"year":"2021","title":"An Exploratory Study on Utilising the Web of Linked Data for Product Data Mining","authors":["Z Zhang, X Song - arXiv preprint arXiv:2109.01411, 2021"],"snippet":"… The Web Data Commons3 (WDC) project extracts such structured data from the CommonCrawl4 as RDF n-quads5, and release them on … to create a very large training dataset for product entity linking using semantic markup data …","url":["https://arxiv.org/pdf/2109.01411"]} -{"year":"2021","title":"An extended analysis of the persistence of persistent identifiers of the scholarly web","authors":["M Klein, L Balakireva - International Journal on Digital Libraries, 2021"],"snippet":"… These findings were confirmed in a large-scale study by Thompson and Jian [22] based on two samples of the web taken from Common Crawl Footnote 3 datasets. 
The authors were motivated to quantify the use of HTTP DOIs versus URLs of …","url":["https://link.springer.com/article/10.1007/s00799-021-00315-w"]} -{"year":"2021","title":"An Intrinsic and Extrinsic Evaluation of Learned COVID-19 Concepts using Open-Source Word Embedding Sources","authors":["S Parikh, A Davoudi, S Yu, C Giraldo, E Schriver… - medRxiv"],"snippet":"… 8] and GloVe [9] on large corpora of texts including domain-independent texts (eg, internet web pages like Wikipedia and CommonCrawl; social media … Standard GloVe Embeddings Paper Vectors [9] Common Crawl Token 10 …","url":["https://www.medrxiv.org/content/medrxiv/early/2021/01/04/2020.12.29.20249005.full.pdf"]} -{"year":"2021","title":"An Investigation towards Differentially Private Sequence Tagging in a Federated Framework","authors":["A Jana, C Biemann"],"snippet":"… 2The hyperparameter settings to train those models are as follows: epochs- 10, batch size - 32, learning rate - 0.15, optimizer - Stochastic gradient descent (SGD) Common Crawl corpus) from spaCy library3, the dimension of which is 300 …","url":["https://www.inf.uni-hamburg.de/en/inst/ab/lt/publications/2021-janabiemann-privnlp-fed.pdf"]} -{"year":"2021","title":"An Overview on Evaluation Labs and Open Issues in Health-related Credible Information Retrieval","authors":["R Upadhyay, G Pasi, M Viviani - 2021"],"snippet":"… The 2020 Track used a dataset provided by Common Crawl, in particular related to different news collected in the first four months of 2020.4 On … 2https://trec-health-misinfo.github.io/2019.html 3https://lemurproject.org …","url":["http://52.178.216.184/paper31.pdf"]} -{"year":"2021","title":"Analysis and Evaluation of Language Models for Word Sense Disambiguation","authors":["D Loureiro, K Rezaee, MT Pilehvar… - Computational Linguistics, 2021"],"snippet":"Page 1. 
Analysis and Evaluation of Language Models for Word Sense Disambiguation Daniel Loureiro∗ LIAAD - INESC TEC Department of Computer Science - FCUP University of Porto, Portugal dloureiro@fc.up.pt Kiamehr …","url":["https://direct.mit.edu/coli/article-pdf/doi/10.1162/coli_a_00405/1900170/coli_a_00405.pdf"]} -{"year":"2021","title":"Analysis of Machine Learning and Deep Learning Frameworks for Opinion Mining on Drug Reviews","authors":["F Youbi, N Settouti - The Computer Journal, 2021"],"snippet":"… More precisely, GloVe consists of collecting word co- occurrence statistics in a form of a word co-occurrence matrix, in which its developers have provided pre-embed millions of English tokens obtained from Wikipedia data and common crawl data …","url":["https://academic.oup.com/comjnl/advance-article-abstract/doi/10.1093/comjnl/bxab084/6311550"]} -{"year":"2021","title":"Analyzing Hyperonyms of Stack Overflow Posts","authors":["L Tóth, L Vidács"],"snippet":"… They applied a similar lexico-syntactic pattern-based mining on the dataset obtained from CommonCrawl [17] using a slightly different grammar for NP identification and, therefore, a slightly different set of patterns. Despite the differences …","url":["https://www.researchgate.net/profile/Laszlo-Toth-12/publication/356192289_Analyzing_Hyperonyms_of_Stack_Overflow_Posts/links/61910421d7d1af224bea68e9/Analyzing-Hyperonyms-of-Stack-Overflow-Posts.pdf"]} -{"year":"2021","title":"Analyzing Multimodal Language via Acoustic-and Visual-LSTM with Channel-aware Temporal Convolution Network","authors":["S Mai, S Xing, H Hu - IEEE/ACM Transactions on Audio, Speech, and …, 2021"],"snippet":"Page 1. 2329-9290 (c) 2021 IEEE. Personal use is permitted, but republication/ redistribution requires IEEE permission. See http://www.ieee.org/ publications_standards/publications/rights/index.html for more information. 
This …","url":["https://ieeexplore.ieee.org/abstract/document/9387606/"]} -{"year":"2021","title":"Analyzing the Forgetting Problem in Pretrain-Finetuning of Open-domain Dialogue Response Models","authors":["T He, J Liu, K Cho, M Ott, B Liu, J Glass, F Peng - … of the 16th Conference of the …, 2021"],"snippet":"… 4.1 Datasets For pretraining, we use the large-scale CCNEWS data (Bakhtin et al., 2019) which is a de-duplicated subset of the English portion of the CommonCrawl news dataset1 … We tune the 1 http://commoncrawl.org/2016/10/ news-dataset-available Page 5. 1125 …","url":["https://www.aclweb.org/anthology/2021.eacl-main.95.pdf"]} -{"year":"2021","title":"Analyzing transfer learning impact in biomedical cross-lingual named entity recognition and normalization","authors":["RM Rivera-Zavala, P Martínez - BMC Bioinformatics, 2021"],"snippet":"… The FastText-2M [52] pre-trained English word embedding model trained with subword information on Common Crawl using the FastText implementation. Finally, the PubMed and PMC [53] pre-trained English word embedding model, trained on a …","url":["https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-021-04247-9"]} -{"year":"2021","title":"Annotation of Fine-Grained Geographical Entities in German Texts","authors":["J Moreno-Schneider, M Plakidis, G Rehm - 3rd Conference on Language, Data and …, 2021"],"snippet":"… The SpaCy models are trained on Ontonotes 5 and Common Crawl (English; en_core_web_md) and WikiNER and TIGER (German; de_core_news_md). The Stanford models are trained on the CoNLL 2003 data [18]. BERT-NER is trained on WikiNER [11] …","url":["https://drops.dagstuhl.de/opus/volltexte/2021/14547/pdf/OASIcs-LDK-2021-11.pdf"]} -{"year":"2021","title":"Answering questions about insurance supervision with a Neural Machine Translator","authors":["J Glowienke - 2021"],"snippet":"… Conneau et al. [3] pre-train a model based on the RoBERTa architecture to create crosslingual representations. 
The XLM-R model is pre-trained on a common crawl dataset of 100 languages using the masked multi-lingual language model ap- proach …","url":["https://dke.maastrichtuniversity.nl/jan.niehues/wp-content/uploads/2021/08/Glowienke-Master-thesis.pdf"]} -{"year":"2021","title":"Anticipating Attention: On the Predictability of News Headline Tests","authors":["N Hagar, N Diakopoulos, B DeWilde - Digital Journalism, 2021"],"snippet":"… These embeddings contain 300 dimensions and were trained on English language text from the OntoNotes 5.0 and GloVe Common Crawl corpora. For each headline, we computed the average embedding vector across all tokens. …","url":["https://www.tandfonline.com/doi/abs/10.1080/21670811.2021.1984266"]} -{"year":"2021","title":"Applying and Understanding an Advanced, Novel Deep Learning Approach: A Covid 19, Text Based, Emotions Analysis Study","authors":["J Choudrie, S Patil, K Kotecha, N Matta, I Pappas - Information Systems Frontiers, 2021"],"snippet":"The pandemic COVID 19 has altered individuals' daily lives across the globe. It has led to preventive measures such as physical distancing to be impo.","url":["https://link.springer.com/article/10.1007/s10796-021-10152-6"]} -{"year":"2021","title":"Applying Deep Learning Techniques for Sentiment Analysis to Assess Sustainable Transport","authors":["A Serna Nocedal, A Soroa Echave, R Agerri Gascón - 2021","A Serna, A Soroa, R Agerri - Sustainability, 2021"],"snippet":"… Thus, the multilingual version of BERT [25] was trained for 104 languages. More recently, XLM-RoBERTa [21] distributes a multilingual model which contains 100 languages trained on 2.5 TB of filtered Common Crawl text. To …","url":["https://addi.ehu.eus/bitstream/handle/10810/50497/sustainability-13-02397-v2.pdf?sequence=1&isAllowed=y","https://www.mdpi.com/2071-1050/13/4/2397/pdf"]} -{"year":"2021","title":"Applying Deep Learning Techniques for Sentiment Analysis to Assess Sustainable Transport. 
Sustainability 2021, 13, 2397","authors":["A Serna, A Soroa, R Agerri - 2021"],"snippet":"… Thus, the multilingual version of BERT [25] was trained for 104 languages. More recently, XLM-RoBERTa [21] distributes a multilingual model which contains 100 languages trained on 2.5 TB of filtered Common Crawl text. To …","url":["https://search.proquest.com/openview/b1ea0637935ea567d5fd68853527c980/1?pq-origsite=gscholar&cbl=2032327"]} -{"year":"2021","title":"AR-LSAT: Investigating Analytical Reasoning of Text","authors":["W Zhong, S Wang, D Tang, Z Xu, D Guo, J Wang, J Yin… - arXiv preprint arXiv …, 2021"],"snippet":"Page 1. AR-LSAT: Investigating Analytical Reasoning of Text Wanjun Zhong1∗, Siyuan Wang3∗, Duyu Tang2, Zenan Xu1∗, Daya Guo1∗ Jiahai Wang1, Jian Yin1, Ming Zhou4 and Nan Duan2 1 The School of Data and Computer Science, Sun Yat-sen University …","url":["https://arxiv.org/pdf/2104.06598"]} -{"year":"2021","title":"Arabic Offensive Language Detection in Social Media","authors":["F Husain - 2021"],"snippet":"Page 1. ARABIC OFFENSIVE LANGUAGE DETECTION IN SOCIAL MEDIA by Fatemah Ali Husain A Dissertation Submitted to the Graduate Faculty of George Mason University in Partial Fulfillment of The Requirements for the …","url":["https://search.proquest.com/openview/aefe47a620c621b1c7ed7f95196cf6ba/1?pq-origsite=gscholar&cbl=18750&diss=y"]} -{"year":"2021","title":"AraCOVID19-SSD: Arabic COVID-19 Sentiment and Sarcasm Detection Dataset","authors":["MS Hadj Ameur - Revue de l'Information Scientifique et Technique, 2023","MSH Ameur, H Aliane - arXiv preprint arXiv:2110.01948, 2021"],"snippet":"… Multilingual BERT (mBERT)6: A BERT-based model [17] pretrained on the first 104 major Wikipedia languages7. • XLM-Roberta 8: A large multi-lingual language model, trained on 2.5TB of filtered Common Crawl data [19]. 
4.2.2 Bag-of-Words Models …","url":["https://arxiv.org/pdf/2110.01948","https://www.asjp.cerist.dz/index.php/en/downArticle/134/27/1/220363"]} -{"year":"2021","title":"AraStance: A Multi-Country and Multi-Domain Dataset of Arabic Stance Detection for Fact Checking","authors":["T Alhindi, A Alabdulkarim, A Alshehri, M Abdul-Mageed… - arXiv preprint arXiv …, 2021"],"snippet":"… AraStance and Khoja. This indicates the suitability of the pretraining data of ARBERT that includes Books, Gi- gawords and Common Crawl data primarily from MSA but also a small amount of Egyptian Arabic. Since half of …","url":["https://arxiv.org/pdf/2104.13559"]} -{"year":"2021","title":"ARBERT & MARBERT: Deep Bidirectional Transformers for Arabic","authors":["M Abdul-Mageed, AR Elmadany, EMB Nagoudi - arXiv preprint arXiv:2101.01785, 2020"],"snippet":"… mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020) use a small Arabic text collection from Wikipedia (153M tokens) and CommonCrawl (2.9B … XLM-R (Conneau et al., 2020) is trained on Common Crawl data, hence …","url":["https://arxiv.org/pdf/2101.01785"]} -{"year":"2021","title":"Are Multilingual Models Effective in Code-Switching?","authors":["GI Winata, S Cahyawijaya, Z Liu, Z Lin, A Madotto… - arXiv preprint arXiv …, 2021"],"snippet":"… switching tasks. 2.2.2 XLM-RoBERTa XLM-RoBERTa (XLM-R) (Conneau et al., 2020) is a multilingual language model that is pre-trained on 100 languages using more than two terabytes of filtered CommonCrawl data. Thanks to …","url":["https://arxiv.org/pdf/2103.13309"]} -{"year":"2021","title":"Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? 
A Comprehensive Assessment for Catalan","authors":["J Armengol-Estapé, CP Carrino, C Rodriguez-Penagos… - arXiv preprint arXiv …, 2021"],"snippet":"… Catalan Government; (2) the Catalan Open Subtitles, a collection of translated movie subtitles (Tiedemann, 2012); (3) the non-shuffled version of the Catalan part of the OSCAR corpus (Suárez et al., 2019), a collection …","url":["https://arxiv.org/pdf/2107.07903"]} -{"year":"2021","title":"Are You Really Complaining? A Multi-task Framework for Complaint Identification, Emotion, and Sentiment Classification","authors":["A Singh, S Saha - International Conference on Document Analysis and …, 2021"],"snippet":"… For deep learning baseline (MT\\(_{\\mathrm{GloVe}}\\)) we also used pre-trained GloVe 13 [16] word embedding which is trained on Common Crawl (840 billion tokens) corpus to get the word embedding representations. 4.4 Results and Discussion …","url":["https://link.springer.com/chapter/10.1007/978-3-030-86331-9_46"]} -{"year":"2021","title":"ARMAN: Pre-training with Semantically Selecting and Reordering of Sentences for Persian Abstractive Summarization","authors":["A Salemi, E Kebriaei, GN Minaei, A Shakery - arXiv preprint arXiv:2109.04098, 2021"],"snippet":"… This corpus contains around 2.8M articles and 1.4B words in all of the articles. CC100 (Conneau et al., 2020; Wenzek et al., 2020) is a monolingual dataset for 100+ languages constructed from Commoncrawl snapshots. This …","url":["https://arxiv.org/pdf/2109.04098"]} -{"year":"2021","title":"As Easy as 1, 2, 3: Behavioural Testing of NMT Systems for Numerical Translation","authors":["J Wang, C Xu, F Guzman, A El-Kishky, BIP Rubinstein… - arXiv preprint arXiv …, 2021"],"snippet":"… Our testing framework facilitates constructing test instances for new domains in the following steps: 1. Obtain a large corpus of text that contains numbers (eg,CommonCrawl); 2. 
Check if there is a number in the output translation; …","url":["https://arxiv.org/pdf/2107.08357"]} -{"year":"2021","title":"Aspect-based Sentiment Analysis with Graph Convolution over Syntactic Dependencies","authors":["A Zunic, P Corcoran, I Spasic","A Žunić, P Corcoran, I Spasić - Artificial Intelligence in Medicine, 2021"],"snippet":"… individual sentences into dependency graphs. Individual words representing vertices in such graphs were mapped onto their embeddings, which were pretrained 160 on web data from Common Crawl using the GloVe method [31]. Each input …","url":["https://www.researchgate.net/profile/Irena-Spasic/publication/353775596_Aspect-based_Sentiment_Analysis_with_Graph_Convolution_over_Syntactic_Dependencies/links/61112fae169a1a0103ea3e67/Aspect-based-Sentiment-Analysis-with-Graph-Convolution-over-Syntactic-Dependencies.pdf","https://www.sciencedirect.com/science/article/pii/S0933365721001317"]} -{"year":"2021","title":"ASR4REAL: An extended benchmark for speech models","authors":["M Riviere, J Copet, G Synnaeve - arXiv preprint arXiv:2110.08583, 2021"],"snippet":"… even a language model trained on a dataset as big as Common Crawl does not seem to have significant positive effect which reiterates … For all of theses models we used the a 4-gram LM trained on Common Crawl with the decoding parameters …","url":["https://arxiv.org/pdf/2110.08583"]} -{"year":"2021","title":"Assessing reasoning and world knowledge of large language models using questionized counterfactual conditionals","authors":["J Frohberg, F Binder - 2021"],"snippet":"Page 1. 
Assessing reasoning and world knowledge of large language models using questionized counterfactual conditionals Jörg Frohberg apergo UG Leipzig, Germany j.frohberg@apergo.ai Frank Binder Institute for Applied …","url":["https://openreview.net/pdf?id=i9XYDrUJYyP"]} -{"year":"2021","title":"Assessing the Extent and Types of Hate Speech in Fringe Communities: A Case Study of Alt-Right Communities on 8chan, 4chan, and Reddit","authors":["D Rieger, AS Kümpel, M Wich, T Kiening, G Groh - Social Media+ Society, 2021"],"snippet":"… For this article, the fastText word vectors pre-trained on the English Common Crawl dataset were used because it is trained on web data and thus an appropriate basis (Mikolov, Grave, Bojanowski, Puhrsch, & Joulin, 2019). …","url":["https://journals.sagepub.com/doi/pdf/10.1177/20563051211052906"]} -{"year":"2021","title":"AStitchInLanguageModels: Dataset and Methods for the Exploration of Idiomaticity in Pre-Trained Language Models","authors":["HT Madabushi, E Gow-Smith, C Scarton… - arXiv preprint arXiv …, 2021"],"snippet":"Page 1. AStitchInLanguageModels: Dataset and Methods for the Exploration of Idiomaticity in Pre-Trained Language Models Harish Tayyar Madabushi, Edward Gow-Smith, Carolina Scarton and Aline Villavicencio Department …","url":["https://arxiv.org/pdf/2109.04413"]} -{"year":"2021","title":"Attention-based model for predicting question relatedness on Stack Overflow","authors":["J Pei, Z Qin, Y Cong, J Guan - arXiv preprint arXiv:2103.10763, 2021"],"snippet":"… released by Stanford [22]. This word embeddings pre-trained in the Common Crawl corpus, which contains a large amount of data irrelevant to software engineering, may lead to ambiguous results [18]. 
Therefore, we hope that …","url":["https://arxiv.org/pdf/2103.10763"]} -{"year":"2021","title":"Attention: there is an inconsistency between android permissions and application metadata!","authors":["H Alecakir, B Can, S Sen - International Journal of Information Security"],"snippet":"Since mobile applications make our lives easier, there is a large number of mobile applications customized for our needs in the application markets. While.","url":["https://link.springer.com/article/10.1007/s10207-020-00536-1"]} -{"year":"2021","title":"Attentive Excitation and Aggregation for Bilingual Referring Image Segmentation","authors":["Q Zhou, T Hui, R Wang, H Hu, S Liu - ACM Transactions on Intelligent Systems and …, 2021"],"snippet":"… For English expression, we use GloVe1 pretrained on Common Crawl to embed each word into a 300-d vector. For Chinese expression, existing tools … GloVe word embeddings [34] pretrained on Common Crawl 840B …","url":["https://dl.acm.org/doi/abs/10.1145/3446345"]} -{"year":"2021","title":"Augmenting Poetry Composition with Verse by Verse","authors":["D Uthus, M Voitovich, RJ Mical - arXiv preprint arXiv:2103.17205, 2021"],"snippet":"… TextSETTR was shown to yield better results in transforming sentiment while preserving fluency (important aspects for our work). As described in the TextSETTR paper, we use the model that had been fine-tuned on English Common Crawl data …","url":["https://arxiv.org/pdf/2103.17205"]} -{"year":"2021","title":"Augmenting semantic lexicons using word embeddings and transfer learning","authors":["T Alshaabi, C Van Oort, M Fudolig, MV Arnold… - arXiv preprint arXiv …, 2021"],"snippet":"… words. We then pass the token embeddings to a 300dimensional embedding layer. We initialize the embedding layer with weights trained with subword information on Common Crawl and Wikipedia using FastText [59]. 
In …","url":["https://arxiv.org/pdf/2109.09010"]} -{"year":"2021","title":"AUGVIC: Exploiting BiText Vicinity for Low-Resource NMT","authors":["T Mohiuddin, MS Bari, S Joty - arXiv preprint arXiv:2106.05141, 2021"],"snippet":"… localization guide, respectively. For some languages, the amount of specific domain monolingual data is limited, where we added additional monolingual data of that language from Common Crawl. Following previous work …","url":["https://arxiv.org/pdf/2106.05141"]} -{"year":"2021","title":"Authorship Weightage Algorithm for Academic publications: A new calculation and ACES webserver for determining expertise","authors":["WL Wu, O Tan, KF Chan, NB Ong, D Gunasegaran… - Methods and Protocols, 2021"],"snippet":"… the back-end server. These word vectors were trained on Common Crawl (https://commoncrawl.org (last accessed on 28 April 2021)) using fastText [17], and are used to map the processed query to its corresponding values …","url":["https://www.mdpi.com/2409-9279/4/2/41/pdf"]} -{"year":"2021","title":"Automated Change Detection in Privacy Policies","authors":["A Adhikari - 2020"],"snippet":"Page 1. 
University of Denver Digital Commons @ DU Electronic Theses and Dissertations Graduate Studies 2020 Automated Change Detection in Privacy Policies Andrick Adhikari Follow this and additional works at: https://digitalcommons.du.edu/etd …","url":["https://digitalcommons.du.edu/cgi/viewcontent.cgi?article=2702&context=etd"]} -{"year":"2021","title":"Automated essay scoring: A review of the field","authors":["P Lagakis, S Demetriadis - … International Conference on Computer, Information and …, 2021"],"snippet":"… Transformer models make use of those huge datasets of existing general text data, such as Wikipedia Corpus and Common Crawl, to pretrain multilayer neural networks with context-sensitive meaning of, and relations between, words, such as …","url":["https://ieeexplore.ieee.org/abstract/document/9618476/"]} -{"year":"2021","title":"Automated Grading of Exam Responses: An Extensive Classification Benchmark","authors":["A Farazouli, Z Lee, P Papapetrou, U Fors - … Science: 24th International Conference, DS 2021 …","J Ljungman, V Lislevand, J Pavlopoulos, A Farazouli… - International Conference on …, 2021"],"snippet":"… This method proves that training BERT with alternative design choices and with more data, including the CommonCrawl News dataset, … training XLM-R on one hundred languages using CommonCrawl data2, in contrast to previous works such …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=IydHEAAAQBAJ&oi=fnd&pg=PA3&dq=commoncrawl&ots=QIe2sENq0_&sig=LQ1NnDlylvNDV4-vNPAiGJEMZd4","https://link.springer.com/chapter/10.1007/978-3-030-88942-5_1"]} -{"year":"2021","title":"Automated identification of bias inducing words in news articles using linguistic and context-oriented features","authors":["T Spinde, L Rudnitckaia, J Mitrović, F Hamborg… - Information Processing & …, 2021"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. 
Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0306457321000157"]} -{"year":"2021","title":"Automated methods for Question-Answering in Icelandic","authors":["V Snæbjarnarson"],"snippet":"… The source of the data is the open internet, made accessible to those with relatively modest computing resources and disk storage through the targeted use of the Common Crawl datasets that comprise petabytes of data. Prior work has focused on the …","url":["https://vesteinn.is/thesis_150921.pdf"]} -{"year":"2021","title":"Automatic Detection of Fake","authors":["BM Bažík"],"snippet":"… For the data, they created the RealNews dataset, a large corpus of news articles from Common Crawl1. Fake News Detection Using Deep Learning Techniques [11] compared Logistic Regression (LR), Naive Bayes (NB) …","url":["https://is.muni.cz/th/hk1px/Martin_Bazik_master_thesis.pdf"]} -{"year":"2021","title":"Automatic Difficulty Classification of Arabic Sentences","authors":["N Khallaf, S Sharoff - arXiv preprint arXiv:2103.04386, 2021"],"snippet":"… corpus (Common Crawl and Wikipedia for ArabicBERT vs Common Crawl XML-R vs Wikipedia for BERT, AraBert and UCS) used to train the Arabic … The corpus will be classified on the ba- sis of how difficult the sentences are …","url":["https://arxiv.org/pdf/2103.04386"]} -{"year":"2021","title":"Automatic Fully-Contextualized Recommendation Extraction from Radiology Reports","authors":["J Steinkamp, C Chambers, D Lalevic, T Cook - Journal of Digital Imaging, 2021"],"snippet":"… We evaluated a simple long short-term memory (LSTM) architecture [12] on the task. We used a combination of custom-trained fastText vectors, trained on our institution's entire repository of radiology reports, with Global …","url":["https://link.springer.com/article/10.1007/s10278-021-00423-8"]} -{"year":"2021","title":"Automatic Generic Web Information Extraction at Scale","authors":["M Aljabary - 2021"],"snippet":"Page 1. 1 Page 2. 
2 Automatic Generic Web Information Extraction at Scale Master Thesis Computer Science, Data Science and Technology University of Twente. Enschede, The Netherlands An attempt to bring some structure …","url":["http://essay.utwente.nl/86153/1/Aljabary_MA_EEMCS.pdf"]} -{"year":"2021","title":"Automatic Sexism Detection with Multilingual Transformer Models","authors":["S Mina, B Jaqueline, L Daria, S Djordje, K Armin… - arXiv preprint arXiv …, 2021"],"snippet":"… XLM-R is a multilingual model trained on 100 languages, similar to mBERT. Unlike the latter, XLM-R is not trained on Wikipedia data but on monolingual CommonCrawl data. The model shows improved cross-lingual language …","url":["https://arxiv.org/pdf/2106.04908"]} -{"year":"2021","title":"Automatic Stress Detection from Facial Videos","authors":["EM de Oca - 2021"],"snippet":"… , leading to the development of pretrained systems such as BERT(Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer), which were trained with large language datasets, such as Wikipedia …","url":["https://www.eduardomontesdeoca.com/s/ADSS-Project.pdf"]} -{"year":"2021","title":"Automatically Detecting Cyberbullying Comments on Online Game Forums","authors":["HHP Vo, HT Tran, ST Luu - arXiv preprint arXiv:2106.01598, 2021"],"snippet":"… on Wikipedia and Gigaword corpora. The fastText 3 is trained on Common Crawl and Wikipedia datasets using CBOW with position-weight in 300 dimensions with 5-grams features. D. Traditional machine learning models Logistic …","url":["https://arxiv.org/pdf/2106.01598"]} -{"year":"2021","title":"Autonomous Writing Futures","authors":["AH Duin, I Pedersen - Writing Futures: Collaborative, Algorithmic …, 2021"],"snippet":"… GPT-3 is 175 billion parameters. GPT-3 is trained on the Common Crawl data set, a corpus of almost a trillion words of texts scraped from the Web. 
“The dataset and model size are about two orders of magnitude larger than those used for GPT-2,” the authors write …","url":["https://link.springer.com/chapter/10.1007/978-3-030-70928-0_4"]} -{"year":"2021","title":"Auxiliary Bi-Level Graph Representation for Cross-Modal Image-Text Retrieval","authors":["X Zhong, Z Yang, M Ye, W Huang, J Yuan, CW Lin - 2021 IEEE International …, 2021"],"snippet":"… The scene graph features Soi and Srij are transformed by a learnable embedding layer which is initialized by GloVe [18] pre-trained on the Common-Crawl dataset, and maps Ioi and Irij into a vector of same dimension: Soi = WoIoi , Srij = WrIrij , (1) …","url":["https://ieeexplore.ieee.org/abstract/document/9428380/"]} -{"year":"2021","title":"Auxiliary Learning for Relation Extraction","authors":["S Lyu, J Cheng, X Wu, L Cui, H Chen, C Miao - IEEE Transactions on Emerging …, 2020"],"snippet":"… 7https://catalog.ldc.upenn.edu/LDC2018T24 8http://semeval2.fbk.eu/semeval2. php?location=data 9Following previous work, we choose GloVe word vectors with 300 dimensions (Common Crawl) https://nlp.stanford.edu/projects/glove …","url":["https://ieeexplore.ieee.org/abstract/document/9296307/"]} -{"year":"2021","title":"Background Knowledge in Schema Matching: Strategy vs. Data","authors":["J Portisch, M Hladik, H Paulheim - arXiv preprint arXiv:2107.00001, 2021"],"snippet":"… used. WebIsALOD is a large hypernymy graph based on the WebIsA database [37]. The latter is a dataset which consists of hypernymy relations extracted from the Common Crawl, a large set of crawled Web pages. 
The extraction …","url":["https://arxiv.org/pdf/2107.00001"]} -{"year":"2021","title":"Bambara Language Dataset for Sentiment Analysis","authors":["M Diallo, C Fourati, H Haddad - arXiv preprint arXiv:2108.02524, 2021"],"snippet":"… In this paper, we present the first common-crawl-based Bambara dialectal dataset dedicated for Sentiment Analysis, available freely for Natural Language Processing research purposes … Bambara V1 dataset represents …","url":["https://arxiv.org/pdf/2108.02524"]} -{"year":"2021","title":"Bandits Don't Follow Rules: Balancing Multi-Facet Machine Translation with Multi-Armed Bandits","authors":["J Kreutzer, D Vilar, A Sokolov - arXiv preprint arXiv:2110.06997, 2021"],"snippet":"Training data for machine translation (MT) is often sourced from a multitude of large corpora that are multi-faceted in nature, eg containing contents from multiple domains or different levels of quality or complexity. Naturally, these facets do not …","url":["https://arxiv.org/pdf/2110.06997"]} -{"year":"2021","title":"BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese","authors":["NL Tran, DM Le, DQ Nguyen - arXiv preprint arXiv:2109.09701, 2021"],"snippet":"… rization. Here, mBART is pre-trained on a Common Crawl dataset of 25 languages, which contains 137 GB of syllablelevel Vietnamese texts. We employ the single-document summarization dataset VNDS (Nguyen et al. 2019 …","url":["https://arxiv.org/pdf/2109.09701"]} -{"year":"2021","title":"belabBERT: a Dutch RoBERTa-based language model applied to psychiatric classification","authors":["J Wouts, J de Boer, A Voppel, S Brederoo… - arXiv preprint arXiv …, 2021"],"snippet":"… 3.1.1. 
Pre-training For the pre-training of belabBERT we used the OSCAR corpus which consists of a set of monolingual corpora extracted from Common Crawl snapshots … belabBERT Common Crawl Dutch (non-shuffled) BytePairEncoding 95.92 ∗ …","url":["https://arxiv.org/pdf/2106.01091"]} -{"year":"2021","title":"Benchmarking Differential Privacy and Federated Learning for BERT Models","authors":["P Basu, TS Roy, R Naidu, Z Muftuoglu, S Singh… - arXiv preprint arXiv …, 2021"],"snippet":"… It uses 160 GB of text for pre-training, including 16GB of Books Corpus and English Wikipedia used in BERT. The additional data included CommonCrawl News dataset, Web text corpus and Stories from Common Crawl. For …","url":["https://arxiv.org/pdf/2106.13973"]} -{"year":"2021","title":"BERT, mBERT, or BiBERT? A Study on Contextualized Embeddings for Neural Machine Translation","authors":["H Xu, B Van Durme, K Murray - arXiv preprint arXiv:2109.04588, 2021"],"snippet":"… on 145G German text data portion of OSCAR (Or- tiz Suárez et al., 2020), a huge multilingual corpus extracted from Common Crawl … Vaswani et al., 2017) masked language model trained on 100 languages, using more than …","url":["https://arxiv.org/pdf/2109.04588"]} -{"year":"2021","title":"BERT: A Review of Applications in Natural Language Processing and Understanding","authors":["MV Koroteev - arXiv preprint arXiv:2103.11943, 2021"],"snippet":"Page 1. BERT: A Review of Applications in Natural Language Processing and Understanding Koroteev MV, Financial University under the government of the Russian Federation, Moscow, Russia mvkoroteev@fa.ru Abstract: In …","url":["https://arxiv.org/pdf/2103.11943"]} -{"year":"2021","title":"Bertinho: Galician BERT Representations","authors":["D Vilares, M Garcia, C Gómez-Rodríguez - arXiv preprint arXiv:2103.13799, 2021"],"snippet":"Page 1. 
Bertinho: Galician BERT Representations Bertinho: Representaciones BERT para el gallego David Vilares,1 Marcos Garcia,2 Carlos Gómez-Rodr´ıguez 1 1Universidade da Coru˜na, CITIC, Galicia, Spain 2CiTIUS, Universidade …","url":["https://arxiv.org/pdf/2103.13799"]} -{"year":"2021","title":"Better Neural Machine Translation by Extracting Linguistic Information from BERT","authors":["HS Shavarani, A Sarkar - arXiv preprint arXiv:2104.02831, 2021"],"snippet":"… clack, clack.”). 9Europarl+CommonCrawl+NewsCommentary https://www.statmt. org/wmt14/translation-task.html, please note that in the later years this training set remained the same, but ParaCrawl data was added to it. We …","url":["https://arxiv.org/pdf/2104.02831"]} -{"year":"2021","title":"Beyond Noise: Mitigating the Impact of Fine-grained Semantic Divergences on Neural Machine Translation","authors":["E Briakou, M Carpuat - arXiv preprint arXiv:2105.15087, 2021"],"snippet":"Page 1. Beyond Noise: Mitigating the Impact of Fine-grained Semantic Divergences on Neural Machine Translation Eleftheria Briakou and Marine Carpuat Department of Computer Science University of Maryland College Park …","url":["https://arxiv.org/pdf/2105.15087"]} -{"year":"2021","title":"Beyond the English Web: Zero-Shot Cross-Lingual and Lightweight Monolingual Classification of Registers","authors":["L Repo, V Skantsi, S Rönnqvist, S Hellström… - arXiv preprint arXiv …, 2021"],"snippet":"… FreCORE and SweCORE are random samples of the 2017 CoNLL datasets (Ginter et al., 2017) originally drawn from Common Crawl … XLM-R is trained on 2.5TB of filtered Common Crawl (Wenzek et al., 2020) data comprising …","url":["https://arxiv.org/pdf/2102.07396"]} -{"year":"2021","title":"Bias Silhouette Analysis: Towards Assessing the Quality of Bias Metrics for Word Embedding Models","authors":["M Spliethöver, H Wachsmuth"],"snippet":"… Word Embedding Models. 
As biased and unbiased models, we use GloVe CommonCrawl [Pennington et al., 2014] trained on 840 billion English tokens and the English ConceptNet Numberbatch 19.08 [Speer et al., 2017] (referred to as NBatch below), respectively …","url":["https://www.ijcai.org/proceedings/2021/0077.pdf"]} -{"year":"2021","title":"Bidirectional Language Modeling: A Systematic Literature Review","authors":["M Shah Jahan, HU Khan, S Akbar, M Umar Farooq… - Scientific Programming, 2021"],"snippet":"Page 1. Review Article Bidirectional Language Modeling: A Systematic Literature Review Muhammad Shah Jahan ,1 Habib Ullah Khan ,2 Shahzad Akbar ,3 Muhammad Umar Farooq ,1 Sarah Gul ,4 and Anam Amjad 1 1Department …","url":["https://www.hindawi.com/journals/sp/2021/6641832/"]} -{"year":"2021","title":"Bilingual Lexical Induction for Sinhala-English using Cross Lingual Embedding Spaces","authors":["A Liyanage, S Ranathunga, S Jayasena - 2021 Moratuwa Engineering Research …, 2021"],"snippet":"… Using pre-trained fastText embeddings trained on Wikipedia and Common crawl data using two different evaluation dictionaries as a preliminary experiment to identify the performance of embeddings created from non-comparable corpora …","url":["https://ieeexplore.ieee.org/abstract/document/9525667/"]} -{"year":"2021","title":"Bilingual Lexicon Induction via Unsupervised Bitext Construction and Word Alignment","authors":["H Shi, L Zettlemoyer, SI Wang - arXiv preprint arXiv:2101.00148"],"snippet":"Page 1. 
Bilingual Lexicon Induction via Unsupervised Bitext Construction and Word Alignment Haoyue Shi ∗ TTI-Chicago freda@ttic.edu Luke Zettlemoyer University of Washington Facebook AI Research lsz@fb.com …","url":["https://arxiv.org/pdf/2101.00148"]} -{"year":"2021","title":"BitextEdit: Automatic Bitext Editing for Improved Low-Resource Machine Translation","authors":["E Briakou, SI Wang, L Zettlemoyer, M Ghazvininejad - arXiv preprint arXiv …, 2021"],"snippet":"Mined bitexts can contain imperfect translations that yield unreliable training signals for Neural Machine Translation (NMT). While filtering such pairs out is known to improve final model quality, we argue that it is suboptimal in low-resource conditions …","url":["https://arxiv.org/pdf/2111.06787"]} -{"year":"2021","title":"Blank spots, critical information needs and local journalism funding","authors":["S Bisiani"],"snippet":"Abstract A global business model crisis in journalism, fuelled by loss in advertising revenue, challenges the survival of local news production. In Sweden, it has led to the closure of several newspapers across the country, and the concentration of …","url":["http://compscjournalism.org/projects/simona/projects/Master_Thesis_Simona_Bisiani.pdf"]} -{"year":"2021","title":"Book genre and author's gender recognition based on titles","authors":["A Pawłowski, E Herden, T Walkowiak - … and Text: Data, models, information and …, 2021"],"snippet":""} -{"year":"2021","title":"BOSS: Bandwidth-Optimized Search Accelerator for Storage-Class Memory","authors":["J Heo, SY Lee, S Min, Y Park, SJ Jung, TJ Ham…"],"snippet":"Page 1. BOSS: Bandwidth-Optimized Search Accelerator for Storage-Class Memory Jun Heo, Seung Yul Lee, Sunhong Min, Yeonhong Park, Sung Jun Jung, Tae Jun Ham, Jae W.
Lee Seoul National University {j.heo, triomphant1 …","url":["https://conferences.computer.org/iscapub/pdfs/ISCA2021-4ghucdBnCWYB7ES2Pe4YdT/333300a279/333300a279.pdf"]} -{"year":"2021","title":"Bottom-Up Shift and Reasoning for Referring Image Segmentation","authors":["S Yang, M Xia, G Li, HY Zhou, Y Yu - Proceedings of the IEEE/CVF Conference on …, 2021"],"snippet":"Page 1. Bottom-Up Shift and Reasoning for Referring Image Segmentation Sibei Yang1∗† Meng Xia2∗ Guanbin Li2 Hong-Yu Zhou3 Yizhou Yu3,4† 1ShanghaiTech University 2Sun Yat-sen University 3The University of Hong Kong 4Deepwise AI Lab Abstract …","url":["https://openaccess.thecvf.com/content/CVPR2021/papers/Yang_Bottom-Up_Shift_and_Reasoning_for_Referring_Image_Segmentation_CVPR_2021_paper.pdf"]} -{"year":"2021","title":"bradleypallen/keras-quora-question-pairs","authors":["ONDJF Mar, AMJJA Sep, OSMTW Thu, F Sat"],"snippet":"… Model, Source of Word Embeddings, Accuracy. \"BiMPM model\" [5], GloVe Common Crawl (840B tokens, 300D), 0.88 … \"Decomposable attention\" [6], \"Quora's text corpus\", 0.86. \"LDC\" [5], GloVe Common Crawl (840B tokens, 300D), 0.86 …","url":["https://giters.com/bradleypallen/keras-quora-question-pairs?amp=1"]} -{"year":"2021","title":"Building a File Observatory for Secure Parser Development","authors":["T Allison, W Burke, C Mattmann, A Mensikova…"],"snippet":"… 3196–3200. [Online]. Available: http://www.lrec-conf.org/proceedings/lrec2012/pdf/534 Paper.pdf [10] “Common Crawl,” https://commoncrawl.org. [11] P. Wyatt, “Stressful PDF corpus grows!” https://www.pdfa.org/ stressful-pdf-corpus-grows/, November 2020.","url":["https://langsec.org/spw21/papers/Allison_LangSec21.pdf"]} -{"year":"2021","title":"Building a Question and Answer System for News Domain","authors":["S Basu, A Gaddala, P Chetan, G Tiwari, N Darapaneni… - arXiv preprint arXiv …, 2021"],"snippet":"… We have used two approaches for building the Embedding Layers for the models 3. 
GloVe Embedding: we used the 300 Dimension Common Crawl for the English language 4. Universal Sentence Encoder: we used the 512 …","url":["https://arxiv.org/pdf/2105.05744"]} -{"year":"2021","title":"Building Accountable Natural Language Processing Models: on Social Bias Detection and Mitigation","authors":["J Zhao - 2021"],"snippet":"Natural Language Processing (NLP) plays an important role in many applications, including resume filtering, text analysis, and information retrieval. Despite the remarkable accuracy enabled by the advances of machine learning methods, recent …","url":["https://escholarship.org/content/qt0441n1tt/qt0441n1tt.pdf"]} -{"year":"2021","title":"But how robust is RoBERTa actually?: A Benchmark of SOTA Transformer Networks for Sexual Harassment Detection on Twitter","authors":["P Basu, TS Roy, A Singhal - 2021 Fifth International Conference on I-SMAC (IoT in …, 2021"],"snippet":"Harassment, which is of sexual/physical in nature, is defined as any unwanted sexual misconduct, including the unwarranted and ill-suited promise of benefit in exchange for sexual indulgence. It also includes a span of actions from verbal …","url":["https://ieeexplore.ieee.org/abstract/document/9640861/"]} -{"year":"2021","title":"Can I Take Your Subdomain? Exploring Same-Site Attacks in the Modern Web","authors":["MSMTL Veronese, SCM Maffei"],"snippet":"Page 1. Can I Take Your Subdomain? Exploring Same-Site Attacks in the Modern Web Marco Squarcina1 Mauro Tempesta1 Lorenzo Veronese1 Stefano Calzavara2 Matteo Maffei1 1 TU Wien 2 Università Ca' Foscari Venezia & OWASP …","url":["https://minimalblue.com/data/papers/USENIX21_can_i_take_your_subdomain.pdf"]} -{"year":"2021","title":"Can Language Models Encode Perceptual Structure Without Grounding? 
A Case Study in Color","authors":["M Abdou, A Kulmizev, D Hershcovich, S Frank… - arXiv preprint arXiv …, 2021"],"snippet":"… Word-type FastText embeddings trained on Common Crawl (Bojanowski et al., 2017) … These S contexts are either randomly sampled from common crawl (RC), or deterministically generated to allow for control over contextual variation (CC) …","url":["https://arxiv.org/pdf/2109.06129"]} -{"year":"2021","title":"Can Small and Synthetic Benchmarks Drive Modeling Innovation? A Retrospective Study of Question Answering Modeling Approaches","authors":["NF Liu, T Lee, R Jia, P Liang - arXiv preprint arXiv:2102.01065, 2021"],"snippet":"Page 1. Can Small and Synthetic Benchmarks Drive Modeling Innovation? A Retrospective Study of Question Answering Modeling Approaches Nelson F. Liu Tony Lee Robin Jia Percy Liang Computer Science Department, Stanford …","url":["https://arxiv.org/pdf/2102.01065"]} -{"year":"2021","title":"CausalBERT: Injecting Causal Knowledge Into Pre-trained Models with Minimal Supervision","authors":["Z Li, X Ding, K Liao, T Liu, B Qin - arXiv preprint arXiv:2107.09852, 2021"],"snippet":"… ambiguity and precise causal patterns to extract word level causeeffect pairs from the preprocessed English Common Crawl corpus (5.14 … (2016) for creating a causal lexical knowledge base, we reproduce a variant of their …","url":["https://arxiv.org/pdf/2107.09852"]} -{"year":"2021","title":"CCQA: A New Web-Scale Question Answering Dataset for Model Pre-Training","authors":["P Huber, A Aghajanyan, B Oğuz, D Okhonko, W Yih… - arXiv preprint arXiv …, 2021"],"snippet":"… Consequently, we propose a novel QA dataset based on the Common Crawl project in this paper. 
Using the readily available schema.org annotation, we extract around 130 million multilingual question-answer pairs, including about 60 million …","url":["https://arxiv.org/pdf/2110.07731"]} -{"year":"2021","title":"CDA: a Cost Efficient Content-based Multilingual Web Document Aligner","authors":["T Vu, AA AI, A Moschitti - 2021"],"snippet":"… CommonCrawl Sextet Previous datasets share the same domains that are heavily biased toward French content (see Table 3). We leverage a monthly crawl from CommonCrawl, specifically … Table 4: Parallel English tokens …","url":["https://assets.amazon.science/01/69/5f786b844c08a079eda7e6437c16/cda-a-cost-efficient-content-based-multilingual-web-document-aligner.pdf"]} -{"year":"2021","title":"Censorship of Online Encyclopedias: Implications for NLP Models","authors":["E Yang, ME Roberts - arXiv preprint arXiv:2101.09294, 2021"],"snippet":"… Word embeddings are also useful because they can be pre-trained on large corpuses of text like Wikipedia or Common Crawl, and these pre-trained embeddings can then be used as an initial layer in applications that may have less training data …","url":["https://arxiv.org/pdf/2101.09294"]} -{"year":"2021","title":"Challenges for cognitive decoding using deep learning methods","authors":["AW Thomas, C Ré, RA Poldrack - arXiv preprint arXiv:2108.06896, 2021"],"snippet":"… learning in 251 the target domain. Transfer learning has been especially successful in CV and NLP, where large 252 publicly available datasets exist (eg, [72,73] and http://www.commoncrawl.org). Here, DL 253 models are first …","url":["https://arxiv.org/pdf/2108.06896"]} -{"year":"2021","title":"Changing the World by Changing the Data","authors":["A Rogers - arXiv preprint arXiv:2105.13947, 2021"],"snippet":"… The use of uncontrolled samples (like the Common-Crawl-based corpora) would have to be justified by arguing either that the above types of bias can be safely ignored, or that the benefits outweigh the risks. 
2.2.3 Might not be the best approach …","url":["https://arxiv.org/pdf/2105.13947"]} -{"year":"2021","title":"Characterizing and addressing the issue of oversmoothing in neural autoregressive sequence modeling","authors":["I Kulikov, M Eremeev, K Cho - arXiv preprint arXiv:2112.08914, 2021"],"snippet":"… We use the subset of WMT’19 training set consisting of news commentary v12 and common crawl resulting in slightly more than 1M and 2M training sentence pairs for Ru→En and De↔En pairs, respectively. We fine-tuned single model checkpoints …","url":["https://arxiv.org/pdf/2112.08914"]} -{"year":"2021","title":"Characterizing Network Infrastructure Using the Domain Name System","authors":["P Kintis - 2020"],"snippet":"Page 1. CHARACTERIZING NETWORK INFRASTRUCTURE USING THE DOMAIN NAME SYSTEM A Dissertation Presented to The Academic Faculty By Panagiotis Kintis In Partial Fulfillment of the Requirements for the Degree …","url":["https://smartech.gatech.edu/bitstream/handle/1853/64165/KINTIS-DISSERTATION-2020.pdf"]} -{"year":"2021","title":"Charformer: Fast Character Transformers via Gradient-based Subword Tokenization","authors":["Y Tay, VQ Tran, S Ruder, J Gupta, HW Chung, D Bahri… - arXiv preprint arXiv …, 2021"],"snippet":"… In addition, we compare to the byte-level models from §3.1, which we pre-train on multilingual data. Setup We pre-train CHARFORMER as well as the Byte-level T5 and Byte-level T5+LASC baselines on multilingual …","url":["https://arxiv.org/pdf/2106.12672"]} -{"year":"2021","title":"ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information","authors":["Z Sun, X Li, X Sun, Y Meng, X Ao, Q He, F Wu, J Li - arXiv preprint arXiv:2106.16038, 2021"],"snippet":"… We collected our pretraining data from CommonCrawl5. 
After pre-processing (such as removing the data with too much … ERNIE BERT-wwm MacBERT ChineseBERT Data Source Heterogeneous Wikipedia Heterogeneous …","url":["https://arxiv.org/pdf/2106.16038"]} -{"year":"2021","title":"Claim Detection in Biomedical Twitter Posts","authors":["A Wührl, R Klinger - arXiv preprint arXiv:2104.11639, 2021"],"snippet":"… The first three models (NB, LG, BiLSTM) use 50-dimensional FastText (Bojanowski et al., 2017) embeddings trained on the Common Crawl corpus (600 billion tokens) as input6. NB. We use a (Gaussian) naive Bayes with …","url":["https://arxiv.org/pdf/2104.11639"]} -{"year":"2021","title":"Classification of Emotions Based on Text and Qualitative Variables","authors":["J Dobša, D Šebalj, D Bužić"],"snippet":"… Experiments were done both with Common Crawl GloVe pretrained vectors with the dimensionality of 300, and without pretrained vectors. Fifteen percent of learning samples were used for validation. We constructed six neural networks models: CNN …","url":["https://www.researchgate.net/profile/Jasminka-Dobsa/publication/355461190_Classification_of_Emotions_Based_on_Text_and_Qualitative_Variables/links/61716c97750da711ac647d77/Classification-of-Emotions-Based-on-Text-and-Qualitative-Variables.pdf"]} -{"year":"2021","title":"Classification of Horror Stories from Reddit","authors":["D Zhou, C Kim, S Gatiganti"],"snippet":"… We further hypothesize that performance would still increase a little if we used the larger pre-trained vectors such as Common Crawl or Twitter sets, but they come with increased download sizes (>1 GB) and increased training time …","url":["http://cs229.stanford.edu/proj2021spr/report2/82008167.pdf"]} -{"year":"2021","title":"Classification of Texts Using a Vocabulary of Synonyms","authors":["A Giliazova - 2021 14th International Conference Management of …, 2021"],"snippet":"… This is a Transformer-based masked language model trained on one hundred languages, including Russian language, using more than two 
terabytes of filtered CommonCrawl data. The XLM-R model significantly outperforms multilingual BERT (mBERT) …","url":["https://ieeexplore.ieee.org/abstract/document/9600131/"]} -{"year":"2021","title":"CLASSIFICATION OF TWEETS USING MULTIPLE THRESHOLDS WITH SELF-CORRECTION AND WEIGHTED CONDITIONAL PROBABILITIES","authors":["TN Ahmad - 2020"],"snippet":"Page 1. CLASSIFICATION OF TWEETS USING MULTIPLE THRESHOLDS WITH SELF-CORRECTION AND WEIGHTED CONDITIONAL PROBABILITIES A thesis submitted to The University of Manchester for the degree of Doctor of Philosophy …","url":["https://www.research.manchester.ac.uk/portal/files/188959099/FULL_TEXT.PDF"]} -{"year":"2021","title":"Classification-based Quality Estimation: Small and Efficient Models for Real-world Applications","authors":["S Sun, A El-Kishky, V Chaudhary, J Cross, F Guzmán… - arXiv preprint arXiv …, 2021"],"snippet":"… Current state of the art QE systems (Fomicheva et al., 2020b; Ranasinghe et al., 2020a; Sun et al., 2020). are built on XLM-R (Conneau et al., 2019), a contextualized language model pre-trained on more than 2 terabytes of …","url":["https://arxiv.org/pdf/2109.08627"]} -{"year":"2021","title":"Classifying Fake and Real Neurally Generated News","authors":["A Govindaraju, J Griffith - 2021 Swedish Workshop on Data Science (SweDS), 2021"],"snippet":"… In order to train and test the model, 3 datasets have been created: One containing real news extracted from a common crawl; the second comprises a neural fake news dataset generated using language modelling techniques; the third comprises a …","url":["https://ieeexplore.ieee.org/abstract/document/9638268/"]} -{"year":"2021","title":"CLEF eHealth Evaluation Lab 2021","authors":["L Kelly, LA Alemany, N Brew-Sam, V Cotik, D Filippo…"],"snippet":"… This collection consists of Web pages acquired from Common Crawl,14 which is augmented with additional pages collected from a number of known reliable health Websites and other known unreliable health Websites [9].
The topics …","url":["https://www.researchgate.net/profile/Marco-Viviani/publication/350569762_CLEF_eHealth_Evaluation_Lab_2021/links/6073f32e92851c8a7bbea835/CLEF-eHealth-Evaluation-Lab-2021.pdf"]}
-{"year":"2021","title":"Click This, Not That: Extending Web Authentication with Deception","authors":["T Barron, J So, N Nikiforakis - Proceedings of the 2021 ACM Asia Conference on …, 2021"],"snippet":"… after creation. References. 2020. Common Crawl. https://commoncrawl.org/the-data/ get-started/Google Scholar Google Scholar; 2020. Mouseflow: Session Replay, Heatmaps, Funnels, Forms & User Feedback. https://mouseflow …","url":["https://dl.acm.org/doi/abs/10.1145/3433210.3453088"]}
-{"year":"2021","title":"ClimateBert: A Pretrained Language Model for Climate-Related Text","authors":["N Webersinke, M Kraus, JA Bingler, M Leippold - arXiv preprint arXiv:2110.12010, 2021"],"snippet":"… 2019), and a subset of CommonCrawl that is said to resemble the storylike style of WINOGRAD schemas (Trinh and Le, 2019). While these sources are valuable to build a model working on general language, it has been shown that domain-specific …","url":["https://arxiv.org/pdf/2110.12010"]}
-{"year":"2021","title":"CLIP2StyleGAN: Unsupervised Extraction of StyleGAN Edit Directions","authors":["R Abdal, P Zhu, J Femiani, NJ Mitra, P Wonka - arXiv preprint arXiv:2112.05219, 2021"],"snippet":"… The CLIP image encoder [34] is trained on the common-crawl dataset, an internet-scale set of images that encompasses a broad range of visual concepts. However, a typical high-quality GAN would be trained on a more specific set of images, for example …","url":["https://arxiv.org/pdf/2112.05219"]}
-{"year":"2021","title":"Cluster analysis of agricultural household production of self-employed","authors":["AV Plotnikov - IOP Conference Series: Earth and Environmental …, 2021"],"snippet":"… To train this model, we used a sample of Russian-language documents from the CommonCrawl dump, balanced by geography, compiled by Jonathan Dunn and Ben Adams; the corpus Size is 2.1 billion words. Page 5. AGRITECH-IV-2020 IOP Conf …","url":["https://iopscience.iop.org/article/10.1088/1755-1315/677/2/022080/pdf"]}
-{"year":"2021","title":"Cluster-Based Antiphishing (CAP) Model for Smart Phones","authors":["M Faisal, S Abed - Scientific Programming"],"snippet":"… latest techniques tested on UCI datasets. 4.4.2. Dataset Taken from Mendeley. Source Phishing web page: Phish Tank, Legitimate web page source: Alexa, Common Crawl (1) Dataset Information. In this scenario, the dataset …","url":["https://www.hindawi.com/journals/sp/2021/9957323/"]}
-{"year":"2021","title":"Code-Mixing on Sesame Street: Dawn of the Adversarial Polyglots","authors":["S Tan, S Joty - arXiv preprint arXiv:2103.09593, 2021"],"snippet":"… However, the latter trend is replicated for BUMBLEBEE if we remove this constraint (Table 14 in Appendix G). A possible explanation is that XLM-R and Unicoder were trained on monolingual CommonCrawl (CC) data, while …","url":["https://arxiv.org/pdf/2103.09593"]}
-{"year":"2021","title":"CoDesc: A Large Code–Description Parallel Dataset","authors":["M Hasan, T Muttaqueen, A Al Ishtiaq, KS Mehrab…"],"snippet":"… 9052–9065, Online. Association for Computational Linguistics. CommonCrawl Common crawl. https:// commoncrawl.org/. Accessed: 2021-01-31. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019 …","url":["http://masumhasan.net/files/CoDesc.pdf"]}
-{"year":"2021","title":"Combining Natural Language Processing and Machine Learning for Profiling and Fake News Detection","authors":["A Bondielli"],"snippet":"Page 1. PHD PROGRAM IN SMART COMPUTING DIPARTIMENTO DI INGEGNERIA DELL'INFORMAZIONE (DINFO) Combining Natural Language Processing and Machine Learning for Profiling and Fake News Detection Alessandro Bondielli …","url":["https://flore.unifi.it/bitstream/2158/1244287/1/PhDThesis_AlessandroBondielli.pdf"]}
-{"year":"2021","title":"Combining Pre-trained Word Embeddings and Linguistic Features for Sequential Metaphor Identification","authors":["R Mao, C Lin, F Guerin - arXiv preprint arXiv:2104.03285, 2021"],"snippet":"… For instance, GloVe was trained on Common Crawl2, from billions of web pages (840 billion tokens); ELMo was trained on WMT 2011 News Crawl data3 (800 million tokens); BERT was trained on Wikipedia4 (2.5 billion tokens) …","url":["https://arxiv.org/pdf/2104.03285"]}
-{"year":"2021","title":"Combining word embeddings as a tool for subject identification","authors":["A Hamm - Wissensaustauschworkshop\" Maschinelles Lernen VII\", 2021"],"snippet":"This talk shows ungoing work aiming at finding subject matter relations between text documents and word clouds. A number of increasingly successful semantic word embedding procedures - learning semantic relations from contextual distributions …","url":["https://elib.dlr.de/147623/1/Combining%20word%20embeddings.pdf"]}
-{"year":"2021","title":"Comparative Analysis of Bengali Stop Word Detection Using Different Approaches","authors":["RJ Rupa, JF Sohana, M Rahman - … on Automation, Control and Mechatronics for …, 2021"],"snippet":"… In this paper, the pretrained Bengali FastText CBOW model is utilized to produce word vectors trained on the common crawl and Wikipedia [27] and both logistic regression and support vector machine classifiers acquire a performance score of 86%. TABLE XII …","url":["https://ieeexplore.ieee.org/abstract/document/9528279/"]}
-{"year":"2021","title":"Comparative Analysis of Different Transformer Based Architectures Used in Sentiment Analysis","authors":["K Pipalia, R Bhadja, M Shukla - 2020 9th International Conference System Modeling …, 2020"],"snippet":"… Distill BERT Base:66 BookCorpus wiki BERT Distillation T5 Base:220 large:770 Colossal Clean Crawled Corpus (C4) Text Infilling XLNet Base:~110 Large:~340 BookCorpus Wiki, Giga5 ClueWeb, Common Crawl …","url":["https://ieeexplore.ieee.org/abstract/document/9337081/"]}
-{"year":"2021","title":"Comparing Apples and Oranges: Human and Computer Clustered Affinity Diagrams Under the Microscope","authors":["P Borlinghaus, S Huber - 26th International Conference on Intelligent User …, 2021"],"snippet":"… training corpora WSD no OOV LSI [9] − NMF [18] − LDA [5] − GloVe [26] Wiki word2vec [22] Google News corpus doc2vec [17] 900k sentences from qualitative survey ◦ fastText [6] Common Crawl, Wiki • … FastText was trained on Common Crawl and Wikipedia corpus …","url":["https://dl.acm.org/doi/abs/10.1145/3397481.3450674"]}
-{"year":"2021","title":"Comparing Contextualised Embeddings for Predicting the (Graded) Effect of Context in Word Similarity","authors":["JM Albers - 2021"],"snippet":"… As data set XLM-RoBERTa uses CommonCrawl instead of Wikipedia, which provides limited scale for low resource languages. 4 Page 5 … The CommonCrawl data set is designed to be more diverse than other data sets, which mainly use Wikipedia and books …","url":["https://dspace.library.uu.nl/bitstream/handle/1874/406113/6400507_JorisAlbers_Thesis.pdf?sequence=1"]}
-{"year":"2021","title":"Comparing Encoder-Decoder Architectures for Neural Machine Translation: A Challenge Set Approach","authors":["C Doan - 2021"],"snippet":"Machine translation (MT) as a field of research has known significant advances in recent years, with the increased interest for neural machine translation (NMT). By combining deep learning with translation, researchers have been able to deliver …","url":["https://ruor.uottawa.ca/bitstream/10393/42936/1/Doan_Coraline_2021_thesis.pdf"]}
-{"year":"2021","title":"Comparing general and specialized word embeddings for biomedical named entity recognition","authors":["RE Ramos-Vargas, I Román-Godínez, S Torres-Ramos - PeerJ Computer Science, 2021"],"snippet":"… 01-14 Received 2020-11-05 Academic Editor Susan Gauch Subject Areas Bioinformatics, Artificial Intelligence, Computational Linguistics Keywords Word embeddings, BioNER, BiLSTM-CRF, DrugBank, MedLine, Pyysalo …","url":["https://peerj.com/articles/cs-384/"]}
-{"year":"2021","title":"Comparing the Performance of NLP Toolkits and Evaluation measures in Legal Tech","authors":["MZ Khan, J Mitrovic, JMPDM Granitzer - 2021"],"snippet":"Page 1. Lehrstuhl für Data Science Comparing the Performance of NLP Toolkits and Evaluation measures in Legal Tech Masterarbeit von Muhammad Zohaib Khan Supervised By: Prof. Dr. Jelena Mitrovic 1. Prüfer 2. Prüfer …","url":["https://www.academia.edu/download/65887417/Deep_Neural_Language_Modelling_in_Law.pdf"]}
-{"year":"2021","title":"Comparing Traditional and Neural Approaches for Detecting Health-Related Misinformation","authors":["D Elsweiler - … IR Meets Multilinguality, Multimodality, and Interaction …","M Fernández-Pichel, DE Losada, JC Pichel…"],"snippet":"… Table 1 reports the main statistics of the resulting datasets. We also tested classifiers for the task of distinguishing between useful documents for non-expert end users (ie, trustworthy and readable) and non-useful …","url":["http://persoal.citius.usc.es/jcpichel/docs/2021_CLEF_MFernandezPichel.pdf","https://books.google.de/books?hl=en&lr=lang_en&id=p9FCEAAAQBAJ&oi=fnd&pg=PA78&dq=commoncrawl&ots=eNycpv3vEv&sig=v7CAPFEmV26pL2Lhj2R2t581gZ0"]}
-{"year":"2021","title":"Comparison of Czech Transformers on Text Classification Tasks","authors":["J Lehečka, J Švec - arXiv preprint arXiv:2107.10042, 2021"],"snippet":"… Researchers from Facebook have published multilingual XLM-RoBERTa model [3] pre-trained on one hundred languages (including Czech), using more than two terabytes of filtered Common Crawl data … 2. 1https …","url":["https://arxiv.org/pdf/2107.10042"]}
-{"year":"2021","title":"Compilation and Validation of a Large Fake News Dataset in Hungarian","authors":["M Gencsi, Z Bodó, A Szenkovits - 2021 IEEE 19th International Symposium on …, 2021"],"snippet":"… The huBERT model was trained on the Hungarian subset of the Common Crawl and a snapshot of the Hungarian Wikipedia, while the multilingual model was trained on the top 104 languages with the largest Wikipedias, among them also …","url":["https://ieeexplore.ieee.org/abstract/document/9582484/"]}
-{"year":"2021","title":"Comprehensive analysis of embeddings and pre-training in NLP","authors":["JK Tripathy, SC Sethuraman, MV Cruz, A Namburu… - Computer Science Review, 2021"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S1574013721000733"]}
-{"year":"2021","title":"Comprehensive Evaluation of Word Embeddings for Highly Inflectional Language","authors":["P Drozda, K Sopyla, J Lewalski - International Conference on Computational …, 2021"],"snippet":"… The obtained results showed that in terms of accuracy the Facebook fasttext model learned on the Common Crawl collection should be considered the best model under assumptions of experimental session. Keywords. Word …","url":["https://link.springer.com/chapter/10.1007/978-3-030-88113-9_48"]}
-{"year":"2021","title":"Comprehensive Multi-Modal Interactions for Referring Image Segmentation","authors":["K Jain, V Gandhi - arXiv preprint arXiv:2104.10412, 2021"],"snippet":"… 576. At 448 × 448 resolution, H = W = 14 and at 576 × 576 resolution, H = W = 18. We use GLoVe embeddings [17] pre-trained on Common Crawl 840B tokens to initialize word embedding for words in the expressions. The …","url":["https://arxiv.org/pdf/2104.10412"]}
-{"year":"2021","title":"Compression, Transduction, and Creation: A Unified Framework for Evaluating Natural Language Generation","authors":["M Deng, B Tan, Z Liu, EP Xing, Z Hu - arXiv preprint arXiv:2109.06379, 2021"],"snippet":"Page 1. Compression, Transduction, and Creation: A Unified Framework for Evaluating Natural Language Generation Mingkai Deng1∗, Bowen Tan1∗, Zhengzhong Liu1,2, Eric P. Xing1,2,3, Zhiting Hu4 1Carnegie Mellon University …","url":["https://arxiv.org/pdf/2109.06379"]}
-{"year":"2021","title":"Computational analysis and synthesis of song lyrics","authors":["P Březinová - 2021"],"snippet":"… It uses CMUdict for phonetic transcription, analyzes CommonCrawl8 web data repository for forced rhymes, Google Books Ngrams (Weiss [2015]) for building language model, and WordNet 3.0 (Pearson et al. [2005]) for semantic relations …","url":["https://dspace.cuni.cz/bitstream/handle/20.500.11956/147665/120397406.pdf?sequence=1"]}
-{"year":"2021","title":"Computational Challenges for Artificial Intelligence and Machine Learning in Environmental Research","authors":["M Werner, G Dax, M Laass - INFORMATIK 2020, 2021"],"snippet":"… This includes news streams, social media messages, human-curated knowledge such as OpenStreetMap and Wikipedia, opinionated data sources such as blog posts from certain platforms, or blind web scale data collections such as common crawl …","url":["https://dl.gi.de/bitstream/handle/20.500.12116/34809/C21-1.pdf?sequence=1&isAllowed=y"]}
-{"year":"2021","title":"Computational filling of curatorial gaps in a fine arts exhibition","authors":["A Flexer"],"snippet":"… Please note that we translate all keywords from German to English for this paper. We use the German fasttext5 word em- bedding, which has been trained on about 3 million words from the Wikipediaand 19 million words …","url":["https://computationalcreativity.net/iccc21/wp-content/uploads/2021/09/ICCC_2021_paper_75reduced.pdf"]}
-{"year":"2021","title":"Computational methods to understand the association between emojis and emotions","authors":["AAM Shoeb - 2021"],"snippet":"Page 1. © 2021 Abu Awal Md Shoeb ALL RIGHTS RESERVED Page 2. COMPUTATIONAL METHODS TO UNDERSTAND THE ASSOCIATION BETWEEN EMOJIS AND EMOTIONS By ABU AWAL MD SHOEB A dissertation submitted to the School of Graduate Studies …","url":["https://rucore.libraries.rutgers.edu/rutgers-lib/65975/PDF/1/"]}
-{"year":"2021","title":"Computer Science Review","authors":["JK Tripathy, SC Sethuraman, MV Cruz, V Vijayakumar - 2021"],"snippet":"abstract The amount of data and computing power has drastically increased over the last decade, which leads to the development of several new fronts in the field of Natural Language Processing (NLP). In addition to that, the entanglement of …","url":["https://www.researchgate.net/profile/Mangalraj-Poobalasubramanian/publication/355132427_Comprehensive_analysis_of_embeddings_and_pre-training_in_NLP/links/6164f98e1eb5da761e836888/Comprehensive-analysis-of-embeddings-and-pre-training-in-NLP.pdf"]}
-{"year":"2021","title":"CoMSum and SIBERT: A Dataset and Neural Model for Query-Based Multi-document Summarization","authors":["S Kulkarni, S Chammas, W Zhu, F Sha, E Ie - International Conference on Document …, 2021"],"snippet":"… We use the cleaned Common Crawl (CC) corpus [32] to source relevant web documents that are diverse and multi-faceted for generating Natural Questions (NQ) (long-form) answers [21]. Figure 1 illustrates the overall procedure …","url":["https://link.springer.com/chapter/10.1007/978-3-030-86331-9_6"]}
-{"year":"2021","title":"Concept-Based Label Embedding via Dynamic Routing for Hierarchical Text Classification","authors":["X Wang, L Zhao, B Liu, T Chen, F Zhang, D Wang"],"snippet":"… Hyper-parameters are tuned on a validation set by grid search. We take Stanford's publicly available GloVe 300-dimensional embeddings trained on 42 billion tokens from Common Crawl (Pennington et al., 2014) as initialization for word em- beddings …","url":["https://aclanthology.org/2021.acl-long.388.pdf"]}
-{"year":"2021","title":"Confused by Path: Analysis of Path Confusion Based Attacks","authors":["SA Mirheidari - 2020"],"snippet":"… 93 iii Page 10. Page 11. List of Tables 4.1 Sample Grouped Web pages. . . . . 29 4.2 Narrowing down the Common Crawl to the candidate set used in our analysis (from left to right). . . . 36 4.3 Vulnerable pages and sites in the candidate set …","url":["https://iris.unitn.it/retrieve/handle/11572/280512/382175/phd_unitn_Seyed%20Ali_Mirheidari.pdf"]}
-{"year":"2021","title":"ConRPG: Paraphrase Generation using Contexts as Regularizer","authors":["Y Meng, X Ao, Q He, X Sun, Q Han, F Wu, J Li - arXiv preprint arXiv:2109.00363, 2021"],"snippet":"… We implement the above models, ie p(−→ci|ci), p(←−ci|ci), p(ci), p(c>i|c