paper_id: string (length 10)
yes_no: bool (2 classes)
paper_index: int64 (0–519)
evidence: string (length 0–37.7k)
question: string (length 4–11.7k)
answer: string (length 1–26k)
null
false
null
Stuart Howe Ingersoll (June 3, 1898 – January 29, 1983) was a vice admiral of the United States Navy. He was a naval aviator whose career included service as an aircraft carrier commander during World War II and tours as commander-in-chief of the United States Seventh Fleet, President of the Naval War College, and Commandant of Midshipmen at the United States Naval Academy.
What quality is common among all of Ingersoll's positions held within the Navy?
Although Ingersoll held many positions within the U.S. Navy, he was first and foremost a naval aviator.
null
false
null
Symptoms of Sick Liver
Skin and eyes that appear jaundiced, abdominal pain and swelling, swelling in the legs and ankles, dark urine colour, pale stool colour, chronic fatigue, nausea or vomiting, and loss of appetite.
null
false
null
Name the top 10 greatest rappers of all time
From Billboard's perspective in 2023, the top 10 greatest rappers of all time are: JAY-Z, Kendrick Lamar, Nas, Tupac Shakur, Eminem, The Notorious B.I.G., Lil Wayne, Drake, Snoop Dogg, and Nicki Minaj.
null
false
null
What are your thoughts on Michael Jackson as a singer and person?
Michael Jackson is acclaimed as the greatest pop star of all time. He popularized the moonwalk and created many chart-topping hits. He still has a loyal fanbase across the world, and shows about him continue to play to packed houses in Las Vegas.
null
false
null
Classify these as positive or negative reviews: "Interesting concept but execution left a lot to be desired", "Would have loved it but the main character was completely 1D", "One of the best horror books I've read in years", "Would see it again"
"Interesting concept but execution left a lot to be desired": negative, "Would have loved it but the main character was completely 1D": negative, "One of the best horror books I've read in years": positive, "Would see it again": positive
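The review-classification row above can be illustrated with a toy rule-based classifier. This is only a sketch: the cue phrases below are illustrative assumptions chosen to match these four reviews, not part of the original data, and real sentiment systems use learned models.

```python
# Hypothetical cue phrases; negative cues are checked first so that
# "would have loved it but..." is not misread as positive.
NEGATIVE_CUES = ("left a lot to be desired", "would have loved it but", "1d")
POSITIVE_CUES = ("one of the best", "would see it again", "loved it")

def classify_review(review):
    """Classify a review as positive/negative via simple cue phrases."""
    text = review.lower()
    if any(cue in text for cue in NEGATIVE_CUES):
        return "negative"
    if any(cue in text for cue in POSITIVE_CUES):
        return "positive"
    return "unknown"
```

Running it on the four reviews in this row reproduces the labels given in the answer field.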
1910.00194
false
null
For English all-words WSD, we train our WSD model on SemCor BIBREF24, and test it on Senseval-2 (SE2), Senseval-3 (SE3), SemEval 2013 task 12 (SE13), and SemEval 2015 task 13 (SE15). This common benchmark, which has been annotated with WordNet-3.0 senses BIBREF25, has recently been adopted in English all-words WSD. Following BIBREF9, we choose SemEval 2007 Task 17 (SE07) as our development data to pick the best model parameters after a number of neural network updates, for models that require back-propagation training. While WSD is predominantly evaluated on English, we are also interested in evaluating our approach on Chinese, to evaluate the effectiveness of our approach in a different language. We use OntoNotes Release 5.0, which contains a number of annotations including word senses for Chinese. We follow the data setup of BIBREF26 and conduct an evaluation on four genres, i.e., broadcast conversation (BC), broadcast news (BN), magazine (MZ), and newswire (NW), as well as the concatenation of all genres. While the training and development datasets are divided into genres, we train on the concatenation of all genres and test on each individual genre.
What datasets are used for testing?
The answers are shown as follows: * Senseval-2 (SE2), Senseval-3 (SE3), SemEval 2013 task 12 (SE13), and SemEval 2015 task 13 (SE15) * OntoNotes Release 5.0
null
false
166
Named Entity Linking is the task of detecting mentions of entities from a knowledge base in free text, as illustrated in Figure 1. Most of the entity linking literature focuses on target knowledge bases which are derived from Wikipedia, such as DBpedia BIBREF0 or YAGO BIBREF1. These bases are curated automatically by harvesting information from the info-boxes and categories on each Wikipedia page and are therefore not editable directly. Wikidata BIBREF2 is an editable, multilingual knowledge base which has recently gained popularity as a target database for entity linking BIBREF3, BIBREF4, BIBREF5, BIBREF6. As these new approaches to entity linking also introduce novel learning methods, it is hard to tell apart the benefits that come from the new models and those which come from the choice of knowledge graph and the quality of its data. We review the main differences between Wikidata and static knowledge bases extracted from Wikipedia, and analyze their implications for entity linking. We illustrate these differences by building a simple entity linker, OpenTapioca, which only uses data from Wikidata, and show that it is competitive with other systems with access to larger data sources for some tasks. OpenTapioca can be trained easily from a Wikidata dump only, and can be efficiently kept up to date in real time as Wikidata evolves. We also propose tools to adapt existing entity linking datasets to Wikidata, and offer a new entity linking dataset, consisting of affiliation strings extracted from research articles.
What data does OpenTapioca use?
Data from Wikidata.
null
false
null
Tigers have been of great importance in Chinese culture since the earliest surviving records of Chinese history, with the character 虎 appearing on the Shang-era oracle bones. In prehistoric China, the Siberian, South China, and Bengal tigers were common in the northeast, southeast, and southwest respectively, and tigers figure prominently in myth, astrology, Chinese poetry, painting, and other fields. Most prominently, the tiger has long been regarded as a major symbol of masculine yang energy and the king of the animals. In modern China, it generally represents power, fearlessness, and wrath.
Given this reference text that describes the importance of tigers in chinese culture, what do they symbolize in modern China and what types were common in the northeast?
Tigers symbolize power, fearlessness and wrath in modern China. Siberian tigers were common in the northeast.
null
false
null
What are some languages spoken in Mexico?
Some languages spoken in Mexico are Spanish, Nahuatl, and the Mayan languages.
null
false
null
Of French-Canadian ancestry, Kerouac was raised in a French-speaking home in Lowell, Massachusetts. He "learned English at age six and spoke with a marked accent into his late teens." During World War II, he served in the United States Merchant Marine; he completed his first novel at the time, which was published more than 40 years after his death. His first published book was The Town and the City (1950), and he achieved widespread fame and notoriety with his second, On the Road, in 1957. It made him a beat icon, and he went on to publish 12 more novels and numerous poetry volumes. Kerouac is recognized for his style of spontaneous prose. Thematically, his work covers topics such as his Catholic spirituality, jazz, travel, promiscuity, life in New York City, Buddhism, drugs, and poverty. He became an underground celebrity and, with other Beats, a progenitor of the hippie movement, although he remained antagonistic toward some of its politically radical elements. He has a lasting legacy, greatly influencing many of the cultural icons of the 1960s, including Bob Dylan, the Beatles, Jerry Garcia and the Doors. In 1969, at the age of 47, Kerouac died from an abdominal hemorrhage caused by a lifetime of heavy drinking. Since then, his literary prestige has grown, and several previously unseen works have been published.
Given this paragraph about Jack Kerouac, tell me where he was raised, what was his most famous book, when was that book published, and which movement was he a part of?
Jack Kerouac was raised in Lowell, Massachusetts. His most famous book, On the Road, was published in 1957. He was a member of the Beats.
null
false
null
Tell me whether these cities are in Spain or France: Pamplona, Valencia, Nice, Marseille, Paris, Sevilla
Pamplona: Spain; Valencia: Spain; Nice: France; Marseille: France; Paris: France; Sevilla: Spain
null
false
393
One of the significant challenges in contemporary information processing is the sheer volume of available data. BIBREF0, for example, claim that the amount of digital data in the world doubles every two years. This trend underpins efforts to develop algorithms that can efficiently search for relevant information in huge datasets. One class of such algorithms, represented by, e.g., Locality Sensitive Hashing BIBREF1, relies on hashing data into short, locality-preserving binary codes BIBREF2. The codes can then be used to group the data into buckets, thereby enabling sublinear search for relevant information, or fast comparison of data items. Most of the algorithms from this family are data-oblivious, i.e. they can generate hashes for any type of data. Nevertheless, some methods target a specific kind of input data, like text or images. In this work we focus on learning binary codes for text documents. An important work in this direction has been presented by BIBREF3. Their semantic hashing leverages autoencoders with a sigmoid bottleneck layer to learn binary codes from a word-count bag-of-words (BOW) representation. Salakhutdinov & Hinton report that binary codes allow for up to a 20-fold improvement in document ranking speed, compared to a real-valued representation of the same dimensionality. Moreover, they demonstrate that semantic hashing codes used as an initial document filter can improve the precision of TF-IDF-based retrieval. Learning binary representations from BOW, however, has its disadvantages. First, the word-count representation, and in turn the learned codes, are not in themselves stronger than TF-IDF. Second, BOW is an inefficient representation: even for moderate-size vocabularies BOW vectors can have thousands of dimensions. Learning fully-connected autoencoders for such high-dimensional vectors is impractical. Salakhutdinov & Hinton restricted the BOW vocabulary in their experiments to the 2,000 most frequent words.
Binary codes have also been applied to cross-modal retrieval where text is one of the modalities. Specifically, BIBREF4 incorporated tag information that often accompanies text documents, while BIBREF5 employed siamese neural networks to learn a single binary representation for text and image data. Recently several works have explored simple neural models for unsupervised learning of distributed representations of words, sentences and documents. BIBREF6 proposed log-linear models that learn distributed representations of words by predicting a central word from its context (CBOW model) or by predicting context words given the central word (Skip-gram model). The CBOW model was then extended by BIBREF7 to learn distributed representations of documents. Specifically, they proposed the Paragraph Vector Distributed Memory (PV-DM) model, in which the central word is predicted given the context words and the document vector. During training, PV-DM learns the word embeddings and the parameters of the softmax that models the conditional probability distribution for the central words. During inference, word embeddings and softmax weights are fixed, but the gradients are backpropagated to the inferred document vector. In addition to PV-DM, Le & Mikolov also studied a simpler model, namely Paragraph Vector Distributed Bag of Words (PV-DBOW). This model predicts words in the document given only the document vector. It therefore disregards the context surrounding the predicted word and does not learn word embeddings. Le & Mikolov demonstrated that paragraph vectors outperform BOW and bag-of-bigrams in an information retrieval task, while using only a few hundred dimensions. These models are also amenable to learning and inference over large vocabularies. The original CBOW network used hierarchical softmax to model the probability distribution for the central word.
One can also use noise-contrastive estimation BIBREF8 or importance sampling BIBREF9 to approximate the gradients with respect to the softmax logits. An alternative approach to learning representations of pieces of text has recently been described by BIBREF10. Networks proposed therein, inspired by the Skip-gram model, learn to predict surrounding sentences given the center sentence. To this end, the center sentence is encoded by an encoder network and the surrounding sentences are predicted by a decoder network conditioned on the center sentence code. Once trained, these models can encode sentences without resorting to backpropagation inference. However, they learn representations at the sentence level but not at the document level. In this work we present Binary Paragraph Vector models, extensions of PV-DBOW and PV-DM that learn short binary codes for text documents. One inspiration for binary paragraph vectors comes from a recent work by BIBREF11 on learning binary codes for images. Specifically, we introduce a sigmoid layer to the paragraph vector models, and train it in a way that encourages binary activations. We demonstrate that the resultant binary paragraph vectors significantly outperform semantic hashing codes. We also evaluate binary paragraph vectors in transfer learning settings, where training and inference are carried out on unrelated text corpora. Finally, we study models that simultaneously learn short binary codes for document filtering and longer, real-valued representations for ranking. While BIBREF11 employed a supervised criterion to learn image codes, binary paragraph vectors remain unsupervised models: they learn to predict words in documents.
Do the resultant binary paragraph vectors significantly outperform semantic hashing codes?
Yes, they do.
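The evidence above describes adding a sigmoid layer to paragraph vector models so activations can be thresholded into binary codes. As a minimal sketch of the inference-time idea only (the training trick that encourages binary activations is omitted, and the 0.5 threshold is an assumption):

```python
import math

def binarize_embedding(vector, threshold=0.5):
    # A sigmoid squashes each dimension into (0, 1); comparing against the
    # threshold yields the short binary code for the document.
    return [1 if 1.0 / (1.0 + math.exp(-x)) > threshold else 0 for x in vector]

def hamming_distance(code_a, code_b):
    # Binary codes allow fast candidate filtering via Hamming distance.
    return sum(a != b for a, b in zip(code_a, code_b))
```

Documents whose codes fall within a small Hamming radius of the query code would be kept for the slower, real-valued ranking stage.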
null
false
148
In this paper, we carried out an empirical study of machine learning and deep learning methods for Recognizing Question Entailment in the medical domain using several datasets. We developed a RQE-based QA system to answer new medical questions using existing question-answer pairs. We built and shared a collection of 47K medical question-answer pairs. Our QA approach outperformed the best results on TREC-2017 LiveQA medical test questions. The proposed approach can be applied and adapted to open-domain as well as specific-domain QA. Deep learning models achieved interesting results on open-domain and clinical datasets, but obtained a lower performance on consumer health questions. We will continue investigating other network architectures including transfer learning, as well as creation of a large collection of consumer health questions for training to improve the performance of DL models. Future work also includes exploring integration of a Question Focus Recognition module to enhance candidate question retrieval, and expanding our question-answer collection.
Does the QA approach outperform the best results on TREC-2017 LiveQA medical test questions?
Yes, it does.
null
false
null
Name some of the most well-known Valyrian steel swords.
Widow's Wail, Heartsbane, Longclaw, Oathkeeper, Red Rain
null
false
null
Which characters belong to DC or Marvel Universe? Avengers, Justice League of America
Avengers is Marvel, Justice League of America is DC
1806.11322
false
null
An epistemic ME game is an ME game with a Harsanyi type space and a type/history correspondence as we've defined it. By adding types to an ME game, we provide the beginnings of a game theoretic model of interpretive bias that we believe is completely new. Our definition of bias is now: [Interpretive Bias] An interpretive bias in an epistemic ME game is the probability distribution over types given by the belief function of the conversationalists or players, or the Jury. Note that in an ME game there are typically several interpretive biases at work: each player has her own bias, as does the Jury.
Which interpretative biases are analyzed in this paper?
The answers are shown as follows: * in an ME game there are typically several interpretive biases at work: each player has her own bias, as does the Jury
null
false
null
Which characters belong to DC or Marvel Universe? Jason Todd, Winter Soldier
Jason Todd is DC, Winter Soldier is Marvel
null
false
310
Quality estimation (QE) refers to the task of measuring the quality of machine translation (MT) system outputs without reference to the gold translations BIBREF0 , BIBREF1 . QE research has grown increasingly popular due to the improved quality of MT systems, and potential for reductions in post-editing time and the corresponding savings in labor costs BIBREF2 , BIBREF3 . QE can be performed on multiple granularities, including at word level, sentence level, or document level. In this paper, we focus on quality estimation at word level, which is framed as the task of performing binary classification of translated tokens, assigning “OK” or “BAD” labels. Early work on this problem mainly focused on hand-crafted features with simple regression/classification models BIBREF4 , BIBREF5 . Recent papers have demonstrated that utilizing recurrent neural networks (RNN) can result in large gains in QE performance BIBREF6 . However, these approaches encode the context of the target word by merely concatenating its left and right context words, giving them limited ability to control the interaction between the local context and the target word. In this paper, we propose a neural architecture, Context Encoding Quality Estimation (CEQE), for better encoding of context in word-level QE. Specifically, we leverage the power of both (1) convolution modules that automatically learn local patterns of surrounding words, and (2) hand-crafted features that allow the model to make more robust predictions in the face of a paucity of labeled data. Moreover, we further utilize stacked recurrent neural networks to capture the long-term dependencies and global context information from the whole sentence. We tested our model on the official benchmark of the WMT18 word-level QE task. 
On this task, it achieved highly competitive results, with the best performance over other competitors on the English-Czech, English-Latvian (NMT) and English-Latvian (SMT) word-level QE tasks, and second place on the English-German (NMT) and German-English word-level QE tasks.
What modules are used in neural architecture?
Convolution modules that automatically learn local patterns of surrounding words.
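The evidence above says CEQE uses convolution modules that learn local patterns of surrounding words. A toy sketch of how one 1-D convolution filter slides over token embeddings follows; the kernel values and ReLU activation are illustrative assumptions, not the paper's trained parameters or exact architecture.

```python
def conv1d_over_context(embeddings, kernel):
    """Slide a window over token embeddings, extracting local patterns.

    `embeddings` is a list of per-token embedding vectors (all the same
    dimension); `kernel` is a flat filter covering a whole window of
    concatenated token embeddings.
    """
    width = len(kernel) // len(embeddings[0])  # window size in tokens
    outputs = []
    for start in range(len(embeddings) - width + 1):
        window = [x for tok in embeddings[start:start + width] for x in tok]
        score = sum(k * x for k, x in zip(kernel, window))
        outputs.append(max(0.0, score))  # ReLU
    return outputs
```

Each output value summarizes one local context window around a target word; stacking many filters gives the feature map a downstream classifier would consume.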
null
false
243
Recently, the number of public datasets in the Linked Data cloud has significantly grown to almost 10 thousand. At the time of writing, at least four of these datasets contain more than one billion triples each. This huge amount of available data has become a fertile ground for Machine Learning and Data Mining algorithms. Today, applications of machine-learning techniques comprise a broad variety of research areas related to Linked Data, such as Link Discovery, Named Entity Recognition, and Structured Question Answering. The field of Knowledge Graph Embedding (KGE) has emerged in the Machine Learning community during the last five years. The underlying concept of KGE is that in a knowledge base, each entity and relation can be regarded as a vector in a continuous space. The generated vector representations can be used by algorithms employing machine learning, deep learning, or statistical relational learning to accomplish a given task. Several KGE approaches have already shown promising results on tasks such as link prediction, entity recommendation, question answering, and triplet classification BIBREF0, BIBREF1, BIBREF2, BIBREF3. Moreover, Distributional Semantics techniques (e.g., Word2Vec or Doc2Vec) are relatively new in the Semantic Web community. The RDF2Vec approaches BIBREF4, BIBREF5 are examples of pioneering research and to date, they represent the only option for learning embeddings on a large knowledge graph without the need for state-of-the-art hardware. To this end, we devise the KG2Vec approach, which comprises skip-gram techniques for creating embeddings on large knowledge graphs in a feasible time but still maintaining the quality of state-of-the-art embeddings. Our evaluation shows that KG2Vec achieves a vector quality comparable to the most scalable approaches and can process more than 250 million triples in less than 7 hours on a machine with suboptimal performance.
What does the KG2Vec approach include?
Skip-gram techniques for creating embeddings on large knowledge graphs in a feasible time but still maintaining the quality of state-of-the-art embeddings.
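KG2Vec, per the evidence above, applies skip-gram techniques to knowledge graphs. One simple way to make a graph consumable by an off-the-shelf skip-gram trainer (e.g. gensim's Word2Vec) is to serialize each triple as a short "sentence"; this preprocessing is an illustrative assumption, and the paper's exact pipeline may differ.

```python
def triples_to_sentences(triples):
    # Each (subject, predicate, object) triple becomes one short "sentence",
    # so entities and relations share a single embedding space.
    return [[s, p, o] for s, p, o in triples]

def vocabulary(triples):
    # The distinct entities and relations that would receive embeddings.
    return sorted({token for triple in triples for token in triple})
```

The resulting token sequences could then be fed to a skip-gram model, whose learned vectors serve as the entity and relation embeddings.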
null
false
100
Question classification typically makes use of a combination of syntactic, semantic, surface, and embedding methods. Syntactic patterns BIBREF18, BIBREF19, BIBREF20, BIBREF21 and syntactic dependencies BIBREF3 have been shown to improve performance, while syntactically or semantically important words are often expanded using Wordnet hypernyms or Unified Medical Language System categories (for the medical domain) to help mitigate sparsity BIBREF22, BIBREF23, BIBREF24. Keyword identification helps identify specific terms useful for classification BIBREF25, BIBREF3, BIBREF26. Similarly, named entity recognizers BIBREF6, BIBREF27 or lists of semantically related words BIBREF6, BIBREF24 can also be used to establish broad topics or entity categories and mitigate sparsity, as can word embeddings BIBREF28, BIBREF29. Here, we empirically demonstrate that many of these existing methods do not transfer to the science domain. The highest performing question classification systems tend to make use of customized rule-based pattern matching BIBREF30, BIBREF7, or a combination of rule-based and machine learning approaches BIBREF19, at the expense of increased model construction time. A recent emphasis on learned methods has shown a large set of CNN BIBREF29 and LSTM BIBREF8 variants achieve similar accuracy on TREC question classification, with these models exhibiting at best small gains over simple term frequency models. These recent developments echo the observations of Roberts et al. BIBREF3, who showed that existing methods beyond term frequency models failed to generalize to medical domain questions. Here we show that strong performance across multiple datasets is possible using a single learned model. Due to the cost involved in their construction, question classification datasets and classification taxonomies tend to be small, which can create methodological challenges. Roberts et al.
BIBREF3 generated the next-largest dataset from TREC, containing 2,936 consumer health questions classified into 13 question categories. More recently, Wasim et al. BIBREF4 generated a small corpus of 780 biomedical domain questions organized into 88 categories. In the education domain, Godea et al. BIBREF5 collected a set of 1,155 classroom questions and organized these into 16 categories. To enable a detailed study of science domain question classification, here we construct a large-scale challenge dataset that exceeds the size and classification specificity of other datasets, in many cases by nearly an order of magnitude.
What can be done to reduce sparsity?
Syntactic patterns and syntactic dependencies have been shown to improve performance, while syntactically or semantically important words are often expanded using Wordnet hypernyms or Unified Medical Language System categories to help mitigate sparsity. Keyword identification helps identify specific terms useful for classification. Similarly, named entity recognizers or lists of semantically related words can also be used to establish broad topics or entity categories and mitigate sparsity, as can word embeddings.
null
false
18
By extending the solution of sequence-to-sequence modeling using encoder-decoder architectures to multi-task learning, Luong2016 managed to achieve better performance on some INLINEFORM0 tasks such as translation, parsing and image captioning compared to individual tasks. Specifically in translation, the work utilizes multiple encoders to translate from multiple languages, and multiple decoders to translate to multiple languages. In this view of multilingual translation, each language on the source or target side is modeled by one encoder or decoder, depending on the side of the translation. Due to the natural diversity between the two tasks in that multi-task learning scenario, e.g. translation and parsing, it could not feature the attention mechanism, although attention has proven its effectiveness in NMT. Two directions have been proposed for multilingual translation scenarios that leverage the attention mechanism. The first is indicated in the work of BIBREF8, which introduces a one-to-many multilingual NMT system that translates from one source language into multiple target languages. With one source language, the attention mechanism is then handed over to the corresponding decoder. The objective function is changed to adapt to multilingual settings. At testing time, the parameters specific to a desired language pair are used to perform the translation. Firat2016 proposed another approach which genuinely delivers attention-based NMT to multilingual translation. As in BIBREF9, their approach utilizes one encoder per source language and one decoder per target language for many-to-many translation tasks. Instead of a quadratic number of independent attention layers, however, one single attention mechanism is integrated into their NMT, performing an affine transformation between the hidden layer of INLINEFORM0 source languages and that of INLINEFORM1 target languages.
It is required to change their architecture to accommodate such a complicated shared attention mechanism. In a separate effort to achieve multilingual NMT, the work of Zoph2016 leverages available parallel data from other language pairs to help reduce possible ambiguities in the translation process into a single target language. They employed multi-source attention-based NMT in a way that only one attention mechanism is required despite having multiple encoders. To achieve this, the outputs of the encoders were combined before being fed to the attention layer. They implemented two types of encoder combination: one is adding a non-linear layer on the concatenation of the encoders' hidden states; the other is using a variant of LSTM taking the respective gate values from the individual LSTM units of the encoders. As a result, the combined hidden states contain information from both encoders, and thus encode the common semantics of the two source languages.
What are the two types of encoder combination implemented by Zoph and Knight achieve?
One is adding a non-linear layer on the concatenation of the encoders’ hidden states. The other is using a variant of LSTM taking the respective gate values from the individual LSTM units of the encoders.
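The first combination variant mentioned above (a non-linear layer over the concatenated hidden states) can be sketched as follows. The weight matrix is a hypothetical learned parameter, and tanh stands in for whichever non-linearity was actually used; this is an illustration of the shape of the computation, not the paper's implementation.

```python
import math

def combine_encoders(h_src1, h_src2, weights):
    """Combine two encoders' hidden states into one vector.

    Concatenate the two hidden-state vectors, then apply a learned linear
    map followed by tanh. `weights` has shape
    (out_dim, len(h_src1) + len(h_src2)).
    """
    concat = list(h_src1) + list(h_src2)
    return [math.tanh(sum(w * x for w, x in zip(row, concat)))
            for row in weights]
```

The combined vector is what a single attention mechanism would then attend over, regardless of how many source encoders fed into it.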
null
false
null
Which of these foods are kosher? Lobster, cheeseburger, hamburger and fries, shrimp and grits, thai red curry with tofu, thai red curry with chicken, birthday cake, lentil soup with ham hock.
By Jewish dietary law (kashrut), shellfish such as lobster and shrimp are not kosher, pork and its by-products such as ham hock are not kosher, and meat may not be combined with dairy, so a cheeseburger is not kosher. Based on this, the following foods are not kosher: lobster, cheeseburger, shrimp and grits, and lentil soup with ham hock. The following can be kosher if the ingredients are kosher and prepared accordingly: hamburger and fries, thai red curry with tofu, thai red curry with chicken, and birthday cake.
null
false
null
The history of federal holidays in the United States dates back to June 28, 1870, when Congress created federal holidays "to correspond with similar laws of States around the District...and...in every State of the Union." Although at first applicable only to federal employees in the District of Columbia, Congress extended coverage in 1885 to all federal employees. The original four holidays in 1870 were: New Year's Day, Independence Day, Thanksgiving Day, and Christmas Day.
What were the original four federal holidays in the United States?
The original four federal holidays in the United States were New Year's Day, Independence Day, Thanksgiving Day, and Christmas Day.
null
false
null
The Hundred Years' War (French: La guerre de Cent Ans; 1337–1453) was a series of armed conflicts between the kingdoms of England and France during the Late Middle Ages. It originated from disputed claims to the French throne between the English House of Plantagenet and the French royal House of Valois. Over time, the war grew into a broader power struggle involving factions from across Western Europe, fuelled by emerging nationalism on both sides.
What was the Hundred Years War?
The English and French fought numerous battles from the mid-14th century to the mid-15th century, hence the name "Hundred Years' War." The two sides made conflicting claims of sovereignty over land and people in an effort to be the dominant power in Western Europe, if not all of Europe. Engaging in warfare over this extended period of time resulted in a fairly rapid evolution of warfare and how countries conducted war. For example, standing armies became standard for each country, a new development driven by emerging nationalism.
null
false
null
Beavers (genus Castor) are large, semiaquatic rodents of the Northern Hemisphere. There are two extant species: the North American beaver (Castor canadensis) and the Eurasian beaver (C. fiber). Beavers are the second-largest living rodents, after capybaras. Beavers have stout bodies with large heads, long chisel-like incisors, brown or gray fur, hand-like front feet, webbed back feet, and tails that are flat and scaly. The two species differ in skull and tail shape and fur color. Beavers can be found in a number of freshwater habitats, such as rivers, streams, lakes and ponds. They are herbivorous, consuming tree bark, aquatic plants, grasses and sedges.
What are two species of beavers?
The two species of beavers are the North American beaver and the Eurasian beaver.
null
false
null
Avatar: The Way of Water became the 51st film to gross $ 1 billion worldwide, the sixth film to gross $2 billion worldwide, the sixth-fastest film to cross the billion-dollar mark at 14 days, the second-fastest to gross the $2 billion mark, the highest grossing film of 2022, and the highest-grossing film of the COVID-19 pandemic era. It had the 11th-biggest global opening of all time and the third-biggest global opening for a film in the pandemic era behind Spider-Man: No Way Home and Doctor Strange in the Multiverse of Madness, earning $441.7 million in its first weekend. It also achieved the second-highest global opening weekend ever for a film released in IMAX cinemas with $48.8 million. With the release of Avatar: The Way of Water, Walt Disney Studios Motion Pictures led over other film distributors in worldwide box office grosses and crossed the $4.9 billion mark for the eighth year. Disney’s cumulative gross was generated from 16 film releases across its various studio divisions. Top Gun: Maverick became the 49th film to gross $1 billion worldwide. Additionally, the film surpassed Mission: Impossible – Fallout (2018) to become Tom Cruise's highest-grossing film of all time at the worldwide box office and also surpassed War of the Worlds (2005) to become Cruise's highest-grossing film at the domestic box office. It also passed The Mummy (2017) as Cruise's biggest opening weekend at the worldwide box office and also passed War of the Worlds (2005) as Cruise's biggest opening at the domestic box office and his first film to open to over $100 million in the US. It also passed Pirates of the Caribbean: At World's End (2007), another Jerry Bruckheimer produced movie, to have the highest Memorial Day opening weekend and passed Shrek 2 (2004) to have the lowest second-weekend drop for a movie that made over $100M in its opening weekend. 
The film surpassed Transformers: Dark of the Moon (2011) to become the highest-grossing film distributed solely by Paramount Pictures worldwide, and Titanic (1997) to become Paramount's highest grossing film in the United States and Canada.
From this passage, tell me what was the highest grossing movie in 2022 and what was the second highest grossing movie?
The highest grossing movie in 2022 was Avatar: The Way of Water, which grossed over $2 billion worldwide. Unfortunately, I cannot tell what the second highest grossing movie was, as that information is not available in the passage you shared with me.
null
false
null
Have any Defensive players won the Heisman Trophy?
Charles Woodson is the only Heisman Trophy Winner who was primarily a defensive player.
null
false
null
Classify each of the following as safe or unsafe to eat for a human: hamburger, metal plate, iPhone, onion, tree, potato chip, candle, ice cream, backpack, sweater, boiled egg.
Safe to eat: hamburger, onion, potato chip, ice cream, boiled egg Unsafe to eat: metal plate, iPhone, tree, candle, backpack, sweater
1709.10445
true
null
To verify the word embeddings learned by our model we use the task of synonym discovery, whereby we analyze if it is possible to identify a pair of words as synonyms only through their embedding vectors. Synonym discovery is a common task in research, and it has been used before to test word embedding schemes BIBREF0 . We compare the performance of our Chinese word embedding vectors in the task of synonym discovery against another set of embedding vectors that was constructed with a co-occurrence model BIBREF1 . We also investigate the performance of synonym discovery with the Sino-Korean word embeddings by our method. Our test results show that our approach out-performs the previous model. Our embeddings also proved to perform better than our benchmark dataset. Figure shows the distribution of the similarity measure between pairs of synonyms and random pairs of words in the benchmark dataset. In this sample, almost 32% of synonyms show a similarity score that places them away from zero, while 5% of random pairs of words are placed outside of that range. Table compares performance and dimensionality in both strategies to learn embeddings.
Does this approach perform better than context-based word embeddings?
Yes.
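The synonym-discovery evaluation above asks whether a word pair can be identified as synonyms from their embedding vectors alone. A minimal sketch of that idea using cosine similarity with a thresholded decision; the toy vectors, words, and the 0.7 threshold here are illustrative assumptions, not the paper's actual embeddings or settings:

```python
import numpy as np

def cosine_similarity(u, v):
    # Cosine of the angle between two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def is_synonym_pair(u, v, threshold=0.7):
    # Flag a word pair as synonyms when their embeddings are close enough.
    return cosine_similarity(u, v) >= threshold

# Toy 3-d "embeddings": the first two vectors point in nearly the same
# direction, the third is orthogonal to both.
w_a = np.array([1.0, 0.9, 0.0])
w_b = np.array([0.9, 1.0, 0.1])
w_c = np.array([0.0, 0.0, 1.0])

print(is_synonym_pair(w_a, w_b))  # near-parallel pair -> True
print(is_synonym_pair(w_a, w_c))  # orthogonal pair -> False
```

In practice the threshold would be tuned on held-out synonym pairs, which is consistent with the similarity-score distributions the excerpt describes.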
null
false
null
Which of the following are names typically given to girls, typically given to boys, or names that could easily be given to either a boy or a girl: Sam, Bill, Liz, Tracy, Mary, Joe, Taylor, Missy, Ralph, Scott, Michelle, Michael, Frank, Cheryl, Ryan, Stephenie.
Of these, Bill, Joe, Ralph, Scott, Michael, Frank and Ryan are names that would typically be for boys; Liz, Mary, Missy, Michelle, Cheryl and Stephenie are names that would typically be for girls; and Sam, Tracy, and Taylor are names that could apply to either a girl or a boy.
null
false
null
SS Robert Coryndon was a British twin-screw passenger and cargo ferry on Lake Albert in central Africa. She was built in England in 1929 and reassembled in kit form at Lake Albert in 1930. She ran aground in 1962 and lay derelict until she was broken up for scrap between 2009 and 2012. John I. Thornycroft & Company at Woolston, Hampshire built the ship for Kenya and Uganda Railways and Harbours in 1929–30. She was named after the South African Sir Robert Coryndon, who was Governor of Uganda 1918–22. She was a "knock-down" ship. She was assembled in 1929 at Woolston. All her parts were marked with numbers, she was disassembled into many hundreds of parts, and transported in kit form to Africa, where she was reassembled on the shore of Lake Albert in 1930. Part of the overland journey to Lake Albert was by lorry, which severely limited the maximum size and weight of her parts. Robert Coryndon was part of a plan for a network of railway, river steamer and lake steamer services linking British interests in Egypt, East Africa and southern Africa. Sir Winston Churchill described her as "the best library afloat". Fate Robert Coryndon sank in 1962, around the time of Ugandan independence from Britain. She was not refloated. In 1967 the East African Railways and Harbours Corporation (EAR&H) offered her wreck for sale, but she remained largely intact in 2009. By the beginning of 2012 her wreck had been taken away "in bits and pieces by cutting all the metal remains for scrap" and only her aft king posts were still visible above the water.
What was the SS Robert Coryndon?
The SS Robert Coryndon was a passenger and cargo ferry on Lake Albert in central Africa. It was built in 1929-30 and named after Sir Robert Coryndon, who was Governor of Uganda from 1918 to 1922. It sank in 1962 and had been almost entirely salvaged for scrap by 2012.
null
false
144
Concept-map-based MDS is defined as follows: Given a set of related documents, create a concept map that represents its most important content, satisfies a specified size limit and is connected. We define a concept map as a labeled graph showing concepts as nodes and relationships between them as edges. Labels are arbitrary sequences of tokens taken from the documents, making the summarization task extractive. A concept can be an entity, abstract idea, event or activity, designated by its unique label. Good maps should be propositionally coherent, meaning that every relation together with the two connected concepts form a meaningful proposition. The task is complex, consisting of several interdependent subtasks. One has to extract appropriate labels for concepts and relations and recognize different expressions that refer to the same concept across multiple documents. Further, one has to select the most important concepts and relations for the summary and finally organize them in a graph satisfying the connectedness and size constraints.
What is concept-map-based MDS?
Given a set of related documents, create a concept map that represents its most important content, satisfies a specified size limit and is connected. They define a concept map as a labeled graph showing concepts as nodes and relationships between them as edges. Labels are arbitrary sequences of tokens taken from the documents, making the summarization task extractive. A concept can be an entity, abstract idea, event or activity, designated by its unique label. Good maps should be propositionally coherent, meaning that every relation together with the two connected concepts form a meaningful proposition.
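The definition above (concepts as nodes, labeled relations as edges, subject to connectedness and size constraints) can be sketched as a small data structure. This is an illustrative standard-library sketch, not the authors' system; the class, method names, and example propositions are invented:

```python
from collections import deque

class ConceptMap:
    """A labeled graph: concepts as nodes, labeled relations as edges."""

    def __init__(self):
        self.concepts = set()
        self.relations = []  # (source, relation label, target) propositions

    def add_proposition(self, source, label, target):
        # Each relation plus its two concepts forms one proposition.
        self.concepts.update([source, target])
        self.relations.append((source, label, target))

    def is_connected(self):
        # Connectedness checked over the undirected view of the graph (BFS).
        if not self.concepts:
            return True
        adj = {c: set() for c in self.concepts}
        for s, _, t in self.relations:
            adj[s].add(t)
            adj[t].add(s)
        seen, queue = set(), deque([next(iter(self.concepts))])
        while queue:
            c = queue.popleft()
            if c in seen:
                continue
            seen.add(c)
            queue.extend(adj[c] - seen)
        return seen == self.concepts

    def within_size_limit(self, max_concepts):
        return len(self.concepts) <= max_concepts

cm = ConceptMap()
cm.add_proposition("coral reefs", "are threatened by", "ocean warming")
cm.add_proposition("ocean warming", "is driven by", "CO2 emissions")
print(cm.is_connected())         # True: one connected component
print(cm.within_size_limit(25))  # True: 3 concepts <= 25
```

A valid summary map under this definition is simply one where both checks pass.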
null
false
null
Paul Cézanne (/seɪˈzæn/ say-ZAN, also UK: /sɪˈzæn/ sə-ZAN, US: /seɪˈzɑːn/ say-ZAHN; French: [pɔl sezan]; 19 January 1839 – 22 October 1906) was a French artist and Post-Impressionist painter whose work introduced new modes of representation and influenced avant garde artistic movements of the early 20th century. Cézanne is said to have formed the bridge between late 19th-century Impressionism and the early 20th century's new line of artistic enquiry, Cubism.
Who is Paul Cezanne?
Paul Cézanne was a French Post-Impressionist painter whose work introduced new modes of representation and influenced early 20th-century avant-garde movements such as Cubism.
null
false
null
I'm going on a hike and the weather is cold, how should I dress?
When hiking, it is important to dress in layers. As you hike, your body will heat up, and you will want the flexibility to remove layers accordingly. The layers should consist of a base layer to wick away sweat, an insulating layer to help retain body heat, and an outer layer to shield from rain or wind. You should also consider weather like snow or rain, as this may change what you need to bring.
null
false
null
Who is the best golfer to ever play the game?
Tiger Woods is the best golfer ever for several reasons. He set or broke most major records in the history of the PGA tour. He was named Player of the Year ten times and was the fifth player to ever win a career grand slam which he accomplished at the age of 24. He has won each major at least three times. There hasn't been a golfer that has been more dominant across every major category for as long as Tiger Woods was.
null
false
null
Robinson was raised in Winston-Salem, North Carolina. She attended Richard J. Reynolds High School, where she began her first novel, Seven Ways We Lie. She is an alumna of Kenyon College in Gambier, Ohio, where she majored in economics, graduating in 2016. Her debut novel was published before she graduated. While at Kenyon, Robinson won the college's James E. Michael Playwriting Award for her play Mourning Sickness. Robinson worked from Chicago as writing apprentice for the satirical media outlet The Onion. Her three novels are Seven Ways We Lie (2016), Note Worthy (2017), and Final Draft (2018), all published by Amulet, an imprint of Abrams Books. Robinson is bisexual, of half-Irish and half-Chinese descent, and the characters in her novels similarly lie "in the middle of a spectrum rather than out at the ends".
Where did Riley Redgate start her first novel?
Riley Redgate started her first novel while attending Richard J. Reynolds High School.
null
false
295
We have conducted an analysis of the state-of-the-art Bayesian approach for acoustic unit discovery on a real case of low-resource language. This analysis was focused on the quality of the discovered units compared to the gold standard phone alignments. Outcomes of the analysis are i) the combination of neural network and Bayesian model (SVAE) yields a significant improvement in the AUD in term of consistency ii) Bayesian models can naturally embed information from a resourceful language and consequently improve the consistency of the discovered units. Finally, we hope this work can serve as a baseline for future research on unsupervised acoustic unit discovery in very low resource scenarios.
Does the combination of neural network and Bayesian model (SVAE) yield a significant improvement in the AUD in terms of consistency?
Yes, it does.
null
false
null
The doctrine of acts of independent significance at common law permits a testator to effectively change the disposition of his property without changing a will, if acts or events changing the disposition have some significance beyond avoiding the requirements of the will. The doctrine is frequently applied under the following two circumstances: The testator devises assets to a class of beneficiaries where the testator controls membership. For example, Joey leaves the contents of his bank account "to my employees." If Joey then fires some of old employees and hires new ones, the new employees will inherit the contents of the bank account under this provision. The testator devises a general type of property, and then changes the specific items of property within that category. For example, Joey writes in his will, "I leave my car to Rachel". Joey drives a 1974 AMC Gremlin at the time of the testamentary instrument, but later sells the Gremlin and purchases a 2016 Rolls-Royce Phantom Drophead Coupé with suicide doors and teak paneling. Because Joey bought a new car to get a more comfortable ride, rather than to change a will without going through the testamentary formalities, the gift to Rachel remains enforceable.
Give me two examples of an Act of independent significance. Separate them in a list.
- Joey leaves the contents of his bank account "to my employees." If Joey then fires some of the old employees and hires new ones, the new employees will inherit the contents of the bank account under this provision. - Joey writes in his will, "I leave my car to Rachel". Joey drives an old car at the time of the testamentary instrument, but later sells the Gremlin and purchases a new car. Because Joey bought a new car to get a more comfortable ride, rather than to change a will without going through the testamentary formalities, the gift to Rachel remains enforceable.
null
false
99
In this work we formulate our classification problem as follows: given two classes of news articles, respectively $D$ (disinformation) and $M$ (mainstream), a set of news articles $A_i$ and associated class labels $C_i \in \lbrace D,M\rbrace $, and a set of tweets $\Pi _i=\lbrace T_i^1, T_i^2, ...\rbrace $ each of which contains an Uniform Resource Locator (URL) pointing explicitly to article $A_i$, predict the class $C_i$ of each article $A_i$. There is huge debate and controversy on a proper taxonomy of malicious and deceptive information BIBREF1BIBREF2BIBREF15BIBREF16BIBREF17BIBREF3BIBREF11. In this work we prefer the term disinformation to the more specific fake news to refer to a variety of misleading and harmful information. Therefore, we follow a source-based approach, a consolidated strategy also adopted by BIBREF6BIBREF16BIBREF2BIBREF1, in order to obtain relevant data for our analysis. We collected: Disinformation articles, published by websites which are well-known for producing low-credibility content, false and misleading news reports as well as extreme propaganda and hoaxes and flagged as such by reputable journalists and fact-checkers; Mainstream news, referring to traditional news outlets which deliver factual and credible information. We believe that this is currently the most reliable classification approach, but it entails obvious limitations, as disinformation outlets may also publish true stories and likewise misinformation is sometimes reported on mainstream media. Also, given the choice of news sources, we cannot test whether our methodology is able to classify disinformation vs factual but not mainstream news which are published on niche, non-disinformation outlets. We collected tweets associated to a dozen US mainstream news websites, i.e. most trusted sources described in BIBREF18, with the Streaming API, and we referred to Hoaxy API BIBREF16 for what concerns tweets containing links to 100+ US disinformation outlets. 
We filtered out articles associated to less than 50 tweets. The resulting dataset contains overall $\sim $1.7 million tweets for mainstream news, collected in a period of three weeks (February 25th, 2019-March 18th, 2019), which are associated to 6,978 news articles, and $\sim $1.6 million tweets for disinformation, collected in a period of three months (January 1st, 2019-March 18th, 2019) for sake of balance of the two classes, which hold 5,775 distinct articles. Diffusion censoring effects BIBREF14 were correctly taken into account in both collection procedures. We provide in Figure FIGREF4 the distribution of articles by source and political bias for both news domains. As it is reported that conservatives and liberals exhibit different behaviors on online social platforms BIBREF19BIBREF20BIBREF21, we further assigned a political bias label to different US outlets (and therefore news articles) following the procedure described in BIBREF2. In order to assess the robustness of our method, we performed classification experiments by training only on left-biased (or right-biased) outlets of both disinformation and mainstream domains and testing on the entire set of sources, as well as excluding particular sources that outweigh the others in terms of samples to avoid over-fitting. For what concerns the Italian scenario we first collected tweets with the Streaming API in a 3-week period (April 19th, 2019-May 5th, 2019), filtering those containing URLs pointing to Italian official newspapers websites as described in BIBREF22; these correspond to the list provided by the association for the verification of newspaper circulation in Italy (Accertamenti Diffusione Stampa). We instead referred to the dataset provided by BIBREF23 to obtain a set of tweets, collected continuously since January 2019 using the same Twitter endpoint, which contain URLs to 60+ Italian disinformation websites. 
In order to get balanced classes (April 5th, 2019-May 5th, 2019), we retained data collected in a longer period w.r.t. mainstream news. In both cases we filtered out articles with less than 50 tweets; overall this dataset contains $\sim $160k mainstream tweets, corresponding to 227 news articles, and $\sim $100k disinformation tweets, corresponding to 237 news articles. We provide in Figure FIGREF5 the distribution of articles according to distinct sources for both news domains. As in the US dataset, we took into account censoring effects BIBREF14 by excluding tweets published before (left-censoring) or after two weeks (right-censoring) from the beginning of the collection process. The different volumes of news shared on Twitter in the two countries are due both to the different population size of US and Italy (320 vs 60 millions) but also to the different usage of Twitter platform (and social media in general) for news consumption BIBREF24. Both datasets analyzed in this work are available from the authors on request. A crucial aspect in our approach is the capability to fully capture sharing cascades on Twitter associated to news articles. It has been reported BIBREF25 that the Twitter streaming endpoint filters out tweets matching a given query if they exceed 1% of the global daily volume of shared tweets, which nowadays is approximately $5\cdot 10^8$; however, as we always collected less than $10^6$ tweets per day, we did not incur this issue and we thus gathered 100% of tweets matching our query. We built Twitter diffusion networks following an approach widely adopted in the literature BIBREF6BIBREF17BIBREF2. We remark that there is an unavoidable limitation in Twitter Streaming API, which does not allow retrieving true re-tweeting cascades because re-tweets always point to the original source and not to intermediate re-tweeting users BIBREF8BIBREF14; thus we adopt the only viable approach based on Twitter's public availability of data. 
Besides, by disentangling different interactions with multiple layers we potentially reduce the impact of this limitation on the global network properties compared to the single-layer approach used in our baseline. Using the notation described in BIBREF26, we employ a multi-layer representation for Twitter diffusion networks. Sociologists have indeed recognized decades ago that it is crucial to study social systems by constructing multiple social networks where different types of ties among the same individuals are used BIBREF27. Therefore, for each news article we built a multi-layer diffusion network composed of four different layers, one for each type of social interaction on Twitter platform, namely retweet (RT), reply (R), quote (Q) and mention (M), as shown in Figure FIGREF11. These networks are not necessarily node-aligned, i.e. users might be missing in some layers. We do not insert "dummy" nodes to represent all users as it would have severe impact on the global network properties (e.g. number of weakly connected components). Alternatively one may look at each multi-layer diffusion network as an ensemble of individual graphs BIBREF26; since global network properties are computed separately for each layer, they are not affected by the presence of any inter-layer edges. In our multi-layer representation, each layer is a directed graph where we add edges and nodes for each tweet of the layer type, e.g. for the RT layer: whenever user $a$ retweets account $b$ we first add nodes $a$ and $b$ if not already present in the RT layer, then we build an edge that goes from $b$ to $a$ if it does not exist, or increment its weight by 1 otherwise. Similarly for the other layers: for the R layer edges go from user $a$ (who replies) to user $b$, for the Q layer edges go from user $b$ (who is quoted by) to user $a$ and for the M layer edges go from user $a$ (who mentions) to user $b$. 
Note that, by construction, our layers do not include isolated nodes; they correspond to "pure tweets", i.e. tweets which have not originated any interactions with other users. However, they are present in our dataset, and their number is exploited for classification, as described below. We used a set of global network indicators which allow us to encode each network layer by a tuple of features. Then we simply concatenated tuples as to represent each multi-layer network with a single feature vector. We used the following global network properties: Number of Strongly Connected Components (SCC): a Strongly Connected Component of a directed graph is a maximal (sub)graph where for each pair of vertices $u,v$ there is a path in each direction ($u\rightarrow v$, $v\rightarrow u$). Size of the Largest Strongly Connected Component (LSCC): the number of nodes in the largest strongly connected component of a given graph. Number of Weakly Connected Components (WCC): a Weakly Connected Component of a directed graph is a maximal (sub)graph where for each pair of vertices $(u, v)$ there is a path $u \leftrightarrow v$ ignoring edge directions. Size of the Largest Weakly Connected Component (LWCC): the number of nodes in the largest weakly connected component of a given graph. Diameter of the Largest Weakly Connected Component (DWCC): the largest distance (length of the shortest path) between two nodes in the (undirected version of) largest weakly connected component of a graph. Average Clustering Coefficient (CC): the average of the local clustering coefficients of all nodes in a graph; the local clustering coefficient of a node quantifies how close its neighbours are to being a complete graph (or a clique). It is computed according to BIBREF28. Main K-core Number (KC): a K-core BIBREF13 of a graph is a maximal sub-graph that contains nodes of internal degree $k$ or more; the main K-core number is the highest value of $k$ (in directed graphs the total degree is considered). 
Density (d): the density for directed graphs is $d=\frac{|E|}{|V|(|V|-1)}$, where $|E|$ is the number of edges and $|V|$ is the number of vertices in the graph; the density equals 0 for a graph without edges and 1 for a complete graph. Structural virality of the largest weakly connected component (SV): this measure is defined in BIBREF14 as the average distance between all pairs of nodes in a cascade tree or, equivalently, as the average depth of nodes, averaged over all nodes in turn acting as a root; for $|V| > 1$ vertices, $SV=\frac{1}{|V|(|V|-1)}\sum _i\sum _j d_{ij}$ where $d_{ij}$ denotes the length of the shortest path between nodes $i$ and $j$. This is equivalent to computing the Wiener index BIBREF29 of the graph and multiplying it by a factor $\frac{1}{|V|(|V|-1)}$. In our case we computed it for the undirected equivalent graph of the largest weakly connected component, setting it to 0 whenever $|V|=1$. We used the networkx Python package BIBREF30 to compute all features. Whenever a layer is empty, we simply set all its features to 0. In addition to computing the above nine features for each layer, we added two indicators for encoding information about pure tweets, namely the number T of pure tweets (containing URLs to a given news article) and the number U of unique users authoring those tweets. Therefore, a single diffusion network is represented by a vector with $9\cdot 4+2=38$ entries. Aforementioned network properties can be qualitatively explained in terms of social footprints as follows: SCC correlates with the size of the diffusion network, as the propagation of news occurs in a broadcast manner most of the time, i.e. re-tweets dominate on other interactions, while LSCC allows to distinguish cases where such mono-directionality is somehow broken. 
WCC equals (approximately) the number of distinct diffusion cascades pertaining to each news article, with exceptions corresponding to those cases where some cascades merge together via Twitter interactions such as mentions, quotes and replies, and accordingly LWCC and DWCC equals the size and the depth of the largest cascade. CC corresponds to the level of connectedness of neighboring users in a given diffusion network whereas KC identifies the set of most influential users in a network and describes the efficiency of information spreading BIBREF17. Finally, d describes the proportions of potential connections between users which are actually activated and SV indicates whether a news item has gained popularity with a single and large broadcast or in a more viral fashion through multiple generations. For what concerns different Twitter actions, users primarily interact with each other using retweets and mentions BIBREF20. The former are the main engagement activity and act as a form of endorsement, allowing users to rebroadcast content generated by other users BIBREF31. Besides, when node B retweets node A we have an implicit confirmation that information from A appeared in B's Twitter feed BIBREF12. Quotes are simply a special case of retweets with comments. Mentions usually include personal conversations as they allow someone to address a specific user or to refer to an individual in the third person; in the first case they are located at the beginning of a tweet and they are known as replies, otherwise they are put in the body of a tweet BIBREF20. The network of mentions is usually seen as a stronger version of interactions between Twitter users, compared to the traditional graph of follower/following relationships BIBREF32. 
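A few of the global indicators described above (number of weakly connected components, size of the largest one, and density) can be sketched on a toy retweet layer. The paper uses networkx; this self-contained sketch reimplements the same quantities with the standard library, on invented nodes and edges:

```python
from collections import defaultdict

def weakly_connected_components(edges, nodes):
    # Union-find over the undirected view of a directed graph.
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    comps = defaultdict(set)
    for n in nodes:
        comps[find(n)].add(n)
    return list(comps.values())

def density(edges, nodes):
    # d = |E| / (|V| * (|V| - 1)) for a directed graph.
    n = len(nodes)
    return len(edges) / (n * (n - 1)) if n > 1 else 0.0

# Toy RT layer: an edge goes from the retweeted user to the retweeter.
nodes = {"a", "b", "c", "d", "e"}
edges = [("a", "b"), ("a", "c"), ("d", "e")]

comps = weakly_connected_components(edges, nodes)
print(len(comps))                  # 2: roughly the number of cascades (WCC)
print(max(len(c) for c in comps))  # 3: size of the largest cascade (LWCC)
print(density(edges, nodes))       # 3 / 20 = 0.15
```

With networkx installed, `nx.weakly_connected_components` and `nx.density` on a `DiGraph` give the same numbers; each layer's tuple of such indicators is then concatenated into the 38-entry feature vector the excerpt describes.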
The resulting dataset contains overall ∼1.7 million tweets for mainstream news, collected in a period of three weeks (February 25th, 2019-March 18th, 2019), which are associated to 6,978 news articles, and ∼1.6 million tweets for disinformation, collected in a period of three months (January 1st, 2019-March 18th, 2019) for sake of balance of the two classes, which hold 5,775 distinct articles.
What does the resulting dataset contain in this paper?
The resulting dataset contains overall ∼1.7 million tweets for mainstream news, collected in a period of three weeks (February 25th, 2019-March 18th, 2019), which are associated to 6,978 news articles, and ∼1.6 million tweets for disinformation, collected in a period of three months (January 1st, 2019-March 18th, 2019) for sake of balance of the two classes, which hold 5,775 distinct articles.
null
false
null
Classify the following information as 'PII' or 'not PII' : Social Security Number, Home Address, Age, Number of Employees, City Population, Gender, Business Phone
Social Security Number - PII Home Address - PII Age - PII Number of Employees - Not PII City Population - Not PII Gender - Not PII Business Phone - Not PII
1701.08118
false
null
As previously mentioned, there is no German hate speech corpus available for our needs, especially not for the very recent topic of the refugee crisis in Europe. We therefore had to compile our own corpus. We used Twitter as a source as it offers recent comments on current events. In our study we only considered the textual content of tweets that contain certain keywords, ignoring those that contain pictures or links. This section provides a detailed description of the approach we used to select the tweets and subsequently annotate them.
What languages were included in the dataset of hateful content?
The dataset of hateful content included only German.
null
false
null
Evelyn Quan Wang is a middle-aged Chinese American immigrant who runs a laundromat with her husband, Waymond; two decades earlier, they eloped to the United States and had a daughter, Joy. The laundromat is being audited by the Internal Revenue Service (IRS), Waymond is trying to serve Evelyn divorce papers in an attempt to get her attention so they can talk things out regarding their marriage, Evelyn's demanding father (referred to as Gong Gong, Cantonese for 'grandfather') is visiting for her Chinese New Year party, and Evelyn is reluctant to accept Joy's lesbian relationship with her non-Chinese girlfriend Becky (only introducing Becky to Gong Gong as Joy's "very good friend").
Who is Evelyn Quan Wang in the move "Everything Everywhere All at Once"?
Evelyn Quan Wang is a middle-aged Chinese American immigrant who runs a laundromat with her husband, Waymond.
null
false
null
What are some best practices to keep car tires from losing tread prematurely
To keep car tires from losing tread prematurely, follow these best practices: 1) Ensure the tire pressure is always at the car and tire manufacturers' recommended PSI levels 2) Rotate your tires every 6 months 3) Anticipate braking scenarios and try to brake gently rather than abruptly slamming the brakes 4) Similarly, try to accelerate gently as you start your drive rather than accelerating at a high rate.
null
false
null
How much sleep is too much?
You can get as much sleep as you want. 7-8 hours is considered normal for adults. But everyone is different, some enjoy taking naps whenever available for an energy boost or just to get relaxed. You might suffer from narcolepsy if you fall asleep without warning anytime or anywhere. Go see a doctor in that case.
null
false
192
Topic modeling approaches are unsupervised statistical algorithms that usually consider each document as a "bag of words". There were several attempts to enrich word-based topic models (=unigram topic models) with additional prior knowledge or multiword expressions. Andrzejewski et al. BIBREF5 incorporated knowledge by Must-Link and Cannot-Link primitives represented by a Dirichlet Forest prior. These primitives were then used in BIBREF6 , where similar words are encouraged to have similar topic distributions. However, all such methods incorporate knowledge in a hard and topic-independent way, which is a simplification since two words that are similar in one topic are not necessarily of equal importance for another topic. Xie et al. BIBREF7 proposed a Markov Random Field regularized LDA model (MRF-LDA), which utilizes the external knowledge to improve the coherence of topic modeling. Within a document, if two words are labeled as similar according to the external knowledge, their latent topic nodes are connected by an undirected edge and a binary potential function is defined to encourage them to share the same topic label. Distributional similarity of words is calculated beforehand on a large text corpus. In BIBREF8 , the authors gather so-called lexical relation sets (LR-sets) for word senses described in WordNet. The LR-sets include synonyms, antonyms and adjective-attribute related words. To adapt LR-sets to a specific domain corpus and to remove inappropriate lexical relations, the correlation matrix for word pairs in each LR-set is calculated. This matrix at the first step is used for filtering inappropriate senses, then it is used to modify the initial LDA topic model according to the generalized Polya urn model described in BIBREF9 . The generalized Polya urn model boosts probabilities of related words in word-topic distributions.
Gao and Wen BIBREF10 presented Semantic Similarity-Enhanced Topic Model that accounts for corpus-specific word co-occurrence and word semantic similarity calculated on WordNet paths between corresponding synsets using the generalized Polya urn model. They apply their topic model for categorizing short texts. All above-mentioned approaches on adding knowledge to topic models are limited to single words. Approaches using ngrams in topic models can be subdivided into two groups. The first group of methods tries to create a unified probabilistic model accounting unigrams and phrases. Bigram-based approaches include the Bigram Topic Model BIBREF11 and LDA Collocation Model BIBREF12 . In BIBREF13 the Topical N-Gram Model was proposed to allow the generation of ngrams based on the context. However, all these models are rather complex and hard to compute on real datasets. The second group of methods is based on preliminary extraction of ngrams and their further use in topics generation. Initial studies of this approach used only bigrams BIBREF14 , BIBREF15 . Nokel and Loukachevitch BIBREF16 proposed the LDA-SIM algorithm, which integrates top-ranked ngrams and terms of information-retrieval thesauri into topic models (thesaurus relations were not utilized). They create similarity sets of expressions having the same word components and sum up frequencies of similarity set members if they co-occur in the same text. In this paper we describe the approach to integrate whole manual thesauri into topic models together with multiword expressions. Xie et al. [8] proposed a Markov Random Field regularized LDA model (MRF-LDA), which utilizes the external knowledge to improve the coherence of topic modeling.
What model did Xie et al. propose?
A Markov Random Field regularized LDA model.
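The generalized Polya urn boosting mentioned in the evidence above can be sketched in a few lines. This is an illustrative assumption of the mechanism (the boost weight and data structures are hypothetical, not the authors' implementation): when a word is assigned to a topic, the counts of its related words (e.g. members of the same LR-set) are also incremented by a fraction.

```python
def polya_urn_update(topic_word_counts, topic, word, related, boost=0.3):
    """Increment the sampled word's count for a topic and partially
    boost the counts of its related words (generalized Polya urn)."""
    counts = topic_word_counts[topic]
    counts[word] = counts.get(word, 0.0) + 1.0
    for rel in related.get(word, []):
        counts[rel] = counts.get(rel, 0.0) + boost


# Example: "bank" and "finance" belong to the same similarity set.
topic_word_counts = {0: {}}
related = {"bank": ["finance"]}
polya_urn_update(topic_word_counts, 0, "bank", related)
# topic_word_counts[0] -> {"bank": 1.0, "finance": 0.3}
```

The boost raises the probability of related words under the same topic without forcing them to be exact co-occurrences, which is the soft, topic-dependent behavior the passage contrasts with hard Must-Link constraints.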
null
false
null
Edmund Selous started as a conventional naturalist, but developed a hatred of the killing of animals for scientific study and was a pioneer of bird-watching as a method of scientific study. He was a strong proponent of non-destructive bird-study as opposed to the collection of skins and eggs.
Given this paragraph, what method of scientific study did Edmund Selous champion?
Edmund Selous was a pioneer of bird-watching as a method of scientific study.
null
false
null
Tabarra agitation of 1939 was an event when some 18,000 Shias were jailed during months of March, April, and May 1939 in Lucknow, India. A civil disobedience movement was at once started by Shias following the Congress Government communique of 31 March 1939 (which allowed reciting of previously banned Madhe-Sahaba at public meetings). Sunnis called off their civil disobedience movement but its effect on Shias was far greater than anticipated; they had the authority of two committees (Piggot committee and Allsop committee) and the practice of thirty years in support of their position. Some 1800 Shias courted arrest, including prominent Shia figures such as Syed Ali Zaheer (newly elected MLA from Allahabad-Jaunpur), the Princes of the former royal family of Awadh, Sayed Muhammad Naseer Naseer-ul-Millat the son of Maulana Nasir a respected Shia mujtahid (the eldest son, student and designated successor of Maulana Sayed Nasir Hussain of Abaqati family), Maulana Sayed Kalb-e-Husain and his son Maulana Kalb-e-Abid (both ulema of Nasirabadi family) and the brothers of Raja of Salempur and the Raja of Pirpur, important ML leaders. It was believed that Maulana Nasir himself besides the top ranking ML leaders such as Raja of Mahmudabad and Raja of Pirpur would together court arrest. Maulana Azad, appointed an arbitrator by the parties, called a conference of Shias and Sunnis, but it failed to produce a settlement. Azad conveyed to Sir Haig, the Governor, his view that the communique was 'based on inadequate appreciation of the situation and, therefore, an error of judgement'. But, this statement came just a day before the Barawafat, and a Madhe-Sahaba procession was taken out. The Shias too were allowed a counter-procession, it was disturbed and resulted in a serious riot. Several people died and police had to open fire several times. The following year i.e. 
1940, the district authorities banned the public recitation of Madhe-Sahaba and Tabarra in processions and meetings held in public places.
Can you tell me about Tabarra?
The Tabarra agitation of 1939 was a Shia civil disobedience movement during which around 18,000 Shias were jailed in March, April, and May 1939 in Lucknow, India.
null
false
null
What are some of the causes for the 1929 US stock market crash?
The 1920’s were a booming time for Americans as the stock market was surging due to speculative investing. Stock prices rose to record heights, which drove the public to invest their liquid assets into securities. Billions of dollars were drawn from banks for brokers’ loans to carry margin accounts. People were mortgaging their homes to pour money into stocks. In late 1929, the market eventually overheated and prices began to fall despite efforts by the banks to buy up large blocks of stock to stem the losses. Eventually, the panic was too much and the market crashed on Black Monday, October 28th, when the market closed down 12.8 percent. The sharp decline in the 1929 stock market eventually led to the Great Depression of the 1930’s.
null
false
null
Side A 1. "H2O Gate Blues" – 7:58 (about the Watergate break-in and cover-up) (from Winter in America) 2. "We Beg Your Pardon (Pardon our Analysis)" – 7:52 (about the pardon of Richard Nixon) (from The First Minute of a New Day) 3. "The New Deal" – 3:10 Side B 4. "Jose Campos Torres" – 2:36 (about Jose Campos Torres, a U.S. Army veteran who was arrested and then murdered and tossed into a bayou by two police officers in Houston in 1978, spurring the Moody Park Riot) 5. "The Ghetto Code (Dot Dot Dit Dit Dot Dot Dash)" – 12:57 6. "Bicentennial Blues" – 8:39 (from It's Your World)
What is the length of the song "The New Deal"?
3:10
null
false
119
Our goal is to demonstrate how multitask learning can be successfully applied on the task of sentiment classification of tweets. The particularities of tweets are to be short and informal text spans. The common use of abbreviations, creative language etc., makes the sentiment classification problem challenging. To validate our hypothesis, that learning the tasks jointly can benefit the performance, we propose an experimental setting where there are data from two different twitter sentiment classification problems: a fine-grained and a ternary. We consider the fine-grained task to be our primary task as it is more challenging and obtaining bigger datasets, e.g. by distant supervision, is not straightforward and, hence we report the performance achieved for this task. Ternary and fine-grained sentiment classification were part of the SemEval-2016 “Sentiment Analysis in Twitter” task BIBREF16 . We use the high-quality datasets the challenge organizers released. The dataset for fine-grained classification is split in training, development, development_test and test parts. In the rest, we refer to these splits as train, development and test, where train is composed by the training and the development instances. Table TABREF7 presents an overview of the data. As discussed in BIBREF16 and illustrated in the Table, the fine-grained dataset is highly unbalanced and skewed towards the positive sentiment: only INLINEFORM0 of the training examples are labeled with one of the negative classes. Feature representation We report results using two different feature sets. The first one, dubbed nbow, is a neural bag-of-words that uses text embeddings to generate low-dimensional, dense representations of the tweets. To construct the nbow representation, given the word embeddings dictionary where each word is associated with a vector, we apply the average compositional function that averages the embeddings of the words that compose a tweet. 
Simple compositional functions like average were shown to be robust and efficient in previous work BIBREF17 . Instead of training embeddings from scratch, we use the pre-trained on tweets GloVe embeddings of BIBREF10 . In terms of resources required, using only nbow is efficient as it does not require any domain knowledge. However, previous research on sentiment analysis showed that using extra resources, like sentiment lexicons, can benefit significantly the performance BIBREF3 , BIBREF2 . To validate this and examine at which extent neural networks and multitask learning benefit from such features we evaluate the models using an augmented version of nbow, dubbed nbow+. The feature space of the latter, is augmented using 1,368 extra features consisting mostly of counts of punctuation symbols ('!?#@'), emoticons, elongated words and word membership features in several sentiment lexicons. Due to space limitations, for a complete presentation of these features, we refer the interested reader to BIBREF2 , whose open implementation we used to extract them. Evaluation measure To reproduce the setting of the SemEval challenges BIBREF16 , we optimize our systems using as primary measure the macro-averaged Mean Absolute Error ( INLINEFORM0 ) given by: INLINEFORM1 where INLINEFORM0 is the number of categories, INLINEFORM1 is the set of instances whose true class is INLINEFORM2 , INLINEFORM3 is the true label of the instance INLINEFORM4 and INLINEFORM5 the predicted label. The measure penalizes decisions far from the true ones and is macro-averaged to account for the fact that the data are unbalanced. Complementary to INLINEFORM6 , we report the performance achieved on the micro-averaged INLINEFORM7 measure, which is a commonly used measure for classification. The models To evaluate the multitask learning approach, we compared it with several other models. 
Support Vector Machines (SVMs) are maximum margin classification algorithms that have been shown to achieve competitive performance in several text classification problems BIBREF16 . SVM INLINEFORM0 stands for an SVM with linear kernel and an one-vs-rest approach for the multi-class problem. Also, SVM INLINEFORM1 is an SVM with linear kernel that employs the crammer-singer strategy BIBREF18 for the multi-class problem. Logistic regression (LR) is another type of linear classification method, with probabilistic motivation. Again, we use two types of Logistic Regression depending on the multi-class strategy: LR INLINEFORM2 that uses an one-vs-rest approach and multinomial Logistic Regression also known as the MaxEnt classifier that uses a multinomial criterion. Both SVMs and LRs as discussed above treat the problem as a multi-class one, without considering the ordering of the classes. For these four models, we tuned the hyper-parameter INLINEFORM0 that controls the importance of the L INLINEFORM1 regularization part in the optimization problem with grid-search over INLINEFORM2 using 10-fold cross-validation in the union of the training and development data and then retrained the models with the selected values. Also, to account for the unbalanced classification problem we used class weights to penalize more the errors made on the rare classes. These weights were inversely proportional to the frequency of each class. For the four models we used the implementations of Scikit-learn BIBREF19 . For multitask learning we use the architecture shown in Figure FIGREF2 , which we implemented with Keras BIBREF20 . The embeddings are initialized with the 50-dimensional GloVe embeddings while the output of the biLSTM network is set to dimension 50. The activation function of the hidden layers is the hyperbolic tangent. The weights of the layers were initialized from a uniform distribution, scaled as described in BIBREF21 . 
We used the Root Mean Square Propagation optimization method. We used dropout for regularizing the network. We trained the network using batches of 128 examples as follows: before selecting the batch, we perform a Bernoulli trial with probability INLINEFORM0 to select the task to train for. With probability INLINEFORM1 we pick a batch for the fine-grained sentiment classification problem, while with probability INLINEFORM2 we pick a batch for the ternary problem. As shown in Figure FIGREF2 , the error is backpropagated until the embeddings, that we fine-tune during the learning process. Notice also that the weights of the network until the layer INLINEFORM3 are shared and therefore affected by both tasks. To tune the neural network hyper-parameters we used 5-fold cross validation. We tuned the probability INLINEFORM0 of dropout after the hidden layers INLINEFORM1 and for the biLSTM for INLINEFORM2 , the size of the hidden layer INLINEFORM3 and the probability INLINEFORM4 of the Bernoulli trials from INLINEFORM5 . During training, we monitor the network's performance on the development set and apply early stopping if the performance on the validation set does not improve for 5 consecutive epochs. Experimental results Table TABREF9 illustrates the performance of the models for the different data representations. The upper part of the Table summarizes the performance of the baselines. The entry “Balikas et al.” stands for the winning system of the 2016 edition of the challenge BIBREF2 , which to the best of our knowledge holds the state-of-the-art. Due to the stochasticity of training the biLSTM models, we repeat the experiment 10 times and report the average and the standard deviation of the performance achieved. Several observations can be made from the table. First notice that, overall, the best performance is achieved by the neural network architecture that uses multitask learning. 
This entails that the system makes use of the available resources efficiently and improves the state-of-the-art performance. In conjunction with the fact that we found the optimal probability INLINEFORM0 , this highlights the benefits of multitask learning over single task learning. Furthermore, as described above, the neural network-based models have only access to the training data as the development are hold for early stopping. On the other hand, the baseline systems were retrained on the union of the train and development sets. Hence, even with fewer resources available for training on the fine-grained problem, the neural networks outperform the baselines. We also highlight the positive effect of the additional features that previous research proposed. Adding the features both in the baselines and in the biLSTM-based architectures improves the INLINEFORM1 scores by several points. Lastly, we compare the performance of the baseline systems with the performance of the state-of-the-art system of BIBREF2 . While BIBREF2 uses n-grams (and character-grams) with INLINEFORM0 , the baseline systems (SVMs, LRs) used in this work use the nbow+ representation, that relies on unigrams. Although they perform on par, the competitive performance of nbow highlights the potential of distributed representations for short-text classification. Further, incorporating structure and distributed representations leads to the gains of the biLSTM network, in the multitask and single task setting. Similar observations can be drawn from Figure FIGREF10 that presents the INLINEFORM0 scores. Again, the biLSTM network with multitask learning achieves the best performance. It is also to be noted that although the two evaluation measures are correlated in the sense that the ranking of the models is the same, small differences in the INLINEFORM1 have large effect on the scores of the INLINEFORM2 measure. 
To validate our hypothesis, that learning the tasks jointly can benefit the performance, we propose an experimental setting where there are data from two different twitter sentiment classification problems: a fine-grained and a ternary.
What do the authors do to verify that joint learning tasks can benefit performance?
They propose an experimental setting where there are data from two different twitter sentiment classification problems: a fine-grained and a ternary.
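The nbow representation described in the evidence above (averaging the embeddings of a tweet's words) can be sketched as follows; the toy 3-dimensional embeddings are illustrative stand-ins, not actual GloVe vectors.

```python
def nbow(tweet, embeddings, dim=3):
    """Average the embeddings of the tweet's in-vocabulary words."""
    vectors = [embeddings[w] for w in tweet.split() if w in embeddings]
    if not vectors:
        return [0.0] * dim
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]


# Toy embeddings (real GloVe tweet vectors have 50+ dimensions).
embeddings = {"good": [1.0, 0.0, 2.0], "day": [3.0, 2.0, 0.0]}
vec = nbow("good day", embeddings)
# vec -> [2.0, 1.0, 1.0]
```

As the paper notes, such simple averaging compositions are robust and cheap: they require no domain knowledge beyond the pre-trained embedding dictionary.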
null
false
null
Which sports would be easiest to find success in if you’re not tall: baseball, soccer, basketball, bowling.
Soccer and bowling would be the two easiest sports in this list to succeed in if you’re not tall.
1908.00153
true
null
In this work, we build a novel regression model, based on linguistic, content, behavioral and topic features to detect Arabic Twitter bots to understand the impact of bots in spreading religious hatred in Arabic Twitter space. In particular, we quantitatively code and analyze a representative sample of 450 accounts disseminating hate speech from the dataset constructed in our previous work BIBREF18 , BIBREF19 for bot-like behavior. We compare our assigned bot-likelihood scores to those of Botometer BIBREF14 , a well-known machine-learning-based bot detection tool, and we show that Botometer performs a little above average in detecting Arabic bots. Based on our analysis, we build a predictive regression model and train it on various sets of features and show that our regression model outperforms Botometer's by a significant margin (31 points in Spearman's rho). Finally, we provide a large-scale analysis of predictive features that distinguish bots from humans in terms of characteristics and behaviors within the context of social media. In this work, we build a novel regression model, based on linguistic, content, behavioral and topic features to detect Arabic Twitter bots to understand the impact of bots in spreading religious hatred in Arabic Twitter space.
Do they propose a new model to better detect Arabic bots specifically?
Yes.
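The Spearman's rho used above to compare bot-likelihood scores against Botometer can be computed with the standard rank-difference formula. This simplified sketch assumes no tied scores; real evaluations typically use a ties-aware implementation such as SciPy's `spearmanr`.

```python
def spearman_rho(x, y):
    """Spearman rank correlation via the rank-difference formula.
    Assumes no ties among the scores."""
    n = len(x)

    def ranks(values):
        return {v: i + 1 for i, v in enumerate(sorted(values))}

    rx, ry = ranks(x), ranks(y)
    d_squared = sum((rx[a] - ry[b]) ** 2 for a, b in zip(x, y))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))


agreeing = spearman_rho([0.1, 0.5, 0.9], [0.2, 0.6, 0.8])  # same ranking -> 1.0
opposing = spearman_rho([0.1, 0.5, 0.9], [0.8, 0.6, 0.2])  # reversed ranking -> -1.0
```

A rank correlation is a natural choice here because the two tools emit bot-likelihood scores on different scales; only the ordering of accounts matters.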
null
false
null
In philosophy, praxeology or praxiology (/ˌpræksiˈɒlədʒi/; from Ancient Greek πρᾶξις (praxis) 'deed, action', and -λογία (-logia) 'study of') is the theory of human action, based on the notion that humans engage in purposeful behavior, contrary to reflexive behavior and other unintentional behavior. French social philosopher Alfred Espinas gave the term its modern meaning, and praxeology was developed independently by two principal groups: the Austrian school, led by Ludwig von Mises, and the Polish school, led by Tadeusz Kotarbiński.
What is the name of the major school of praxiology not developed by Ludwig von Mises
The major school of praxiology not developed by Ludwig von Mises is the Polish school, which was led by Tadeusz Kotarbiński.
null
false
null
What is more important in society, freedom or safety?
In our society, freedom and safety are both valued. While many other countries may put safety first, the harsh reality for Americans is that freedom always comes before safety. Despite being equally important, Americans foolishly put their trust in freedom over safety, without realizing the dangers of those freedoms. The 2nd Amendment is a common example of freedom endangering safety. Americans are allowed to own guns for self-protection. However, since the 18th century, guns have changed. Instead of a rifle or shotgun, Americans own AR-15s and other guns that can fire round after round in rapid succession. Security in America has also increased, leaving people to question the need for a personal weapon. School shootings are often a result of the freedom to easily buy and keep guns. Teens who are depressed, lonely, suicidal, or unstable can easily buy guns or use their parents' guns to shoot and kill other students, teachers, or themselves. In late 2022, a six-year-old brought his father's handgun to school and shot his teacher. Yes, the gun was licensed and owned by the father, but why should any US citizen, or any person, feel safe in this country when the 2nd Amendment exists? Should anyone trust their freedoms when it is easier to buy a gun than a car? On the flip side, America's freedom is a gift that many other countries don't give. In 2019, when COVID-19 first emerged in China, the country went on lockdown, and in some cities, still remains so. By the third year of the pandemic, Americans learned of the extreme measures the Chinese government had taken upon its citizens, in the worst cases arresting those who left their homes for any reason. While China's lockdown protocols were way too extreme, Americans over the pandemic chose their freedom to not wear a mask over the safety of themselves and others, again. To be clear, freedom and safety are not opposites, and countries that manage both well are impressive to be sure.
But Americans are unwilling, to a dangerous level, to let go of some or any of their freedoms, even if it will benefit the overall safety of America. Freedom, in America, is valued above all, and could arguably be our most identifiable trait.
null
false
null
Built for $360,000 to replace the former Sanford Field in nearby Central Campus, Sanford Stadium was inaugurated on October 12, 1929, with a 15–0 victory over the Yale Bulldogs football team. Originally constructed to accommodate 30,000 fans, a double deck addition in 1967 added 19,000 more seats, and a 1981 addition to encircle the field added another 19,000 seats. After several more renovations, the stadium now holds more than 93,000 spectators, making it one of the largest collegiate stadiums in the country and the thirteenth largest stadium in the world. The stadium is named for Steadman Sanford, a former president of the university and chancellor of the University System of Georgia. Besides being the home of the Georgia Bulldogs football team, the stadium also serves as an event venue, the location of undergraduate graduation ceremonies, and was used for the medal competition of men's and women's Olympic football (soccer) at the 1996 Summer Olympics.
Given the following paragraph about the Sanford Stadium of the University of Georgia, where does the name of the stadium originate from?
The Sanford Stadium of the University of Georgia is named after Steadman Sanford, who was a former president of the university and chancellor of the University System of Georgia.
null
false
null
Where should I go on my honeymoon?
There are many beautiful places to go on a honeymoon that are romantic and exciting. If you like adventure, Europe or South America have beautiful beaches and great restaurants and new places to see and diverse cultures to expose yourself to. If you prefer a more relaxing vacation, Hawaii is so beautiful and also has tons of hiking and great outdoor adventures to experience. There are many great choices in the United States also, you could take a tour of national parks out west, or go on a wine tour in California, or visit the Great Lakes of Michigan!
null
false
null
What is the capital of Colorado?
Denver is the capital of Colorado.
null
false
null
Think up some openings for beginners to the game of Chess
Chess openings are the most important part of the game. There are some advanced ones, some common ones, and some easy ones. This is a list of openings for beginners to the game: - The King's Pawn Opening: This is a very common opening; you move your king's pawn 2 spaces forward, and so does the opponent. This is followed by developing your knights. - The Fianchetto Defense: You move your kingside knight's pawn forward 1 space, then you develop the kingside knight. You then put your kingside bishop in the square that the pawn used to be in, followed by castling. This opening is common among lower-rated players. - Four Knights Game: This doesn’t happen too much. Both sides start by developing their knights, and pawn play takes over from there. - The French Defense: You move your king's pawn 1 space forward, followed by developing all of your kingside pieces, and finishing by castling.
1909.08041
false
null
We chose the best system based on the dev set, and used that for submitting private test predictions on both FEVER and HotpotQA. As can be seen in Table TABREF8, with the proposed hierarchical system design, the whole pipeline system achieves new state-of-the-art on HotpotQA with large-margin improvements on all the metrics. More specifically, the biggest improvement comes from the EM for the supporting fact which in turn leads to doubling of the joint EM on previous best results. The scores for answer predictions are also higher than all previous best results with $\sim $8 absolute points increase on EM and $\sim $9 absolute points on F1. All the improvements are consistent between test and dev set evaluation. Similarly for FEVER, we showed F1 for evidence, the Label Accuracy, and the FEVER Score (same as benchmark evaluation) for models in Table TABREF9. Our system obtained substantially higher scores than all previously published results with a $\sim $4 and $\sim $3 points absolute improvement on Label Accuracy and FEVER Score. In particular, the system gains 74.62 on the evidence F1, 22 points greater than that of the second system, demonstrating its ability on semantic retrieval. FLOAT SELECTED: Table 1: Results of systems on HOTPOTQA. FLOAT SELECTED: Table 2: Performance of systems on FEVER. “F1” indicates the sentence-level evidence F1 score. “LA” indicates Label Acc. without considering the evidence prediction. “FS”=FEVER Score (Thorne et al., 2018) We chose the best system based on the dev set, and used that for submitting private test predictions on both FEVER and HotpotQA. As can be seen in Table TABREF8, with the proposed hierarchical system design, the whole pipeline system achieves new state-of-the-art on HotpotQA with large-margin improvements on all the metrics. Similarly for FEVER, we showed F1 for evidence, the Label Accuracy, and the FEVER Score (same as benchmark evaluation) for models in Table TABREF9.
FLOAT SELECTED: Table 1: Results of systems on HOTPOTQA. FLOAT SELECTED: Table 2: Performance of systems on FEVER. “F1” indicates the sentence-level evidence F1 score. “LA” indicates Label Acc. without considering the evidence prediction. “FS”=FEVER Score (Thorne et al., 2018)
What baseline approaches do they compare against?
HotpotQA: Yang, Ding, Muppet; FEVER: Hanselowski, Yoneda, Nie
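The answer F1 reported in these benchmark tables is the usual token-overlap F1 of QA evaluation. A minimal sketch, omitting the answer normalization (lowercasing, punctuation stripping) that official evaluator scripts apply:

```python
from collections import Counter


def token_f1(prediction, gold):
    """Token-overlap F1 between a predicted and a gold answer string."""
    pred_tokens, gold_tokens = prediction.split(), gold.split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)


score = token_f1("the cat", "the cat sat")
# precision = 1.0, recall = 2/3 -> F1 = 0.8
```

EM (exact match), the stricter companion metric mentioned in the evidence, simply checks whether the normalized prediction equals the gold answer string.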
null
false
null
Which items are not allowed when passing through TSA security? baby carrier, empty water bottle, dog, box cutters, baseball bat, bike, axe, cooking spray
box cutters, baseball bat, axe, cooking spray
null
false
null
The North Atlantic Treaty Organization (NATO, /ˈneɪtoʊ/; French: Organisation du traité de l'Atlantique nord, OTAN), also called the North Atlantic Alliance, is an intergovernmental military alliance between 31 member states – 29 European and two North American. Established in the aftermath of World War II, the organization implemented the North Atlantic Treaty, signed in Washington, D.C., on 4 April 1949. NATO is a collective security system: its independent member states agree to defend each other against attacks by third parties. During the Cold War, NATO operated as a check on the perceived threat posed by the Soviet Union. The alliance remained in place after the dissolution of the Soviet Union and has been involved in military operations in the Balkans, the Middle East, South Asia, and Africa. The organization's motto is animus in consulendo liber (Latin for "a mind unfettered in deliberation").
Give me a few key facts from the paragraph below
The North Atlantic Treaty Organization (NATO) was established on April 4, 1949. As a military alliance, it serves as a unified defense organization for its member countries. For example, it served as a check against the Soviet Union during the Cold War. NATO has 31 member countries today.
null
false
null
How do you play blackjack?
Blackjack, or Twenty-One, is the most widely played casino banking game in the world. It uses decks of 52 cards and descends from a global family of casino banking games known as Twenty-One, in which players bet against the dealer, aiming for hands as close to 21 as possible without going over.
1911.11698
false
null
Among all available parameters to tune the D2V algorithm released by Gensim, six of them were selected for optimisation BIBREF14. The window_size parameter affects the size of the sliding window used to parse texts. The alpha parameter represents the learning rate of the network. The sample setting allows the model to reduce the importance given to high-frequency words. The dm parameter defines the training used architecture (PV-DM or PV-DBOW). The hs option defines whether hierarchical softmax or negative sampling is used during the training. Finally, the vector_size parameter affects the number of dimensions composing the resulting vector. Among all available parameters to tune the D2V algorithm released by Gensim, six of them were selected for optimisation BIBREF14. The window_size parameter affects the size of the sliding window used to parse texts. The alpha parameter represents the learning rate of the network. The sample setting allows the model to reduce the importance given to high-frequency words. The dm parameter defines the training used architecture (PV-DM or PV-DBOW).
What Doc2Vec architectures other than PV-DBOW have been tried?
The answers are shown as follows: * PV-DM
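A tuning run over the six Doc2Vec parameters named in the evidence can be set up as a simple grid. The value ranges below are illustrative assumptions, not the ones actually searched in the paper:

```python
from itertools import product

# Hypothetical search space for the six tuned parameters.
grid = {
    "window_size": [5, 10, 15],
    "alpha": [0.025, 0.05],
    "sample": [0.0, 1e-5],
    "dm": [0, 1],            # 1 = PV-DM, 0 = PV-DBOW
    "hs": [0, 1],            # 1 = hierarchical softmax, 0 = negative sampling
    "vector_size": [100, 300],
}

configs = [dict(zip(grid, combo)) for combo in product(*grid.values())]
# 3 * 2 * 2 * 2 * 2 * 2 = 96 candidate configurations to evaluate
```

Each resulting dict maps directly onto keyword arguments of Gensim's `Doc2Vec` constructor, so the grid can drive repeated training-and-evaluation runs.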
null
false
null
Why education is so important apart from career opportunity
Only a good and educated person can treat another person as a human being.
null
false
null
What is the oldest ballpark in Major League Baseball?
The oldest ballpark in Major League Baseball is Fenway Park in Boston, which was built in 1912.
null
false
null
Which of the following is a famous food in Dali County Shaanxi Province in China: Waterbasin Lamb, Roujiamo, fried dumplings
Waterbasin Lamb
null
false
null
In the first quarter of 2020, consumers responded to COVID by shopping less frequently (5.6% fewer transactions), and buying more when they did shop (16.5%). As people shifted from eating out to eating at home, net sales at Walmart increased by 10.5%, while online sales rose by 74%. Although Walmart experienced a 5.5% increase in operating expenses, its net income increased by 3.9%. In the third quarter of 2020, ending October 31, Walmart reported revenue of $134.7 billion, representing a year-on-year increase of 5.2 percent. In December 2020, Walmart launched a new service, Carrier Pickup, that allows the customers to schedule a return for a product bought online, in-store, or from a third-party vendor. These services can be initiated on the Walmart App or on the website. In January 2021, Walmart announced that the company is launching a fintech startup, with venture partner Ribbit Capital, to provide financial products for consumers and employees. In February 2021, Walmart acquired technology from Thunder Industries, which uses automation to create digital ads, to expand its online marketing capabilities. In August 2021, Walmart announced it would open its Spark crowdsource delivery to other businesses as a white-label service, competing with Postmates and online food ordering delivery companies. In December 2021, Walmart announced it will participate in the Stephens Investment Conference Wednesday, and the Morgan Stanley Virtual Global Consumer & Retail Conference. In June 2022, Walmart announced it would be acquiring Memomi, an AR optical tech company. In August 2022, Walmart announced it would be acquiring Volt Systems, a vendor management and product tracking software company. Walmart announced it was partnering with Paramount to offer Paramount+ content to its Walmart+ subscribers in a bid to better compete with Amazon. Walmart announced in August 2022 that locations were not going back to 24 hours with most stores now being open between 6am and 11pm. 
In January 2023, Walmart announced it would raise its minimum wage for U.S. hourly workers from $12 to $14 an hour. Approximately 340,000 employees are expected to receive a raise, effective in early March 2023, and Walmart's U.S. average wage is expected to be over $17.50. The company also announced it would be adding additional college degrees and certificates to its Live Better U program.
From the passage below, extract the names of the companies and technologies that Walmart has acquired over the last 3 years.
Walmart acquired Volt Systems in August 2022. Volt Systems focuses on vendor management and product tracking. They also acquired technology from Thunder Industries in February 2021 to create digital advertisements.
null
false
null
Give me five tips for practicing a speech.
(1) Write out the speech by hand. Handwriting can help you to memorize the text. (2) Write out a very short (5-point) outline, with one or two key lines you want to emphasize. If you can remember these, you can fill in the rest. (3) Practice the speech in an unusual situation, such as facing a blank wall or in the middle of an open field. If you can stay on point and deliver the speech there, then you can do it in front of an audience. (4) Record yourself practicing, and listen to the recording. Ideally, use video so you can see your gestures and body language. (5) Practice in an exaggerated manner: speak more loudly than you think you need to; use big, slow hand gestures; and speak more slowly than you plan to.
null
false
null
Jonava railway bridge (lit. Jonavos geležinkelio tiltas, Jonavos geležinis tiltas) is a railway bridge over the Neris in Jonava (Jonava district municipality), the ninth largest city in Lithuania with a population of ca 35,000.
How many cities in Lithuania are larger than Jonava?
Eight cities in Lithuania are larger than Jonava. The passage states that Jonava is the ninth-largest city, meaning there are eight that are larger. (The Neris is the river the railway bridge crosses, not a city.)
null
false
null
Why do people like working from home?
Employees like working from home primarily due to the flexibility it offers. Through remote work, employees can still produce deliverables but do not need to spend time commuting, getting ready in the morning, or rushing to beat traffic. It also allows working parents to better manage childcare responsibilities. On the other hand, it is important to note that some of the drawbacks of remote work include isolation and reduced productivity. It is important for each manager to determine which type of work is best suited for their team to balance employee productivity & flexibility preferences.
1805.12032
false
null
The first metric we report is the reaction type. Recent studies have found that 59% of bitly-URLs on Twitter are shared without ever being read BIBREF11 , and 73% of Reddit posts were voted on without reading the linked article BIBREF12 . Instead, users tend to rely on the commentary added to retweets or the comments section of Reddit-posts for information on the content and its credibility. Faced with this reality, we ask: what kind of reactions do users find when they browse sources of varying credibility? Discourse acts, or speech acts, can be used to identify the use of language within a conversation, e.g., agreement, question, or answer. Recent work by Zhang et al. zhang2017characterizing classified Reddit comments by their primary discourse act (e.g., question, agreement, humor), and further analyzed patterns from these discussions. The second metric we report is reaction speed. A study by Jin et al. jin2013epidemiological found that trusted news stories spread faster than misinformation or rumor; Zeng et al. zeng2016rumors found that tweets which deny rumors had shorter delays than tweets of support. Our second goal is to determine if these trends are maintained for various types of news sources on Twitter and Reddit. To examine whether users react to content from trusted sources differently than from deceptive sources, we measure the reaction delay, which we define as the time elapsed between the moment the link or content was posted/tweeted and the moment that the reaction comment or tweet occurred. We report the cumulative distribution functions (CDFs) for each source type and use Mann Whitney U (MWU) tests to compare whether users respond with a given reaction type with significantly different delays to news sources of different levels of credibility.
How is speed measured?
The answers are shown as follows: * time elapsed between the moment the link or content was posted/tweeted and the moment that the reaction comment or tweet occurred
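The reaction-delay metric defined in the evidence (time elapsed between the original post and the reaction) and the CDFs reported over it can be sketched as follows. The timestamps here are illustrative; a real analysis would feed the per-source delay lists to a Mann-Whitney U test (e.g. scipy.stats.mannwhitneyu):

```python
from datetime import datetime

def reaction_delay(posted_at, reacted_at):
    """Reaction delay in seconds: time elapsed between the moment the
    content was posted/tweeted and the moment the reaction occurred."""
    return (reacted_at - posted_at).total_seconds()

def empirical_cdf(delays):
    """Empirical CDF of delays as sorted (value, fraction <= value) pairs."""
    s = sorted(delays)
    n = len(s)
    return [(v, (i + 1) / n) for i, v in enumerate(s)]

post = datetime(2024, 1, 1, 12, 0, 0)
reactions = [datetime(2024, 1, 1, 12, 0, 30), datetime(2024, 1, 1, 12, 5, 0)]
delays = [reaction_delay(post, r) for r in reactions]
print(delays)                 # [30.0, 300.0]
print(empirical_cdf(delays))  # [(30.0, 0.5), (300.0, 1.0)]
```

Comparing the CDF of delays for trusted sources against that for deceptive sources is what the MWU tests in the paper formalize.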
null
false
null
Give me the top guitar players of all time
Jimi Hendrix, Eddie Van Halen, Eric Clapton, Chuck Berry, Keith Richards, Jimmy Page, B.B. King
null
false
346
Neural Machine Translation (NMT) BIBREF0 , BIBREF1 has recently established itself as a new state-of-the-art in machine translation. We present Nematus, a new toolkit for Neural Machine Translation. Nematus has its roots in the dl4mt-tutorial. We found the codebase of the tutorial to be compact, simple and easy to extend, while also producing high translation quality. These characteristics make it a good starting point for research in NMT. Nematus has been extended to include new functionality based on recent research, and has been used to build top-performing systems to last year's shared translation tasks at WMT BIBREF2 and IWSLT BIBREF3 . Nematus is implemented in Python, and based on the Theano framework BIBREF4 . It implements an attentional encoder–decoder architecture similar to DBLP:journals/corr/BahdanauCB14. Our neural network architecture differs in some aspect from theirs, and we will discuss differences in more detail. We will also describe additional functionality, aimed to enhance usability and performance, which has been implemented in Nematus.
What has Nematus been used to do?
To build top-performing systems to last year's shared translation tasks at WMT and IWSLT.
null
false
null
What did Jamaican-American author Nicola Yoon major in college for her undergraduate degree?
Electrical engineering at Cornell University
null
false
null
Which rock star is known to consume heavy amounts of drugs and it surprising he has live this long?
Keith Richards of the Rolling Stones comes to mind.
null
false
null
What are some of the greatest rock bands of all time?
The Rolling Stones, The Beatles, and Led Zeppelin
1911.03562
false
null
FLOAT SELECTED: Figure 33 The most cited areas of research along with citation statistics split by gender of the first authors of corresponding papers.
Which 3 NLP areas are cited the most?
machine translation, statistical machine, sentiment analysis
1904.11942
false
null
The series of TempEval competitions BIBREF21 , BIBREF22 , BIBREF23 have attracted many research interests in predicting event temporal relations. Early attempts by BIBREF24 , BIBREF21 , BIBREF25 , BIBREF26 only use pair-wise classification models. State-of-the-art local methods, such as ClearTK BIBREF27 , UTTime BIBREF28 , and NavyTime BIBREF29 improve on earlier work by feature engineering with linguistic and syntactic rules. As we mention in Section 2, CAEVO is the current state-of-the-art system for feature-based temporal event relation extraction BIBREF10 . It's widely used as the baseline for evaluating TB-Dense data. We adopt it as our baseline for evaluating CaTeRS and RED datasets. Additionally, several models BramsenDLB2006, ChambersJ2008, DoLuRo12, NingWuRo18, P18-1212 have successfully incorporated global inference to impose global prediction consistency such as temporal transitivity.
What were the traditional linguistic feature-based models?
The answers are shown as follows: * CAEVO
null
false
91
To make the conclusion convincing, we first choose two large-scale datasets that are publicly available, i.e., Chinese-English and English-French. Since English, French, and Chinese all belong to the subject-verb-object (SVO) family, we choose another very different subject-object-verb (SOV) language, Japanese, which might bring some interesting linguistic behaviors in English-Japanese translation. For Chinese-English task, we use WMT17 Chinese-English dataset that consists of $20.6$M sentence pairs. For English-French task, we use WMT14 English-French dataset that comprises $35.5$M sentence pairs. For English-Japanese task, we follow BIBREF17 to use the first two sections of WAT17 English-Japanese dataset that consists of $1.9$M sentence pairs. Following the standard NMT procedure, we adopt the standard byte pair encoding (BPE) BIBREF18 with 32K merge operations for all language pairs. We believe that these datasets are large enough to confirm the rationality and validity of our experimental analyses. We choose the state-of-the-art Transformer BIBREF1 model and the conventional RNN-Search model BIBREF0 as our test bed. We implement the Attribution method based on the Fairseq-py BIBREF19 framework for the above models. All models are trained on the training corpus for 100k steps under the standard settings, which achieve comparable translation results. All the following experiments are conducted on the test dataset, and we estimate the input word importance using the model generated hypotheses. In the following experiments, we compare IG (Attribution) with several black-box methods (i.e., Content, Frequency, Attention) as introduced in Section SECREF8. In Section SECREF21, to ensure that the translation performance decrease attributes to the selected words instead of the perturbation operations, we randomly select the same number of words to perturb (Random), which serves as a baseline. 
Since there is no ranking for content words, we randomly select a set of content words as important words. To avoid the potential bias introduced by randomness (i.e., Random and Content), we repeat the experiments for 10 times and report the averaged results. We calculate the Attention importance in a similar manner as the Attribution, except that the attention scores use a max operation due to the better performance. We evaluate the effectiveness of estimating word importance by the translation performance decrease. More specifically, unlike the usual way, we measure the decrease of translation performance when perturbing a set of important words that are of top-most word importance in a sentence. The more translation performance degrades, the more important the word is. We use the standard BLEU score as the evaluation metric for translation performance. To make the conclusion more convincing, we conduct experiments on different types of synthetic perturbations (Section SECREF21), as well as different NMT architectures and language pairs (Section SECREF27). In addition, we compare with a supervised erasure method, which requires ground-truth translations for scoring word importance (Section SECREF30). In this experiment, we investigate the effectiveness of word importance estimation methods under different synthetic perturbations. Since the perturbation on text is notoriously hard BIBREF20 due to the semantic shifting problem, in this experiment, we investigate three types of perturbations to avoid the potential bias : Deletion perturbation removes the selected words from the input sentence, and it can be regarded as a specific instantiation of sentence compression BIBREF21. Mask perturbation replaces embedding vectors of the selected words with all-zero vectors BIBREF22, which is similar to Deletion perturbation except that it retains the placeholder. 
Grammatical Replacement perturbation replaces a word by another word of the same linguistic role (i.e., POS tags), yielding a sentence that is grammatically correct but semantically nonsensical BIBREF23, BIBREF24, such as “colorless green ideas sleep furiously”. Figure FIGREF19 illustrates the experimental results on Chinese$\Rightarrow $English translation with Transformer. It shows that Attribution method consistently outperforms other methods against different perturbations on a various number of operations. Here the operation number denotes the number of perturbed words in a sentence. Specifically, we can make the following observations. Under three different perturbations, perturbing words of top-most importance leads to lower BLEU scores than Random selected words. It confirms the existence of important words, which have greater impacts on translation performance. Furthermore, perturbing important words identified by Attribution outperforms the Random method by a large margin (more than 4.0 BLEU under 5 operations). Figure FIGREF19 shows that two black-box methods (i.e., Content, Frequency) perform only slightly better than the Random method. Specifically, the Frequency method demonstrates even worse performances under the Mask perturbation. Therefore, linguistic properties (such as POS tags) and the word frequency can only partially help identify the important words, but it is not as accurate as we thought. In the meanwhile, it is intriguing to explore what exact linguistic characteristics these important words reveal, which will be introduced in Section SECREF5. We also evaluate the Attention method, which bases on the encoder-decoder attention scores at the last layer of Transformer. Note that the Attention method is also used to simulate the best black-box method SOCRAT, and the results show that it is more effective than black-box methods and the Random baseline. 
Given the powerful Attention method, Attribution method still achieves best performances under all three perturbations. Furthermore, we find that the gap between Attribution and Attention is notably large (around $1.0+$ BLEU difference). Attention method does not provide as accurate word importance as the Attribution, which exhibits the superiority of gradient-based methods and consists with the conclusion reported in the previous study BIBREF8. In addition, as shown in Figure FIGREF19, the perturbation effectiveness of Deletion, Mask, and Grammatical Replacement varies from strong to weak. In the following experiments, we choose Mask as the representative perturbation operation for its moderate perturbation performance, based on which we compare two most effective methods Attribution and Attention. We validate the effectiveness of the proposed approach using a different NMT architecture RNN-Search on the Chinese$\Rightarrow $English translation task. The results are shown in Figure FIGREF20(a). We observe that the Attribution method still outperforms both Attention method and Random method by a decent margin. By comparing to Transformer, the results also reveal that the RNN-Search model is less robust to these perturbations. To be specific, under the setting of five operations and Attribution method, Transformer shows a relative decrease of $55\%$ on BLEU scores while the decline of RNN-Search model is $64\%$. We further conduct experiments on another two language pairs (i.e., English$\Rightarrow $French, English$\Rightarrow $Japanese in Figures FIGREF20(b, c)) as well as the reverse directions (Figures FIGREF20(d, e, f)) using Transformer under the Mask perturbation. In all the cases, Attribution shows the best performance while Random achieves the worst result. More specifically, Attribution method shows similar translation quality degradation on all three language-pairs, which declines to around the half of the original BLEU score with five operations. 
There exists another straightforward method, Erasure BIBREF7, BIBREF22, BIBREF25, which directly evaluates the word importance by measuring the translation performance degradation of each word. Specifically, it erases (i.e., Mask) one word from the input sentence each time and uses the BLEU score changes to denote the word importance (after normalization). In Figure FIGREF31, we compare Erasure method with Attribution method under the Mask perturbation. The results show that Attribution method is less effective than Erasure method when only one word is perturbed. But it outperforms the Erasure method when perturbing 2 or more words. The results reveal that the importance calculated by erasing only one word cannot be generalized to multiple-words scenarios very well. Besides, the Erasure method is a supervised method which requires ground-truth references, and finding a better words combination is computation infeasible when erasing multiple words. We close this section by pointing out that our gradient-based method consistently outperforms its black-box counterparts in various settings, demonstrating the effectiveness and universality of exploiting gradients for estimating word importance. In addition, our approach is on par with or even outperforms the supervised erasure method (on multiple-word perturbations). This is encouraging since our approach does not require any external resource and is fully unsupervised.
Summarize the main content of the experiment.
To make the conclusion more convincing, they conduct experiments on different types of synthetic perturbations (Section 4.1), as well as different NMT architectures and language pairs (Section 4.2). In addition, they compare with a supervised erasure method, which requires ground-truth translations for scoring word importance (Section 4.3).
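A minimal sketch of the Mask perturbation protocol used in these experiments: rank words by an importance score, replace the top-k with a placeholder, then re-translate to measure the BLEU drop. The scores and token-level masking below are hypothetical stand-ins; the paper computes importance with gradient-based attribution and masks embedding vectors with all-zero vectors rather than surface tokens:

```python
def mask_top_k(tokens, importance, k):
    """Replace the k most important tokens with a <mask> placeholder,
    mirroring the Mask perturbation (value removed, placeholder retained)."""
    ranked = sorted(range(len(tokens)), key=lambda i: importance[i], reverse=True)
    masked = list(tokens)
    for i in ranked[:k]:
        masked[i] = "<mask>"
    return masked

tokens = ["the", "cat", "sat", "on", "the", "mat"]
importance = [0.05, 0.40, 0.30, 0.05, 0.05, 0.15]  # e.g. gradient-based scores
print(mask_top_k(tokens, importance, 2))
# ['the', '<mask>', '<mask>', 'on', 'the', 'mat']
```

The experimental logic is then: the larger the BLEU drop after translating the masked sentence, the more important the masked words, with randomly chosen words serving as the baseline.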
null
false
null
What is the difference between American Football and European Football?
Prior to American Football coming into existence, European Football was played throughout the world as an athletic sport in which players kick a round ball into an opponent's goal. European Football is typically played on a grass surface called a pitch. The sport was also known as soccer before most of the world settled on the name football, which is the source of the common confusion between European Football and American Football. American Football, created much later than European Football, is played with a prolate-spheroid ball (a "pigskin") on a grass surface much like European Football's. Instead of using their feet, American Football players pass the ball to one another through the air or hand it off, attempting to carry it into the opponent's end zone. When European Football eventually came to the United States, the moniker soccer remained with the sport. This is why European Football is the same as American soccer, while American Football is not the same as European Football.
null
false
369
A valuable source of information in matching user accounts, one used in traditional stylometry tasks, is the way in which people use language. A speaker or writer's choice of words depends on many factors, including the rules of grammar, message content and stylistic considerations. There is a great variety of possible ways to compare the language patterns of two people. However, first we need a method for modelling the language of a given user. Below we explain how this is done. Most statistical language models do not attempt to explicitly model the complete language generation process, but rather seek a compact model that adequately explains the observed linguistic data. Probabilistic models of language assign probabilities to word sequences $w_1$ . . . $w_\ell $ , and as such the likelihood of a corpus can be used to fit model parameters as well as characterize model performance. N-gram language modelling BIBREF18 , BIBREF19 , BIBREF20 is an effective technique that treats words as samples drawn from a distribution conditioned on other words, usually the immediately preceding $n-1$ words, in order to capture strong local word dependencies. The probability of a sequence of $\ell $ words, written compactly as $w_1^\ell $ is $\Pr (w_1^\ell )$ and can be factored exactly as $\Pr (w_1^\ell ) = \Pr (w_1) \prod _{i=2}^\ell \Pr (w_i|w_1^{i-1})$ However, parameter estimation in this full model is intractable, as the number of possible word combinations grows exponentially with sequence length. N-gram models address this with the approximation $\tilde{\Pr }(w_i|w_{i-n+1}^{i-1}) \approx \Pr (w_i|w_1^{i-1})$ using only the preceding $n-1$ words for context. A bigram model ( $n=2$ ) uses the preceding word for context, while a unigram model ( $n=1$ ) does not use any context. For this work, we used unigram models in Python, utilizing some components from NLTK BIBREF21 . Probability distributions were calculated using Witten-Bell smoothing BIBREF19 . 
Rather than assigning word $w_i$ the maximum likelihood probability estimate $p_i = \frac{c_i}{N}$ , where $c_i$ is the number of observations of word $w_i$ and $N$ is the total number of observed tokens, Witten-Bell smoothing discounts the probability of observed words to $p_i^* = \frac{c_i}{N+T}$ where $T$ is the total number of observed word types. The remaining $Z$ words in the vocabulary that are unobserved (i.e. where $c_i = 0$ ) are given by $p_i^* = \frac{T}{Z(N+T)}$ . We experimented with two methods for measuring the similarity between n-gram language models. In particular, we tried approaches based on KL-divergence and perplexity BIBREF22 . We also tried two methods that do not rely on n-gram models, cosine similarity of TF-IDF vectors BIBREF23 , as well as our own novel method, called the confusion model. The performance of each method is shown in Table 2 . Note that all methods outperform the random baseline in both accuracy and average rank by a great margin. Below we explain each of these metrics. The first metric used for measuring the distance between the language of two user accounts is the Kullback-Leibler (KL) divergence BIBREF22 between the unigram probability distribution of the corpus corresponding to the two accounts. The KL-divergence provides an asymmetric measure of dissimilarity between two probability distribution functions $p$ and $q$ and is given by: $KL(p||q) = \int p(x)ln\frac{p(x)}{q(x)}$ We can modify the equation to prove a symmetric distance between distributions: $KL_{2}(p||q) = KL(p||q)+KL(q||p)$ For this method, the similarity metric is the perplexity BIBREF22 of the unigram language model generated from one account, $p$ and evaluated on another account, $q$ . Perplexity is given as: $PP(p,q) = 2^{H(p,q)}$ where $H(p,q)$ is the cross-entropy BIBREF22 between distributions of the two accounts $p$ and $q$ . More similar models lead to smaller perplexity. 
As with KL-divergence, we can make perplexity symmetric: $PP_{2}(p,q) = PP(p,q)+PP(q,p)$ This method outperformed the KL-divergence method in terms of average rank but not accuracy (see Table 2 ). Perhaps the relatively low accuracies of perplexity and KL-divergence measures should not be too surprising. These measures are most sensitive to the variations in frequencies of most common words. For instance, in its most straightforward implementation, the KL-divergence measure would be highly sensitive to the frequency of the word “the". Although this problem might be mitigated by the removal of stop words and applying topic modelling to the texts, we believe that this issue is more nuanced than that. Different social media (such as Twitter and Facebook) are used by people for different purposes, and thus Twitter and Facebook entries by the same person are likely to be thematically different. So it is likely that straightforward comparison of language models would be inefficient for this task. One possible solution for this problem is to look at users' language models not in isolation, but in comparison to the languages models of everyone else. In other words, identify features of a particular language model that are characteristic to its corresponding user, and then use these features to estimate similarity between different accounts. This is a task that Term Frequency-Inverse Document Frequency, or TF-IDF, combined with cosine similarity, can manage. TF-IDF is a method of converting text into numbers so that it can be represented meaningfully by a vector BIBREF23 . TF-IDF is the product of two statistics, TF or Term Frequency and IDF or Inverse Document Frequency. Term Frequency measures the number of times a term (word) occurs in a document. Since each document will be of different size, we need to normalize the document based on its size. We do this by dividing the Term Frequency by the total number of terms. 
TF considers all terms as equally important, however, certain terms that occur too frequently should have little effect (for example, the term “the"). And conversely, terms that occur less in a document can be more relevant. Therefore, in order to weigh down the effects of the terms that occur too frequently and weigh up the effects of less frequently occurring terms, an Inverse Document Frequency factor is incorporated which diminishes the weight of terms that occur very frequently in the document set and increases the weight of terms that occur rarely. Generally speaking, the Inverse Document Frequency is a measure of how much information a word provides, that is, whether the term is common or rare across all documents. Using TF-IDF, we derive a vector from the corpus of each account. We measure the similarity between two accounts using cosine similarity: $Similarity(d1,d2) = \frac{d1 \cdot d2}{||d1||\times ||d2||}$ Here, $d1 \cdot d2$ is the dot product of two documents, and $||d1||\times ||d2||$ is the product of the magnitude of the two documents. Using TD-IDF and cosine similarity, we achieved significantly better results than the last two methods, with an accuracy of $0.21$ and average rank of 999. TF-IDF can be thought of as a heuristic measure of the extent to which different words are characteristic of a user. We came up with a new, theoretically motivated measure of “being characteristic" for words. We considered the following setup : The whole corpus of the $11,224$ Twitter and Facebook accounts was treated as one long string; For each token in the string, we know the user who produced it. Imagine that we removed this information and are now making a guess as to who the user was. 
This will give us a probability distribution over all users; Now imagine that we are making a number of the following samples: randomly selecting a word from the string, taking the true user, $TU$ for this word and a guessed user, $GU$ from correspondent probability distribution. Intuitively, the more often a particular pair, $TU=U_{1}, GU=U_{2}$ appear together, the stronger is the similarity between $U_{1}$ and $U_{2}$ ; We then use mutual information to measure the strength of association. In this case, it will be the mutual information BIBREF22 between random variables, $TU=U_{1}$ and $GU=U_{2}$ . This mutual information turns out to be proportional to the probabilities of $U_{1}$ and $U_{2}$ in the dataset, which is undesirable for a similarity measure. To correct for this, we divide it by the probabilities of $U_{1}$ and $U_{2}$ ; We call this model the confusion model, as it evaluated the probability that $U_{1}$ will be confused for $U_{2}$ on the basis of a single word. The expression for the similarity value according to the model is $S\times log(S)$ , where $S$ is: $S=\sum _{w} p(w)p(U_{1}|w)p(U_{2}|w)$ Note that if $U_{1}=U_{2}$ , the words contributing most to the sum will be ordered by their “degree of being characteristic". The values, $p(w)$ and $p(u|w)$ have to be estimated from the corpus. To do that, we assumed that the corpus was produced using the following auxiliary model: For each token, a user is selected from a set of users by multinomial distribution; A word is selected from a multinomial distribution of words for this user to produce the token. We used Dirichlet distributions BIBREF24 as priors over multinomials. This method outperforms all other methods with an accuracy of $0.27$ and average rank of 859. We experimented with two methods for measuring the similarity between ngram language models. In particular, we tried approaches based on KL-divergence and perplexity [5]. 
We also tried two methods that do not rely on n-gram models: cosine similarity of TF-IDF vectors [17], as well as our own novel method, called the confusion model. The performance of each method is shown in Table 2. Note that all methods outperform the random baseline in both accuracy and average rank by a large margin. Below we explain each of these metrics.
Do all methods they experimented with outperform the random baseline in average rank?
Yes.
null
false
null
The women's freestyle 48 kilograms is a competition featured at the 2017 World Wrestling Championships, and was held in Paris, France on 24 August 2017.
What is Women's freestyle 48 kg competition featured in 2017 World Wrestling Championships?
The women's freestyle 48 kilograms is a competition featured at the 2017 World Wrestling Championships, and was held in Paris, France on 24 August 2017. This freestyle wrestling competition consisted of a single-elimination tournament, with a repechage used to determine the winners of two bronze medals.
null
false
null
In what key do most car horns honk?
F
null
false
141
We report and discuss the results of our two tasks in four subsections. The first and second subsections include the effect of local representation and distant representation, respectively. The impact of CVT is explained in the third subsection. The last subsection presents a comparison of our model and all the baselines. Moreover, we also conduct paired t-tests to investigate the significance of the improvement from each contribution, as shown in Section SECREF9.
What are the main content of the Results and discussions section?
The first and second subsections include the effect of local representation and distant representation, respectively. The impact of CVT is explained in the third subsection. The last subsection presents a comparison of their model and all the baselines. Moreover, they also conduct paired t-tests to investigate the significance of the improvement from each contribution.
null
false
44
In this paper, we presented a novel semi-supervised pipeline along with a novel dataset for identification of complaint-based posts in the transport domain. The proposed methodology can be expanded to other fields by altering the lexicons used for the creation of information cues. There are limitations to this analysis; we do not use neural networks, which mandate a large volume of data. In the future, we aim to identify demographic features for identification of complaint-based posts on social media platforms.
What is the novel dataset presented by authors used for?
It is used for identification of complaint based posts in the transport domain.
1712.03547
false
null
FLOAT SELECTED: Table 1: Results on test data. The proposed method significantly improves interpretability while maintaining comparable performance on KG tasks (Section 4.3).
When they say "comparable performance", how much of a performance drop do these new embeddings result in?
Performance was comparable, with the proposed method coming quite close to, and sometimes exceeding, the performance of the baseline method.
1603.01417
true
null
We have proposed new modules for the DMN framework to achieve strong results without supervision of supporting facts. These improvements include the input fusion layer to allow interactions between input facts and a novel attention-based GRU that allows for logical reasoning over ordered inputs. Our resulting model obtains state-of-the-art results on both the VQA dataset and the bAbI-10k text question-answering dataset, proving the framework can be generalized across input domains.
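The attention-based GRU mentioned above can be illustrated by its gating interpolation over ordered facts, $h_i = g_i\,\tilde{h}_i + (1-g_i)\,h_{i-1}$. The sketch below is a deliberate simplification: the GRU candidate $\tilde{h}_i$ is replaced by the fact vector itself, and the attention gates are given rather than computed from the question and memory as in the full model.

```python
def attention_gru(facts, gates, h0):
    """Episodic pass over ordered facts: each step interpolates between
    the previous state and a candidate, weighted by an attention gate,
    so unattended facts (gate near 0) leave the state unchanged."""
    h = list(h0)
    for fact, g in zip(facts, gates):
        # candidate is the raw fact vector in this simplified sketch
        h = [g * f + (1.0 - g) * hp for f, hp in zip(fact, h)]
    return h

facts = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
gates = [1.0, 0.0, 0.0]          # attend only to the first fact
state = attention_gru(facts, gates, [0.0, 0.0])
```

Because the gate, not a learned update gate, decides what enters the state, the final state reflects only the attended facts in order, which is what permits reasoning over ordered inputs.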
Does the DMN+ model establish state-of-the-art ?
Yes.
1608.06378
false
null
We propose a listening comprehension model for the task defined above, the Attention-based Multi-hop Recurrent Neural Network (AMRNN) framework, and show that this model is able to perform reasonably well for the task. In the proposed approach, the audio of the stories is first transcribed into text by ASR, and the proposed model is developed to process the transcriptions for selecting the correct answer out of 4 choices given the question. The initial experiments showed that the proposed model achieves encouraging scores on the TOEFL listening comprehension test. The attention mechanism proposed in this paper can be applied at either the word or sentence level. We found that sentence-level attention achieved better results on the manual transcriptions without ASR errors, but word-level attention outperformed sentence-level attention on ASR transcriptions with errors.
What approach does this work propose for the new task?
The answers are shown as follows: * We propose a listening comprehension model for the task defined above, the Attention-based Multi-hop Recurrent Neural Network (AMRNN) framework, and show that this model is able to perform reasonably well for the task. In the proposed approach, the audio of the stories is first transcribed into text by ASR, and the proposed model is developed to process the transcriptions for selecting the correct answer out of 4 choices given the question.
null
false
null
The Adecco Group is a Swiss–French company based in Zurich, Switzerland. It is the world's second-largest Human Resources provider and temporary staffing firm, and a Fortune Global 500 company. The Adecco Group's global headquarters in Zurich has been located in Bellerivestrasse since 2017. The company directly employs 700,000 people a day, and with the expansion of its operations to China, the number of people working under it totals up to 3.5 million.
What is the company Adecco?
The Adecco Group is a Swiss–French company based in Zurich, Switzerland. It is the world's second-largest Human Resources provider and temporary staffing firm, and a Fortune Global 500 company.
null
false
369
Stylometry is defined as "the statistical analysis of variations in literary style between one writer or genre and another". It is a centuries-old practice, dating back to the early Renaissance. It is most often used to attribute authorship to disputed or anonymous documents. Stylometry techniques have also successfully been applied to other, non-linguistic fields, such as paintings and music. The main principles of stylometry were compiled and laid out by the philosopher Wincenty Lutosławski in 1890 in his work "Principes de stylométrie" BIBREF0 . Today, there are millions of users with accounts and profiles on many different social media and networking sites. It is not uncommon for users to have multiple accounts on different social media and networking sites. With so many networking, emailing, and photo-sharing sites on the Web, a user often accumulates an abundance of account profiles. There is an increasing focus from the academic and business worlds on aggregating user information across different sites, allowing for the development of more complete user profiles. There currently exist several businesses that focus on this task BIBREF1 , BIBREF2 , BIBREF3 . These businesses use the aggregate profiles for advertising, background checks or customer-service-related tasks. Moreover, profile matching across social networks can assist the growing field of social media rumor detection BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , since many malicious rumors are spread on different social media platforms by the same people, using different accounts and usernames. Motivated by traditional stylometry and the growing interest in matching user accounts across Internet services, we created models for Digital Stylometry, which fuses traditional stylometry techniques with big-data-driven social informatics methods commonly used in analyzing social networks. 
Our models use linguistic and temporal activity patterns of users on different accounts to match accounts belonging to the same person. We evaluated our models on $11,224$ accounts belonging to $5,612$ distinct users on two of the largest social media networks, Twitter and Facebook. The only information used in our models was the time and the linguistic content of posts by the users. We intentionally did not use any other information, especially the potentially personally identifiable information that was explicitly provided by the user, such as the screen name, birthday or location. This is in accordance with traditional stylometry techniques, since people could misstate, omit, or lie about this information. Also, we wanted to show that there are implicit clues about the identities of users in the content (language) and context (time) of the users' interactions with social networks that can be used to link their accounts across different services. Other than the obvious technical goal, the purpose of this paper is to shed light on the relative ease with which seemingly innocuous information can be used to track users across social networks, even when they sign up for different services using completely different account and profile information (such as name and birthday). This paper is as much a technical contribution as it is a warning to users who increasingly share a large part of their private lives on these services. The rest of this paper is structured as follows. In the next sections we review related work on linking profiles, followed by a description of our data collection and annotation efforts. After that, we discuss the linguistic, temporal and combined temporal-linguistic models developed for linking user profiles. Finally, we summarize our findings and contributions and discuss possible paths for future work. 
What are the two social media networks used by the authors when developed models?
Twitter and Facebook.
1708.05873
false
null
We assess the optimal number of topics that need to be specified for the STM analysis. We follow the recommendations of the original STM paper and focus on exclusivity and semantic coherence measures. BIBREF5 propose a semantic coherence measure, which is closely related to the pointwise mutual information measure posited by BIBREF6 to evaluate topic quality. BIBREF5 show that semantic coherence corresponds to expert judgments and more general human judgments in Amazon's Mechanical Turk experiments. Exclusivity scores for each topic follow BIBREF7 . Highly frequent words in a given topic that do not appear very often in other topics are viewed as making that topic exclusive. Cohesive and exclusive topics are more semantically useful. Following BIBREF8 we generate a set of candidate models ranging between 3 and 50 topics. We then plot the exclusivity and semantic coherence (numbers closer to 0 indicate higher coherence), with a linear regression overlaid (Figure FIGREF3 ). Models above the regression line have a “better” exclusivity-semantic coherence trade-off. We select the 16-topic model, which has the largest positive residual in the regression fit, and provides higher exclusivity at the same level of semantic coherence. The topic quality is usually evaluated by the highest-probability words, which are presented in Figure FIGREF4 .
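The residual-based selection rule described above (fit a line of exclusivity against semantic coherence, keep the candidate with the largest positive residual) can be sketched as follows. The candidate scores below are invented for illustration; the actual analysis would be run with an STM implementation such as the R stm package.

```python
def pick_k_by_residual(points):
    """points: {k: (semantic_coherence, exclusivity)}.

    Fits exclusivity ~ coherence by ordinary least squares and returns
    the candidate k with the largest positive residual, i.e. the model
    with the best exclusivity at its level of coherence."""
    ks = list(points)
    xs = [points[k][0] for k in ks]
    ys = [points[k][1] for k in ks]
    n = len(ks)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    residuals = {k: y - (slope * x + intercept)
                 for k, x, y in zip(ks, xs, ys)}
    return max(residuals, key=residuals.get)

# hypothetical (coherence, exclusivity) scores for candidate topic counts
candidates = {8: (-45.0, 9.1), 12: (-52.0, 9.4), 16: (-55.0, 9.9), 24: (-63.0, 9.8)}
best_k = pick_k_by_residual(candidates)
```

With these toy numbers the 16-topic candidate sits furthest above the regression line, mirroring the paper's choice of the model with the largest positive residual.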
How are the main international development topics that states raise identified?
They focus on exclusivity and semantic coherence measures: highly frequent words in a given topic that do not appear very often in other topics are viewed as making that topic exclusive. They select the 16-topic model, which has the largest positive residual in the regression fit and provides higher exclusivity at the same level of semantic coherence.
null
false
null
The Summer Olympic Games (French: Jeux olympiques d'été), also known as the Games of the Olympiad, and often referred to as the Summer Olympics, is a major international multi-sport event normally held once every four years. The inaugural Games took place in 1896 in Athens, Greece, and the most recent Games were held in 2021 in Tokyo, Japan. The International Olympic Committee (IOC) is responsible for organising the Games and for overseeing the host city's preparations. The tradition of awarding medals began in 1904; in each Olympic event, gold medals are awarded for first place, silver medals for second place, and bronze medals for third place. The Winter Olympic Games were created out of the success of the Summer Olympic Games, which are regarded as the largest and most prestigious multi-sport international event in the world.
Extract all the place names from the paragraph. Different places should be separated by comma.
Athens, Greece, Tokyo, Japan
null
false
null
The All India Carrom Federation (AICF) (Hindi: अखिल भारतीय कैरम फेडरेशन) is India's national sport federation for the indigenous game of carrom, sanctioning six to seven national-level tournaments per year. It is India's representative body in the International Carrom Federation. The AICF has 15 regional and 28 state subnational affiliate institutions, the largest of which is the Maharashtra Carrom Association, further subdivided into local organisations.
Tell me some important points about AICF from the given passage.
1. India's national sports federation for the indigenous game of carrom, the All India Carrom Federation (AICF), sanctions six to seven national-level tournaments per year.
2. It serves as India's representative body in the International Carrom Federation.
3. The AICF has 15 regional and 28 state affiliate institutions, the largest of which is the Maharashtra Carrom Association, which is further subdivided into local organisations.
null
false
null
Novak Djokovic (Serbian Cyrillic: Новак Ђоковић, romanized: Novak Đoković, pronounced [nôʋaːk dʑôːkoʋitɕ] (listen); born 22 May 1987) is a Serbian professional tennis player. He is currently ranked the world No. 1 player in men's singles by the Association of Tennis Professionals (ATP), and has held the top position for a record total 381 weeks, finishing as the year-end No. 1 an Open Era record seven times. He has won a total of 93 singles titles on the men's ATP Tour with titles comprising a joint-record 22 majors, a record ten Australian Opens, a record 38 Masters, a joint-record six Tour Finals, and a record 66 Big Titles. Djokovic has completed a non-calendar year Grand Slam in singles, becoming the only man in tennis history to be the reigning champion of the four majors at once across three different surfaces. He is also the first man in the Open Era to achieve a double Career Grand Slam in singles and the only player to complete the career Golden Masters in singles by winning all of the nine Masters tournaments, which he has done twice.
Given this paragraph about Novak Djokovic, how many majors titles does he currently hold?
22
null
false
null
Which of these are Satellite and Planets: Hydra, Venus, Titan, Kerberos, Pluto, Jupiter, Moon
Satellites: Kerberos, Hydra, Moon, Titan
Planets: Pluto, Jupiter, Venus
null
false
null
The Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give.
How many people are needed to perform the Turing test?
You need two people: one serves as the human conversation partner, and the other is the evaluator, who judges the text-only conversations between the human and the machine, knowing that one of the two partners is a machine but not which one.
1807.03367
true
null
We crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk). We use the MTurk interface of ParlAI BIBREF6 to render 360 images via WebGL and dynamically display neighborhood maps with an HTML5 canvas. Detailed task instructions, which were also given to our workers before they started their task, are shown in Appendix SECREF15 . We paired Turkers at random and let them alternate between the tourist and guide role across different HITs. We crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk).
Did the authors use crowdsourcing platforms?
Yes.
null
false
null
Which TV show from the 80s and 90s features a bar owned by a former baseball player?
Cheers
2001.07209
false
null
Table TABREF2 specifies the formulation of each model. Note that we adopt a parsimonious design principle in our modelling: both Centroid and Naïve Bayes are parameter-free models, $k$NN only depends on the choice of $k$, and KDE uses a single bandwidth parameter $h$. A Centroid model summarizes each set of seed words by its expected vector in embedding space, and classifies concepts into the class of closest expected embedding in Euclidean distance following a softmax rule; A Naïve Bayes model considers both mean and variance, under the assumption of independence among embedding dimensions, by fitting a normal distribution with mean vector and diagonal covariance matrix to the set of seed words of each class;
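One plausible reading of the Centroid model described above can be sketched directly: each class is summarized by the mean of its seed-word embeddings, and a concept is scored by a softmax over negative Euclidean distances to the class centroids. The 2-d "embeddings" here are invented toys; the real model operates on pretrained word embeddings.

```python
import math

def centroid_classify(seed_embeddings, concept):
    """Centroid model sketch: softmax over negative distances to
    per-class mean embeddings of the seed words."""
    centroids = {
        label: [sum(dim) / len(vecs) for dim in zip(*vecs)]
        for label, vecs in seed_embeddings.items()
    }
    scores = {label: -math.dist(concept, c) for label, c in centroids.items()}
    z = sum(math.exp(s) for s in scores.values())
    return {label: math.exp(s) / z for label, s in scores.items()}

# toy 2-d "embeddings" of seed words for two classes
seeds = {"animal": [[1.0, 0.0], [0.8, 0.2]],
         "tool":   [[-1.0, 0.1], [-0.9, -0.1]]}
probs = centroid_classify(seeds, [0.9, 0.1])
```

The model is parameter-free in the sense the evidence describes: the only quantities used are the seed-word means themselves.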
How does the parameter-free model work?
The answers are shown as follows: * A Centroid model summarizes each set of seed words by its expected vector in embedding space, and classifies concepts into the class of closest expected embedding in Euclidean distance following a softmax rule; * A Naïve Bayes model considers both mean and variance, under the assumption of independence among embedding dimensions, by fitting a normal distribution with mean vector and diagonal covariance matrix to the set of seed words of each class;
1806.02908
false
null
We used four classification algorithms: 1) Logistic regression, which is conventionally used in sentiment classification. The other three algorithms, which are relatively new and have shown great results on sentiment-classification problems, are: 2) Naïve Bayes with SVM (NBSVM), 3) Extreme Gradient Boosting (XGBoost), and 4) the FastText algorithm with Bidirectional LSTM (FastText-BiLSTM).
What state of the art models are used in the experiments?
The answers are shown as follows: * 2) Naïve Bayes with SVM (NBSVM) * 3) Extreme Gradient Boosting (XGBoost) * 4) FastText algorithm with Bidirectional LSTM (FastText-BiLSTM)