Dataset schema (one row per question; long text fields appear flattened below, with `|` separating fields):

| Column | Type | Notes |
|---|---|---|
| paper_id | string | length 10; nullable (⌀) |
| yes_no | bool | 2 classes |
| paper_index | int64 | 0 to 519; nullable (⌀) |
| evidence | string | 0 to 37.7k characters |
| question | string | 4 to 11.7k characters |
| answer | string | 1 to 26k characters |
null | false | 99 |
In recent years there has been increasing interest in the issue of disinformation spreading on online social media. Global concern over false (or "fake") news as a threat to modern democracies has been raised frequently, ever since the 2016 US Presidential elections, in correspondence with events of political relevance, where the proliferation of manipulated and low-credibility content attempts to drive and influence people's opinions BIBREF0, BIBREF1, BIBREF2, BIBREF3.
Researchers have highlighted several drivers for the diffusion of this malicious phenomenon, including human factors (confirmation bias BIBREF4, naive realism BIBREF5), algorithmic biases (the filter bubble effect BIBREF0), the presence of deceptive agents on social platforms (bots and trolls BIBREF6) and, lastly, the formation of echo chambers BIBREF7, where people polarize their opinions as they are insulated from contrary perspectives.
The problem of automatically detecting online disinformation news has typically been formulated as a binary classification task (i.e., credible vs non-credible articles) and tackled with a variety of techniques, based on traditional machine learning and/or deep learning, which mainly differ in the dataset and the features they employ to perform the classification. We may distinguish three approaches: those built on content-based features, those based on features extracted from the social context, and those which combine both aspects. A few main challenges hinder the task, namely the impossibility of manually verifying all news items, the lack of gold-standard datasets and the adversarial setting in which malicious content is created BIBREF3, BIBREF6.
In this work we follow the direction pointed out in a few recent contributions on the diffusion of disinformation compared to traditional and objective information. These have shown that false news spread faster and deeper than true news BIBREF8, and that social bots and echo chambers play an important role in the diffusion of malicious content BIBREF6, BIBREF7. Therefore we focus on the analysis of spreading patterns which naturally arise on social platforms as a consequence of multiple interactions between users, due to the increasing trend in online sharing of news BIBREF0.
A deep learning framework for the detection of fake news cascades is provided in BIBREF9, where the authors refer to BIBREF8 in order to collect Twitter cascades pertaining to verified false and true rumors. They employ geometric deep learning, a novel paradigm for graph-based structures, to classify cascades based on four categories of features: user profile, user activity, network and spreading, and content. They also observe that a few hours of propagation are sufficient to distinguish false news from true news with high accuracy. Diffusion cascades on Weibo and Twitter are analyzed in BIBREF10, where the authors focus on different topological properties, such as the number of hops from the source or the heterogeneity of the network, to show that fake news shapes diffusion networks that are highly different from those of credible news, even at early stages of propagation.
In this work, we consider the results of BIBREF11 as our baseline. The authors use off-the-shelf machine learning classifiers to accurately classify news articles leveraging Twitter diffusion networks. To this aim, they consider a set of basic features which can be qualitatively interpreted w.r.t. the social behavior of users sharing credible vs non-credible information. Their methodology is overall in accordance with BIBREF12, where the authors successfully detect Twitter astroturfing content, i.e., political campaigns disguised as spontaneous grassroots movements, with a machine learning framework based on network features.
In this paper, we propose a classification framework based on a multi-layer formulation of Twitter diffusion networks. For each article we disentangle different social interactions on Twitter, namely tweets, retweets, mentions, replies and quotes, to accordingly build a diffusion network composed of multiple layers (one for each type of interaction), and we compute structural features separately for each layer. We pick a set of global network properties from the network science toolbox which can be qualitatively explained in terms of social dimensions and allow us to encode each network with a tuple of features. These include traditional indicators, e.g., network density, number of strongly/weakly connected components and diameter, and more elaborate ones such as the main K-core number BIBREF13 and structural virality BIBREF14. Our main research question is whether the use of a multi-layer, disentangled network yields a significant advance in classification accuracy over a conventional single-layer diffusion network. Additionally, we are interested in understanding which of the above features, and in which layer, are most effective in the classification task.
We perform classification experiments with an off-the-shelf Logistic Regression model on two different datasets of mainstream and disinformation news shared on Twitter during 2019, in the United States and in Italy respectively. In the former case we also account for political biases inherent to different news sources, referring to the procedure proposed in BIBREF2 to label different outlets. Overall we show that we are able to classify credible vs non-credible diffusion networks (and consequently news articles) with high accuracy (AUROC up to 94%), even when accounting for the political bias of sources (and training only on left-biased or right-biased articles). We observe that the layer of mentions alone conveys useful information for the classification, denoting a different usage of this functionality when sharing news belonging to the two news domains. We also show that the most discriminative features, which relate to the breadth and depth of the largest cascades in different layers, are the same across the two countries.
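As a rough illustration of this pipeline, per-layer global features could be computed and classified as in the following sketch; the layer names and the reduced feature subset are assumptions for exposition, not the paper's exact feature-extraction code.

```python
# Illustrative sketch (not the authors' code): compute a tuple of global
# structural features for each interaction layer of an article's diffusion
# network, concatenate them, and train an off-the-shelf classifier.
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

LAYERS = ["tweets", "retweets", "mentions", "replies", "quotes"]

def layer_features(g):
    """A reduced, assumed subset of global indicators for one layer."""
    return [
        nx.density(g),
        nx.number_strongly_connected_components(g) if g.number_of_nodes() else 0,
        nx.number_weakly_connected_components(g) if g.number_of_nodes() else 0,
        max(nx.core_number(g.to_undirected()).values(), default=0),  # main K-core number
    ]

def article_vector(layers):
    """One feature vector per article: concatenated per-layer tuples."""
    return np.concatenate([layer_features(layers.get(name, nx.DiGraph()))
                           for name in LAYERS])

# Toy usage with two random "retweet" layers standing in for real articles.
g1 = nx.gnp_random_graph(30, 0.05, directed=True)
g2 = nx.gnp_random_graph(30, 0.30, directed=True)
X = np.vstack([article_vector({"retweets": g1}), article_vector({"retweets": g2})])
clf = LogisticRegression().fit(X, [0, 1])  # 0 = mainstream, 1 = disinformation
```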
The outline of this paper is the following: we first formulate the problem and describe data collection, network representation and the structural properties employed for the classification; we then provide experimental results (classification performance, layer and feature importance analyses, and a temporal classification evaluation); finally we draw conclusions and outline future directions.
These have shown that false news spread faster and deeper than true news (Vosoughi, Roy, and Aral 2018), and that social bots and echo chambers play an important role in the diffusion of malicious content (Shao et al. 2018a; Del Vicario et al. 2016).
|
What plays an important role in the diffusion of fake news?
|
Social bots and echo chambers.
|
null | false | null |
The Parthian Empire, also known as the Arsacid Empire was a major Iranian political and cultural power in ancient Iran from 247 BC to 224 AD. Its latter name comes from its founder, Arsaces I, who led the Parni tribe in conquering the region of Parthia in Iran's northeast, then a satrapy (province) under Andragoras, who was rebelling against the Seleucid Empire. Mithridates I (r. c. 171–132 BC) greatly expanded the empire by seizing Media and Mesopotamia from the Seleucids. At its height, the Parthian Empire stretched from the northern reaches of the Euphrates, in what is now central-eastern Turkey, to present-day Afghanistan and western Pakistan. The empire, located on the Silk Road trade route between the Roman Empire in the Mediterranean Basin and the Han dynasty of China, became a center of trade and commerce.
|
What is the difference between Parthian and Arsacid Empire?
|
Parthian and Arsacid are different names for the same ancient Iranian empire, which ruled from 247 BC to 224 AD. The Parthian kings, or shahs, often used Arsaces as their crown name, which is why some refer to this empire as the Arsacids. They are also called Parthian because they began by conquering the region of Parthia in Iran's northeast and then adopted the language of that region, called Parthian.
|
null | false | null |
Why do children love playing sports?
|
There are many reasons why children love playing sports.
First, it's fun. Sports are a great way to have fun and get exercise.
Second, children learn teamwork and discipline.
Third, it's a great way to make new friends and learn social skills. They also learn how to cooperate with others and how to resolve conflicts.
Fourth, sports help kids stay healthy and fit. They also help them develop their muscles and coordination.
Finally, playing sports can lead to a lifelong love of them. Kids who play sports often continue to play them as adults.
|
|
null | false | null |
The FIFA World Cup, often simply called the World Cup, is an international association football competition contested by the senior men's national teams of the members of the Fédération Internationale de Football Association (FIFA), the sport's global governing body. The tournament has been held every four years since the inaugural tournament in 1930, except in 1942 and 1946 when it was not held because of the Second World War. The reigning champions are Argentina, who won their third title at the 2022 tournament.
The format involves a qualification phase, which takes place over the preceding three years, to determine which teams qualify for the tournament phase. In the tournament phase, 32 teams compete for the title at venues within the host nation(s) over about a month. The host nation(s) automatically qualify to the group stage of the tournament. The FIFA World Cup has been scheduled to expand to 48 teams for the 2026 tournament.
As of the 2022 FIFA World Cup, 22 final tournaments have been held and a total of 80 national teams have competed. The trophy has been won by eight national teams. Brazil, with five wins, are the only team to have played in every tournament. The other World Cup winners are Germany and Italy, with four titles each; Argentina, with three titles; France and inaugural winner Uruguay, each with two titles; and England and Spain, with one title each.
The World Cup is the most prestigious association football tournament in the world, as well as the most widely viewed and followed single sporting event in the world. The viewership of the 2018 World Cup was estimated to be 3.57 billion (close to half of the global population), while the engagement with 2022 World Cup was estimated to be around 5 billion with close to 1.5 billion people watching the final match.
Seventeen countries have hosted the World Cup, most recently Qatar, who hosted the 2022 edition. The 2026 tournament will be jointly hosted by Canada, the United States and Mexico, which will give Mexico the distinction of being the first country to host games in three World Cups.
|
From the passage provided, extract the country which hosted the 2022 World Cup.
|
Qatar hosted the 2022 FIFA World Cup.
|
null | false | 47 |
|
There have been many implementations of the word2vec model in either of the two architectures it provides: continuous skipgram and CBoW (BIBREF0). Similar distributed models of word or subword embeddings (or vector representations) find usage in state-of-the-art (sota) deep neural networks like BERT and its successors (BIBREF1, BIBREF2, BIBREF3). These deep networks generate contextual representations of words after being trained for extended periods on large corpora, unsupervised, using the attention mechanism (BIBREF4).
It has been observed that various hyper-parameter combinations have been used in different research involving word2vec, with the possibility of many of them being sub-optimal (BIBREF5, BIBREF6, BIBREF7). Therefore, the authors seek to address the research question: what is the optimal combination of word2vec hyper-parameters for intrinsic and extrinsic NLP purposes? The number of possible hyper-parameter combinations for neural networks is astronomically high, even for networks with just a few layers. Hence, the scope of our extensive work over three corpora is limited to dimension size, training epochs, window size and vocabulary size for the training algorithms (hierarchical softmax and negative sampling) of both skipgram and CBoW. The corpora used for word embeddings are the English Wiki News Abstract by BIBREF8 of about 15MB, the English Wiki Simple (SW) Articles by BIBREF9 of about 711MB and the Billion Word (BW) corpus of 3.9GB by BIBREF10. The corpus used for sentiment analysis is the IMDb dataset of movie reviews by BIBREF11, while that for NER is the Groningen Meaning Bank (GMB) by BIBREF12, containing 47,959 sentence samples. The IMDb dataset used has a total of 25,000 sentences, with half being positive sentiments and the other half negative sentiments. The GMB dataset has 17 labels, with 9 main labels and 2 context tags. It is however unbalanced due to the high percentage of tokens with the label 'O'. This skew in the GMB dataset is typical of NER datasets.
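As a rough illustration of this search space, the combinations above could be enumerated with gensim as in the following sketch; the grid values and the toy corpus are placeholders, not the study's actual settings.

```python
# Hedged sketch: enumerate combinations of the studied word2vec
# hyper-parameters, assuming gensim 4's API.
from itertools import product
from gensim.models import Word2Vec

sentences = [["the", "quick", "brown", "fox"],
             ["jumps", "over", "the", "lazy", "dog"]]  # stand-in corpus

grid = {
    "vector_size": [100, 300],   # dimension size
    "window": [4, 8],            # window size
    "epochs": [5, 10],           # training epochs
    "sg": [0, 1],                # 0 = CBoW, 1 = skipgram
    "hs": [0, 1],                # 1 = hierarchical softmax, 0 = negative sampling
}

for values in product(*grid.values()):
    params = dict(zip(grid, values))
    model = Word2Vec(sentences, min_count=1, **params)
    # Score this combination intrinsically (analogies) and extrinsically
    # (SA on IMDb, NER on GMB), then record the result for comparison.
```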
The objective of this work is to determine the optimal combinations of word2vec hyper-parameters for intrinsic evaluation (semantic and syntactic analogies) and extrinsic evaluation tasks (BIBREF13, BIBREF14), like SA and NER. It is not our objective in this work to record sota results. Some of the main contributions of this research are the empirical establishment of optimal combinations of word2vec hyper-parameters for NLP tasks, discovering the behaviour of vector quality vis-à-vis increasing dimensions, and the confirmation that embeddings are task-specific for downstream use. The rest of this paper is organised as follows: the literature review, which briefly surveys distributed representations of words, particularly word2vec; the methodology employed in this research work; the results obtained; and the conclusion.
The objective of this work is to determine the optimal combinations of word2vec hyper-parameters for intrinsic evaluation (semantic and syntactic analogies) and extrinsic evaluation tasks, like SA and NER.
|
Why did the author do this study?
|
To determine the optimal combinations of word2vec hyper-parameters for intrinsic evaluation (semantic and syntactic analogies) and extrinsic evaluation tasks, like SA and NER.
|
null | false | null |
Is competition healthy?
|
Competition is prevalent everywhere in our society. Despite causing many people to feel stressed or pressured, competition is beneficial for challenging oneself, but it must be utilized properly.
A common habitat of competition is sports. Whether it be competing against another person, another team, or against time, sports push people to work harder, faster, and stronger. This athletic competition is good, because it is mutually beneficial for both parties. All players are challenged physically and are taught both the importance and, if they succeed, the rewards of hard work.
While competition in sports is expected, the misuse of competitiveness has become common in academics, where work is meant to be more reflective. Competition between students for the highest marks can be good, as both will challenge each other, but it often makes other students feel belittled and unworthy. Grades in particular are a measure of self-knowledge and hard work and should not be used to compare one student to another. This is where and how competition is often misused and its intentions misunderstood.
Often this misused competition comes from pressure from peers, parents, and sometimes teachers. A common example is competition between siblings stimulated by parents: comparing the children's looks, academic abilities, athleticism, and social awareness. This kind of abusive competition is more likely to create anxiety, insecurity, and depression for both siblings, revealing perhaps the most important condition for healthy competition: it has to come from yourself.
Almost all negative experiences with competition stem from the society around us, leading many to believe that the correct thing to do is compare students, siblings, children, employees, and more. But competition's true purpose is to inspire oneself to work hard and take on new challenges. "Forced" competition doesn't allow a person to be in control of their goals. This is why Kohn felt "caught on a treadmill" when being compared to others. True competition comes from within and motivates one to set, work for, and reach new goals.
To prevent unhealthy competition, people should try to be more vocal about their goals and expectations for themselves. Encouragement, of course, is always nice, so voicing your wishes is important. If parents, teachers, or bosses know what you expect from yourself, they can better encourage, support, and challenge you.
|
|
null | false | null |
The Dual Alliance (German: Zweibund, Hungarian: Kettős Szövetség) was a defensive alliance between Germany and Austria-Hungary, created by treaty on October 7, 1879, as part of Otto von Bismarck's system of alliances to prevent or limit war. The two powers promised each other support in case of attack by Russia. Each state also promised benevolent neutrality to the other if one of them was attacked by another European power (generally taken to be France, even more so after the Franco-Russian Alliance of 1894). Bismarck saw the alliance as a way to prevent the isolation of the German Empire, which had been founded only a few years before, and to preserve peace, as Russia would not wage war against both empires.
Formation
When Austria-Hungary and Germany formed an alliance in 1879, it was one of the more surprising alliances of its time. Though both shared the German language and a similar culture, Austria-Hungary and Germany were often driven apart, most notably during the recent Austro-Prussian War. Additionally, the Habsburg rulers believed that the promotion of nationalism, which was favoured by Germany, would destroy their multinational empire. However, their common distrust of Russia brought both empires together for a common cause.
|
What were the primary motivations for the Dual Alliance?
|
The primary motivations for the alliance between Germany and Austria-Hungary were peacemaking and protection. The alliance ensured that both Germany and Austria-Hungary would be safe from attack by Russia. The alliance also promoted peace between the two countries and other European powers.
|
null | false | null |
Pwojè Pyebwa (Tree Project) is a tree-planting project in Haiti. It was designed, implemented, and initially run by cultural anthropologists. Unlike reforestation projects, Pwojè Pyebwa promoted agroforestry—the strategic mixing of crops, trees, and animals. The agroforestry project in Haiti was funded by the United States Agency for International Development (USAID), but implemented through the Pan American Development Foundation (PADF) and CARE. It was originally called the Agroforestry Outreach Project (AOP). The original project spanned a decade (1981–1991), but policies from Pwojè Pyebwa continued to direct PADF tree-planting through a second decade (1992–2000).
The project was executed throughout the country. Based on Haitian farmers’ voluntary and widespread participation, the project has been deemed a success by many. Pwojè Pyebwa distributed free trees to farmers, and advised on the incorporation of trees into existing cropping arrangements based on anthropological research into the land tenure, agricultural systems, and the historical market-orientation of Haitian peasant farmers. The project eventually won the Praxis prize in applied anthropology. It is estimated that the project involved one-third of the entire rural Haitian population. Approximately 65 million trees were planted.
|
Given a reference text about Pwojè Pyebwa, who funded the project?
|
Pwojè Pyebwa was funded by the United States Agency for International Development (USAID).
|
null | false | null |
In 2004, Nintendo released the Nintendo DS, which featured such innovations as dual screens – one of which being a touchscreen – and wireless connectivity for multiplayer play. Throughout its lifetime, more than 154 million units were sold, making it the most successful handheld console and the second best-selling console in history. In 2005, Nintendo released the Game Boy Micro, the last system in the Game Boy line. Sales did not meet Nintendo's expectations, with 2.5 million units being sold by 2007. In mid-2005, the Nintendo World Store was inaugurated in New York City.
|
Given this paragraph about video game consoles, when was the Nintendo DS released?
|
2004
|
null | false | null |
How do dolphins make noise?
|
Dolphins use the blowhole on top of their heads to make a variety of noises.
|
|
1702.06700 | false | null |
holistic: The baseline model, which maps the holistic image feature and the LSTM-encoded question feature to a common space and performs element-wise multiplication between them.
TraAtt: The traditional attention model, an implementation of the WTL model BIBREF9 using the same $3\times 3$ regions as the SalAtt model.
RegAtt: The region attention model, which employs our novel attention method; it is the same as the SalAtt model but without region pre-selection.
ConAtt: The convolutional region pre-selection attention model, which replaces the BiLSTM in the SalAtt model with a weight-sharing linear mapping, implemented as a convolutional layer.
Besides, we also compare our SalAtt model with the popular baseline models, i.e., iBOWIMG BIBREF4 and VQA BIBREF1, and the state-of-the-art attention-based models, i.e., WTL BIBREF9, NMN BIBREF21, SAN BIBREF14, AMA BIBREF33, FDA BIBREF34, D-NMN BIBREF35 and DMN+ BIBREF8, on two tasks of COCO-VQA.
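As a rough illustration, the holistic baseline described above could be sketched as follows; the feature dimensions and answer-vocabulary size are assumptions, not the paper's settings.

```python
# Sketch of the "holistic" baseline: project image and question features to
# a common space and fuse them by element-wise multiplication.
import torch
import torch.nn as nn

class HolisticBaseline(nn.Module):
    def __init__(self, img_dim=4096, q_dim=1024, common_dim=512, n_answers=1000):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, common_dim)   # image -> common space
        self.q_proj = nn.Linear(q_dim, common_dim)       # question -> common space
        self.classifier = nn.Linear(common_dim, n_answers)

    def forward(self, img_feat, q_feat):
        fused = torch.tanh(self.img_proj(img_feat)) * torch.tanh(self.q_proj(q_feat))
        return self.classifier(fused)                    # answer logits

logits = HolisticBaseline()(torch.randn(2, 4096), torch.randn(2, 1024))
```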
|
To which previous papers does this work compare its results?
|
The answers are shown as follows:
* holistic
* TraAtt
* RegAtt
* ConAtt
* iBOWIMG
* VQA
* WTL
* NMN
* SAN
* AMA
* FDA
* D-NMN
* DMN+
|
1906.05012 | false | null |
Our framework includes three key modules: Retrieve, Fast Rerank, and BiSET. For each source article, Retrieve aims to return a few candidate templates from the training corpus. Then, the Fast Rerank module quickly identifies the best template from the candidates. Finally, BiSET mutually selects important information from the source article and the template to generate an enhanced article representation for summarization.
This module starts with a standard information retrieval library to retrieve a small set of candidates for fine-grained filtering, as in cao2018retrieve. To do that, all non-alphabetic characters (e.g., dates) are removed to eliminate their influence on article matching. The retrieval process starts by querying the training corpus with a source article to find a few (5 to 30) related articles, the summaries of which will be treated as candidate templates.
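A rough sketch of this retrieval step is given below, using rank_bm25 as a stand-in for the paper's retrieval library; the toy corpus and the cut-off are placeholders.

```python
# Sketch of Retrieve: strip non-alphabetic characters, query the training
# corpus with the source article, and keep the summaries of the top matches
# as candidate templates (5 to 30 in the paper; 2 here for the toy corpus).
import re
from rank_bm25 import BM25Okapi

def tokenize(text):
    return re.sub(r"[^A-Za-z]+", " ", text).lower().split()

train_articles = ["A storm hit the coast on 3 March 2019.",
                  "Markets rallied after the vote on 12 June 2019."]
train_summaries = ["storm hits coast", "markets rally after vote"]

bm25 = BM25Okapi([tokenize(a) for a in train_articles])
source_article = "A powerful storm struck the coastal region overnight."
scores = bm25.get_scores(tokenize(source_article))
top = sorted(range(len(scores)), key=lambda i: -scores[i])[:2]
candidate_templates = [train_summaries[i] for i in top]
```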
The above retrieval process is essentially based on superficial word matching and cannot measure the deep semantic relationship between two articles. Therefore, the Fast Rerank module is developed to identify the best template from the candidates based on their deep semantic relevance to the source article. We regard the candidate with the highest relevance as the template. As illustrated in Figure FIGREF6, this module consists of a Convolution Encoder Block, a Similarity Matrix and a Pooling Layer.
|
How are templates discovered from training data?
|
The answers are shown as follows:
* For each source article, Retrieve aims to return a few candidate templates from the training corpus. Then, the Fast Rerank module quickly identifies a best template from the candidates.
|
null | false | null |
What is muscular hypertrophy?
|
Muscular hypertrophy is the process of increasing the size of muscles through the growth of their component cells, typically brought about by exercise such as strength training.
|
|
null | false | 190 |
Relation extraction (RE) is one of the most fundamental tasks in natural language processing, and its goal is to identify the relationship between a given pair of entities in a sentence. Typically, a large-scale training dataset with clean labels is required to train a reliable relation extraction model. However, it is time-consuming and labor-intensive to annotate such data by crowdsourcing. To overcome the lack of labeled training data, BIBREF0 mintz2009distant presents a distant supervision approach that automatically generates a large-scale, labeled training set by aligning entities in a knowledge graph (e.g., Freebase BIBREF1) to corresponding entity mentions in natural language sentences. This approach is based on a strong assumption that any sentence containing two entities should be labeled according to the relationship of the two entities in the given knowledge graph. However, this assumption does not always hold. Sometimes the same two entities in different sentences with various contexts do not express the relationship described in the knowledge graph, which inevitably results in wrongly labeled sentences.
To alleviate the aforementioned problem, BIBREF2 riedel2010modeling proposes a multi-instance learning framework, which relaxes the strong assumption to the expressed-at-least-once assumption. In plainer terms, this means any possible relation between two entities holds true in at least one distantly-labeled sentence, rather than in all of the sentences containing those two entities. In particular, instead of generating a sentence-level label, this framework assigns a label to a bag of sentences containing a common entity pair, and the label is a relationship of the entity pair in the knowledge graph. Recently, based on the labeled data at bag level, a line of works BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7 under the selective attention framework BIBREF5 lets the model implicitly focus on the correctly labeled sentence(s) through an attention mechanism and thus learn a stable and robust model from the noisy data.
However, such a selective attention framework is vulnerable to situations where a bag is comprised of a single sentence; worse still, that only sentence may express relation information inconsistent with the bag-level label. This scenario is not uncommon. For a popular distantly supervised relation extraction benchmark, e.g., the NYT dataset BIBREF2, up to $80\%$ of its training examples (i.e., bags) are one-sentence bags. From our data inspection, we randomly sampled 100 one-sentence bags and found that $35\%$ of them are incorrectly labeled. Two examples of one-sentence bags are shown in Table TABREF1. These results indicate that, in the training phase, the selective attention module is forced to output a single-valued scalar for $80\%$ of the examples, leading to an ill-trained attention module and thus hurting the performance.
Motivated by the aforementioned observations, in this paper we propose a novel Selective Gate (SeG) framework for distantly supervised relation extraction. In the proposed framework, 1) we employ both entity embeddings and relative position embeddings BIBREF8 for relation extraction, and an entity-aware embedding approach is proposed to dynamically integrate entity information into each word embedding, yielding more expressively-powerful representations for downstream modules; 2) to strengthen the capability of the widely-used piecewise CNN (PCNN) BIBREF3 at capturing long-term dependencies BIBREF9, we develop a light-weight self-attention BIBREF10, BIBREF11 mechanism to capture rich dependency information and consequently enhance the neural network by producing a representation complementary to the PCNN's; and 3) based on the preceding versatile features, we design a selective gate to aggregate sentence-level representations into a bag-level one, alleviating the intrinsic issues of selective attention.
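As an illustration of component 3), one plausible form of such a gate is sketched below; this is an assumption for exposition, not the paper's exact architecture.

```python
# Sketch: a selective gate scales each sentence representation by a learned
# gate in (0, 1) and pools the result, so aggregation remains well-defined
# even for one-sentence bags (unlike a softmax over a single score).
import torch
import torch.nn as nn

class SelectiveGate(nn.Module):
    def __init__(self, dim=230):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, sent_reprs):           # (num_sentences_in_bag, dim)
        gated = self.gate(sent_reprs) * sent_reprs
        return gated.mean(dim=0)             # bag-level representation

bag_repr = SelectiveGate()(torch.randn(1, 230))  # works for a one-sentence bag
```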
Compared to the baseline framework (i.e., selective attention for multi-instance learning), SeG is able to produce entity-aware embeddings and rich-contextual representations to facilitate downstream aggregation modules that stably learn from noisy training data. Moreover, SeG uses gate mechanism with pooling to overcome problem occurring in selective attention, which is caused by one-sentence bags. In addition, it still keeps a light-weight structure to ensure the scalability of this model.
The experiments and extensive ablation studies on the New York Times dataset BIBREF2 show that our proposed framework achieves a new state-of-the-art performance with regard to both AUC and top-n precision metrics for the distantly supervised relation extraction task, and also verify the significance of each proposed module. In particular, the proposed framework achieves an AUC of 0.51, which outperforms the selective attention baseline by 0.14 and improves on the previous state-of-the-art approach by 0.09.
1) we employ both the entity embeddings and relative position embeddings (Zeng et al. 2014) for relation extraction, and an entity-aware embedding approach is proposed to dynamically integrate entity information into each word embedding, yielding more expressively-powerful representations for downstream modules;
|
What is employed by them for relation extraction?
|
Both the entity embeddings and relative position embeddings.
|
null | false | null |
What is a proper noun?
|
A proper noun is a noun that identifies an entity and is used to reference that entity. For example, India is a proper noun.
|
|
null | false | 276 |
Among neural networks, LSTMs BIBREF0 are commonly used for language modeling. Although new architectures BIBREF1, BIBREF2 challenge this standard, LSTMs remain competitive for language modeling BIBREF3. However, despite the success of LM LSTMs, it is not clear what makes them so effective. In particular, are representations derived through language modeling able to effectively encode syntactic structures and relations? Do they encode them in a reliable and systematic way?
The typical metric used to compare LMs, perplexity, is not adapted to address these questions. Perplexity measures the probability assigned to held-out data from the corpus the LM is trained on. Because the held-out and training data are typically randomly extracted from an initial corpus, they have similar statistics, which is good from a machine learning viewpoint, but bad from the viewpoint of linguistic analysis: perplexity is mostly sensitive to the most common sentence types in the initial corpus and therefore will not reflect well the behavior of the LM in the tail of the distribution. In addition, the sentences extracted from a natural corpus confound several factors: syntax, semantics, pragmatics, etc. further complicating the interpretation of a good perplexity score.
To circumvent this limitation, recent work has focused on using probing techniques inspired by linguistics and psycholinguistics (for instance, grammaticality or acceptability judgments, or forced choice). In addition, instead of using sentences from the training corpus, studies rely more and more on automatically constructed test sentences, which enable removal of the bias in the original corpus and a focus on particular linguistic phenomena. Here, we will use acceptability judgments, operationalized as the log probability of sentences according to the LM, and sets of synthetic sentences generated from template sentences to probe a challenging linguistic structure: verb argument structure.
Verb argument structure provides languages with a way to link syntactic position in a sentence (subject, direct object, etc) with semantic roles (agent, patient, etc), in other words, to determine who is doing what. It is currently unknown whether neural LMs purely trained from surface statistics are able to capture this kind of structure, or whether additional information from another modality would be needed to provide some semantic grounding.
Verb argument structure is typically correlated with position in the sentence in many languages like English. But in other languages with relatively free word order, it is indicated by morphological markers. Here, we study German, where the arguments of a verb can occur in any position (when occurring within a relative clause) and are indicated by the case of the noun phrase (nominative, accusative, etc.).
We set up a test of argument structure representation by presenting a trained LM with carefully constructed sets of sentences that either have the right set of arguments, or are abnormal sentences where one case is missing or duplicated. We use word order permutations to control for unigram and positional statistics. If the LM is able to track argument structure irrespective of word order, it should assign lower grammaticality scores (log probabilities) to the incorrect sentences as compared to the correct ones.
Since at the level of the sentence, we study a global rather than local syntactic phenomenon, we depart from earlier work BIBREF4, BIBREF5, BIBREF6, BIBREF7 and do not compare pairs of sentences. Rather, we compare a set of valid grammatical variations of the template to a corresponding set of grammatical violations of the template. Thus, for each template, we measure the model's ability to discriminate grammatical sentences from ungrammatical ones using receiver operating characteristic curves, or ROC curves. We also compute the area under the ROC curve, or AUC. In our results, we often report the average AUC over templates as our metric.
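Schematically, the per-template evaluation could look like the following sketch, where the scoring function is a placeholder for the trained LM and the sentences are toy examples rather than items from the actual dataset.

```python
# Sketch: score sentences by LM log probability and measure discrimination
# between grammatical and ungrammatical variants of one template with AUC.
from sklearn.metrics import roc_auc_score

def lm_log_prob(sentence):
    # Placeholder for sum_t log P(w_t | w_<t) under the trained LSTM LM.
    return -2.0 * sentence.count("der")  # dummy heuristic for illustration

grammatical = ["der Hund sieht die Katze", "die Katze sieht der Hund"]
ungrammatical = ["der Hund sieht der Katze", "der Katze sieht der Hund"]  # case violation

scores = [lm_log_prob(s) for s in grammatical + ungrammatical]
labels = [1] * len(grammatical) + [0] * len(ungrammatical)
template_auc = roc_auc_score(labels, scores)  # then averaged over templates
```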
We evaluate three LMs on our dataset, the two-layer LSTM of BIBREF8 trained on German Wikipedia text, as well as n-gram baselines using the same corpus. We ask proficient German speakers to annotate our sentences for grammaticality, providing a human comparison. Since some of these sentences are rather implausible because of the permutations, we also collect human meaningfulness scores. We find that our dataset is challenging for both LMs and humans and that LMs lag behind human performance.
We evaluate three LMs on our dataset, the two-layer LSTM of BIBREF8 trained on German Wikipedia text, as well as n-gram baselines using the same corpus.
|
What do the authors evaluate on the dataset?
|
The authors evaluate three LMs on the dataset
|
null | false | null |
Ted Ringwood was born in Kew, the only child of Alfred Edward Ringwood. He attended Hawthorn West State School, where he played cricket and Australian Rules football. In 1943 he was successful in gaining a scholarship to Geelong Grammar School, where he boarded. On matriculation, he enrolled in a science degree in Geology at the University of Melbourne, where he held a Commonwealth Government Scholarship and was awarded a resident scholarship at Trinity College. He represented the college and the university in football. He obtained a First Class Honours degree in Geology and began an MSc degree in field-mapping and petrology of the Devonian Snowy River volcanics of northeastern Victoria, graduating with Honours in 1953. Ringwood then undertook a PhD, beginning an experimental study of the origin of metalliferous ore deposits, but later changed his research topic so as to apply geochemistry to an understanding of the structure of the Earth, in particular the mineralogical constitution of the Earth's mantle.
|
What university did Ted Ringwood join and what discipline did he follow?
|
Ted Ringwood joined the University of Melbourne, where he pursued a science degree in Geology.
|
null | false | null |
Collins graduated from the Alabama School of Fine Arts in Birmingham in 1980 as a Theater Arts major. She completed her bachelor of arts degree from Indiana University Bloomington in 1985 with a double major in theater and telecommunications. In 1989, Collins earned her Master of Fine Arts in dramatic writing from the New York University Tisch School of the Arts.
|
Extract the majors of Suzanne Collins from the text. Separate them with a comma.
|
Theater Arts, Theater, Telecommunications, Fine Arts in dramatic writing
|
null | false | null |
classify these as kitchen or bathroom items: toothpaste, pan, pots, spoons, shampoo, dish soap, bath soap, bath bombs, plates, oven mitts, cups, toilet paper
|
Kitchen items: pan, pots, spoons, dish soap, plates, oven mitts, cups
Bathroom items: toothpaste, shampoo, bath soap, bath bombs, toilet paper
|
|
null | false | 404 |
Unsupervised Domain Adaptation: This refers to a class of methods that aim to adapt models to a target domain distinct from the one they were trained on. One of the most prominent lines of work is based on DAT. This involves using an additional discriminator to distinguish between samples from the source and target domains. The goal of the model is to learn features that cannot be distinguished between source and target. Follow-up works have improved on this basic idea by introducing a class-information-based discriminator (CDAN), introducing a transferable normalization function, etc. In this work, we focus on analyzing and improving such methods. Another line of work pursues DA via self-training on the target domain, which is not the focus of this work.
As neural networks operate in the regime of over-parameterized models, low error on training data does not always lead to better generalization. It has often been stated that smoother minima generalize better on unseen data. But until recently, exploiting this was practically expensive, as smoothing required additional costly computations. Recently, a method called Sharpness Aware Minimization (SAM) has been proposed to find a smoother minimum with one additional gradient computation step. SAM also improves ImageNet model performance on ImageNet-C and ImageNet-R (which are out of distribution). It has also been observed that smoothness w.r.t. the input (image) is beneficial for domain adaptation, which motivates us to explore smoothness w.r.t. the weights (W) in the case of DAT. However, earlier work has focused on achieving a smoother minimum w.r.t. W for ERM. Currently, no study has examined the case where the loss function is composed of both ERM and adversarial objectives (as in DAT).
SAM also improves the ImageNet model performance (Chen et al., 2021) on ImageNet-C and ImageNet-R (which are out of distribution). It has also been observed that smoothness w.r.t. input (image) is beneficial for domain adaptation (Shu et al., 2018; Cai et al., 2021), which motivates us to explore smoothness w.r.t weights (W) in case of DAT. However, the earlier work has focused on achieving a smoother minima w.r.t. W for ERM.
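For reference, the core SAM update can be sketched in a few lines of PyTorch; this is a generic sketch of the published ERM variant, not this paper's DAT-specific formulation.

```python
# Generic SAM sketch: take an extra gradient step to a worst-case weight
# perturbation within an L2 ball of radius rho, compute the gradient there,
# then update the original weights with it.
import torch

def sam_step(model, compute_loss, optimizer, rho=0.05):
    compute_loss(model).backward()                       # first gradient pass
    params = [p for p in model.parameters() if p.grad is not None]
    with torch.no_grad():
        norm = torch.norm(torch.stack([p.grad.norm() for p in params]))
        eps = [rho * p.grad / (norm + 1e-12) for p in params]
        for p, e in zip(params, eps):
            p.add_(e)                                    # ascend to the worst case
    optimizer.zero_grad()
    compute_loss(model).backward()                       # gradient at perturbed point
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)                                    # restore original weights
    optimizer.step()                                     # descend with the SAM gradient

# Toy usage:
model = torch.nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 4), torch.randn(8, 1)
sam_step(model, lambda m: torch.nn.functional.mse_loss(m(x), y), opt)
```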
|
Why is better out-of-distribution generalization expected for smoother minima? What is the motivation/context for investigating Conjecture 1?
|
Converging to smooth minima has been shown to yield models robust to corruptions (which are out of distribution), in comparison to SGD [1]. [1] shows significant improvement in performance on both ImageNet-C and ImageNet-R (both out-of-distribution datasets). Also, smoothness in image space has been shown to be beneficial for domain adaptation [2]. We aim to explore smoothness in weight space and its effect on domain adaptation in the current work.
|
null | false | null |
Describe a plan for a road trip from Las Vegas to West Grand Canyon, including stops for visiting Hoover Dam and lunch. Your plan should include a budget, a timeline, and a description of the sights, for a group of 7 friends.
|
That's a great road trip! A group of 7 people can fit perfectly in a 7-seat car, which you can rent from one of the many providers in Las Vegas. Start your day early in the morning to pick up the car and your friends. From there, your first destination is the Hoover Dam, around 40 minutes by car from Las Vegas. Take your time to take pictures from the bridge, then go to the dam; you can visit the inside, and the self-guided tour ticket is $10. You will be there for a couple of hours.
After that, head southeast and take the turn toward Dolan Springs; this is about an hour of driving, so by then you will be around lunch time. Lunch will cost you approximately $15.
From Dolan Springs it's a straight 50-minute ride to the West Grand Canyon. Midway along the road you will find Arizona's Joshua Tree Forest. Take the time to appreciate it, as well as the desert along the way.
When you arrive at the West Grand Canyon, you can buy the $70 general admission ticket + Skywalk. This gives you unlimited access for the day to Eagle Point, Guano Point, Hualapai Ranch and the Skywalk experience. You can walk the sky on the 10-foot-wide, horseshoe-shaped glass bridge that extends 70 feet over the rim of the Grand Canyon.
When leaving the West Grand Canyon, the trip back to Las Vegas takes about 2 hours. You will need to refill the tank before returning the vehicle to the car rental company. The rental for the day and the gas will cost around $35 per person.
Depending on your pace, this is a morning-and-afternoon road trip, with an estimated cost of $130 per person.
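For reference, here is the per-person math behind that estimate: $10 (Hoover Dam self-guided tour) + $15 (lunch) + $70 (general admission + Skywalk) + $35 (your share of the car rental and gas) = $130.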
|
|
null | false | null |
What is the tallest mountain in Colorado?
|
Mount Elbert is the tallest mountain in Colorado, standing at 14,439 feet.
|
|
null | false | null |
The National Association of Broadcasters (NAB) is a trade association and lobby group representing the interests of commercial and non-commercial over-the-air radio and television broadcasters in the United States. The NAB represents more than 8,300 terrestrial radio and television stations as well as broadcast networks.
As of 2022, the president and CEO of the NAB is Curtis LeGeyt.
Founding
NAB headquarters in Washington, D.C.
The NAB was founded as the National Association of Radio Broadcasters (NARB) in April 1923 at the Drake Hotel in Chicago. The association's founder and first president was Eugene F. McDonald Jr., who also launched the Zenith corporation. In 1951 it changed its name to the National Association of Radio and Television Broadcasters (NARTB) to include the television industry. In 1958 it adopted its current name, "National Association of Broadcasters".
Commercial radio
The NAB worked to establish a commercial radio system in the United States. The system was set up in August 1928 with the establishment of General Order 40—a radio reallocation scheme by the Federal Radio Commission which awarded the choicest frequencies and broadcast times to the then-emerging commercial radio industry. In the wake of General Order 40, a loose coalition of educators, nonprofit broadcasters, labor unions, and religious groups coalesced to oppose the NAB and their allies through the 1920s and 1930s, and to develop a public, nonprofit, license-funded radio system without commercials (similar to what happened with the BBC). The coalition claimed that the commercial industry would only promote profitable programming, thereby reducing the quality and future potential of radio broadcasting.
Not having the political connections, resources, or publicity of the NAB and the commercial radio industry, the non-profit coalition eventually lost the fight with the passage of the Communications Act of 1934.
The National Independent Broadcasters were formed in 1939 as part of the NAB, to represent stations that were not associated with any network, but the group split off in 1941.
Satellite radio
Many satellite radio enthusiasts have criticized the NAB for lobbying against legislation approvals for those services. The NAB protested the FCC's approval of both satellite radio services in the United States—XM and Sirius—and furthermore criticized the 2008 merger of the two companies, calling the merged company a "potential monopoly".
Digital transition
In 2005, the NAB, together with the Association for Maximum Service Television Stations, Inc. (MSTV), commenced development of a prototype high quality, low cost digital-to-analog converter box for terrestrial digital television reception. The result of this project was a specification for the converter box, which was then adopted by the National Telecommunications and Information Administration as a technical requirement for eligible converter boxes for the Administration's Digital-to-Analog Converter Box Coupon Program.
White space
The NAB has lobbied against the use of white spaces, unused broadcast spectrum lying between broadcast channels, for wireless broadband internet and other digital use. The NAB has claimed that use of white space will interfere with existing broadcast spectrum, even though tests by the Federal Communications Commission at levels far stronger than that being advocated for in policy circles have not supported such claims. Indeed, the FCC has recommended the use of white spaces for broadband and other digital use. In 2011 the NAB funded an advertising campaign titled "The Future of TV", advocating for the private ownership of the spectrum, framed as a threat to free television.
Free TV campaign
In mid-2014, an NAB advertising campaign against a Congressional threat appeared, urging viewers to help defeat a cable-TV lobby.
Similar organizations
Organizations similar to the NAB exist in individual U.S. states, including Georgia Association of Broadcasters (GAB) in Georgia, and the Illinois Broadcasters Association (IBA), in Illinois. In Canada, the Canadian Association of Broadcasters (CAB) has a similar role.
|
Who is the CEO of the National Association of Broadcasters?
|
As of 2022, the president and CEO of the NAB is Curtis LeGeyt.
|
null | false | null |
How do you motivate and focus a team to exceed expectations during a terrible economic downturn?
|
Set a goal and a vision, create plans to execute, and don't overcomplicate it. Win as a team and lose as a team.
|
|
null | false | null |
The Bakersfield Majestics, formerly the Bakersfield Magic, are an American basketball team based in Bakersfield, California, and members of the minor professional The Basketball League (TBL).
Starting The Franchise
From 2006 to 2016, Bakersfield, California, was home to the Bakersfield Jam of the NBA Development League. The team played at Rabobank Arena until 2009 and later at the Jam Events Center. On May 9, 2014, the Jam entered a hybrid affiliation with the Phoenix Suns, giving the Suns exclusive affiliation with the team while allowing the Jam to operate under their own management. On April 12, 2016, the Suns announced that the organization had purchased the Jam and were relocating the franchise to the town of Prescott Valley, Arizona, for the 2016–17 season to become the Northern Arizona Suns. In response, the previous owners of the Jam franchise, Stan Ellis and David Higdon, announced that they had been working with the D-League in securing a new franchise and affiliation before the 2016–17 season, although it did not come to fruition.
In June 2021, it was first reported by KGET-TV that Bakersfield would host the Bakersfield Magic in The Basketball League for the 2022 season. On August 5, 2021, it was announced that Spencer Norman, previously announced as the team market owner for the Fresno Fire, would be the team market owner. Karaya Gage was initially announced as the team's first head coach, making her the first African-American woman to serve as Head Coach of a men's professional basketball team. However, after coaching the team during a pre-season exhibition against members of the Bakersfield Police Department on January 29, 2022, Gage quietly left the organization. The team then announced that Peter Pang would serve as the new Head Coach, becoming the first Chinese-American to coach a professional basketball team in the United States. Lawrence "Doc" Holliday was the team's initial General Manager. However, the team announced on March 22, 2022, that Holliday had died.
On April 1, 2022, The Bakersfield Magic announced that L. Dee Slade would take over as the new General Manager of the team.
|
Given a reference text about the Bakersfield Majestics, where did the team play its games?
|
According to the reference text, the Bakersfield Jam, the city's earlier franchise, played at Rabobank Arena until 2009 and later at the Jam Events Center in Bakersfield, California; the text does not state where the Majestics themselves play.
|
null | false | null |
What are the unique characteristics of the Jindo dog breed?
|
The Jindo breed of dog is indigenous to the island of Jindo off the southern coast of South Korea. They are typically between 30 and 50 pounds fully grown and vary from a white to a golden brown color. Listed as one of the living natural treasures of South Korea, they are known for being intelligent, loyal, and having a keen ability to find their way home.
|
|
null | false | null |
What is Linux?
|
Linux is an open-source UNIX-like operating system initially developed by Linus Torvalds and now maintained by thousands of contributors around the world. Linux is actually the operating system kernel; the other components necessary to form a complete operating system come from other open source projects, in particular the GNU project. For this reason, the operating system is sometimes referred to as GNU/Linux.
|
|
null | false | 76 |
Wikipedia is the largest source of open and collaboratively curated knowledge in the world. Introduced in 2001, it has evolved into a reference work with around 5m pages for the English Wikipedia alone. In addition, entity and event pages are updated quickly via collaborative editing, and all edits are encouraged to include source citations, creating a knowledge base which aims at being both timely and authoritative. As a result, it has become the preferred source of information consumption about entities and events. Moreover, this knowledge is harvested and utilized in building knowledge bases like YAGO BIBREF0 and DBpedia BIBREF1, and used in applications like text categorization BIBREF2, entity disambiguation BIBREF3, entity ranking BIBREF4 and distant supervision BIBREF5, BIBREF6.
However, not all Wikipedia pages referring to entities (entity pages) are comprehensive: relevant information can either be missing or added with a delay. Consider the city of New Orleans and the state of Odisha which were severely affected by cyclones Hurricane Katrina and Odisha Cyclone, respectively. While Katrina finds extensive mention in the entity page for New Orleans, Odisha Cyclone which has 5 times more human casualties (cf. Figure FIGREF2 ) is not mentioned in the page for Odisha. Arguably Katrina and New Orleans are more popular entities, but Odisha Cyclone was also reported extensively in national and international news outlets. This highlights the lack of important facts in trunk and long-tail entity pages, even in the presence of relevant sources. In addition, previous studies have shown that there is an inherent delay or lag when facts are added to entity pages BIBREF7 .
To remedy these problems, it is important to identify information sources that contain facts that are novel and salient for a given entity page. However, not all information sources are equal. The online presence of major news outlets is an authoritative source due to active editorial control, and their articles are also a timely container of facts. In addition, their use is in line with current Wikipedia editing practice: as shown in BIBREF7, almost 20% of current citations across all entity pages are news articles. We therefore propose news suggestion as a novel task that enhances entity pages and reduces delay while keeping the pages authoritative.
Existing efforts to populate Wikipedia BIBREF8 start from an entity page and then generate candidate documents about this entity using an external search engine (and then post-process them). However, such an approach lacks (a) reproducibility, since rankings vary with time and have an obvious bias toward recent news, and (b) maintainability, since document acquisition for each entity has to be performed periodically. To this effect, our news suggestion considers a news article as input and determines whether it is valuable for Wikipedia. Specifically, given an input news article $n$ and a state of Wikipedia, the news suggestion problem identifies the entities mentioned in $n$ whose entity pages can be improved by suggesting $n$. Most of the works on knowledge base acceleration BIBREF9, BIBREF10, BIBREF11, or Wikipedia page generation BIBREF8 rely on high quality input sources which are then utilized to extract textual facts for Wikipedia page population. In this work, we do not suggest snippets or paraphrases but rather entire articles which have a high potential importance for entity pages. These suggested news articles could consequently be used for extraction, summarization or population, either manually or automatically – all of which rely on high quality and relevant input sources.
We identify four properties of good news recommendations: salience, relative authority, novelty and placement. First, we need to identify the most salient entities in a news article. This is done to avoid pollution of entity pages with only marginally related news. Second, we need to determine whether the news is important to the entity as only the most relevant news should be added to a precise reference work. To do this, we compute the relative authority of all entities in the news article: we call an entity more authoritative than another if it is more popular or noteworthy in the real world. Entities with very high authority have many news items associated with them and only the most relevant of these should be included in Wikipedia whereas for entities of lower authority the threshold for inclusion of a news article will be lower. Third, a good recommendation should be able to identify novel news by minimizing redundancy coming from multiple news articles. Finally, addition of facts is facilitated if the recommendations are fine-grained, i.e., recommendations are made on the section level rather than the page level (placement).
Approach and Contributions. We propose a two-stage news suggestion approach for entity pages. In the first stage, we determine whether a news article should be suggested for an entity, based on the entity's salience in the news article, its relative authority and the novelty of the article with respect to the entity page. The second stage takes into account the class of the entity for which the news is suggested and constructs section templates from entities of the same class. The generation of such templates has the advantage of suggesting for and expanding entity pages that do not have a complete section structure in Wikipedia, explicitly addressing long-tail and trunk entities. Afterwards, based on the constructed template, our method determines which section is the best fit for the news article.
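Schematically, the two stages can be wired together as below; every scoring function is a stub standing in for the learned models, and the threshold and template contents are invented for illustration.

```python
# Purely schematic sketch of the two-stage pipeline with stub scores.
def salience(entity, article):           return 0.9  # stub: is the entity central?
def relative_authority(entity, article): return 0.6  # stub: importance vs. co-mentioned entities
def novelty(article, entity_page):       return 0.8  # stub: non-redundancy w.r.t. the page

def stage_one(article, entity, entity_page, threshold=0.5):
    """Decide whether the article should be suggested for the entity at all."""
    return min(salience(entity, article),
               relative_authority(entity, article),
               novelty(article, entity_page)) >= threshold

def stage_two(entity_class):
    """Pick a section from a class-level template (placement)."""
    templates = {"City": ["History", "Geography", "Climate"]}
    return templates.get(entity_class, ["Miscellanea"])[0]

if stage_one("article text", "Odisha", "entity page text"):
    print(stage_two("City"))  # e.g. place the article under "History"
```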
We evaluate the proposed approach on a news corpus consisting of 351,982 articles crawled from the external news references of 73,734 Wikipedia entity pages. Given the Wikipedia snapshot at a given year (in our case [2009-2014]), we suggest news articles that might be cited in the coming years. The existing news references in the entity pages, along with their reference dates, act as our ground truth to evaluate our approach. In summary, we make the following contributions.
• We propose a two-stage news suggestion approach for Wikipedia entity pages.
• We adopt and address the problem of determining whether a news article should be referenced to an entity, considering the entity salience, relative authority and novelty of the article for the entity page.
• We are able to place articles in a specific section of the entity page. Through section templates, we address the problems of entities with a limited section structure by class-based generalization, i.e. we can expand entity pages with sections that come from entities of a similar class.
• We perform an extensive evaluation on 351,982 news articles and 73,734 entity pages, using their state for the years [2009-2013].
|
What are the contributions of this article?
|
• They propose a two-stage news suggestion approach for Wikipedia entity pages.
• They adopt and address the problem of determining whether a news article should be referenced to an entity, considering the entity salience, relative authority and novelty of the article for the entity page.
• They are able to place articles in a specific section of the entity page.
• An extensive evaluation on 351,982 news articles and 73,734 entity pages, using their state for the years [2009-2013].
|
null | false
| null |
Give me a list of most popular songs of Foo Fighters
|
1. Everlong
2. The Pretender
3. Best of You
4. Times Like These
5. My Hero
6. Monkey Wrench
7. All My Life
8. Learn to Fly
9. Walk
10. The Sky Is a Neighborhood
11. These Days
12. Something from Nothing
13. Run
14. Stranger Things Have Happened
15. Rope
|
|
1909.07734
| false
| null |
The two datasets used for the challenge are Friends and EmotionPush, part of the EmotionLines corpus BIBREF4. The datasets contain English-language dialogues of varying lengths. For the competition, we provided 1,000 labeled dialogues from each dataset for training, and 240 unlabeled dialogues from each dataset for evaluation. The Friends dialogues are scripts taken from the American TV sitcom (1994-2004). The EmotionPush dialogues are from Facebook Messenger chats by real users which have been anonymized to ensure user privacy. For both datasets, dialogue lengths range from 5 to 24 lines each. A breakdown of the lengths of the dialogues is shown in Table .
For the competition, we provided 1,000 labeled dialogues from each dataset for training, and 240 unlabeled dialogues from each dataset for evaluation.
|
How large is the first dataset?
|
1,000 labeled dialogues for training and 240 unlabeled dialogues for evaluation
|
null | false
| null |
Aeolosaurini is an extinct clade of titanosaurian dinosaurs known from the Cretaceous period of Argentina and Brazil. Rodrigo M. Santucci and Antonio C. de Arruda-Campos (2011) in their cladistic analysis found Aeolosaurus, Gondwanatitan, Maxakalisaurus, Panamericansaurus and Rinconsaurus to be aeolosaurids.
Aeolosaurini is characterized by several synapomorphies of the caudal vertebrae, such as angled centra, elongate prezygapophyses, and neural arches shifted anteriorly relative to the centra. In life, their tails may have been strongly curved downward as a result of these traits, which may have increased the force exerted by the caudofemoralis longus muscle in retracting the hindlimb. Some aeolosaurins, such as Shingopana and Overosaurus, were relatively small compared to other titanosaurs, whereas others, such as Aeolosaurus maximus, were large.
|
What is an Aeolosaurini?
|
Aeolosaurini is an extinct clade of titanosaurian dinosaurs known from the Cretaceous period of Argentina and Brazil. Aeolosaurini is characterized by several synapomorphies of the caudal vertebrae, such as angled centra, elongate prezygapophyses, and neural arches shifted anteriorly relative to the centra.
|
null | false
| null |
What are the primary causes of inflation?
|
The value of money (currency value) often decreases when the money supply increases too rapidly in an economy.
In general, economists believe that increased money supply (monetary inflation) causes price increases over the long run, but disagree on causes for short-term inflation.
|
|
null | false
| null |
Provide a short review for a Japanese restaurant called Honto in Brisbane. It was a little hard to find as it is in a backstreet, but we found it without too much trouble. Be sure to mention it was very atmospheric and that the food was great.
|
Dining at Honto was an especially memorable experience for us. The space was atmospheric and the Japanese food served was exceptional. It is surely one of Brisbane's best kept secrets.
|
|
null | false
| null |
Karl Otto Lagerfeld (German pronunciation: [kaʁl ˈʔɔtoː ˈlaːɡɐˌfɛlt] (listen); 10 September 1933 – 19 February 2019) was a German fashion designer, creative director, artist and photographer.
He was known as the creative director of the French fashion house Chanel, a position held from 1983 until his death, and was also creative director of the Italian fur and leather goods fashion house Fendi, and of his own eponymous fashion label. He collaborated on a variety of fashion and art-related projects.
|
Extract who was Karl Lagerfeld
|
Karl Lagerfeld was one of the world's premier fashion designers who rose to international fame in the 1980s, as the creative director of the French fashion house Chanel. He was also the creative director of Fendi and had his own fashion label. Early in his career, he was the assistant to Pierre Balmain and became the creative director of Jean Patou in 1958. He collaborated with many high end fashion labels such as Krizia, Charles Jourdan, Chloe and Valentino.
|
null | false
| null |
Jeffrey Howard Archer, Baron Archer of Weston-super-Mare (born 15 April 1940) is an English novelist, life peer and former politician. Before becoming an author, Archer was a Member of Parliament (1969–1974), but did not seek re-election after a financial scandal that left him almost bankrupt.
Archer revived his fortunes as a novelist. His 1979 novel Kane and Abel remains one of the best-selling books in the world, with an estimated 34 million copies sold worldwide. Overall his books have sold more than 320 million copies worldwide.
Archer became deputy chairman of the Conservative Party (1985–86), before resigning after a newspaper accused him of paying money to a prostitute. In 1987, he won a court case and was awarded large damages because of this claim. He was made a life peer in 1992 and subsequently became Conservative candidate to be the first elected Mayor of London. He resigned his candidacy in 1999 after it emerged that he had lied in his 1987 libel case. He was imprisoned (2001–2003) for perjury and perverting the course of justice, ending his active political career.
|
Who is Jeffrey Archer?
|
Jeffrey Archer is a British author and was also a member of the British Parliament. One of his most famous books is Kane & Abel.
|
null | false
| null |
Categorize these modes of transportation into "would use to go across continents" "would use for daily commuting" "would use for fun": car, bicycle, boat, airplane, dune buggy, skateboard, rv, truck, sled, train
|
Would use to go across continents: airplane, car, truck, train, boat
Would use for daily commuting: car, truck, train, bicycle, skateboard, boat
Would use for fun: sled, rv, dune buggy, skateboard, bicycle, car, truck, boat
|
|
null | false
| 35
|
The word embeddings of the source and target language sentences are used as initial vectors of the model to improve contextualization. The skip-gram model of the word2vec algorithm optimizes the vectors by maximizing the average log probability of context words given a source word:
$$\frac{1}{T}\sum_{t=1}^{T}\ \sum_{-k \le j \le k,\; j \ne 0} \log p(w_{t+j} \mid w_t)$$
where $k$ is the context window taken for the vectorization, $w_t$ refers to the $t$-th word of the corpus and $T$ is the size of the training corpus in terms of the number of words. Here, the probability $p(w_{t+j} \mid w_t)$ is computed as a hierarchical softmax of the product of the transpose of the output vector of $w_{t+j}$ and the input vector of $w_t$, for each and every pair over the entire vocabulary. The processes of negative sampling and subsampling of frequent words that were used in the original model aren't used in this experiment BIBREF3 .
For the process of creating semantically meaningful word embeddings, a monolingual corpus of 569,772 Tamil sentences was used. This gave the vectors more contextual richness due to the increased size of the corpus as opposed to using just the bilingual corpus' target side sentences BIBREF3 .
In the experiment, the word2vec model was trained using a vector size of 100, to ensure that the bulk of the limited GPU memory would be available for the neural attention translation model. It has been shown that vector sizes above 150 give similar results, and that a size of 100 performs close to a model with 150-dimensional word vectors BIBREF7 . A standard window size of 5 was used, and the model was trained over 7 worker threads simultaneously. A batch size of 50 words was used for training. The negative sampling parameter was set to 1, as it is the nature of morphologically rich languages to have many important words that do not occur more than once in the corpus. The gensim word2vec toolkit was used to implement this word embedding process BIBREF8 .
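A rough sketch of this configuration using the gensim API might look as follows; parameter names follow recent gensim releases (older versions use size instead of vector_size), and corpus stands for an assumed iterable of tokenized Tamil sentences.
    from gensim.models import Word2Vec

    # Sketch of the described configuration; `corpus` is an assumed iterable
    # of tokenized sentences, e.g. [["word1", "word2", ...], ...].
    def train_embeddings(corpus):
        model = Word2Vec(
            sentences=corpus,
            vector_size=100,  # 100-dimensional vectors to conserve GPU memory
            window=5,         # standard context window of 5
            sg=1,             # skip-gram formulation
            negative=1,       # negative sampling set to 1 for rare words
            workers=7,        # 7 worker threads trained simultaneously
            batch_words=50,   # batch size of 50 words
        )
        return model.wv       # the trained word vectors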
For the process of creating semantically meaningful word embeddings, a monolingual corpus of 569,772 Tamil sentences was used. This gave the vectors more contextual richness due to the increased size of the corpus as opposed to using just the bilingual corpus' target side sentences (Mikolov and Dean 2013).
|
Why is the monolingual corpus better than the bilingual corpus?
|
A monolingual corpus that contains 569,772 Tamil sentences gave the vectors more contextual richness due to the increased size.
|
null | false
| null |
What are some common onion varieties used in cooking?
|
Common onions used in cooking are yellow onion, white onion, red onion, sweet onion, green onion, spring onion, leeks, and shallots.
|
|
1910.11204
| false
| null |
Table TABREF46 shows that our Open model achieves an f1-score more than 3 points higher than the state-of-the-art result, and RelAwe with DepPath&RelPath achieves the best performance in both Closed and Open settings. Notice that our best Closed model can perform almost as well as the state-of-the-art model, while the latter utilizes pre-trained word embeddings. Besides, the performance gap between the three models under the Open setting is very small. This indicates that the representation ability of BERT is so powerful that it may contain rich syntactic information. Lastly, the Gold result is much higher than that of the other models, indicating that there is still large room for improvement on this task.
Table TABREF46 shows that our Open model achieves an f1-score more than 3 points higher than the state-of-the-art result, and RelAwe with DepPath&RelPath achieves the best in both Closed and Open settings.
|
How big is the improvement over the old state-of-the-art performance on the CoNLL-2009 dataset?
|
The answers are shown as follows:
* our Open model achieves an f1-score more than 3 points higher than the state-of-the-art result
|
null | false
| null |
What is the tallest building in Seattle?
|
Columbia Tower
|
|
null | false
| null |
Identify which instrument is string or percussion: Zill, Epinette des Vosges
|
Epinette des Vosges is string, Zill is percussion.
|
|
null | false
| 349
|
Generating descriptions for multimedia contents such as images and videos, so-called Image Captioning, is helpful for e-commerce companies or news agencies. For instance, in the e-commerce field, people will no longer need to put much effort into understanding and describing product images on their websites, because image contents can be recognized and descriptions generated automatically. Inspired by Horus BIBREF0 , an Image Captioning system can also be integrated into a wearable device which captures surrounding images and generates descriptions as sound in real time, to guide visually impaired people.
Image Captioning has attracted attention from researchers in recent years BIBREF1, BIBREF2, BIBREF3, and there have been promising attempts at dealing with the language barrier in this task by extending existing dataset captions into different languages BIBREF3, BIBREF4.
In this study, generating image captions in the Vietnamese language is put into consideration. One straightforward approach for this task is to translate English captions into Vietnamese, either by humans or by using a machine translation tool such as Google Translate. With the method of translating directly from English to Vietnamese, we found that the descriptions are sometimes confusing and unnatural to native speakers. Moreover, image understanding is culturally dependent: people in the West usually have different ways to grasp images and different vocabulary choices for describing contexts. For instance, in Fig. FIGREF2, one MS-COCO English caption introduces "a baseball player in motion of pitching", which makes sense and accurately captures the main activity in the image. Though it sounds sensible in English, the sentence becomes less meaningful when we try to translate it into Vietnamese. One attempt at translating the sentence was performed with Google Translate, and the result is not as expected.
Therefore, we came up with the approach of constructing a Vietnamese Image Captioning dataset with descriptions written manually by humans. Composed by Vietnamese people, the sentences would be more natural and friendlier to Vietnamese users. The main resources we used from MS-COCO for our dataset are images. Besides, we consider having our dataset focus on the sportball category for several reasons:
By concentrating on a specific domain we are more likely to improve performance of the Image Captioning models. We expect our dataset can be used to confirm or reject this hypothesis.
Sportball Image Captioning can be used in certain sports applications, such as supporting journalists in describing large numbers of images for their articles.
Our primary contributions of this paper are as follows:
Firstly, we introduce UIT-ViIC, the first Vietnamese dataset extending MS-COCO with manually written captions for Image Captioning. UIT-ViIC is published for research purposes.
Secondly, we introduce our annotation tool for dataset construction, which is also published to help annotators conveniently create captions.
Finally, we conduct experiments to evaluate state-of-the-art models (evaluated on English dataset) on UIT-ViIC dataset, then we analyze the performance results to have insights into our corpus.
The structure of the paper is organized as follows. Related documents and studies are presented in Section SECREF2. UIT-ViIC dataset creation is described in Section SECREF3. Section SECREF4 describes the methods we implement. The experimental results and analysis are presented in Section SECREF5. Conclusion and future work are deduced in Section SECREF6.
With the method of translating directly from English to Vietnamese, we found that the descriptions are sometimes confusing and unnatural to native speakers.
|
What is confusing about the method of translating directly from English to Vietnamese?
|
The descriptions are sometimes confusing and unnatural to native speakers.
|
null | false
| null |
Which is a species of fish? Barracuda or Barracks
|
Barracuda
|
|
null | false
| 158
|
In the preprocessing step, we used pre-trained GloVe vectors for word embeddings, and they were not updated during training. We padded the number of sentences in each passage to 101, and the number of words in each sentence to 100. The number of words in queries and choices was padded to 50. The CNN uses kernels of three different widths, with the same number of kernels for each width. We utilized dropout in each CNN layer with a dropout rate of 0.8. We used the Adam BIBREF8 optimizer to optimize our model with an initial learning rate of 0.001.
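A skeletal sketch of this configuration, assuming a PyTorch implementation, might look as follows; the vocabulary size, the 100-dimensional embeddings, and the single convolution shown are placeholders rather than the full model.
    import torch
    import torch.nn as nn

    # Skeletal sketch of the stated configuration; `glove_vectors` is a
    # placeholder for the real pre-trained GloVe weight matrix, and the
    # convolution shown is one illustrative kernel width, not the full model.
    glove_vectors = torch.randn(30000, 100)
    embed = nn.Embedding.from_pretrained(glove_vectors, freeze=True)  # frozen
    conv = nn.Sequential(
        nn.Conv1d(in_channels=100, out_channels=64, kernel_size=3),
        nn.ReLU(),
        nn.Dropout(p=0.8),                                 # dropout rate 0.8
    )
    optimizer = torch.optim.Adam(conv.parameters(), lr=0.001)  # initial lr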
We used Adam [9] optimizer to optimize our model with initial learning rate 0.001.
|
What do they use to optimize the model?
|
Adam optimizer.
|
null | false
| null |
Who is the most influential president in history?
|
Abraham Lincoln is mostly regarded as the greatest president for his leadership during the Civil War and the abolition of slavery. His main contender is Franklin D. Roosevelt, for leading the country out of the Great Depression and during World War II.
|
|
1909.05358
| false
| null |
Multiple turns: The average number of utterances per dialog is about 23 which ensures context-rich language behaviors.
Multiple turns: The average number of utterances per dialog is about 23 which ensures context-rich language behaviors.
|
What is the average number of turns per dialog?
|
The answers are shown as follows:
* The average number of utterances per dialog is about 23
|
null | false
| null |
One of the driest places on Earth is the Atacama Desert. It is virtually devoid of life because it is blocked from receiving precipitation by the Andes mountains to the east and the Chilean Coast Range to the west. The cold Humboldt Current and the anticyclone of the Pacific are essential to keep the dry climate of the Atacama. The average precipitation in the Chilean region of Antofagasta is just 1 mm (0.039 in) per year. Some weather stations in the Atacama have never received rain. Evidence suggests that the Atacama may not have had any significant rainfall from 1570 to 1971. It is so arid that mountains that reach as high as 6,885 m (22,589 ft) are completely free of glaciers and, in the southern part from 25°S to 27°S, may have been glacier-free throughout the Quaternary, though permafrost extends down to an altitude of 4,400 m (14,400 ft) and is continuous above 5,600 m (18,400 ft). Nevertheless, there is some plant life in the Atacama, in the form of specialist plants that obtain moisture from dew and the fogs that blow in from the Pacific.
|
Extract the reasons the Atcama Desert is one of the driest places on earth and list them in bullet format.
|
- blocked from receiving precipitation by the Andes mountains to the east and the Chilean Coast Range to the west
- cold Humboldt Current and the anticyclone of the Pacific
|
null | false
| null |
Grimsthorpe is a hamlet in the South Kesteven district of Lincolnshire, England. It is situated on the A151 road, and 3 miles (5 km) north-west from Bourne. Grimsthorpe falls within the civil parish of Edenham, which is governed by Edenham Grimsthorpe Elsthorpe & Scottlethorpe Parish Council.
Grimsthorpe Castle is 500 yards (460 m) to the west.
John Marius Wilson's 1870 Imperial Gazetteer of England and Wales described Grimsthorpe as: a hamlet in Edenham parish, Lincoln; on the river Glen, 1½ mile W of Edenham village. Pop., 135. Grimsthorpe Park was the seat once of the Duke of Ancaster, afterwards of Lord Gwyder; is now the seat of Lord Willoughby d'Eresby; was built partly in the time of Henry III., but principally by the Duke of Suffolk, to entertain Henry VIII.; is a large, irregular, but magnificent structure; and stands in an ornate park, about 16 miles in circuit. A Cistertian abbey, founded about 1451, by the Earl of Albemarle, and called Vallis Dei, or, vulgarly, Vaudy, formerly stood in the park, about a mile from the castle.
|
What is Grimsthorpe?
|
Grimsthorpe is a hamlet in the South Kesteven district of Lincolnshire, England. It is situated on the A151 road, and 3 miles (5 km) north-west from Bourne. Grimsthorpe falls within the civil parish of Edenham, which is governed by Edenham Grimsthorpe Elsthorpe & Scottlethorpe Parish Council.
Grimsthorpe Castle is 500 yards (460 m) to the west.
John Marius Wilson's 1870 Imperial Gazetteer of England and Wales described Grimsthorpe as:
a hamlet in Edenham parish, Lincoln; on the river Glen, 1½ mile W of Edenham village. Pop., 135. Grimsthorpe Park was the seat once of the Duke of Ancaster, afterwards of Lord Gwyder; is now the seat of Lord Willoughby d'Eresby; was built partly in the time of Henry III., but principally by the Duke of Suffolk, to entertain Henry VIII.; is a large, irregular, but magnificent structure; and stands in an ornate park, about 16 miles in circuit. A Cistertian abbey, founded about 1451, by the Earl of Albemarle, and called Vallis Dei, or, vulgarly, Vaudy, formerly stood in the park, about a mile from the castle.
|
null | false
| null |
How many letters there are in the Japanese language?
|
There are 46 basic characters in the Japanese alphabet, plus additional characters/sounds that can be made from the basic 46. Each of the alphabet symbols forms one syllable, made up of the 21 romaji, which is phonetic and always read the same way.
|
|
null | false
| null |
How do I optimize my Kafka consumption in Spark for low latency?
|
You can tune the batch size and processing time to ensure that your Spark job can keep up with the data volume and maintain low latency.
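For example, assuming Spark Structured Streaming with the Kafka source, a minimal sketch might cap the per-batch volume and the trigger interval like this (broker and topic names are placeholders):
    from pyspark.sql import SparkSession

    # Minimal sketch, assuming Spark Structured Streaming with the Kafka
    # source. `maxOffsetsPerTrigger` caps how many records each micro-batch
    # reads, and the processing-time trigger keeps batches small and
    # frequent for low latency.
    spark = SparkSession.builder.appName("kafka-low-latency").getOrCreate()

    stream = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")
              .option("subscribe", "events")
              .option("maxOffsetsPerTrigger", 10000)  # cap records per batch
              .load())

    query = (stream.writeStream
             .format("console")
             .trigger(processingTime="1 second")      # frequent small batches
             .start())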
|
|
null | false
| null |
Which of the following surnames have appeared on the Wimbledon Men's Championship trophy in the last sixty years? Federer, Djokovic, Navratilova, Connors, Borg, McEnroe, Becker, Laver, Evert, Court, Nastase, Sullivan, Leconte, Bahrami.
|
Wimbledon is a famous tennis tournament played over two weeks each summer in a suburb of London in the United Kingdom. The correct answer is: Federer, Djokovic, Connors, Borg, McEnroe, Becker, Laver. The tennis players Nastase, Sullivan, Leconte, and Bahrami have never won the Wimbledon Championships. The names 'Evert', 'Navratilova' and 'Court' belong to female champions (who have won the women's event in the last sixty years).
|
|
null | false
| null |
Why are semantic HTML elements important to use?
|
The main benefit of using semantic HTML elements is that it improves the accessibility of a website when used correctly. For example, text that is styled to look like a heading for a section of content, may not be exposed as a heading to a screen reader or search engine unless that text is wrapped in a heading element — h1, h2, h3, h4, h5, or h6.
|
|
null | false
| 322
|
NLG is the process of automatically generating coherent NL text from non-linguistic data BIBREF0. Recently, the field has seen an increased interest in the development of NLG systems focusing on verbalizing resources from SW data BIBREF1. The SW aims to make information available on the Web easier to process for machines and humans. However, the languages underlying this vision, i.e., RDF, SPARQL and OWL, are rather difficult to understand for non-expert users. For example, while the meaning of the OWL class expression Class: Professor SubClassOf: worksAt SOME University is obvious to every SW expert, this expression (“Every professor works at a university”) is rather difficult to fathom for lay persons.
Previous works such as SPARQL2NL BIBREF2 and SPARTIQULATION BIBREF3 have already shown the usefulness of verbalizing SPARQL and RDF in areas such as question answering BIBREF4 and the explanation of the output of systems based on SW technologies BIBREF5. However, other SW languages, such as OWL, are rarely investigated.
In this paper, we present an open-source holistic NLG framework for the SW, named LD2NL, which facilitates the verbalization of the three key languages of the SW, i.e., RDF, OWL, and SPARQL, into NL. Our framework is based on a bottom-up paradigm for verbalizing SW data. Additionally, LD2NL builds upon SPARQL2NL, as it is open-source and the paradigm it follows can be reused and ported to RDF and OWL. Thus, LD2NL is capable of generating either a single sentence or a summary of a given resource, rule, or query. To validate our framework, we evaluated LD2NL using 66 experts in NLP and SW as well as 20 non-experts who were lay users or non-users of SW. The results suggest that LD2NL generates texts which can be easily understood by humans. The version of LD2NL used in this paper and all experimental results will be publicly available.
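As a toy illustration of such verbalization, a single template in the spirit of the professor example might look as follows; the real LD2NL covers far more constructs, so this one mapping is an illustrative assumption.
    # Toy template: verbalizing an OWL subclass axiom with an existential
    # restriction. This single mapping is an illustrative assumption.
    def verbalize_subclass(cls, prop, filler):
        # Class: <cls> SubClassOf: <prop> SOME <filler>
        return f"Every {cls.lower()} {prop} a {filler.lower()}."

    print(verbalize_subclass("Professor", "works at", "University"))
    # -> Every professor works at a university.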
In this paper, we present an open-source holistic NLG framework for the SW, named LD2NL, which facilitates the verbalization of the three key languages of the SW, i.e., RDF, OWL, and SPARQL into NL.
|
What is the LD2NL presented in this paper?
|
It is an open-source holistic NLG framework for the SW.
|
1910.00458
| false
| null |
Recently, large and powerful pre-trained language models such as BERT BIBREF8 have been achieving state-of-the-art (SOTA) results on various tasks; however, their potency on MCQA datasets has been severely limited by data insufficiency. For example, the MCTest dataset has two variants: MC160 and MC500, which are curated in a similar way, and MC160 is considered easier than MC500 BIBREF9. However, BERT-based models perform much worse on MC160 compared with MC500 (an 8–10% gap), since the data size of the former is about three times smaller. To tackle this issue, we investigate how to improve the generalization of BERT-based MCQA models under the constraint of limited training data, using four representative MCQA datasets: DREAM, MCTest, TOEFL, and SemEval-2018 Task 11.
To tackle this issue, we investigate how to improve the generalization of BERT-based MCQA models with the constraint of limited training data using four representative MCQA datasets: DREAM, MCTest, TOEFL, and SemEval-2018 Task 11.
|
What four representative datasets are used for benchmarking?
|
The answers are shown as follows:
* DREAM, MCTest, TOEFL, and SemEval-2018 Task 11
|
null | false
| 328
|
Neural network based models have been widely exploited with the prosperity of Deep Learning BIBREF0 and have achieved inspiring performances on many NLP tasks, such as text classification BIBREF1, BIBREF2, semantic matching BIBREF3, BIBREF4 and machine translation BIBREF5. These models relieve the burden of manual feature engineering and can represent words, sentences and documents as fixed-length vectors, which contain rich semantic information and are ideal for subsequent NLP tasks.
One formidable constraint of deep neural networks (DNNs) is their strong reliance on large amounts of annotated corpora due to the substantial number of parameters to train. A DNN trained on limited data is prone to overfitting and incapable of generalizing well. However, the construction of large-scale, high-quality labeled datasets is extremely labor-intensive. To solve the problem, these models usually employ a pre-trained lookup table, also known as Word Embedding BIBREF6, to map words into vectors with semantic implications. However, this method just introduces extra knowledge and does not directly optimize the targeted task. The problem of insufficient annotated resources is not solved either.
Multi-task learning leverages potential correlations among related tasks to extract common features, implicitly increase corpus size and yield classification improvements. Inspired by BIBREF7, there is a large literature dedicated to multi-task learning with neural network based models BIBREF8, BIBREF9, BIBREF10, BIBREF11. These models basically share some lower layers to capture common features and further feed them to subsequent task-specific layers, which can be classified into three types:
In this paper, we propose a generalized multi-task learning architecture with four types of recurrent neural layers for text classification. The architecture focuses on Type-III, which involves more complicated interactions but has not been researched yet. All the related tasks are jointly integrated into a single system and samples from different tasks are trained in parallel. In our model, every two tasks can directly interact with each other and selectively absorb useful information, or communicate indirectly via a shared intermediate layer. We also design a global memory storage to share common features and collect interactions among all tasks.
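As a rough illustration of this shared-private design, consider the following minimal PyTorch sketch; the layer sizes, the mean-pooling, and the single-vector "global memory" are simplifying assumptions, not the exact architecture of the paper.
    import torch
    import torch.nn as nn

    # Simplified sketch: each task owns a private LSTM, all tasks read the
    # shared LSTM's features, and a global memory vector collects common
    # information. Sizes and pooling are illustrative assumptions.
    class MultiTaskTextModel(nn.Module):
        def __init__(self, vocab_size, num_tasks, emb=128, hid=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb)
            self.shared = nn.LSTM(emb, hid, batch_first=True)
            self.private = nn.ModuleList(
                [nn.LSTM(emb + hid, hid, batch_first=True)
                 for _ in range(num_tasks)])
            self.memory = nn.Parameter(torch.zeros(hid))  # global storage
            self.heads = nn.ModuleList(
                [nn.Linear(2 * hid, 2) for _ in range(num_tasks)])

        def forward(self, tokens, task_id):
            e = self.embed(tokens)
            s, _ = self.shared(e)                  # common features
            h, _ = self.private[task_id](torch.cat([e, s], dim=-1))
            pooled = h.mean(dim=1) + self.memory   # read global memory
            return self.heads[task_id](
                torch.cat([pooled, s.mean(dim=1)], dim=-1))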
We conduct extensive experiments on five benchmark datasets for text classification. Compared to learning separately, jointly learning multiple relative tasks in our model demonstrate significant performance gains for each task.
Our contributions are threefold:
In our model, every two tasks can directly interact with each other and selectively absorb useful information, or communicate indirectly via a shared intermediate layer.
|
How do every two tasks interact in the model?
|
In the model, every two tasks can directly interact with each other and selectively absorb useful information, or communicate indirectly via a shared intermediate layer.
|
null | false
| null |
Tell me if the following tourist attractions in Barcelona are free or paid: Park Güell, Parc de la Ciutadella, La Boqueria, Tibidabo, Casa Battló, La Rambla, Born Centre Cultural, La Sagrada Familia, La Pedrera, La Barceloneta, Palau de Música Catalana, Camp Nou
|
Park Güell - paid
Parc de la Ciutadella - free
La Boqueria - free
Tibidabo - free
Casa Battló - paid
La Rambla - free
Born Centre Cultural - free
La Sagrada Familia - paid
La Pedrera - paid
La Barceloneta - free
Palau de Música Catalana - paid
Camp Nou - paid
|
|
null | false
| null |
Controversy erupted in mid-winter 2001 when Seimon Glyn, Gwynedd Council's housing committee chairman and Plaid Cymru member, voiced frustration over "English immigrants" moving into traditionally Welsh speaking communities. Glyn was commenting on a report underscoring the dilemma of rocketing house prices outstripping what locals could pay, with the report warning that "...traditional Welsh communities could die out..." as a consequence.
In 2001 nearly a third of all purchases of properties in Gwynedd were by buyers from out of the county, with some communities reporting as many as a third of local homes used as holiday homes. Holiday home owners typically spend less than six months of the year in the local community.
The issue of locals being priced out of the local housing market is common to many rural communities throughout Britain, but in Wales the added dimension of language further complicates the issue, as many new residents do not learn the Welsh language.
Concerned for the Welsh language under these pressures, Glyn said "Once you have more than 50% of anybody living in a community that speaks a foreign language, then you lose your indigenous tongue almost immediately". His comments attracted strong criticism of Plaid Cymru from other national parties.
By spring 2002 both the Snowdonia National Park (Welsh: Parc Cenedlaethol Eryri) and Pembrokeshire Coast National Park (Welsh: Parc Cenedlaethol Arfordir Penfro) authorities began limiting second home ownership within the parks, following the example set by Exmoor. According to planners in Snowdonia and Pembroke applicants for new homes must demonstrate a proven local need or the applicant must have strong links with the area.
|
Without directly quoting from the text, give me a summary of the second home controversy in Gwynedd, Wales.
|
By the end of 2001, nearly a third of property purchases in Gwynedd were made by buyers from outside the county, and some communities reported as many as a third of local homes being used as holiday homes. The local community was concerned that the use of the Welsh language was at risk because many new residents spoke only English, not Welsh. To combat this phenomenon, two Welsh national parks started limiting second home ownership in their areas, requiring applicants for new homes to demonstrate a proven local need or strong ties to the area.
|
null | false
| null |
Do Texans have a Southern accent?
|
It depends on who you ask. Most Texans would answer the question with no. Texans would describe their accent as a Texas drawl. The sound of a Texas drawl differs from what is commonly considered a southern accent. A southern accent is common in states such as Alabama, Georgia, the Carolinas, Mississippi, and others.
|
|
null | false
| null |
Categorize each of the following classes as either 'hard', 'medium', 'easy': english writing, math, history, chemistry, physics.
|
English writing is generally considered easy; however, mastering the skill of writing is actually pretty hard.
Math is generally considered hard in school, since it is less intuitive and requires a multi-step thinking process to solve problems.
History is considered easy at school, while chemistry is considered medium.
Physics is considered hard. It combines math and science, has lots of concepts to learn, and is cumulative: each new concept is built on top of many previous concepts, and one physics concept may depend on concepts from math, science, etc.
|
|
null | false
| 271
|
Large repositories of textual communications (e.g. forum and microblog posts) have gained recent popularity as proxies for dialog BIBREF0, BIBREF1, BIBREF2. However, conversations in these settings differ from natural dialog: turns may be sparsely scattered over a large temporal span, contain distinct syntax and vocabulary BIBREF3, and differ greatly in formality and focus BIBREF4. In this paper, we investigate how appropriate such data is for modeling natural dialog, and introduce Interview, a new high-quality large-scale open-domain conversational dataset grounded in interview settings with annotations for specific speaker roles.
We compare the performance of state-of-the-art language models fine-tuned on Interview and other popular conversational datasets, demonstrating that Interview contains more complex dialog and better models the characteristics of natural spoken conversations. Our dataset is an order of magnitude larger than existing high-quality natural dialog datasets and contains speaker role annotations for each turn, facilitating the development of conversational agents and assistive systems for settings involving specific speaker roles, such as doctor-patient interviews or hosted talk shows.
In particular, we explore the tasks of role modeling in media dialog and role change detection on Interview and find that leveraging role information can enable more nuanced, on-topic and natural dialog generation, as well as improve role change classification performance.
In summary, we present Interview, the first large-scale open-domain media dialog dataset. We explore two tasks for which it serves as a promising benchmark dataset: speaker role modeling and speaker change detection. We build simple yet strong models to show quantitatively that role labels from Interview improve performance on such tasks. Interview's scale, spoken origins, role diversity, and complex utterances make it a better source for grounded open-domain conversations.
We build simple yet strong models to show quantitatively that role labels from Interview improve performance on such tasks.
|
Do the authors build simple yet strong models to show this quantitatively?
|
Yes, they do.
|
null | false
| null |
What is the weather like in San Francisco?
|
The weather is typically mild in San Francisco. It is usually 50 to 65 degrees Fahrenheit in both the winters and summers. San Francisco is known for having microclimates, which means that different neighborhoods might have different weather at the same time. The Mission District is known for being sunny whereas the Sunset is often foggy. San Francisco is often windy and typically gets cool at night. It's always smart to bring a jacket in San Francisco.
|
|
null | false
| null |
Classify these as animals or vehicles: helicopter, car, sheep, dog, bike, bus, van, tiger, cat, llama, deer, tractor, airplane, ship, rabbit, snake
|
The vehicles are: helicopter, car, bike, bus, van, tractor, airplane, ship
The animals are: sheep, dog, tiger, cat, llama, deer, rabbit, snake
|
|
null | false
| null |
Identify which instrument is string or percussion: Surdo, Kora
|
Kora is string, Surdo is percussion.
|
|
null | false
| null |
Harold Brainerd Hersey (April 11, 1893 – March 1956) was an American pulp editor and publisher, publishing several volumes of poetry. His pulp industry observations were published in hardback as Pulpwood Editor (1937).
|
Given this paragraph about Harold Brainerd Hersey, what did he do and when were his observations published?
|
Harold Brainerd Hersey was an American pulp editor and publisher. His observations were published in 1937.
|
null | false
| null |
On a supercontinent version of Earth that has four Moons, powerful monsters and supervillains wreak havoc. The millionaire Agoni creates the Hero Association, which employs superheroes to fight evil. Saitama, an unassociated hero, hails from City Z and performs heroic deeds as a hobby. For three years, he has trained enough to defeat any enemy with a single punch, his unmatched strength leaving him bored. He becomes a reluctant mentor to Genos, a cyborg seeking revenge against another cyborg who killed his family and destroyed his hometown, after Saitama defeats a monster that defeated Genos.
|
What is the plot of One-Punch Man?
|
One-Punch Man is a manga and an anime about a protagonist, Saitama, who can defeat all of his enemies with one punch. It takes place on a version of Earth with one supercontinent and four moons, which suffers regular monster and supervillain attacks. To combat this, the Hero Association was formed, but the protagonist is not part of this association and simply performs heroic deeds as a hobby. Genos, a cyborg seeking revenge, sees Saitama's strength and becomes his disciple after Saitama defeats a monster that had defeated Genos.
|
null | false
| null |
Can artificial intelligence and large language models replace software engineers?
|
While it is unlikely that artificial intelligence (AI) and large language models (LLMs) will replace high-performing software engineers in the near future, there is certainly an argument that these technologies could make engineers much more productive. This in turn could reduce the need for, and therefore replace, lower-level software engineering positions. These kinds of cultural and organizational changes will likely take time. Another important dynamic to consider is that generative AI technology could significantly lower the barrier to entry for these positions while also driving the need for new kinds of roles based on the use of this technology platform.
|
|
null | false
| 92
|
Progress in AI has been driven by, among other things, the development of challenging large-scale benchmarks like ImageNet BIBREF0 in computer vision, and SNLI BIBREF1, SQuAD BIBREF2, and others in natural language processing (NLP). Recently, for natural language understanding (NLU) in particular, the focus has shifted to combined benchmarks like SentEval BIBREF3 and GLUE BIBREF4, which track model performance on multiple tasks and provide a unified platform for analysis.
With the rapid pace of advancement in AI, however, NLU benchmarks struggle to keep up with model improvement. Whereas it took around 15 years to achieve “near-human performance” on MNIST BIBREF5, BIBREF6, BIBREF7 and approximately 7 years to surpass humans on ImageNet BIBREF8, BIBREF0, BIBREF9, the GLUE benchmark did not last as long as we would have hoped after the advent of BERT BIBREF10, and rapidly had to be extended into SuperGLUE BIBREF11. This raises an important question: Can we collect a large benchmark dataset that can last longer?
The speed with which benchmarks become obsolete raises another important question: are current NLU models genuinely as good as their high performance on benchmarks suggests? A growing body of evidence shows that state-of-the-art models learn to exploit spurious statistical patterns in datasets BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, instead of learning meaning in the flexible and generalizable way that humans do. Given this, human annotators—be they seasoned NLP researchers or non-experts—might easily be able to construct examples that expose model brittleness.
We propose an iterative, adversarial human-and-model-in-the-loop solution for NLU dataset collection that addresses both benchmark longevity and robustness issues. In the first stage, human annotators devise examples that our current best models cannot determine the correct label for. These resulting hard examples—which should expose additional model weaknesses—can be added to the training set and used to train a stronger model. We then subject the strengthened model to human interference and collect more weaknesses over several rounds. After each round, we both train a new model and set aside a new test set. The process can be iteratively repeated in a never-ending learning BIBREF18 setting, with the model getting stronger and the test set getting harder in each new round. This process yields a “moving post” dynamic target for NLU systems, rather than a static benchmark that will eventually saturate.
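The loop can be summarized with the following schematic sketch; train and annotate are assumed callables standing in for model training and for the human-and-model-in-the-loop annotation step.
    # Schematic sketch of the collection loop. `train`: data -> model;
    # `annotate`: model -> (train, test) splits of verified examples the
    # current model gets wrong. Both are placeholders for the real steps.
    def adversarial_rounds(train, annotate, initial_data, num_rounds=3):
        data, test_sets = list(initial_data), []
        model = train(data)
        for _ in range(num_rounds):
            new_train, new_test = annotate(model)  # human-and-model loop
            data += new_train                      # hard examples grow training
            test_sets.append(new_test)             # set aside a harder test set
            model = train(data)                    # stronger model next round
        return model, test_sets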
Our approach draws inspiration from recent efforts that gamify collaborative training of machine learning agents over multiple rounds BIBREF19 and pit “builders” against “breakers” to learn better models BIBREF20. Recently, Dinan2019build showed that a similar approach can be used to make dialogue safety classifiers more robust. Here, we focus on natural language inference (NLI), arguably the most canonical task in NLU. We collected three rounds of data, and call our new dataset Adversarial NLI (ANLI).
Our contributions are as follows: 1) We introduce a novel human-and-model-in-the-loop dataset, currently consisting of three rounds that progressively increase in difficulty and complexity, that includes annotator-provided explanations. 2) We show that training models on this new dataset leads to state-of-the-art performance on a variety of popular NLI benchmarks. 3) We provide a detailed analysis of the collected data that sheds light on the shortcomings of current models, categorizes the data by inference type to examine weaknesses, and demonstrates good performance on NLI stress tests. The ANLI dataset is available at github.com/facebookresearch/anli/. A demo of the annotation procedure can be viewed at adversarialnli.com.
2) We show that training models on this new dataset leads to state-of-the-art performance on a variety of popular NLI benchmarks. 3) We provide a detailed analysis of the collected data that sheds light on the shortcomings of current models, categorizes the data by inference type to examine weaknesses, and demonstrates good performance on NLI stress tests.
|
What advantages do models show when trained on this new dataset?
|
The paper shows that training models on this new dataset leads to state-of-the-art performance on a variety of popular NLI benchmarks; it also provides a detailed analysis that categorizes the data by inference type to examine weaknesses, and demonstrates good performance on NLI stress tests.
|
null | false
| null |
Identify which instrument is string or percussion: Stomp box, Gunjac
|
Gunjac is string, Stomp box is percussion.
|
|
null | false
| null |
Write a letter to a school asking for an appointment to tour their facility
|
Dear Sir / Madam:
We live in Amsterdam and are parents of a bright 8 year old. We have heard wonderful things about your school and are interested in a tour. We would love to meet the admissions director and tour the facilities.
Can you please let us know your availability in the coming few weeks?
Thank you!
|
|
null | false
| null |
Identify which animal species is alive or extinct: Tiktaalik, Tree Kangaroo
|
Tree Kangaroo is alive, Tiktaalik is extinct.
|
|
null | false
| 109
|
As discussed in the literature BIBREF5 , sarcasm detection may depend on sentiment and other cognitive aspects. For this reason, we incorporate both sentiment and emotion clues in our framework. Along with these, we also argue that personality of the opinion holder is an important factor for sarcasm detection. In order to address all of these variables, we create different models for each of them, namely: sentiment, emotion and personality. The idea is to train each model on its corresponding benchmark dataset and, hence, use such pre-trained models together to extract sarcasm-related features from the sarcasm datasets.
Now, the viable research question here is: do these models help to improve sarcasm detection performance? Literature shows that they improve the performance, but not significantly. Thus, do we need to consider those factors in spotting sarcastic sentences? Aren't n-grams enough for sarcasm detection? Throughout the rest of this paper, we address these questions in detail. The training of each model is done using a CNN. Below, we explain the framework in detail. Then, we discuss the pre-trained models. Figure FIGREF6 presents a visualization of the proposed framework.
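A minimal sketch of how such pre-trained models might be combined as frozen feature extractors is shown below; the 128-dimensional feature size and the fusion-by-concatenation head are illustrative assumptions, not the exact design of the paper.
    import torch
    import torch.nn as nn

    # Sketch: three CNNs pre-trained on sentiment, emotion and personality
    # benchmarks are frozen and used as feature extractors alongside the
    # sarcasm model's own encoder. The 128-dim feature size is assumed.
    class SarcasmClassifier(nn.Module):
        def __init__(self, encoder, sentiment_cnn, emotion_cnn, personality_cnn):
            super().__init__()
            self.encoder = encoder
            self.extractors = nn.ModuleList(
                [sentiment_cnn, emotion_cnn, personality_cnn])
            for m in self.extractors:              # freeze pre-trained models
                for p in m.parameters():
                    p.requires_grad = False
            self.classifier = nn.Linear(4 * 128, 2)

        def forward(self, x):
            feats = [self.encoder(x)] + [m(x) for m in self.extractors]
            return self.classifier(torch.cat(feats, dim=-1))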
In order to address all of these variables, we create different models for each of them, namely: sentiment, emotion and personality.
|
What models are created for each variable?
|
Sentiment, emotion and personality.
|
1903.03467
| false
| null |
We compare the different conditions using BLEU BIBREF5 with respect to the reference Hebrew translations. We use the multi-bleu.perl script from the Moses toolkit BIBREF6 . Table shows BLEU scores for the different prefixes. The numbers match our expectations: generally, providing incorrect speaker and/or audience information decreases the BLEU scores, while providing the correct information substantially improves it - we see an increase of up to 2.3 BLEU over the baseline. We note that the BLEU score improves in all cases, even when given the wrong gender of either the speaker or the audience. We hypothesise this improvement stems from the addition of the word “said”, which prompts the model to generate more “spoken” language that matches the tested scenario. Providing correct information for both speaker and audience usually helps more than providing correct information for either one of them individually. The one outlier is providing “She” for the speaker and “her” for the audience. While this is not the correct scenario, we hypothesise it gives an improvement in BLEU as it further reinforces the female gender in the sentence.
Results: Speaker. Figure FIGREF3 shows the result for controlling the morphological properties of the speaker ({he, she, I} said). It shows the proportion of gender-inflected verbs for the various conditions and the reference. We see that the baseline system severely under-predicts the feminine form of verbs as compared to the reference. The “He said” conditions further decreases the number of feminine verbs, while the “I said” conditions bring it back to the baseline level. Finally, the “She said” prefixes substantially increase the number of feminine-marked verbs, bringing the proportion much closer to that of the reference (though still under-predicting some of the feminine cases).
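A hypothetical helper mirroring this evaluation setup might look as follows; the prefix strings and file paths are placeholders, the translation step itself is omitted, and the multi-bleu.perl invocation follows the standard Moses usage of reading the hypothesis on stdin.
    import subprocess

    # Hypothetical helpers: prepend a speaker prefix to each source sentence,
    # then score translations with the Moses multi-bleu.perl script.
    def prefixed_sources(sentences, prefix="she said"):
        # conditions: "he said", "she said", "I said", with/without audience
        return [f'"{prefix}" {s}' for s in sentences]

    def score_bleu(hypothesis_path, reference_path):
        with open(hypothesis_path) as hyp:  # multi-bleu.perl reads stdin
            result = subprocess.run(
                ["perl", "multi-bleu.perl", reference_path],
                stdin=hyp, capture_output=True, text=True,
            )
        return result.stdout  # e.g. "BLEU = 24.31, ..."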
Generally, providing incorrect speaker and/or audience information decreases the BLEU scores, while providing the correct information substantially improves it - we see an increase of up to 2.3 BLEU over the baseline.
We see that the baseline system severely under-predicts the feminine form of verbs as compared to the reference. The “He said” conditions further decreases the number of feminine verbs, while the “I said” conditions bring it back to the baseline level. Finally, the “She said” prefixes substantially increase the number of feminine-marked verbs, bringing the proportion much closer to that of the reference (though still under-predicting some of the feminine cases).
|
How is it demonstrated that the correct gender and number information is injected using this system?
|
The answers are shown as follows:
* correct information substantially improves it - we see an increase of up to 2.3 BLEU over the baseline
* Finally, the “She said” prefixes substantially increase the number of feminine-marked verbs, bringing the proportion much closer to that of the reference
|
1911.03154
| true
| null |
Recently, a number of researchers have endeavored to explore methods for simultaneous translation in the context of NMT BIBREF6, BIBREF7, BIBREF8, BIBREF9. Some of them propose sophisticated training frameworks explicitly designed for simultaneous translation BIBREF5, BIBREF10. These approaches are either memory inefficient during training BIBREF5 or hard to implement BIBREF10. Others utilize a full-sentence base model to perform simultaneous translation by modifications to the encoder and the decoding process. To match the incremental source context, they replace the bidirectional encoder with a left-to-right encoder BIBREF3, BIBREF11, BIBREF4, BIBREF12 or recompute the encoder hidden states BIBREF13. On top of that, heuristic algorithms BIBREF3, BIBREF14 or a READ/WRITE model trained with reinforcement learning BIBREF11, BIBREF4, BIBREF12 or supervised learning BIBREF13 are used to decide, at every step, whether to wait for the next source token or output a target token. However, these models either cannot directly use a pretrained vanilla CNMT model with bidirectional encoder as the base model or work in a sub-optimal way in the decoding stage.
Recently, a number of researchers have endeavored to explore methods for simultaneous translation in the context of NMT BIBREF6, BIBREF7, BIBREF8, BIBREF9. Some of them propose sophisticated training frameworks explicitly designed for simultaneous translation BIBREF5, BIBREF10.
|
Has there been previous work on SNMT?
|
Yes.
|
null | false
| 33
|
This section describes the similarity and relatedness measures we integrate in our second–order co–occurrence vectors. We use two taxonomies in this study, SNOMED–CT and MeSH. SNOMED–CT (Systematized Nomenclature of Medicine Clinical Terms) is a comprehensive clinical terminology created for the electronic representation of clinical health information. MeSH (Medical Subject Headings) is a taxonomy of biomedical terms developed for indexing biomedical journal articles.
We obtain SNOMED–CT and MeSH via the Unified Medical Language System (UMLS) Metathesaurus (version 2016AA). The Metathesaurus contains approximately 2 million biomedical and clinical concepts from over 150 different terminologies that have been semi–automatically integrated into a single source. Concepts in the Metathesaurus are connected largely by two types of hierarchical relations: parent/child (PAR/CHD) and broader/narrower (RB/RN).
This section describes the similarity and relatedness measures we integrate in our second–order co–occurrence vectors.
|
What does Section 2 mainly describe?
|
This section describes the similarity and relatedness measures they integrate in their second–order co–occurrence vectors.
|
null | false
| null |
What is genome analysis?
|
The genome sequence of an organism includes the collective DNA sequences of each chromosome in the organism. For a bacterium containing a single chromosome, a genome project will aim to map the sequence of that chromosome. For the human species, whose genome includes 22 pairs of autosomes and 2 sex chromosomes, a complete genome sequence will involve 46 separate chromosome sequences.
The Human Genome Project is a well known example of a genome project.
Genome assembly refers to the process of taking a large number of short DNA sequences and reassembling them to create a representation of the original chromosomes from which the DNA originated. In a shotgun sequencing project, all the DNA from a source (usually a single organism, anything from a bacterium to a mammal) is first fractured into millions of small pieces. These pieces are then "read" by automated sequencing machines. A genome assembly algorithm works by taking all the pieces and aligning them to one another, and detecting all places where two of the short sequences, or reads, overlap. These overlapping reads can be merged, and the process continues.
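For intuition, here is a toy greedy sketch of the overlap-merge idea just described; real assemblers rely on far more sophisticated data structures, such as overlap graphs and de Bruijn graphs.
    # Naive greedy sketch of overlap-merge assembly, for intuition only.
    def overlap(a, b, min_len=3):
        """Length of the longest suffix of `a` that is a prefix of `b`."""
        for n in range(min(len(a), len(b)), min_len - 1, -1):
            if a.endswith(b[:n]):
                return n
        return 0

    def greedy_assemble(reads, min_len=3):
        reads = list(reads)
        while len(reads) > 1:
            # Find the pair of reads with the largest overlap and merge them.
            best = (0, None, None)
            for i, a in enumerate(reads):
                for j, b in enumerate(reads):
                    if i != j:
                        n = overlap(a, b, min_len)
                        if n > best[0]:
                            best = (n, i, j)
            n, i, j = best
            if n == 0:  # no remaining overlaps; stop merging
                break
            merged = reads[i] + reads[j][n:]
            reads = [r for k, r in enumerate(reads) if k not in (i, j)]
            reads.append(merged)
        return reads

    # Example: three overlapping reads reassemble into one contig.
    print(greedy_assemble(["ATTAGACCTG", "CCTGCCGGAA", "AGACCTGCCG"]))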
Genome assembly is a very difficult computational problem, made more difficult because many genomes contain large numbers of identical sequences, known as repeats. These repeats can be thousands of nucleotides long, and occur in different locations, especially in the large genomes of plants and animals.
The resulting (draft) genome sequence is produced by combining the information from sequenced contigs and then employing linking information to create scaffolds. Scaffolds are positioned along the physical map of the chromosomes, creating a "golden path".
Originally, most large-scale DNA sequencing centers developed their own software for assembling the sequences that they produced. However, this has changed as the software has grown more complex and as the number of sequencing centers has increased. An example of such an assembler is the Short Oligonucleotide Analysis Package, developed by BGI for de novo assembly of human-sized genomes, alignment, SNP detection, resequencing, indel finding, and structural variation analysis.
Since the 1980s, molecular biology and bioinformatics have created the need for DNA annotation. DNA annotation or genome annotation is the process of attaching biological information to sequences, and in particular identifying the locations of genes and determining what those genes do.
When sequencing a genome, there are usually regions that are difficult to sequence (often regions with highly repetitive DNA). Thus, 'completed' genome sequences are rarely ever complete, and terms such as 'working draft' or 'essentially complete' have been used to more accurately describe the status of such genome projects. Even when every base pair of a genome sequence has been determined, there are still likely to be errors present because DNA sequencing is not a completely accurate process. It could also be argued that a complete genome project should include the sequences of mitochondria and (for plants) chloroplasts as these organelles have their own genomes.
It is often reported that the goal of sequencing a genome is to obtain information about the complete set of genes in that particular genome sequence. The proportion of a genome that encodes for genes may be very small (particularly in eukaryotes such as humans, where coding DNA may only account for a few percent of the entire sequence). However, it is not always possible (or desirable) to only sequence the coding regions separately. Also, as scientists understand more about the role of this noncoding DNA (often referred to as junk DNA), it will become more important to have a complete genome sequence as a background to understanding the genetics and biology of any given organism.
In many ways genome projects do not confine themselves to only determining a DNA sequence of an organism. Such projects may also include gene prediction to find out where the genes are in a genome, and what those genes do. There may also be related projects to sequence ESTs or mRNAs to help find out where the genes actually are.
|
|
null | false
| 332
|
As language processing tools become more prevalent concern has grown over their susceptibility to social biases and their potential to propagate bias BIBREF1, BIBREF2. Natural language training data inevitably reflects biases present in our society. For example, gender bias manifests itself in training data which features more examples of men than of women. Tools trained on such data will then exhibit or even amplify the biases BIBREF3.
Gender bias is a particularly important problem for Neural Machine Translation (NMT) into gender-inflected languages. An over-prevalence of some gendered forms in the training data leads to translations with identifiable errors BIBREF0. Translations are better for sentences involving men and for sentences containing stereotypical gender roles. For example, mentions of male doctors are more reliably translated than those of male nurses BIBREF2, BIBREF4.
Recent approaches to the bias problem in NLP have involved training from scratch on artificially gender-balanced versions of the original dataset BIBREF5, BIBREF6 or with de-biased embeddings BIBREF7, BIBREF8. While these approaches may be effective, training from scratch is inefficient and gender-balancing embeddings or large parallel datasets are challenging problems BIBREF9.
Instead we propose treating gender debiasing as a domain adaptation problem, since NMT models can very quickly adapt to a new domain BIBREF10. To the best of our knowledge this work is the first to attempt NMT bias reduction by fine-tuning, rather than retraining. We consider three aspects of this adaptation problem: creating less biased adaptation data, parameter adaptation using this data, and inference with the debiased models produced by adaptation.
Regarding data, we suggest that a small, trusted gender-balanced set could allow more efficient and effective gender debiasing than a larger, noisier set. To explore this we create a tiny, handcrafted profession-based dataset for transfer learning. For contrast, we also consider fine-tuning on a counterfactual subset of the full dataset and propose a straightforward scheme for artificially gender-balancing parallel text for NMT.
We find that during domain adaptation, improvement on the gender-debiased domain comes at the expense of general translation quality due to catastrophic forgetting BIBREF11. We can balance improvement and forgetting either with a regularised training procedure, Elastic Weight Consolidation (EWC), or at inference time with a two-step lattice rescoring procedure.
We experiment with three language pairs, assessing the impact of debiasing on general domain BLEU and on the WinoMT challenge set BIBREF0. We find that continued training on the handcrafted set gives far stronger and more consistent improvements in gender-debiasing with orders of magnitude less training time, although as expected general translation performance as measured by BLEU decreases.
We further show that regularised adaptation with EWC can reduce bias while limiting degradation in general translation quality. We also present a lattice rescoring procedure in which initial hypotheses produced by the biased baseline system are transduced to create gender-inflected search spaces which can be rescored by the adapted model. We believe this approach, rescoring with models targeted to remove bias, is novel in NMT. The rescoring procedure improves WinoMT accuracy by up to 30% with no decrease in BLEU on the general test set.
Recent recommendations for ethics in Artificial Intelligence have suggested that social biases or imbalances in a dataset be addressed prior to model training BIBREF12. This recommendation presupposes that the source of bias in a dataset is both obvious and easily adjusted. We show that debiasing a full NMT dataset is difficult, and suggest alternative efficient and effective approaches for debiasing a model after it is trained. This avoids the need to identify and remove all possible biases prior to training, and has the added benefit of preserving privacy, since no access to the original data or knowledge of its contents is required. As evidence, in section SECREF43, we show this scheme can be applied to remove gender bias in the output of 'black box' online commercial MT systems.
We also present a lattice rescoring procedure in which initial hypotheses produced by the biased baseline system are transduced to create gender-inflected search spaces which can be rescored by the adapted model.
|
How does the lattice rescoring procedure work?
|
The authors also present a lattice rescoring procedure in which initial hypotheses produced by the biased baseline system are transduced to create gender-inflected search spaces which can be rescored by the adapted model.
|
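The record above balances debiasing against catastrophic forgetting with Elastic Weight Consolidation (EWC). A minimal sketch of the EWC penalty follows, assuming a PyTorch model and a loader over the original general-domain data; the class name, the diagonal-Fisher estimate from squared gradients, and the `lam` weight are illustrative assumptions, not the authors' released code.

```python
# Minimal EWC sketch: penalise drift from the converged general-domain
# parameters, weighted by how important each parameter was to that task.
import torch

class EWCPenalty:
    def __init__(self, model, old_loader, loss_fn, device="cpu"):
        # theta*: snapshot of the parameters before fine-tuning.
        self.anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
        # Diagonal Fisher information, estimated from squared gradients
        # of the original task loss over the old data.
        self.fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
        model.eval()
        n_batches = 0
        for x, y in old_loader:
            model.zero_grad()
            loss_fn(model(x.to(device)), y.to(device)).backward()
            for n, p in model.named_parameters():
                if p.grad is not None:
                    self.fisher[n] += p.grad.detach() ** 2
            n_batches += 1
        for n in self.fisher:
            self.fisher[n] /= max(n_batches, 1)

    def penalty(self, model, lam=0.1):
        # lam trades off adaptation to the debiased data against forgetting.
        total = 0.0
        for n, p in model.named_parameters():
            total = total + (self.fisher[n] * (p - self.anchor[n]) ** 2).sum()
        return 0.5 * lam * total
```

During fine-tuning on the gender-balanced set, the training objective would be `task_loss + ewc.penalty(model)`, so parameters that mattered for general translation resist change while the rest adapt.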
null | false
| null |
Mohandas Karamchand Gandhi (/ˈɡɑːndi, ˈɡændi/ GA(H)N-dee, Gujarati: [ˈmoɦəndɑs ˈkəɾəmtʃənd ˈɡɑ̃dʱi]; 2 October 1869 – 30 January 1948) was an Indian lawyer, anti-colonial nationalist and political ethicist who employed nonviolent resistance to lead the successful campaign for India's independence from British rule. He inspired movements for civil rights and freedom across the world. The honorific Mahātmā (Sanskrit: "great-souled", "venerable"), first applied to him in 1914 in South Africa, is now used throughout the world.
Born and raised in a Hindu family in coastal Gujarat, Gandhi trained in the law at the Inner Temple, London, and was called to the bar at age 22 in June 1891. After two uncertain years in India, where he was unable to start a successful law practice, he moved to South Africa in 1893 to represent an Indian merchant in a lawsuit. He went on to live in South Africa for 21 years. It was here that Gandhi raised a family and first employed nonviolent resistance in a campaign for civil rights. In 1915, aged 45, he returned to India and soon set about organising peasants, farmers, and urban labourers to protest against excessive land-tax and discrimination.
Assuming leadership of the Indian National Congress in 1921, Gandhi led nationwide campaigns for easing poverty, expanding women's rights, building religious and ethnic amity, ending untouchability, and, above all, achieving swaraj or self-rule. Gandhi adopted the short dhoti woven with hand-spun yarn as a mark of identification with India's rural poor. He began to live in a self-sufficient residential community, to eat simple food, and undertake long fasts as a means of both introspection and political protest. Bringing anti-colonial nationalism to the common Indians, Gandhi led them in challenging the British-imposed salt tax with the 400 km (250 mi) Dandi Salt March in 1930 and in calling for the British to quit India in 1942. He was imprisoned many times and for many years in both South Africa and India.
|
Who is Mahatma Gandhi?
|
Mahatma Gandhi was an Indian freedom fighter and peace-loving leader. He was born Mohandas Karamchand Gandhi in Gujarat on 2 October 1869. Born and raised in a Hindu family in coastal Gujarat, Gandhi trained in the law at the Inner Temple, London, and was called to the bar at age 22 in June 1891. After two uncertain years in India, where he was unable to start a successful law practice, he moved to South Africa in 1893 to represent an Indian merchant in a lawsuit, and went on to live there for 21 years. Gandhi led nationwide campaigns for easing poverty, expanding women's rights, building religious and ethnic amity, ending untouchability, and, above all, achieving swaraj or self-rule. He led the Dandi Salt March in 1930 and called for the British to quit India in 1942.
|
null | false
| 100
|
Question classification can enable targeting question answering models, but is challenging to implement with high performance without using rule-based methods. In this work we generate the most fine-grained challenge dataset for question classification, using complex and syntactically diverse questions, and show gains of up to 12% are possible with our question classification model across datasets in open, science, and medical domains. This model is the first demonstration of a question classification model achieving state-of-the-art results across benchmark datasets in open, science, and medical domains. We further demonstrate attending to question type can significantly improve question answering performance, with large gains possible as question classification performance improves. Our error analysis suggests that developing high-precision methods of question classification, independent of their recall, can offer the opportunity to incrementally make use of the benefits of question classification without suffering the consequences of classification errors on QA performance.
We further demonstrate attending to question type can significantly improve question answering performance, with large gains possible as question classification performance improves.
|
What has the author further proved through the experiment?
|
They further demonstrate attending to question type can significantly improve question answering performance, with large gains possible as question classification performance improves.
|
null | false
| null |
Identify which instrument is string or percussion: Kendang, Gekkin
|
Gekkin is string, Kendang is percussion.
|
|
null | false
| null |
Haile Selassie I (Ge'ez: ቀዳማዊ ኀይለ ሥላሴ, romanized: Qädamawi Häylä Səllasé, Amharic pronunciation: [ˈhaɪlə sɨlˈlase]; born Tafari Makonnen; 23 July 1892 – 27 August 1975) was Emperor of Ethiopia from 1930 to 1974. He rose to power as Regent Plenipotentiary of Ethiopia (Enderase) for Empress Zewditu from 1916. Haile Selassie is widely considered a defining figure in modern Ethiopian history, and the key figure of Rastafari, a religious movement in Jamaica that emerged shortly after he became emperor in the 1930s. He was a member of the Solomonic dynasty, which claims to trace lineage to Emperor Menelik I, believed to be the son of King Solomon and Makeda the Queen of Sheba.
|
who is Haile Sellassie?
|
Haile Selassie (Amharic: ቀዳማዊ ኀይለ ሥላሴ) was Emperor of Ethiopia from 1930 to 1974. Before he became emperor he was known as Ras Tafari Makonnen. The religious movement Rastafari, named after Ras Tafari, emerged in Jamaica shortly after Haile Selassie became emperor. He was a member of the Solomonic dynasty, which claims descent from King Solomon and Makeda, the Queen of Sheba.
|
null | false
| null |
What is the best language in the world
|
Bengali is the best language in the world because it is very sweet to listen to.
|
|
null | false
| 89
|
In this section, we first introduce the task definition and our protocol to extract structured answer-relevant relations. Then we formalize the task under the encoder-decoder framework with gated attention and dual copy mechanism.
In this section, we first introduce the task definition and our protocol to extract structured answer-relevant relations. Then we formalize the task under the encoder-decoder framework with gated attention and dual copy mechanism.
|
What does Section 2 mainly describe?
|
In this section, they first introduce the task definition and their protocol to extract structured answer-relevant relations. Then they formalize the task under the encoder-decoder framework with gated attention and dual copy mechanism.
|
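As a rough illustration of the gated attention named in this record, here is a minimal fusion layer in which a learned sigmoid gate blends the decoder state with its attention context vector; the module name and shapes are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GatedAttentionFusion(nn.Module):
    """Blend a decoder state with its attention context via a learned gate."""
    def __init__(self, hidden_size):
        super().__init__()
        self.gate = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self, dec_state, context):
        # dec_state, context: (batch, hidden_size)
        g = torch.sigmoid(self.gate(torch.cat([dec_state, context], dim=-1)))
        # g near 1 keeps the decoder state; g near 0 trusts the context.
        return g * dec_state + (1.0 - g) * context
```

The same gating idea extends naturally to a dual copy mechanism, where a second learned switch would decide between generating a word and copying it from the source or from the extracted relation.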
null | false
| null |
Cricket is a bat-and-ball game played between two teams of eleven players on a field at the center of which is a 22-yard (20-metre) pitch with a wicket at each end, each comprising two bails balanced on three stumps. The batting side scores runs by striking the ball bowled at one of the wickets with the bat and then running between the wickets, while the bowling and fielding side tries to prevent this (by preventing the ball from leaving the field, and getting the ball to either wicket) and dismiss each batter (so they are "out"). Means of dismissal include being bowled, when the ball hits the stumps and dislodges the bails, and by the fielding side either catching the ball after it is hit by the bat, but before it hits the ground, or hitting a wicket with the ball before a batter can cross the crease in front of the wicket. When ten batters have been dismissed, the innings ends and the teams swap roles. The game is adjudicated by two umpires, aided by a third umpire and match referee in international matches. They communicate with two off-field scorers who record the match's statistical information.
|
What is Cricket?
|
Cricket is a bat-and-ball game, invented in England. The earliest reference to cricket is in South East England in the mid-16th century. It spread globally with the expansion of the British Empire, with the first international matches in the second half of the 19th century. The game's governing body is the International Cricket Council (ICC), which has over 100 members, twelve of which are full members who play Test matches. The game's rules, the Laws of Cricket, are maintained by Marylebone Cricket Club (MCC) in London. The sport is followed primarily in South Asia, Australasia, the United Kingdom, Southern Africa and the West Indies. There are three main formats of cricket: Test matches (usually lasting five days), One Day Internationals (ODIs), usually lasting 6-8 hours, and the more recent and popular T20, usually lasting 3-4 hours.
|
null | false
| null |
What are some easy to buy birthday gifts for a coworker?
|
A book, a gift card, a box of chocolates, a bag of candy, a desk plant or a desk decoration
|
|
null | false
| null |
There are many different types of cheese. Cheeses can be grouped or classified according to criteria such as length of fermentation, texture, methods of production, fat content, animal milk, and country or region of origin. The method most commonly and traditionally used is based on moisture content, which is then further narrowed down by fat content and curing or ripening methods. The criteria may either be used singly or in combination, with no single method being universally used.
|
From the provided paragraph, write a bulleted list of how cheeses can be grouped.
|
* Length of fermentation
* Texture
* Methods of production
* Fat content
* Animal milk
* Country or region of origin
|
1908.08419
| false
| null |
To select the most appropriate sentences in a large number of unlabeled corpora, we propose a scoring model based on information entropy and neural networks as the sampling strategy of active learning, which is inspired by Cai and Zhao BIBREF32. The score of a segmented sentence is computed as follows. First, the segmented sentence is mapped to a sequence of candidate word embeddings. Then, the scoring model takes the word embedding sequence as input, scoring each individual candidate word from two perspectives: (1) the possibility that the candidate word itself can be regarded as a legal word; (2) the rationality of the link by which the candidate word directly follows the previous segmentation history. Fig. FIGREF10 illustrates the entire scoring model. A gated neural network is employed over character embeddings to generate distributed representations of candidate words, which are sent to an LSTM model.
To select the most appropriate sentences in a large number of unlabeled corpora, we propose a scoring model based on information entropy and neural networks as the sampling strategy of active learning, which is inspired by Cai and Zhao BIBREF32. The score of a segmented sentence is computed as follows. First, the segmented sentence is mapped to a sequence of candidate word embeddings. Then, the scoring model takes the word embedding sequence as input, scoring each individual candidate word from two perspectives: (1) the possibility that the candidate word itself can be regarded as a legal word; (2) the rationality of the link by which the candidate word directly follows the previous segmentation history. Fig. FIGREF10 illustrates the entire scoring model. A gated neural network is employed over character embeddings to generate distributed representations of candidate words, which are sent to an LSTM model.
|
How does the scoring model work?
|
The answers are shown as follows:
* First, the segmented sentence is mapped to a sequence of candidate word embeddings. Then, the scoring model takes the word embedding sequence as input, scoring each individual candidate word
|
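The record above scores unlabeled sentences with information entropy to decide which ones are worth annotating. The sketch below shows only that sampling step, with a placeholder `prob_fn` standing in for the paper's gated-NN/LSTM scoring model; the normalisation and function names are assumptions.

```python
import math

def sentence_entropy(word_probs):
    # word_probs: per-candidate-word probabilities under the current model.
    # Higher entropy means the model is less certain about the segmentation,
    # so the sentence is more informative to hand to an annotator.
    return -sum(p * math.log(p + 1e-12) for p in word_probs) / len(word_probs)

def select_for_annotation(unlabeled, prob_fn, k):
    # Rank unlabeled segmented sentences by normalised entropy, keep the top k.
    ranked = sorted(unlabeled, key=lambda s: sentence_entropy(prob_fn(s)), reverse=True)
    return ranked[:k]
```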
null | false
| 206
|
Keyphrases are single or multi-word lexical units that best summarise a document BIBREF0. As such, they are of great importance for indexing, categorising and browsing digital libraries BIBREF1. Yet, very few documents have keyphrases assigned, thus raising the need for automatic keyphrase generation systems. This task falls under automatic keyphrase extraction, which may also be restricted to the subtask of finding only those keyphrases that appear in the input document. Generating keyphrases can be seen as a particular instantiation of text summarization, where the goal is not to produce a well-formed piece of text, but a coherent set of phrases that convey the most salient information. Those phrases may or may not appear in the document, the latter requiring some form of abstraction to be generated. State-of-the-art systems for this task rely on recurrent neural networks BIBREF2, BIBREF3, BIBREF4, and hence require large amounts of annotated training data to achieve good performance. As gold annotated data is expensive and difficult to obtain BIBREF5, previous works focused on readily available scientific abstracts and used author-assigned keyphrases as a proxy for expert annotations. However, this poses two major issues: 1) neural models for keyphrase generation do not generalize well across domains, thus limiting their use in practice; 2) author-assigned keyphrases exhibit strong consistency issues that negatively impact the models' performance. There is therefore a great need for annotated data from different sources, that is both sufficiently large to support the training of neural-based models and that comprises gold-standard labels provided by experts. In this study, we address this need by providing KPTimes, a dataset made of 279,923 news articles that comes with editor-assigned keyphrases.
Online news articles are particularly relevant to keyphrase generation since they are a natural fit for faceted navigation BIBREF6 or topic detection and tracking BIBREF7. Also, and no less importantly, they are available in large quantities and are sometimes accompanied by metadata containing human-assigned keyphrases initially intended for search engines. Here, we divert these annotations from their primary purpose and use them as gold-standard labels to automatically build our dataset. More precisely, we collect data by crawling selected news websites and use heuristics to draw texts paired with gold keyphrases. We then explore the resulting dataset to better understand how editors tag documents, and how these expert annotations differ from author-assigned keyphrases found in scholarly documents. Finally, we analyse the performance of state-of-the-art keyphrase generation models and investigate their transferability to the news domain and the impact of domain shift.
Finally, we analyse the performance of state-of-the-art keyphrase generation models and investigate their transferability to the news domain and the impact of domain shift.
|
What models do the authors investigate?
|
State-of-the-art keyphrase generation models, whose performance, transferability to the news domain, and sensitivity to domain shift are analysed.
|
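The KPTimes record above pairs crawled article text with editor-assigned keyphrases found in page metadata. A deliberately simplified sketch of that pairing heuristic follows; the tag names and regex-based extraction are assumptions, not the authors' crawler.

```python
import re

def extract_pair(html):
    # Editor-assigned keyphrases often sit in a <meta name="keywords"> tag
    # (or "news_keywords" on some sites), originally intended for search engines.
    m = re.search(r'<meta\s+name="(?:news_)?keywords"\s+content="([^"]+)"', html, re.I)
    keyphrases = [k.strip() for k in m.group(1).split(",")] if m else None
    # Crude body extraction: strip tags from whatever sits inside <article>.
    body = re.search(r"<article[^>]*>(.*?)</article>", html, re.I | re.S)
    text = re.sub(r"<[^>]+>", " ", body.group(1)).strip() if body else None
    return (text, keyphrases) if text and keyphrases else None
```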
null | false
| 247
|
With the rapid growth of social network platforms, more and more people tend to share their experiences and emotions online. Emotion analysis of online text becomes a new challenge in Natural Language Processing (NLP). In recent years, studies in emotion analysis largely focus on emotion classification, including detection of writers' emotions BIBREF0 as well as readers' emotions BIBREF1. There are also some information extraction tasks defined in emotion analysis BIBREF2, BIBREF3, such as extracting the feeler of an emotion BIBREF4. These methods assume that emotion expressions are already observed. Sometimes, however, we care more about the stimuli, or the cause, of an emotion. For instance, Samsung wants to know why people love or hate the Note 7 rather than the distribution of different emotions.
Ex.1 我的手机昨天丢了,我现在很难过。
Ex.1 Because I lost my phone yesterday, I feel sad now.
In an example shown above, “sad” is an emotion word, and the cause of “sad” is “I lost my phone”. The emotion cause extraction task aims to identify the reason behind an emotion expression. It is a more difficult task compared to emotion classification since it requires a deep understanding of the text that conveys an emotion.
Existing approaches to emotion cause extraction mostly rely on methods typically used in information extraction, such as rule-based template matching, sequence labeling and classification-based methods. Most of them use linguistic rules or lexicon features, but do not consider the semantic information and ignore the relation between the emotion word and the emotion cause. In this paper, we present a new method for emotion cause extraction. We consider emotion cause extraction as a question answering (QA) task. Given a text containing the description of an event which may or may not cause a certain emotion, we take an emotion word in context, such as “sad”, as a query. The question to the QA system is: “Does the described event cause the emotion of sadness?”. The expected answer is either “yes” or “no” (see Figure FIGREF1). We build our QA system based on a deep memory network. The memory network has two inputs: a piece of text, referred to as a story in QA systems, and a query. The story is represented using a sequence of word embeddings.
A recurrent structure is implemented to mine the deep relation between a query and a text. It measures the importance of each word in the text by an attention mechanism. Based on the learned attention result, the network maps the text into a low-dimensional vector space. This vector is then used to generate an answer. Existing memory network based approaches to QA use a weighted sum of attentions to jointly consider short text segments stored in memory. However, they do not explicitly model sequential information in the context. In this paper, we propose a new deep memory network architecture which models the context of each word simultaneously through multiple memory slots that capture sequential information using convolutional operations BIBREF5, and which achieves state-of-the-art performance compared to existing methods that use manual rules, common sense knowledge bases or other machine learning models.
The rest of the paper is organized as follows. Section SECREF2 gives a review of related works on emotion analysis. Section SECREF3 presents our proposed deep memory network based model for emotion cause extraction. Section SECREF4 discusses evaluation results. Finally, Section SECREF5 concludes the work and outlines the future directions.
A recurrent structure is implemented to mine the deep relation between a query and a text.
|
How to mine the deep relation between a query and a text?
|
A recurrent structure is implemented to mine the deep relation between a query and a text.
|
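To make the attention mechanism in this record concrete, below is a minimal single-hop memory attention sketch: the emotion-word query attends over the words of a candidate clause, and the attention-weighted summary feeds a yes/no classifier. The shapes, the `summary + query` combination, and the classifier head are illustrative assumptions, not the paper's convolutional multi-slot design.

```python
import torch
import torch.nn as nn

class MemoryHop(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.out = nn.Linear(dim, 2)  # binary: does the clause cause the emotion?

    def forward(self, memory, query):
        # memory: (batch, seq_len, dim) word embeddings of the candidate clause
        # query:  (batch, dim) embedding of the emotion word, e.g. "sad"
        scores = torch.bmm(memory, query.unsqueeze(-1)).squeeze(-1)  # (batch, seq_len)
        attn = torch.softmax(scores, dim=-1)
        summary = torch.bmm(attn.unsqueeze(1), memory).squeeze(1)    # weighted sum
        return self.out(summary + query)  # logits for "yes" / "no"
```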
null | false
| 64
|
Deep learning systems have shown a lot of promise for extractive Question Answering (QA), with performance comparable to humans when large-scale data is available. However, practitioners looking to build QA systems for specific applications may not have the resources to collect tens of thousands of questions on corpora of their choice. At the same time, state-of-the-art machine reading systems do not lend themselves well to low-resource QA settings where the number of labeled question-answer pairs is limited (c.f. Table 2). Semi-supervised QA methods like BIBREF0 aim to improve this performance by leveraging unlabeled data, which is easier to collect.
In this work, we present a semi-supervised QA system which requires the end user to specify a set of base documents and only a small set of question-answer pairs over a subset of these documents. Our proposed system consists of three stages. First, we construct cloze-style questions (predicting missing spans of text) from the unlabeled corpus; next, we use the generated clozes to pre-train a powerful neural network model for extractive QA BIBREF1, BIBREF2; and finally, we fine-tune the model on the small set of provided QA pairs.
Our cloze construction process builds on a typical writing phenomenon and document structure: an introduction precedes and summarizes the main body of the article. Many large corpora follow such a structure, including Wikipedia, academic papers, and news articles. We hypothesize that we can benefit from the unannotated corpora to better answer various questions – at least ones that are lexically similar to the content in base documents and directly require factual information.
We apply the proposed system on three datasets from different domains – SQuAD BIBREF3, TriviaQA-Web BIBREF4 and the BioASQ challenge BIBREF5. We observe significant improvements in a low-resource setting across all three datasets. For SQuAD and TriviaQA, we attain an F1 score of more than 50% by using merely 1% of the training data. Our system outperforms the approaches for semi-supervised QA presented in BIBREF0, and a baseline which uses the same unlabeled data but with a language modeling objective for pretraining. In the BioASQ challenge, we outperform the best performing system from the previous year's challenge, improving over a baseline which does transfer learning from the SQuAD dataset. Our analysis reveals that questions which ask for factual information and match specific parts of the context documents benefit the most from pretraining on automatically constructed clozes.
We apply the proposed system on three datasets from different domains — SQuAD (Rajpurkar et al., 2016), TriviaQA-Web (Joshi et al., 2017) and the BioASQ challenge (Tsatsaronis et al., 2015).
|
What dataset is the proposed system applied on?
|
They apply the proposed system on three datasets from different domains — SQuAD (Rajpurkar et al., 2016), TriviaQA-Web (Joshi et al., 2017) and the BioASQ challenge (Tsatsaronis et al., 2015).
|
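The record above pretrains on automatically constructed cloze questions drawn from summary-like text such as article introductions. Below is a deliberately naive sketch of cloze construction; the capitalised-span heuristic and the `@placeholder` token are stand-ins for the authors' actual answer-span selection.

```python
import re

CLOZE_TOKEN = "@placeholder"

def make_cloze(sentence):
    # Candidate answer spans: runs of capitalised words (a rough entity proxy).
    spans = re.findall(r"[A-Z][\w-]*(?: [A-Z][\w-]*)*", sentence)
    spans = [s for s in spans if " " in s or len(s) > 3]
    if not spans:
        return None
    answer = max(spans, key=len)  # prefer the longest, most specific span
    question = sentence.replace(answer, CLOZE_TOKEN, 1)
    return question, answer

# e.g. ("Amdahl presented his law at the @placeholder in 1967.",
#       "AFIPS Spring Joint Computer Conference")
print(make_cloze("Amdahl presented his law at the AFIPS Spring Joint Computer Conference in 1967."))
```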
null | false
| null |
Classify each of the following as either Mexican food or Vietnamese food: pho, tacos, spring rolls, bun bo hue, burritos, frijoles, banh mi, nachos
|
Vietnamese food: pho, spring rolls, bun bo hue, banh mi
Mexican food: tacos, burritos, frijoles, nachos
|
|
null | false
| null |
Sam F.S. Chin (Chin Fan Siong, 曾帆祥, born August 1, 1954) is the Grandmaster and Gatekeeper of the martial art called I Liq Chuan, also known as "the Martial Art of Awareness". Chin developed the grading system for this art.
Chin was trained since childhood by his father, Chin Lik Keong, founder of I Liq Chuan, and in 1978 he won the state heavyweight kickboxing championships of Selangor (Malaysia). In August 2009 Chin was officially named the Gate Keeper and Lineage Holder of I Liq Chuan. He is also the founder and President of the Chin Family I Liq Chuan Association, which supports the martial art's growth.
Chin has taught workshops worldwide, written two books on his family's martial art, co-authored articles, and produced a series of DVDs. Chin was interviewed by Jess O'Brien for his 2007 book "Nei Jia Quan: Internal Martial Arts".
In August 2009, Chin was also recognized by the USA Wushu Kung-Fu Federation as a Hall of Fame Outstanding Master. In 2014, he was written about by Ashe Higgs in Kung Fu Tai Chi Magazine "Finding the Center: I Liq Chuan’s Three Essential Qualities for Offense and Defense as One". Chin was also the subject of the 2016 CCTV Documentary "Kung Fu Abroad: I Liq Chuan 中央电视台《功夫在外》纪录片".
On November 15, 2017 Sam Chin became an honorary professor at the University of Plovdiv, Bulgaria in recognition of his lifetime of achievement in spreading I Liq Chuan.
|
Who is the founder of the I Liq Chuan martial art?
|
The founder of I Liq Chuan is Chin Lik Keong.
|
null | false
| null |
What is the best song produced by the band Rush?
|
Red Barchetta
|
|
null | false
| null |
What athlete created the 'beast quake' for the Seattle Seahawks?
|
Marshawn Lynch
|
|
null | false
| null |
Classify each of the following as flexible or inflexible: gymnast, yoga instructor, steel, rubber, leather, wood
|
Gymnast: Flexible
Yoga Instructor: Flexible
Steel: Inflexible
Rubber: Flexible
Leather: Flexible
Wood: Inflexible
|
|
null | false
| null |
Give me a bulleted list of some things that could go wrong skateboarding down a steep hill
|
- You could hit a rock in the street and go flying off your board
- You could gain too much speed and be unable to slow down
- You could be unable to stop and run a stop sign in front of a cop car
- You could be hit by a car
- You could have your board fly into traffic and be broken by a car
|
|
null | false
| null |
What professional baseball team is based out of Seattle, WA?
|
The Seattle Mariners are part of the American League within Major League Baseball.
|
|
null | false
| null |
Classify the overall sentiment of this piece of text as either positive or negative: The little girl really wanted to buy that beautiful dress but her mom did not have enough money.
|
Negative
|
|
null | false
| null |
In computer architecture, Amdahl's law (or Amdahl's argument) is a formula which gives the theoretical speedup in latency of the execution of a task at fixed workload that can be expected of a system whose resources are improved. It states that "the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used". It is named after computer scientist Gene Amdahl, and was presented at the American Federation of Information Processing Societies (AFIPS) Spring Joint Computer Conference in 1967.
|
Extract the creator of Amdahl's law from the Wikipedia paragraph.
|
The Wikipedia paragraph does not explicitly name the creator of Amdahl's law, but it states that the law is named after computer scientist Gene Amdahl, who presented it at the AFIPS Spring Joint Computer Conference in 1967. Gene Amdahl is therefore the creator of Amdahl's law.
|
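For reference, the formula described in the record above can be written out explicitly (a standard statement; the symbols here are chosen for illustration: p is the fraction of execution time that benefits from the improvement, s the speedup of that part):

```latex
% Amdahl's law: overall latency speedup S from improving a fraction p
% of the execution time by a factor s; the unimproved (1 - p) bounds S.
S(s) = \frac{1}{(1 - p) + \frac{p}{s}}, \qquad
\lim_{s \to \infty} S(s) = \frac{1}{1 - p}
```

For example, speeding up half of a program by a factor of two gives S = 1/(0.5 + 0.25) ≈ 1.33, and even an infinite speedup of that half could never push the overall speedup past 2.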
null | false
| null |
Succession is an American satirical black comedy-drama television series created by Jesse Armstrong. It premiered on June 3, 2018, on HBO. The series centers on the Roy family, the owners of Waystar RoyCo, a global media and entertainment conglomerate, who are fighting for control of the company amid uncertainty about the health of the family's patriarch, Logan Roy (Brian Cox). The series has been renewed for a fourth and final season, which premiered on March 26, 2023.
|
What is the name of the family at the centre of the TV show Succession?
|
Succession centres on the Roy family, the owners of Waystar RoyCo.
|
null | false
| null |
What affects tiredness?
|
There are many factors which affect tiredness, including, but not limited to, amount of quality sleep, neurological state, nutrition, and both physical and mental health issues. It is extremely important to seek professional medical advice for tiredness if it is severely impacting your lifestyle and health.
|