| paper_id (string, 10 chars, nullable) | yes_no (bool, 2 classes) | paper_index (int64, 0-519, nullable) | evidence (string, 0-37.7k chars) | question (string, 4-11.7k chars) | answer (string, 1-26k chars) |
|---|---|---|---|---|---|
null | false
| null |
Who is Magic Johnson?
|
Earvin “Magic” Johnson grew up in Michigan and attended Michigan State University. He played basketball for the Los Angeles Lakers of the National Basketball Association (NBA). His hall-of-fame career included five NBA championships. Following his basketball career, Magic has pursued coaching, entertainment and business.
|
|
null | false
| 30
|
In this paper, we evaluated the encoder-decoder-reconstructor on English-Japanese and Japanese-English translation tasks. In addition, we evaluated the effectiveness of pre-training by comparing it with a jointly trained model of forward translation and back-translation. Experimental results show that the encoder-decoder-reconstructor offers significant improvement in BLEU scores and alleviates the problem of repeating and missing words in the translation on the English-Japanese translation task, and that the encoder-decoder-reconstructor cannot be trained well without pre-training; this shows that we have to train the forward translation model in a manner similar to conventional attention-based NMT as pre-training.
Experimental results show that the encoder-decoder-reconstructor offers significant improvement in BLEU scores and alleviates the problem of repeating and missing words in the translation on the English-Japanese translation task, and that the encoder-decoder-reconstructor cannot be trained well without pre-training; this shows that we have to train the forward translation model in a manner similar to conventional attention-based NMT as pre-training.
|
What do the results show?
|
Experimental results show that the encoder-decoder-reconstructor offers significant improvement in BLEU scores and alleviates the problem of repeating and missing words in the translation on the English-Japanese translation task, and that the encoder-decoder-reconstructor cannot be trained well without pre-training; this shows that they have to train the forward translation model in a manner similar to conventional attention-based NMT as pre-training.
|
null | false
| null |
Musk attended Waterkloof House Preparatory School, Bryanston High School, and Pretoria Boys High School, from which he graduated. Musk applied for a Canadian passport through his Canadian-born mother, knowing that it would be easier to immigrate to the United States this way. While waiting for his application to be processed, he attended the University of Pretoria for five months.
Musk arrived in Canada in June 1989 and lived with a second cousin in Saskatchewan for a year, working odd jobs at a farm and lumber mill. In 1990, he entered Queen's University in Kingston, Ontario. Two years later, he transferred to the University of Pennsylvania (UPenn), where he completed studies for a Bachelor of Arts degree in physics and a Bachelor of Science degree in economics from the Wharton School. Although Musk claims he earned the degrees in 1995, UPenn maintains it awarded them in 1997. He reportedly hosted large, ticketed house parties to help pay for tuition, and wrote a business plan for an electronic book-scanning service similar to Google Books.
In 1994, Musk held two internships in Silicon Valley: one at the energy storage startup Pinnacle Research Institute, which investigated electrolytic ultracapacitors for energy storage, and another at the Palo Alto–based startup Rocket Science Games. In 1995, he was accepted to a PhD program in materials science at Stanford University. However, Musk decided to join the Internet boom instead, dropping out two days after being accepted and applying for a job at Netscape, to which he reportedly never received a response.
|
From the passage, mention the different educational institutions which Musk attended. Display the results in comma-separated format.
|
Waterkloof House Preparatory School, Bryanston High School, Pretoria Boys High School, University of Pretoria, Queen's University, University of Pennsylvania (UPenn)
|
null | false
| null |
What are the primary elements of Odissi dance?
|
The primary elements of Odissi dance are derived from the tribhanga posture. The tribhanga posture (three breaks) is unique to the Odissi classical dance style. Each break refers to the starting position of a body part. The first break is the head and neck. The second break is the waist, and the third break is the hip.
|
|
null | false
| null |
How many continents are in the world?
|
There are seven continents in the world.
|
|
null | false
| null |
A system of promotion and relegation exists between the Premier League and the EFL Championship. The three lowest placed teams in the Premier League are relegated to the Championship, and the top two teams from the Championship promoted to the Premier League, with an additional team promoted after a series of play-offs involving the third, fourth, fifth and sixth placed clubs. The number of clubs was reduced from 22 to 20 in 1995, when four teams were relegated from the league and only two teams promoted. The top flight had only been expanded to 22 teams at the start of the 1991–92 season – the year prior to the formation of the Premier League.
On 8 June 2006, FIFA requested that all major European leagues, including Italy's Serie A and Spain's La Liga, be reduced to 18 teams by the start of the 2007–08 season. The Premier League responded by announcing their intention to resist such a reduction. Ultimately, the 2007–08 season kicked off again with 20 teams.
|
How many teams are relegated from the English Premier League?
|
The Premier League consists of 20 teams. At the end of each season, the bottom three teams get relegated to the EFL Championship and three teams from the Championship get promoted to the Premier League.
|
null | false
| null |
Good Friday is a Christian holiday commemorating the crucifixion of Jesus and his death at Calvary. It is observed during Holy Week as part of the Paschal Triduum. It is also known as Holy Friday, Great Friday, Great and Holy Friday (also Holy and Great Friday), and Black Friday.
|
Extract the various ways to say Good Friday from the text. Separate them with a new line.
|
Good Friday is also referred to as:
Holy Friday
Great Friday
Black Friday
Great and Holy Friday
Holy and Great Friday
|
null | false
| null |
Name an old wives' tale
|
Swallowing gum will cause it to stay in your stomach for seven years. This is not true and is considered an old wives' tale.
|
|
null | false
| null |
How long does it take me to get from the city center of Munich to the football stadium?
|
Assuming Marienplatz as the city center of Munich, a journey to the Allianz Arena will take you approximately 20 minutes by car, 30 minutes by train, 37 minutes by bike, and 2 hours and 11 minutes on foot.
|
|
null | false
| null |
Sergio Michel "Checo" Pérez Mendoza (Spanish: [ˈseɾxjo ˈpeɾes] (listen); born 26 January 1990), is a Mexican racing driver who races in Formula One for Red Bull Racing, having previously driven for Sauber, McLaren, Force India, and Racing Point. He won his first Formula One Grand Prix at the 2020 Sakhir Grand Prix, breaking the record for the number of starts before a race win at 190. He also earned his first Formula One pole position at the 2022 Saudi Arabian Grand Prix, breaking the record for the most races before a first pole position at 215. Pérez has a contract with Red Bull Racing until the end of the 2024 season.
|
List all the Formula 1 teams that Sergio Perez has driven for
|
- Sauber
- McLaren
- Force India
- Racing Point
- Red Bull Racing
|
null | false
| null |
Why do people get sore after working out?
|
Muscle soreness occurs because muscle and the connective tissue around it get damaged during exercise, such as weight lifting or high-intensity training. This kind of breakdown is natural and needed for muscle growth, since muscle grows back stronger during the body's repair process. During this process the affected muscle fibers increase in strength and thickness.
Additionally, people tend to experience lactic acid buildup during exercise: the body uses oxygen to convert glucose to energy, and if there is a lack of oxygen for this process, it may produce a substance called lactate. Many people believe that soreness comes from the buildup of this lactic acid, but the real contributor is the breakdown of muscle from usage.
|
|
null | false
| 28
|
The task of generating natural language descriptions of structured data (such as tables) BIBREF2, BIBREF3, BIBREF4 has seen a growth in interest with the rise of sequence-to-sequence models that provide an easy way of encoding tables and generating text from them BIBREF0, BIBREF1, BIBREF5, BIBREF6.
For text generation tasks, the only gold standard metric is to show the output to humans for judging its quality, but this is too expensive to apply repeatedly anytime small modifications are made to a system. Hence, automatic metrics that compare the generated text to one or more reference texts are routinely used to compare models BIBREF7. For table-to-text generation, automatic evaluation has largely relied on BLEU BIBREF8 and ROUGE BIBREF9. The underlying assumption behind these metrics is that the reference text is gold-standard, i.e., it is the ideal target text that a system should generate. In practice, however, when datasets are collected automatically and heuristically, the reference texts are often not ideal. Figure FIGREF2 shows an example from the WikiBio dataset BIBREF0. Here the reference contains extra information which no system can be expected to produce given only the associated table. We call such reference texts divergent from the table.
We show that existing automatic metrics, including BLEU, correlate poorly with human judgments when the evaluation sets contain divergent references (§SECREF36). For many table-to-text generation tasks, the tables themselves are in a pseudo-natural language format (e.g., WikiBio, WebNLG BIBREF6, and E2E-NLG BIBREF10). In such cases we propose to compare the generated text to the underlying table as well, to improve evaluation. We develop a new metric, PARENT (Precision And Recall of Entailed N-grams from the Table) (§SECREF3). When computing precision, PARENT effectively uses a union of the reference and the table, to reward correct information missing from the reference. When computing recall, it uses an intersection of the reference and the table, to ignore extra incorrect information in the reference. The union and intersection are computed with the help of an entailment model to decide if a text n-gram is entailed by the table. We show that this method is more effective than using the table as an additional reference. Our main contributions are:
For table-to-text generation, automatic evaluation has largely relied on BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004).
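To make the union computation described above concrete, here is a minimal sketch of a PARENT-style entailed precision. It is an illustration under simplifying assumptions, not the authors' implementation: the lexical-overlap entailment model is only one possible instantiation, and the function names are invented for the example.

```python
def ngrams(tokens, n=2):
    # All contiguous n-grams of a token list.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def entailment_prob(gram, table_tokens):
    # Lexical-overlap entailment: the fraction of the n-gram's tokens
    # that also occur among the table's tokens.
    return sum(tok in table_tokens for tok in gram) / len(gram)

def parent_precision(generated, reference, table_tokens, n=2):
    # Union idea: a generated n-gram counts as correct if it appears in
    # the reference OR is (probabilistically) entailed by the table.
    gen_grams = ngrams(generated, n)
    if not gen_grams:
        return 0.0
    ref_grams = set(ngrams(reference, n))
    total = sum(max(float(g in ref_grams), entailment_prob(g, table_tokens))
                for g in gen_grams)
    return total / len(gen_grams)

table = {"michael", "dahlquist", "drummer", "silkworm"}
generated = "michael dahlquist was a drummer of the band silkworm".split()
reference = "michael dahlquist played drums in the band silkworm".split()
print(parent_precision(generated, reference, table))
```

Recall would analogously use the intersection, crediting only reference n-grams that the table supports.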
|
What are the two existing metrics that are often used for the automatic evaluation of table-to-text generation?
|
BLEU and ROUGE.
|
null | false
| 122
|
A common issue for many Natural Language Processing (NLP) classification tasks is class imbalance, the situation where one of the class categories comprises a significantly larger proportion of the dataset than the other classes. It is especially prominent in real-world datasets and complicates classification when the identification of the minority class is of specific importance.
Models trained on the basis of minimising errors for imbalanced datasets tend to more frequently predict the majority class; achieving high accuracy in such cases can be misleading. Because of this, the macro-averaged F-score, chosen for this competition, is a more suitable metric as it weights the performance on each class equally.
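To see why on a toy example (the scikit-learn usage here is an assumption about tooling, not part of the competition's setup):

```python
from sklearn.metrics import accuracy_score, f1_score

# Nine majority-class (0) items and one minority-class (1) item.
y_true = [0] * 9 + [1]
y_pred = [0] * 10   # a degenerate model that always predicts the majority class

print(accuracy_score(y_true, y_pred))                               # 0.9
print(f1_score(y_true, y_pred, average="macro", zero_division=0))   # ~0.47
```

The 0.9 accuracy hides a total failure on the minority class; the macro-averaged F-score exposes it.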
As class imbalance is a widespread issue, multiple techniques have been developed that help alleviate it BIBREF7, BIBREF8, by either adjusting the model (e.g. changing the performance metric) or changing the data (e.g. oversampling the minority class or undersampling the majority class).
Cost-sensitive classification can be used when the “cost” of mislabelling one class is higher than that of mislabelling other classes BIBREF9, BIBREF10. For example, the real cost to a bank of miscategorising a large fraudulent transaction as authentic is potentially higher than miscategorising (perhaps only temporarily) a valid transaction as fraudulent. Cost-sensitive learning tackles the issue of class imbalance by changing the cost function of the model such that misclassification of training examples from the minority class carries more weight and is thus more `expensive'. This is achieved by simply multiplying the loss of each example by a certain factor. This cost-sensitive learning technique takes misclassification costs into account during model training, and does not modify the imbalanced data distribution directly.
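A minimal sketch of this re-weighting in PyTorch (the framework choice is an assumption, and the 4:1 weight is an arbitrary illustration, not the paper's value):

```python
import torch
import torch.nn as nn

# Mistakes on the minority class (index 1) cost four times as much.
class_weights = torch.tensor([1.0, 4.0])
loss_fn = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(8, 2, requires_grad=True)   # stand-in model outputs
labels = torch.randint(0, 2, (8,))
loss = loss_fn(logits, labels)
loss.backward()   # gradients now reflect the per-class costs
```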
Common methods that tackle the problem of class imbalance by modifying the data to create balanced datasets are undersampling and oversampling. Undersampling randomly removes instances from the majority class and is only suitable for problems with an abundance of data. Oversampling means creating more minority class instances to match the size of the majority class. Oversampling methods range from simple random oversampling, i.e. repeating the training procedure on instances from the minority class, chosen at random, to the more complex, which involves constructing synthetic minority-class samples. Random oversampling is similar to cost-sensitive learning as repeating the sample several times makes the cost of its misclassification grow proportionally. Kolomiyets et al. (2011), Zhang et al. (2015), and Wang and Yang (2015) perform data augmentation using synonym replacement, i.e. replacing random words in sentences with their synonyms or nearest-neighbor embeddings, and show its effectiveness on multiple tasks and datasets. Wei et al. (2019) provide a great overview of `easy' data augmentation (EDA) techniques for NLP, including synonym replacement as described above, and random deletion, i.e. removing words in the sentence at random with pre-defined probability. They show the effectiveness of EDA across five text classification tasks. However, they mention that EDA may not lead to substantial improvements when using pre-trained models. In this work we test this claim by comparing performance gains of using cost-sensitive learning versus two data augmentation methods, synonym replacement and random deletion, with a pre-trained BERT model.
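As a sketch of two of these techniques (synonym replacement is omitted because it needs a thesaurus such as WordNet; the function names and the balancing policy below are assumptions for illustration):

```python
import random

def random_deletion(tokens, p=0.1):
    # EDA-style random deletion: drop each word with probability p,
    # but never return an empty sentence.
    kept = [t for t in tokens if random.random() > p]
    return kept if kept else [random.choice(tokens)]

def random_oversample(texts, labels, minority=1):
    # Repeat randomly chosen minority-class instances until the
    # two classes are balanced.
    data = list(zip(texts, labels))
    minority_items = [d for d in data if d[1] == minority]
    n_extra = max(0, len(data) - 2 * len(minority_items))
    data += random.choices(minority_items, k=n_extra)
    random.shuffle(data)
    return [x for x, _ in data], [y for _, y in data]
```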
More complex augmentation methods include back-translation BIBREF11, translational data augmentation BIBREF12, and noising BIBREF13, but these are out of the scope of this study.
The sentence-level classification task is an imbalanced binary classification problem that we address using BERT BIBREF0. We use BERTBASE, uncased, which consists of 12 self-attention layers and returns a 768-dimensional vector that represents a sentence. So as to make use of BERT for sentence classification, we include a fully connected layer on top of the BERT self-attention layers, which classifies the sentence embedding provided by BERT into the two classes of interest (propaganda or non-propaganda).
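A minimal sketch of such a model with the HuggingFace transformers library (the tooling is an assumption; the paper's exact head and pooling may differ):

```python
import torch.nn as nn
from transformers import BertModel

class SentenceClassifier(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        # 12-layer uncased BERT-base encoder.
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        # Fully connected layer over the 768-dimensional sentence embedding.
        self.classifier = nn.Linear(768, n_classes)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        # pooler_output is the [CLS]-based sentence representation.
        return self.classifier(out.pooler_output)
```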
We attempt to exploit various data augmentation techniques to address the problem of class imbalance. Table TABREF17 shows the results of our experiments for different data augmentation techniques when, after shuffling the training data, we train the model on 75% of the training data and test it on the remaining 25% of the training data and the development data.
We observe that BERT without augmentation consistently outperforms BERT with augmentation in the experiments when the model is trained on 75% of the training data and evaluated on the rest, i.e. trained and evaluated on similar data coming from the same distribution. This is consistent with observations by Wei et al. (2019) that contextual word embeddings do not gain from data augmentation. The fact that we shuffle the training data prior to splitting it into training and testing subsets could imply that the model is learning to associate topic words, such as `Mueller', with propaganda. However, when we perform model evaluation using the development set, which is dissimilar to the training data, we observe that the synonym insertion and word dropping techniques also do not bring performance gains, while random oversampling increases performance over base BERT by 4%. Synonym insertion provides results very similar to base BERT, while random deletion harms model performance, producing lower scores. We believe that this could be attributed to the fact that synonym insertion and random word dropping involve the introduction of noise to the data, while oversampling does not. As we are working with natural language data, this type of noise can in fact change the meaning of the sentence. Oversampling, on the other hand, purely increases the importance of the minority class by repeating training on the unchanged instances.
So as to better understand the aspects of oversampling that contribute to these gains, we perform a class-wise performance analysis of BERT with/without oversampling. The results of these experiments (Table TABREF18) show that oversampling increases the overall recall while maintaining precision. This is achieved by significantly improving the recall of the minority class (propaganda) at the cost of the recall of the majority class.
So far we have been able to establish that a) the training and test sets are dissimilar, thus requiring us to generalise our model, b) oversampling provides a method of generalisation, and c) oversampling does this while maintaining recall on the minority (and thus more interesting) class.
Given this we explore alternative methods of increasing minority class recall without a significant drop in precision. One such method is cost-sensitive classification, which differs from random oversampling in that it provides a more continuous-valued and consistent method of weighting samples of imbalanced training data; for example, random oversampling will inevitably emphasise some training instances at the expense of others. We detail our methods of using cost-sensitive classification in the next section. Further experiments with oversampling might have provided insights into the relationships between these methods, which we leave for future exploration.
Models trained on the basis of minimising errors for imbalanced datasets tend to more frequently predict the majority class; achieving high accuracy in such cases can be misleading. Because of this, the macro-averaged F-score, chosen for this competition, is a more suitable metric as it weights the performance on each class equally.
|
Why is macro-averaged F-score chosen in the study?
|
Because accuracy can be misleading for imbalanced datasets: models trained by minimising errors tend to predict the majority class more often. The macro-averaged F-score is more suitable because it weights the performance on each class equally.
|
null | false
| null |
What are some indoor sports I can play in winter?
|
Here is a list of indoor sports:
- Table tennis
- Indoor basketball
- Futsal
- Trampolining
- Ice hockey
- Yoga
- Badminton
- Fencing
|
|
null | false
| null |
Why do muscles twitch (or twinge)?
|
Muscle twitches can be related to too much stress, caffeine, a poor diet, or lack of exercise.
|
|
null | false
| null |
The Grand Canyon is a river valley in the Colorado Plateau that exposes uplifted Proterozoic and Paleozoic strata, and is also one of the six distinct physiographic sections of the Colorado Plateau province. Even though it is not the deepest canyon in the world (Kali Gandaki Gorge in Nepal is much deeper), the Grand Canyon is known for its visually overwhelming size and its intricate and colorful landscape. Geologically, it is significant because of the thick sequence of ancient rocks that are well preserved and exposed in the walls of the canyon. These rock layers record much of the early geologic history of the North American continent.
|
Why is the Grand Canyon a big deal?
|
While the Grand Canyon is not the deepest canyon on Earth, it is geologically significant because of its exposed yet well-preserved rock layers. These rock layers record much of the early geologic history of the North American continent.
|
1910.07181
| false
| null |
To measure the effect of adding Bertram to BERT on downstream tasks, we apply the procedure described in Section SECREF4 to a commonly used textual entailment dataset as well as two text classification datasets: MNLI BIBREF21, AG's News BIBREF22 and DBPedia BIBREF23. For all three datasets, we use BERT$_\text{base}$ as a baseline model and create the substitution dictionary $S$ using the synonym relation of WordNet BIBREF20 and the pattern library BIBREF34 to make sure that all synonyms have consistent parts of speech. As an additional source of word substitutions, we make use of the misspellings dataset of BIBREF25, which is based on query logs of a search engine. To prevent misspellings from dominating the resulting dataset, we only assign misspelling-based substitutes to randomly selected 10% of the words contained in each sentence. Motivated by the results on WNLaMPro-medium, we consider every word that occurs less than 100 times in the WWC and our BooksCorpus replica combined as being rare. Some examples of entries in the resulting datasets can be seen in Table TABREF35.
To measure the effect of adding Bertram to BERT on downstream tasks, we apply the procedure described in Section SECREF4 to a commonly used textual entailment dataset as well as two text classification datasets: MNLI BIBREF21, AG's News BIBREF22 and DBPedia BIBREF23.
|
What are three downstream task datasets?
|
The answers are shown as follows:
* MNLI
* AG's News
* DBPedia
|
null | false
| null |
Who is the greatest tennis player of all time?
|
This is the most debated question in tennis and it seems to have multiple answers depending on how greatness is defined. While tennis history has given us many great players, most answers pick one of the so-called Big Three: Novak Djokovic, Rafael Nadal, or Roger Federer.
If we look at the number of trophies and records, the most successful player is Novak Djokovic. The dominance achieved by Rafael Nadal on a single surface is unprecedented. Some people, however, argue that greatness cannot be measured in trophies and records and that Roger Federer's game is the most elegant they've ever seen on court.
|
|
null | false
| null |
Gujarat Giants (formerly known as Gujarat Fortunegiants) is a men's kabaddi team based in Ahmedabad, Gujarat that plays in the Pro Kabaddi League. The team is currently led by Chandran Ranjith and coached by Ram mehar Singh. The team is owned by Adani Wilmar Ltd. The Giants play their home matches at The Arena by TransStadia. They reached the final in both of their attempts in 2017 and 2018, finishing runners-up on both occasions, to the Patna Pirates and Bengaluru Bulls respectively.
|
From the passage provided, extract the name of the coach of Gujarat Giants
|
Gujarat Giants is coached by Ram mehar Singh
|
null | false
| 28
|
PARENT evaluates each instance $(T_i, R_i, G_i)$, consisting of a table, a reference text, and a generated text, separately, by computing the precision and recall of $G_i$ against both $T_i$ and $R_i$.
The table is in a semistructured form, and hence not directly comparable to the unstructured generated or reference texts. To bridge this gap, we introduce the notion of entailment probability, which we define as the probability that the presence of an n-gram g in a text is “correct” given the associated table.
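One simple instantiation of this probability, consistent with the definition above (a sketch; a learned co-occurrence model is an alternative), is lexical overlap with the set of table tokens $\bar{T}$:

$$\Pr\left(g \text{ entailed by } T\right) = \frac{1}{|g|} \sum_{w \in g} \mathbb{1}\left[ w \in \bar{T} \right]$$

that is, the fraction of the n-gram's words that appear among the table's tokens.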
|
In what way does the team bridge the gap that a semistructured table is not directly comparable to the unstructured generated or reference texts?
|
To bridge this gap, they introduce the notion of entailment probability, which they define as the probability that the presence of an n-gram g in a text is “correct” given the associated table.
|
null | false
| null |
The original incarnation of the Hewlett-Packard Company, commonly shortened to Hewlett-Packard (/ˈhjuːlɪt ˈpækərd/ HYEW-lit PAK-ərd) or HP, was an American multinational information technology company headquartered in Palo Alto, California. HP developed and provided a wide variety of hardware components, as well as software and related services to consumers, small and medium-sized businesses (SMBs), and large enterprises, including customers in the government, health, and education sectors. The company was founded in a one-car garage in Palo Alto by Bill Hewlett and David Packard in 1939, and initially produced a line of electronic test and measurement equipment. The HP Garage at 367 Addison Avenue is now designated an official California Historical Landmark, and is marked with a plaque calling it the "Birthplace of 'Silicon Valley'".
|
Where is the Birthplace of 'Silicon Valley'?
|
The HP Garage at 367 Addison Avenue
|
null | false
| null |
"Big Big Deal" is a song by the British singer-songwriter Steve Harley, which was released as his debut, non-album solo single in 1974. The song, which was written and produced by Harley, would be his last release before scoring the UK number one hit "Make Me Smile (Come Up and See Me)" in 1975. "Big Big Deal" also preceded the formation of the second line-up of Harley's band Cockney Rebel.
|
Who sang "Big Big Deal"?
|
"Big Big Deal" is a song by the British singer-songwriter Steve Harley, which was released as his debut, non-album solo single in 1974.
|
null | false
| 130
|
In recent years many datasets have been created for the task of automated stance detection, advancing natural language understanding systems for political science, opinion research and other application areas. Typically, such benchmarks BIBREF0 are composed of short pieces of text commenting on politicians or public issues and are manually annotated with their stance towards a target entity (e.g. Climate Change, or Trump). However, they are limited in scope on multiple levels BIBREF1.
First of all, it is questionable how well current stance detection methods perform in a cross-lingual setting, as the multilingual datasets available today are relatively small, and specific to a single target BIBREF2, BIBREF3. Furthermore, specific models tend to be developed for each single target or pair of targets BIBREF4. Concerns have been raised that cross-target performance is often considerably lower than fully supervised performance BIBREF1.
In this paper we propose a much larger dataset that combines multilinguality and a multitude of topics and targets. x-stance comprises more than 150 questions concerning Swiss politics and more than 67k answers given in the last decade by candidates running for political office in Switzerland.
Questions are available in four languages: English, Swiss Standard German, French, and Italian. The language of a comment depends on the candidate's region of origin.
We have extracted the data from the voting advice application Smartvote. On that platform, candidates respond to questions mainly in categorical form (yes / rather yes / rather no / no). They can also submit a free-text comment in order to justify, explain or differentiate their categorical answer. An example is given in Figure FIGREF1.
We transform the dataset into a stance detection task by interpreting the question as a natural-language representation of the target, and the commentary as the input to be classified.
The dataset is split into a multilingual training set and into multiple test sets to evaluate zero-shot cross-lingual and cross-target transfer. To provide a baseline, we fine-tune a multilingual Bert model BIBREF5 on x-stance. We show that the baseline accuracy is comparable to previous stance detection benchmarks while leaving ample room for improvement. In addition, multilingual Bert can generalize to a degree both cross-lingually and in a cross-target setting.
We have made the dataset and the code for reproducing the baseline model publicly available.
In this paper we propose a much larger dataset that combines multilinguality and a multitude of topics and targets.
|
Does the dataset only involve one topic?
|
No.
|
null | false
| null |
What are the types of space objects?
|
The International Astronomical Union (IAU) classifies objects in the Solar System into 3 categories:
1. Planet: a celestial body orbiting the Sun, with enough mass to assume a (nearly round) hydrostatic equilibrium shape, and that has "cleared the neighbourhood" around its orbit,
2. Dwarf planet: a celestial body orbiting the Sun, with enough mass to assume a (nearly round) hydrostatic equilibrium shape, but that has not "cleared the neighbourhood" around its orbit and is not a satellite,
3. Small Solar System Bodies: all other objects orbiting the Sun that are not satellites.
Other objects of the Solar System include:
- Moons: celestial bodies that orbit a planet or a dwarf planet,
- Asteroids: rocky, non-spherical left-over objects from the early formation of the Solar System. They travel around the Sun and most of them are located within the main asteroid belt between Mars and Jupiter,
- Comets: icy bodies that can also contain minerals, rocks and dust left over from the formation of the Solar System. They travel around the Sun and can be found mostly beyond Neptune in the Kuiper Belt, as well as far beyond Pluto in the Oort cloud,
- Meteoroids: objects in space that often come from other larger bodies such as comets, asteroids or even moons and planets. Meteoroids can be rocky or metallic, or combinations of both. Their sizes range from dust grains to small asteroids.
|
|
1909.01013
| false
| null |
Table TABREF13 shows the inconsistency rates of back translation between Adv-C and our method on MUSE. Compared with Adv-C, our model significantly reduces the inconsistency rates on all language pairs, which explains the overall improvement in Table TABREF12. Table TABREF14 gives several word translation examples. In the first three cases, our regularizer successfully fixes back translation errors. In the fourth case, ensuring cycle consistency does not lead to the correct translation, which explains some errors by our system. In the fifth case, our model finds a related word but not the same word in the back translation, due to the use of cosine similarity for regularization.
Table 1: Accuracy on MUSE and Vecmap.
Compared with Adv-C, our model significantly reduces the inconsistency rates on all language pairs, which explains the overall improvement in Table TABREF12.
|
What 6 language pairs are experimented on?
|
EN<->ES
EN<->DE
EN<->IT
EN<->EO
EN<->MS
EN<->FI
|
null | false
| null |
Divine Child High School, commonly known as Divine Child (DC), is a private, Roman Catholic, college-preparatory, parish high school in Dearborn, Michigan, United States. Divine Child is a highly ranked private high school in Michigan, scoring in the top 15 percent of private schools in the State. Notably, it is the seventh-largest private high school in Michigan and the largest co-educational Catholic high school in the State.
|
Is Divine Child High School a private or public school?
|
private school
|
null | false
| null |
In the series A Song of Ice and Fire, who is the founder of House Casterly?
|
Corlos, son of Caster
|
|
1909.05017
| false
| null |
Existing question generating systems reported in the literature involve human-generated templates, including cloze type BIBREF0, rule-based BIBREF1, BIBREF2, or semi-automatic questions BIBREF3, BIBREF4, BIBREF5. On the other hand, machine learned models developed recently have used recurrent neural networks (RNNs) to perform sequence transduction, i.e. sequence-to-sequence BIBREF6, BIBREF7. In this work, we investigated an automatic question generation system based on a machine learning model that uses transformers instead of RNNs BIBREF8, BIBREF9. Our goal was to generate questions without templates and with minimal human involvement using machine learning transformers that have been demonstrated to train faster and better than RNNs. Such a system would benefit educators by saving time to generate quizzes and tests.
Our goal was to generate questions without templates and with minimal human involvement using machine learning transformers that have been demonstrated to train faster and better than RNNs. Such a system would benefit educators by saving time to generate quizzes and tests.
|
What is the motivation behind the work? Why is question generation an important task?
|
The answers are shown as follows:
* Such a system would benefit educators by saving time to generate quizzes and tests.
|
null | false
| null |
Tell me whether each of these positions is in soccer or baseball: shortstop, outfielder, midfielder, forward, goalie, catcher
|
Shortstop: Baseball
Outfielder: Baseball
Midfielder: Soccer
Forward: Soccer
Goalie: Soccer
Catcher: Baseball
|
|
null | false
| null |
Brainstorm sea shanty lyrics
|
There once was a ship that went to sea, and they named her the O'Malley. It bobbed in the waves and tipped you see! The sea was rough and the weather went sour. Then the crew heaved every hour! Oh no, down she goes! Oh no, down she goes; Down O'Malley into the sea!
|
|
null | false
| null |
KITS (105.3 FM, "105.3 Dave FM") is a commercial radio station in San Francisco, California. Owned by Audacy, Inc., it broadcasts an adult hits radio format. The studios and offices are co-located with formerly co-owned KPIX-TV on Battery Street in the North Beach district of San Francisco.
KITS's transmitter is located on Radio Road, at San Bruno Mountain in Daly City. It has an effective radiated power (ERP) of 15,000 watts, from a tower at 366 meters (1,201 feet) in height above average terrain. KITS broadcasts in the HD Radio format. Its HD2 subchannel formerly aired a gold-based modern rock format as Classic Live 105, inspired by its previous long-time tenure in the format as Live 105. The HD3 subchannel formerly carried "Radio Zindagi," a South Asian radio service. These subchannels have since been turned off.
|
Given this paragraph about KITS, tell me what station it's on and where its radio tower is located.
|
KITS plays on 105.3FM in San Francisco, California. Its radio tower is located on Radio Road at San Bruno Mountain in Daly City.
|
null | false
| 130
|
The default setup preserves horizontal language consistency in that the language of the questions always matches the language of the comments. For example, the Italian test instances are combined with the Italian version of the questions, even though during training the model has only ever seen the German and French versions of the questions.
An alternative concept is vertical language consistency, whereby the questions are consistently presented in one language, regardless of the comment. To test whether horizontal or vertical consistency is more helpful, we train and evaluate M-Bert on a dataset variant where all questions are in their English version. We chose English as a lingua franca because it had the largest share of data during the pretraining of M-Bert.
The results are shown in Table TABREF39. While the effect is negligible in most settings, the cross-lingual performance clearly increases when all questions are given in English.
To test whether horizontal or vertical consistency is more helpful, we train and evaluate M-BERT on a dataset variant where all questions are in their English version.
|
What has been done to test whether horizontal or vertical consistency is more helpful?
|
The authors train and evaluate M-BERT on a dataset variant where all questions are in their English version.
|
null | false
| 350
|
Machine reading comprehension and question answering has become a crucial application problem in evaluating the progress of AI systems in the realm of natural language processing and understanding BIBREF0. The computational linguistics communities have devoted significant attention to the general problem of machine reading comprehension and question answering.
However, most existing reading comprehension tasks only focus on shallow QA tasks that can be tackled very effectively by existing retrieval-based techniques BIBREF1. For example, recently we have seen increased interest in constructing extractive machine reading comprehension datasets such as SQuAD BIBREF2 and NewsQA BIBREF3. Given a document and a question, the expected answer is a short span in the document. The question context usually contains sufficient information for identifying evidence sentences that entail question-answer pairs. For example, 90.2% of the questions in SQuAD, as reported by Min BIBREF4, are answerable from the content of a single sentence. Even in some multi-turn conversation tasks, the existing models BIBREF5 mostly focus on retrieval-based response matching.
In this paper, we focus on multiple-choice reading comprehension datasets such as RACE BIBREF6, in which each question comes with a set of answer options. The correct answer for most questions may not appear in the original passage, which makes the task more challenging and allows richer question types such as passage summarization and attitude analysis. This requires a more in-depth understanding of a single document and leveraging external world knowledge to answer these questions. Besides, compared to the traditional reading comprehension problem, we need to fully consider passage-question-answer triplets instead of passage-question pairwise matching.
In this paper, we propose a new model, the Dual Co-Matching Network, to match a question-answer pair to a given passage bidirectionally. Our network leverages the latest breakthrough in NLP: BERT BIBREF7 contextual embeddings. In the original BERT paper, the final hidden vector corresponding to the first input token ([CLS]) is used as the aggregate representation, and then a standard classification loss is computed with a classification layer. We think this method is too rough to handle the passage-question-answer triplet, because it only roughly concatenates the passage and question as the first sequence and uses the question as the second sequence, without considering the relationship between the question and the passage. So we propose a new method to model the relationship among the passage, the question and the candidate answer.
Firstly, we use BERT as our encoding layer to get the contextual representations of the passage, question, and answer options respectively. Then a matching layer is constructed to get the passage-question-answer triplet matching representation, which encodes the locational information of the question and the candidate answer matched to a specific context of the passage. Finally, we apply a hierarchical aggregation method over the matching representation, from word level to sequence level and then from sequence level to document level. Our model improves the state-of-the-art model by 2.6 percentage points on the RACE dataset with the BERT base model and further improves the result by 3 percentage points with the BERT large model.
In this work, we propose the dual co-matching network (DCMN), which incorporates all the pairwise relationships among the {P, Q, A} triplet bidirectionally.
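A loose sketch of the bidirectional matching idea follows (random tensors stand in for BERT outputs, and the fusion and pooling choices are illustrative assumptions, not the exact architecture):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def co_match(a, b):
    # Match sequence a against sequence b via attention.
    # a: (len_a, d), b: (len_b, d); returns a fixed-size matching vector.
    attn = torch.softmax(a @ b.T, dim=-1)   # (len_a, len_b)
    b_for_a = attn @ b                      # b summarized at each position of a
    fused = F.relu(torch.cat([a - b_for_a, a * b_for_a], dim=-1))
    return fused.max(dim=0).values          # max-pool over positions -> (2d,)

d = 768  # BERT-base hidden size
P, Q, A = torch.randn(40, d), torch.randn(12, d), torch.randn(6, d)
# Bidirectional matching over the {P, Q, A} triplet.
match = torch.cat([co_match(P, Q), co_match(Q, P),
                   co_match(P, A), co_match(A, P)])
score = nn.Linear(match.numel(), 1)(match)  # one score per answer option
```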
|
What network do they propose?
|
Dual co-matching network.
|
null | false
| null |
The term dead man zone was coined by members of the CSIRO research team in Australia who were investigating the spread of bushfires (Project Vesta). The dead man zone is approximately 8 kilometres (5.0 mi) around the fire that is likely to burn. This dead man zone can be a safety net for firefighters if the fire is moving at a high speed. The dead man zone can also be prepared in a way that slows down the fire. Firefighters can get outside the dead man zone safely without seeing what the fire is going to do and develop a plan of attack or defense based on the fire's next move. If the dead man zone is not used, a wind change can sneak up on the firefighters, resulting in potentially unsafe distances where they are not able to defend themselves. This can be a life-or-death situation if not approached properly, and there have been examples of firefighters in Australia who became trapped and were ultimately killed. Project Vesta, headed by scientist Phil Cheney, found that when the wind changes direction, the line of fire will move out at its maximum rate of spread almost immediately, and that the spread speed was nearly three times what was previously thought. Project Vesta's research into bushfire behavior makes up the majority of what is known about bushfires today.
|
Given this reference paragraph, who coined the term dead man zone?
|
The term was coined by members of the CSIRO research team in Australia who were investigating the spread of bushfires (Project Vesta).
|
null | false
| null |
Richard Wayne Bogosian (born July 18, 1937) is an American diplomat.
|
Who is Richard Wayne Bogosian?
|
Richard Wayne Bogosian (born July 18, 1937) is an American diplomat. He was the United States Ambassador to Chad from 1990 to 1993 and Niger from 1985 to 1988.
He graduated from Tufts College with an A.B. in 1959 and the University of Chicago with a J.D. in 1962. He joined the U.S. Foreign Service in 1962. He oversaw many diplomatic posts, including at the Foreign Service Institute from 1962 to 1963, in the Bureau of Near Eastern and South Asian Affairs at the Department of State in 1963, at the U.S. Embassy in Baghdad, Iraq from 1963 to 1965, at the Foreign Service Institute again in 1965, as vice consul for the U.S. Embassy in Paris, France, from 1966 to 1968, in the Bureau of Near East and South Asian Affairs at the Department of State from 1968 to 1969, in the Bureau for Intelligence and Research at the Department of State from 1969 to 1971, and in the Foreign Service Institute for Economics in 1972.
He was chief of the economic section at the U.S. Embassy in Kuwait from 1972 to 1976, and Deputy Chief of Mission at the U.S. Embassy in Khartoum, Sudan, from 1976 to 1979. From 1976 to 1979, Bogosian was the chief of the Aviation Negotiations Division. He was the Director of East African Affairs from 1982 to 1985, and was nominated to be the United States Ambassador to Niger on August 1, 1985, by President Ronald Reagan.
From 1988 to 1990 he was the Director of the Office of Monetary Affairs. In 1990, Bogosian became the United States Ambassador to Chad, serving until 1993. From 1993 to 1994, he was the Coordinator at the U.S. Liaison Office in Mogadishu, Somalia, and Coordinator for Rwanda and Burundi from 1996 to 1997, after the Rwandan genocide.
Bogosian speaks Arabic and French. He is married to Claire Marie Bogosian, and has three children.
|
null | false
| null |
What does “that’s sus” mean?
|
“Sus” is slang for “suspicious” or “suspect” and the phrase is used by young people to express skepticism, as in “that new kid who started a fight is sus.”
|
|
null | false
| 11
|
In this paper, we showed how we can effectively leverage a prodigious blog dataset. Not only does the dataset bring out the extensive linguistic content reflected in the blog posts, but it also includes location information and rich metadata. These data allow for the generation of maps that reflect the demographics of the population, variations in language use, and differences in psycholinguistic and semantic categories. These mappings can be valuable to psychologists and linguists, as well as lexicographers. A prototype demo has been made available together with the code used to collect our dataset.
A prototype demo has been made available together with the code used to collect our dataset
|
What does the author's study do?
|
A prototype demo has been made available together with the code used to collect their dataset
|
null | false
| null |
What is considered as DACH region?
|
The DACH region comprises three countries in central Europe: Germany, Austria, and Switzerland. The abbreviation comes from a mix of the first letters of the countries as per international country codes, i.e. Germany (DE), Austria (AT), and Switzerland (CH).
|
|
1907.12108
| false
| null |
In contrast to such modularized dialogue systems, end-to-end systems learn all components as a single model in a fully data-driven manner, and they mitigate the lack of labeled data by sharing representations among different modules. In this paper, we build an end-to-end empathetic chatbot by fine-tuning BIBREF1 the Generative Pre-trained Transformer (GPT) BIBREF2 on the PersonaChat dataset BIBREF3 and the Empathetic-Dialogue dataset BIBREF4. We establish a web-based user interface which allows multiple users to asynchronously chat with CAiRE online. CAiRE can also collect user feedback and continuously improve its response quality and discard undesirable generation behaviors (e.g. unethical responses) via active learning and negative training.
In this paper, we build an end-to-end empathetic chatbot by fine-tuning BIBREF1 the Generative Pre-trained Transformer (GPT) BIBREF2 on the PersonaChat dataset BIBREF3 and the Empathetic-Dialogue dataset BIBREF4.
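A minimal sketch of one causal-LM fine-tuning step in this spirit, using the HuggingFace transformers library (the tooling is an assumption; the flattened dialogue format below is a simplification of the special-token scheme such systems typically use):

```python
import torch
from transformers import OpenAIGPTLMHeadModel, OpenAIGPTTokenizer

tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
model = OpenAIGPTLMHeadModel.from_pretrained("openai-gpt")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# A toy persona plus one dialogue exchange, flattened into a single sequence.
text = "persona: i love music . user: how are you ? bot: i feel great , thanks !"
batch = tokenizer(text, return_tensors="pt")

out = model(**batch, labels=batch["input_ids"])  # language-modeling loss
out.loss.backward()
optimizer.step()
```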
|
What pretrained LM is used?
|
The answers are shown as follows:
* Generative Pre-trained Transformer (GPT)
|
null | false
| 120
|
We assembled a dataset of 1,949 cleaned pathology reports. Each report is associated with one of 37 different primary diagnoses based on ICD-O codes. The reports are collected from four different body parts or primary sites from multiple patients. The distribution of reports across different primary diagnoses and primary sites is reported in tab:report-distribution. The dataset was developed in three steps as follows.
Collecting pathology reports: A total of 11,112 pathology reports were downloaded from NCI's Genomic Data Commons (GDC) dataset in PDF format BIBREF9. Out of all the PDF files, 1,949 reports were selected across multiple patients from four specific primary sites: thymus, testis, lung, and kidney. The selection was primarily made based on the quality of the PDF files.
Cleaning reports: The next step was to extract the text content from these reports. Due to the significant time expense of manually re-typing all the pathology reports, we developed a new strategy to prepare our dataset. We applied Optical Character Recognition (OCR) software to convert the PDF reports to text files. Then, we manually inspected all generated text files to fix any grammar/spelling issues and remove irrelevant characters introduced as artefacts by the OCR system.
Splitting into training-testing data: We split the cleaned reports into 70% and 30% for training and testing, respectively. This split resulted in 1,364 training, and 585 testing reports.
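A small sketch of such a split (the shuffling and seed are assumptions; the counts match the ones reported above):

```python
import random

def split_reports(reports, train_frac=0.7, seed=13):
    # Shuffle the cleaned reports, then split them 70/30.
    reports = list(reports)
    random.Random(seed).shuffle(reports)
    cut = int(len(reports) * train_frac)
    return reports[:cut], reports[cut:]

train, test = split_reports(range(1949))
print(len(train), len(test))   # 1364 585
```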
We assembled a dataset of 1,949 cleaned pathology reports. Each report is associated with one of 37 different primary diagnoses based on ICD-O codes.
|
What dataset does the author collect?
|
They assembled a dataset of 1,949 cleaned pathology reports. Each report is associated with one of 37 different primary diagnoses based on ICD-O codes.
|
null | false
| null |
The blood hammer phenomenon is a sudden increase of the upstream blood pressure in a blood vessel (especially artery or arteriole) when the bloodstream is abruptly blocked by vessel obstruction. The term "blood-hammer" was introduced in cerebral hemodynamics by analogy with the hydraulic expression "water hammer", already used in vascular physiology to designate an arterial pulse variety, the "water-hammer pulse". Complete understanding of the relationship between mechanical parameters in vascular occlusions is a critical issue, which can play an important role in the future diagnosis, understanding and treatment of vascular diseases.
|
Given a reference text about the blood hammer phenomenon, tell me the background of the term "blood-hammer".
|
The term "blood-hammer" was introduced by analogy with the hydraulic expression "water hammer" that was already used in vascular physiology.
|
null | false
| null |
What is the difference between a fruit and vegetable?
|
Fruits contain seeds and usually develop from the flowering part of a plant, while vegetables are any other edible plant or part of a plant.
|
|
null | false
| null |
What is the wealthiest zip code in the United States?
|
Atherton, California
|
|
1808.00265
| false
| null |
In this work, we introduce a methodology that provides VQA algorithms with the ability to generate human interpretable attention maps which effectively ground the answer to the relevant image regions. We accomplish this by leveraging region descriptions and object annotations available in the Visual Genome dataset, and using these to automatically construct attention maps that can be used for attention supervision, instead of requiring human annotators to manually provide grounding labels. Our framework achieves competitive state-of-the-art VQA performance, while generating visual groundings that outperform other algorithms that use human annotated attention during training.
In this work, we introduce a methodology that provides VQA algorithms with the ability to generate human interpretable attention maps which effectively ground the answer to the relevant image regions. We accomplish this by leveraging region descriptions and object annotations available in the Visual Genome dataset, and using these to automatically construct attention maps that can be used for attention supervision, instead of requiring human annotators to manually provide grounding labels.
|
How do they obtain region descriptions and object annotations?
|
they are available in the Visual Genome dataset
|
null | false
| null |
Albert Einstein (/ˈaɪnstaɪn/ EYEN-styne; German: [ˈalbɛʁt ˈʔaɪnʃtaɪn] (listen); 14 March 1879 – 18 April 1955) was a German-born theoretical physicist, widely acknowledged to be one of the greatest and most influential physicists of all time. Einstein is best known for developing the theory of relativity, but he also made important contributions to the development of the theory of quantum mechanics. Relativity and quantum mechanics are the two pillars of modern physics. His mass–energy equivalence formula E = mc2, which arises from relativity theory, has been dubbed "the world's most famous equation". His work is also known for its influence on the philosophy of science. He received the 1921 Nobel Prize in Physics "for his services to theoretical physics, and especially for his discovery of the law of the photoelectric effect", a pivotal step in the development of quantum theory. His intellectual achievements and originality resulted in "Einstein" becoming synonymous with "genius". Einsteinium, one of the synthetic elements in the periodic table, was named in his honor.
In 1905, a year sometimes described as his annus mirabilis ('miracle year'), Einstein published four groundbreaking papers. These outlined the theory of the photoelectric effect, explained Brownian motion, introduced special relativity, and demonstrated mass–energy equivalence. Einstein thought that the laws of classical mechanics could no longer be reconciled with those of the electromagnetic field, which led him to develop his special theory of relativity. He then extended the theory to gravitational fields; he published a paper on general relativity in 1916, introducing his theory of gravitation. In 1917, he applied the general theory of relativity to model the structure of the universe. He continued to deal with problems of statistical mechanics and quantum theory, which led to his explanations of particle theory and the motion of molecules. He also investigated the thermal properties of light and the quantum theory of radiation, which laid the foundation of the photon theory of light.
|
What were the four groundbreaking papers that Albert Einstein published?
|
In 1905, a year sometimes described as his annus mirabilis ('miracle year'), Einstein published four groundbreaking papers. These outlined the theory of the photoelectric effect, explained Brownian motion, introduced special relativity, and demonstrated mass–energy equivalence.
|
null | false
| null |
Meanwhile, in the cabin, goalkeeper Harry Gregg was regaining consciousness, thinking that he was dead. He felt blood on his face and "didn't dare put hand up. [He] thought the top of head had been taken off, like a hard boiled egg." Just above him, light shone into the cabin, so Gregg kicked the hole wide enough for him to escape. He also managed to save some passengers, among them teammates Bobby Charlton and Dennis Viollet, who were strapped into their seats away from the wreckage.
|
Extract the list of players mentioned in the text. Separate them with a comma.
|
Harry Gregg, Bobby Charlton, Dennis Viollet
|
null | false
| null |
Oyster is the common name for a number of different families of salt-water bivalve molluscs that live in marine or brackish habitats. In some species, the valves are highly calcified, and many are somewhat irregular in shape. Many, but not all oysters are in the superfamily Ostreoidea.
|
Where are oysters found?
|
Oyster is the common name for a number of different families of salt-water bivalve molluscs that live in marine or brackish habitats.
|
null | false
| 114
|
We compared our model with MLE, RL and GAN baselines. Since COCO and EMNLP2017 WMT don't have input while WeiboDial regards posts as input, we chose the following baselines respectively:
MLE: an RNN model trained with the MLE objective BIBREF4. Its extension, Seq2Seq, can work on the dialogue dataset BIBREF2.
SeqGAN: The first text GAN model that updates the generator with policy gradient based on the rewards from the discriminator BIBREF7.
LeakGAN: A variant of SeqGAN that provides rewards based on the leaked information of the discriminator for the generator BIBREF11.
MaliGAN: A variant of SeqGAN that optimizes the generator with a normalized maximum likelihood objective BIBREF8.
IRL: This inverse reinforcement learning method replaces the discriminator with a reward approximator to provide dense rewards BIBREF12.
RAML: An RL approach that incorporates the MLE objective into the RL training framework, regarding BLEU as rewards BIBREF17.
DialogGAN: An extension of SeqGAN tuned to the dialogue generation task, with the MLE objective added to the adversarial objective BIBREF16.
DPGAN: A variant of DialogGAN which uses a language-model-based discriminator and regards cross-entropy as rewards BIBREF13.
Note that MLE, SeqGAN, LeakGAN, MaliGAN and IRL are the baselines on COCO and EMNLP2017 WMT, while MLE, RAML, DialogGAN, and DPGAN are the baselines on WeiboDial. The original codes are used to test the baselines.
As for the details of the baselines, the generators of all the baselines except LeakGAN are the same as ours.
|
Are the generators of all the baselines the same as the authors'?
|
No, they aren't.
|
null | false
| null |
Why is India Pale Ale (IPA) the best type of beer?
|
India Pale Ales (IPAs) are an ideal type of beer for both producers and drinkers for a variety of reasons.
Producers like IPAs because they are cheap and fast to produce compared to other types of beers. In contrast to a lager, an IPA can be brewed and ready for market in less than half the time.
Drinkers like IPAs for their complex flavors, which can vary by style: East Coast or West Coast, dry-hopped or hazy. Additionally, IPAs often have a higher alcohol content than many other similar beers, which lends a higher "bang for your buck" value.
|
|
1806.00722
| true
| null |
Table TABREF32 shows the results for De-En, Tr-En, Tr-En-morph datasets, where the best accuracy for models with the same depth and of similar sizes is marked in boldface. In almost all genres, DenseNMT models are significantly better than the baselines. With embedding size 256, where all models achieve their best scores, DenseNMT outperforms baselines by 0.7-1.0 BLEU on De-En, 0.5-1.3 BLEU on Tr-En, 0.8-1.5 BLEU on Tr-En-morph. We observe significant gain using other embedding sizes as well.
In almost all genres, DenseNMT models are significantly better than the baselines.
|
did they outperform previous methods?
|
Yes.
|
2003.04973
| false
| null |
We have used the following hardware for the experimentation: a Windows 10 Education desktop with an Intel Core i7 processor and 16GB RAM. We have used Python 3.6 and a Google Colab notebook to execute our model and obtained the results discussed below. The train and test data have been divided in a 70-30 ratio, and we got the results shown in Table TABREF17 for the individual datasets and the combination of both. The pre-trained network was already trained, and we used the target data Queensland flood, which provided 96% accuracy with 0.118 test loss in only 11 seconds, given that we used only 70% of the labeled training data. The second target data is Alberta flood with the same train-test split configuration, which provided 95% accuracy with 0.118 test loss in just 19 seconds. As we can see, it takes very little time to work with 20,000 tweets (combined), and at times of emergency it can handle a huge amount of unlabeled data to classify into meaningful categories in minutes.
The train and test data have been divided in a 70-30 ratio, and we got the results shown in Table TABREF17 for the individual datasets and the combination of both. The pre-trained network was already trained, and we used the target data Queensland flood, which provided 96% accuracy with 0.118 test loss in only 11 seconds, given that we used only 70% of the labeled training data. The second target data is Alberta flood with the same train-test split configuration, which provided 95% accuracy with 0.118 test loss in just 19 seconds.
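For reference, a 70-30 split like the one described can be reproduced with scikit-learn; the tweets and labels below are placeholders, not the Queensland or Alberta flood data.
```python
# Hypothetical 70-30 train-test split of labeled tweets, as described above.
from sklearn.model_selection import train_test_split

tweets = ["water rising near the bridge", "lovely sunny afternoon",
          "roads closed due to flooding", "match starts at 5pm"]
labels = [1, 0, 1, 0]
X_train, X_test, y_train, y_test = train_test_split(
    tweets, labels, test_size=0.30, random_state=42)
```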
|
What were the model's results on flood detection?
|
The answers are shown as follows:
* Queensland flood which provided 96% accuracy
* Alberta flood with the same configuration of train-test split which provided 95% accuracy
|
null | false
| null |
How do you gain muscle?
|
Gaining muscle mass requires proper exercise and a good diet. A good diet is important to give the body the nutrients it needs in order to sustain muscle growth. On the flip side, properly stressing the muscles will stimulate them to grow. This requires a variety of different exercises to work the muscle group. Otherwise, the muscle will build resistance to an often-repeated exercise, and growth for that muscle will plateau.
|
|
null | false
| null |
A rainbow is an optical phenomenon that can occur under certain meteorological conditions. It is caused by reflection, refraction and dispersion of light in water droplets resulting in an effect similar to a spectrum of light appearing in the sky. It takes the form of a multicoloured circular arc. Rainbows caused by sunlight always appear in the section of sky directly opposite the Sun.
Rainbows can be full circles. However, the observer normally sees only an arc formed by illuminated droplets above the ground, and centered on a line from the Sun to the observer's eye.
In a primary rainbow, the arc shows red on the outer part and violet on the inner side. This rainbow is caused by light being refracted when entering a droplet of water, then reflected inside on the back of the droplet and refracted again when leaving it.
In a double rainbow, a second arc is seen outside the primary arc, and has the order of its colours reversed, with red on the inner side of the arc. This is caused by the light being reflected twice on the inside of the droplet before leaving it.
|
Without quoting directly from the text, give me a summary of how rainbows are formed.
|
Rainbows are formed when light shines through water droplets, creating an optical phenomenon. The light is bent and reflected, creating a multicoloured arc. Rainbows created by sunlight can be found in the sky, opposite the sun. Rainbows are usually observed as an arc, but can appear as full circles. A double rainbow appears when the light shining through each water droplet is reflected twice, creating a second arc, with the colours appearing in reverse order. The colours of the rainbow include red, orange, yellow, green, blue, and violet, and they appear in order of their wavelength, from longest to shortest.
|
null | false
| null |
Which months were added to the calendar and why?
|
In order for the Roman calendar year, which was ten months long, to be in sync with the lunar year, the months of January and February were added.
|
|
null | false
| null |
Howe retired for the first time in 1971 and was immediately inducted into the Saskatchewan Sports Hall of Fame that same year. He was then inducted into the Hockey Hall of Fame the next year, but came back two years later to join his sons Mark and Marty on the Houston Aeros of the WHA. Although in his mid-40s, he scored over 100 points twice in six years, won two straight Avco World Trophies (1974 and 1975) and was named most valuable player in 1974. He made a brief return to the NHL in 1979–80, playing one season with the Hartford Whalers, then retired at age 52. His involvement with the WHA was central to their brief pre-NHL merger success, forcing the NHL to recruit European talent and expand to new markets.
|
How many times did Gordie Howe retire?
|
Based on the provided information, Howe retired two times.
Howe retired for the first time in 1971, but then later joined his sons Mark and Marty on the Houston Aeros of the WHA. Howe would then return to the NHL in 1979 where he played one season before retiring for a second time.
|
null | false
| null |
Machine learning (ML) is a field of inquiry devoted to understanding and building methods that "learn" – that is, methods that leverage data to improve performance on some set of tasks. It is seen as a part of artificial intelligence.
Machine learning algorithms build a model based on sample data, known as training data, in order to make predictions or decisions without being explicitly programmed to do so. Machine learning algorithms are used in a wide variety of applications, such as in medicine, email filtering, speech recognition, agriculture, and computer vision, where it is difficult or unfeasible to develop conventional algorithms to perform the needed tasks.
A subset of machine learning is closely related to computational statistics, which focuses on making predictions using computers, but not all machine learning is statistical learning. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a related field of study, focusing on exploratory data analysis through unsupervised learning.
Some implementations of machine learning use data and neural networks in a way that mimics the working of a biological brain.
In its application across business problems, machine learning is also referred to as predictive analytics.
Overview
Learning algorithms work on the basis that strategies, algorithms, and inferences that worked well in the past are likely to continue working well in the future. These inferences can be obvious, such as "since the sun rose every morning for the last 10,000 days, it will probably rise tomorrow morning as well". They can be nuanced, such as "X% of families have geographically separate species with color variants, so there is a Y% chance that undiscovered black swans exist".
Machine learning programs can perform tasks without being explicitly programmed to do so. It involves computers learning from data provided so that they carry out certain tasks. For simple tasks assigned to computers, it is possible to program algorithms telling the machine how to execute all steps required to solve the problem at hand; on the computer's part, no learning is needed. For more advanced tasks, it can be challenging for a human to manually create the needed algorithms. In practice, it can turn out to be more effective to help the machine develop its own algorithm, rather than having human programmers specify every needed step.
The discipline of machine learning employs various approaches to teach computers to accomplish tasks where no fully satisfactory algorithm is available. In cases where vast numbers of potential answers exist, one approach is to label some of the correct answers as valid. This can then be used as training data for the computer to improve the algorithm(s) it uses to determine correct answers. For example, to train a system for the task of digital character recognition, the MNIST dataset of handwritten digits has often been used.
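To make the character-recognition example concrete, here is a minimal sketch using scikit-learn's small bundled digits set (an 8x8 stand-in for full MNIST), not any particular system from the literature.
```python
# Train a simple handwritten-digit classifier on labeled examples.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                       # 8x8 digit images with labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```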
History and relationships to other fields
See also: Timeline of machine learning
The term machine learning was coined in 1959 by Arthur Samuel, an IBM employee and pioneer in the field of computer gaming and artificial intelligence. The synonym self-teaching computers was also used in this time period.
By the early 1960s an experimental "learning machine" with punched tape memory, called CyberTron, had been developed by Raytheon Company to analyze sonar signals, electrocardiograms, and speech patterns using rudimentary reinforcement learning. It was repetitively "trained" by a human operator/teacher to recognize patterns and equipped with a "goof" button to cause it to re-evaluate incorrect decisions. A representative book on research into machine learning during the 1960s was Nilsson's book on Learning Machines, dealing mostly with machine learning for pattern classification. Interest related to pattern recognition continued into the 1970s, as described by Duda and Hart in 1973. In 1981 a report was given on using teaching strategies so that a neural network learns to recognize 40 characters (26 letters, 10 digits, and 4 special symbols) from a computer terminal.
Tom M. Mitchell provided a widely quoted, more formal definition of the algorithms studied in the machine learning field: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E." This definition of the tasks in which machine learning is concerned offers a fundamentally operational definition rather than defining the field in cognitive terms. This follows Alan Turing's proposal in his paper "Computing Machinery and Intelligence", in which the question "Can machines think?" is replaced with the question "Can machines do what we (as thinking entities) can do?".
Modern-day machine learning has two objectives: one is to classify data based on models which have been developed; the other is to make predictions for future outcomes based on these models. A hypothetical algorithm specific to classifying data may use computer vision of moles coupled with supervised learning in order to train it to classify the cancerous moles. A machine learning algorithm for stock trading may inform the trader of future potential predictions.
|
What is machine learning, and when did it first emerge?
|
Machine learning is a branch of AI concerned with understanding and developing methods that "learn," that is, methods that use data to improve performance on a set of tasks. Machine learning algorithms construct a model from sample data, referred to as training data, in order to make predictions or decisions without being explicitly programmed to do so. The term "machine learning" was coined in 1959 by Arthur Samuel, marking the field's emergence.
|
null | false
| null |
Khongjom Day is celebrated by which Indian state?
|
Khongjom Day is celebrated by Manipur to honor the brave heroes of the 1891 Anglo-Manipur War.
|
|
1908.11546
| false
| null |
In this paper, we introduce a novel policy model to output multiple actions per turn (called multi-act), generating a sequence of tuples and expanding agents' expressive power. Each tuple is defined as $(\textit {continue}, \textit {act}, \textit {slots})$, where continue indicates whether to continue or stop producing new acts, act is an act type (e.g., inform or request), and slots is a set of slots (names) associated with the current act type. Correspondingly, a novel decoder (Figure FIGREF5) is proposed to produce such sequences. Each tuple is generated by a cell called gated Continue Act Slots (gCAS, as in Figure FIGREF7), which is composed of three sequentially connected gated units handling the three components of the tuple. This decoder can generate multi-acts in a double recurrent manner BIBREF18. We compare this model with baseline classifiers and sequence generation models and show that it consistently outperforms them.
In this paper, we introduce a novel policy model to output multiple actions per turn (called multi-act), generating a sequence of tuples and expanding agents' expressive power. Each tuple is defined as $(\textit {continue}, \textit {act}, \textit {slots})$, where continue indicates whether to continue or stop producing new acts, act is an act type (e.g., inform or request), and slots is a set of slots (names) associated with the current act type. Correspondingly, a novel decoder (Figure FIGREF5) is proposed to produce such sequences. Each tuple is generated by a cell called gated Continue Act Slots (gCAS, as in Figure FIGREF7), which is composed of three sequentially connected gated units handling the three components of the tuple. This decoder can generate multi-acts in a double recurrent manner BIBREF18.
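A rough sketch of a gCAS-style cell is given below: three chained gated units emit the (continue, act, slots) tuple per step. The GRU-based gating and the classification heads are simplifying assumptions, not the paper's exact parameterization.
```python
# Sketch of a gCAS-style cell with three sequentially connected gated units.
import torch
import torch.nn as nn

class GCASCell(nn.Module):
    def __init__(self, hidden, n_acts, n_slots):
        super().__init__()
        self.cont_unit = nn.GRUCell(hidden, hidden)
        self.act_unit = nn.GRUCell(hidden, hidden)
        self.slot_unit = nn.GRUCell(hidden, hidden)
        self.cont_head = nn.Linear(hidden, 2)        # continue / stop
        self.act_head = nn.Linear(hidden, n_acts)    # act type
        self.slot_head = nn.Linear(hidden, n_slots)  # multi-label slots

    def forward(self, x, h):
        h = self.cont_unit(x, h)
        cont = self.cont_head(h)
        h = self.act_unit(x, h)
        act = self.act_head(h)
        h = self.slot_unit(x, h)
        slots = torch.sigmoid(self.slot_head(h))     # independent slot probs
        return (cont, act, slots), h
```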
|
What is specific to gCAS cell?
|
It has three sequentially connected gated units that output continue, act, and slots, generating multi-acts in a double recurrent manner.
|
1709.01256
| false
| null |
The Wikipedia revision dumps that were previously introduced by Leskovec et al. leskovec2010governance contain eight GB (compressed size) revision edits with meta data.
The Wikipedia revision dumps that were previously introduced by Leskovec et al. leskovec2010governance contain eight GB (compressed size) revision edits with meta data.
|
How large is the Wikipedia revision dump dataset?
|
The answers are shown as follows:
* eight GB
|
null | false
| null |
Name every month that has 30 days
|
January, March, April, May, June, July, August, September, October, November, December
|
|
null | false
| null |
Ross was born in Nairn, in Middlesex County, Ontario, the son of Donald Ross and Margaret Halbert. His grandfather was part of Duke Wellington's body guard at Waterloo, and was an escort to Napoleon Bonaparte on his exile to St. Helena. Ross was educated at Nairn Public School, and moved to Manitoba in 1874, where he worked on the first telegraph line. In 1877, he established a real estate business. Ross served on the Winnipeg City Council for eight years, the Winnipeg School Board for twenty-three years, and the Winnipeg Public Parks Board for eleven years. In 1880, he married Margaret McIvor.
|
Who did Donald's grandfather body guard for?
|
Donald's grandfather was a body guard for Duke Wellington.
|
null | false
| null |
Maison Ikkoku is a manga series written and illustrated by Rumiko Takahashi. It was serialized in Shogakukan's Big Comic Spirits from November 15, 1980 to April 20, 1987. It has been adapted into an anime series, a live-action film, and a two-episode TV drama. The plot follows Yusaku Godai, a rōnin who resides in an old apartment complex named Maison Ikkoku as he studies for the university entrance exam. Yusaku's life progression and interactions progress episodically and focus on his growing relationship with the apartment manager, Kyoko Otonashi.
|
Is Maison Ikkoku a manga series?
|
Yes, Maison Ikkoku is a manga series written and illustrated by Rumiko Takahashi. It was serialized in Shogakukan's Big Comic Spirits from November 15, 1980 to April 20, 1987. It has been adapted into an anime series, a live-action film, and a two-episode TV drama.
|
null | false
| null |
Classify the cities based on whether they are located in Malawi or Norfolk Island.
Mzuzu, Blantyre, Kingston, Lilongwe
|
Malawi - Blantyre, Lilongwe, Mzuzu
Norfolk Island - Kingston
|
|
null | false
| null |
What are the pros and cons of high interest rates?
|
Rising interest rates are an advantage for savers who deposit money, as banks pay savers a higher return on their savings accounts. High interest rates also dampen high inflation because they make borrowing more expensive. This reduces demand and can lead to falling prices.
On the other hand, high interest rates make things like financing a house more difficult. Furthermore, the decreased demand can reduce economic growth.
|
|
null | false
| null |
Ice hockey (or simply hockey) is a team sport played on ice skates, usually on an ice skating rink with lines and markings specific to the sport. It belongs to a family of sports called hockey. In ice hockey, two opposing teams use ice hockey sticks to control, advance, and shoot a closed, vulcanized, rubber disc called a "puck" into the other team's goal. Each goal is worth one point. The team which scores the most goals is declared the winner. In a formal game, each team has six skaters on the ice at a time, barring any penalties, one of whom is the goaltender. Ice hockey is a full contact sport, and is considered to be one of the more physically demanding sports.
Ice hockey is one of the sports featured in the Winter Olympics while its premiere international competition, the IIHF World Championships, are governed by the International Ice Hockey Federation (IIHF) for both men's and women's competitions.
In North America and parts of Europe the sport is known simply as hockey. However, in other countries, "hockey" usually refers to field hockey. In 1994 hockey was officially recognized as Canada's national winter sport.
The modern sport of ice hockey was developed in Canada, most notably in Montreal, where the first indoor game was played on March 3, 1875. Some characteristics of that game, such as the length of the ice rink and the use of a puck, have been retained to this day. Amateur ice hockey leagues began in the 1880s, and professional ice hockey originated around 1900. The Stanley Cup, emblematic of ice hockey club supremacy, was initially commissioned in 1892 as the "Dominion Hockey Challenge Cup" and was first awarded in 1893 to recognize the Canadian amateur champion and later became the championship trophy of the National Hockey League (NHL). In the early 1900s, the Canadian rules were adopted by the Ligue Internationale de Hockey Sur Glace, in Paris, France, the precursor of the IIHF. The sport was played for the first time at the Olympics during the 1920 Summer Olympics.
While women also played during the game's early formative years, it was not until organizers began to officially remove body checking from female ice hockey beginning in the mid-1980s that it began to gain greater popularity, which by then had spread to Europe and a variety of other countries. The first IIHF Women's World Championship was held in 1990, and women's play was introduced into the Olympics in 1998.
|
Why should I watch ice hockey
|
Ice hockey is a team sport played on ice skates, usually on an ice skating rink with lines and markings specific to the sport. It belongs to a family of sports called hockey. In ice hockey, two opposing teams use ice hockey sticks to control, advance, and shoot a closed, vulcanized, rubber disc called a "puck" into the other team's goal. Each goal is worth one point. The team which scores the most goals is declared the winner. In a formal game, each team has six skaters on the ice at a time, barring any penalties, one of whom is the goaltender. Ice hockey is a full contact sport, and is considered to be one of the more physically demanding sports.
Ice hockey is fast, intense and fun to watch. The fans in the stadium are sitting close to the ice rink and cheer and push their favorite teams to reach their goals.
|
null | false
| null |
What was the Rolling Stones' first no. 1 hit?
|
"It's All Over Now"
|
|
null | false
| null |
Manchester City F.C. 3–2 Queens Park Rangers F.C was the final match of the 2011–12 Premier League season, contested at the Etihad Stadium between Manchester City and Queens Park Rangers on 13 May 2012. This match is considered the greatest Premier League moment in history.
City entered the match in first place on goal difference, being level on points with local rivals Manchester United, and needed to match United's result to win the league. Meanwhile, QPR entered the match one spot above the relegation zone, knowing that a draw would secure safety at the expense of Bolton Wanderers. City won the match in dramatic fashion, reversing a 2–1 deficit by scoring two goals in stoppage time, with the winner being scored by Sergio Agüero, to clinch their first league title since the 1967–68 season. The win began a period of dominance in English football for City, with the club winning five more Premier League titles over the subsequent 10 years.
The match, and more specifically the final goal of the match, has repeatedly been described as the greatest moment in Premier League history. The terms "93:20" and "Agueroooooo" have entered Manchester City folklore as a result, the former describing the exact second of the match at which point the final goal was scored, while the latter was commentator Martin Tyler's live response to the goal.
|
Which player scored the Premier League winning goal for Manchester City vs. QPR in 2012?
|
Sergio Agüero, in his first season for the club following a £30+ million move from Atlético Madrid, ended the season as the club's leading goal scorer. Ten-plus years later, he ended as the club's all-time leading goal scorer and holds the record for the number of Premier League hat-tricks.
In 2012, Manchester City and local rivals Manchester United were leading a two-horse race to win the Premier League. Manchester United had previously won the league title 19 times, whereas this would be Manchester City's first title in over 40 years.
It was Manchester City's to lose as they hosted relegation-threatened QPR at the Etihad Stadium, and all was going to plan at half-time following Zabaleta's goal to make it 1-0 (an unlikely goal scorer, to say the least).
However, following a mistake from centre back Joleon Lescott, a dramatic red card for former Manchester City player Joey Barton, and a surprise second goal from QPR to make it 2-1, Manchester City looked as though they'd let the title slip through their hands.
Manchester United had won their away game at Sunderland 1-0, so it all came down to the blue half of Manchester's result.
Just after the 90th minute, the match entered stoppage time, which was extended due to the commotion of Joey Barton lashing out at Carlos Tevez and Sergio Agüero.
Striker Edin Dzeko scored a header from a David Silva cross to make it 2-2, but to many, this felt like too little, too late.
Late drama unfolded as Mario Balotelli registered his one and only assist in almost four years at the club as he laid it off to Agüero, who scored the winning goal with the last kick of the game to make it 3-2.
Manchester City went on to win the league and QPR avoided relegation despite the loss due to results elsewhere.
|
null | false
| null |
My trophy does not fit in my suitcase because it is too small. I need to travel. What could I do?
|
- Get a bigger suitcase
- Don't bring the trophy
- Send the trophy in a box
|
|
null | false
| 356
|
There are many tasks in natural language processing which require the classification of sentences or longer paragraphs into a set of predefined categories. Typical applications are for instance topic identification (e.g. sports, news, $\ldots $ ) or product reviews (positive or negative). There is a large body of research on approaches for document classification. An important aspect to compare these different approaches is the availability of high quality corpora to train and evaluate them. Unfortunately, most of these evaluation tasks focus on the English language only, while there is an ever increasing need to perform document classification in many other languages. One could of course collect and label training data for other languages, but this would be costly and time consuming. An interesting alternative is “cross-lingual document classification”. The underlying idea is to use a representation of the words or whole documents which is independent of the language. By these means, a classifier trained on one language can be transferred to a different one, without the need of resources in that transfer language. Ideally, the performance obtained by cross-lingual transfer should be as close as possible to training the entire system on language specific resources. Such a task was first proposed by BIBREF0 using the Reuters Corpus Volume 2. The aim was to first train a classifier on English and then to transfer it to German, and vice versa. An extension to the transfer between English and French and Spanish respectively was proposed by BIBREF1 . However, only few comparative results are available for these transfer directions.
The contributions of this work are as follows. We extend previous works and use the data in the Reuters Corpus Volume 2 to define new cross-lingual document classification tasks for eight very different languages, namely English, French, Spanish, Italian, German, Russian, Chinese and Japanese. For each language, we define a train, development and test corpus. We also provide strong reference results for all transfer directions between the eight languages, e.g. not limited to the transfer between a foreign language and English. We compare two approaches, based either on multilingual word or sentence embeddings respectively. By these means, we hope to define a clear evaluation environment for highly multilingual document classification.
We extend previous works and use the data in the Reuters Corpus Volume 2 to define new cross-lingual document classification tasks for eight very different languages, namely English, French, Spanish, Italian, German, Russian, Chinese and Japanese.
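As a minimal sketch of the zero-shot transfer setup evaluated here, one can fit a classifier on language-agnostic embeddings of English training documents and apply it unchanged to another language; the `embed` function below is a hypothetical multilingual encoder, not the paper's models.
```python
# Zero-shot cross-lingual document classification sketch.
from sklearn.linear_model import LogisticRegression

def cross_lingual_transfer(embed, en_docs, en_labels, de_docs):
    # Train on English only; the shared embedding space carries the transfer.
    clf = LogisticRegression(max_iter=1000)
    clf.fit([embed(d) for d in en_docs], en_labels)
    return clf.predict([embed(d) for d in de_docs])
```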
|
What data did the authors use to define new cross-lingual document classification tasks?
|
The data in the Reuters Corpus Volume 2 to define new cross-lingual document classification tasks.
|
null | false
| null |
Why can't the whole world be 2 countries?
|
The whole world cannot be split into two countries because it would be impossible to maintain the balance of power between the two countries.
|
|
null | false
| 453
|
Performing graph reasoning in large domains, such as predicting the relationship between two entities based on the input facts, is an important practical problem that arises in reasoning about molecular modeling, knowledge networks, and collections of objects in the physical world. The necessary inference rules are generally unknown and must be inferred from data, which in general are large. In this paper, we focus on learning neural networks for graph reasoning tasks. Consider the problem of learning a rule that explains the grandparent relationship. Given a dataset of labeled family relationship graphs, we aim to build machine-learning algorithms that can learn to predict a specific relationship (e.g., grandparent) based on other relationships, such as father(x, y) and mother(x, y).
Neural Logic Machines (NLM) present a method for solving graph reasoning tasks with a structured neural network. NLMs keep track of hyperedge representations for all tuples consisting of up to B entities. Thus, they can infer more complex finitely-quantified logical relations than standard graph neural networks that only consider binary relationships between entities. However, there are two disadvantages of such a dense hypergraph representation. First, the training and inference of NLMs require simultaneously considering all entities in a domain, such as all of the N people in a family relationship database. Second, they scale polynomially with respect to the number of entities considered in a single inference. Even inferring a single relation like grandparent requires $O(N^3)$ time and space complexity. In practice, for large graphs, these limitations make the training and inference intractable and hinder the application of NLMs in large-scale real-world domains.
To address these two challenges, we present a novel framework, called Sparse and Local Neural Logic Machines (SpaLoc), for inducing sparse relational rules from data in large domains. Our key idea is to exploit locality and sparsity in data: determining a relationship between entities usually only requires consideration of a small number of additional entities, and the resulting relation usually only holds for a small number of tuples of entities.
[Figure: The overall pipeline of SpaLoc: a multi-layer neural network that applies to hypergraphs. I and V denote the index tensor and value tensor respectively. To facilitate training and inference in large domains, we employ a sub-graph sampling procedure and sparsification.]
Our contribution is three-fold. First, we develop a sparse tensor-based representation for representing hyperedge relationships among entities and making inferences about them. Second, during both training and inference, SpaLoc employs a sub-graph sampling technique based on an information-theoretic measure, information sufficiency, which quantifies the amount of information contained in a sub-graph with respect to whether a predicate is true for some tuple of entities. Since the sub-graph sampling may violate the closed-world assumption (i.e., the information in a sub-sampled graph may be insufficient in predicting the relationship between a pair of entities), we also use the information sufficiency measure to calibrate training labels. Third, to further speed up inference on large graphs, we encourage neural networks to recover sparse relationships among objects by using a regularization term based on graph sparsity measurements.
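To illustrate the sparse hyperedge representation (the index tensor I and value tensor V from the pipeline figure), here is a tiny sketch with PyTorch sparse tensors; the arity and feature width are illustrative assumptions.
```python
# Store ternary relations over N entities sparsely: I holds entity tuples,
# V holds per-hyperedge features. Illustrative shapes, not the paper's code.
import torch

N, D = 10_000, 8
I = torch.tensor([[0, 1, 2], [3, 4, 5]]).t()   # indices: (arity=3, nnz=2)
V = torch.randn(2, D)                          # one feature vector per tuple
rel = torch.sparse_coo_tensor(I, V, size=(N, N, N, D))
# Memory scales with the 2 stored tuples, not the O(N^3) dense grid.
```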
We evaluate SpaLoc on two benchmarks: relational reasoning in synthetic datasets (family trees and general graph reasoning) and real-world knowledge-graph reasoning. First, we show that, with our sparsity regularization, the computation complexity for inference can be reduced to the same order as the optimal complexity, which significantly outperforms the base model NLM. Second, we show that training via sub-graph sampling and label calibration enables us to learn relational rules in real-world knowledge graphs with more than 10K nodes, whereas the original NLM can be barely applied to graphs with more than 100 nodes. Finally, SpaLoc achieves state-of-the-art performance on several real-world knowledge graph reasoning benchmarks.
We evaluate SpaLoc on two benchmarks: relational reasoning in synthetic datasets (family trees and general graph reasoning) and real-world knowledge-graph reasoning.
****The input predicates are: Father(x, y), Mother(x, y), Son(x, y), Daughter(x, y).
The target predicates are:
****
• Family-of-three(x, y, z) = Father(x, y) ∧ Mother(x, z)
• Three-generations(x, y, z) = Parent(x, y) ∧ Parent(y, z)
|
Since the method is developed for hyperedges, why is there not an experiment where the goal is to infer relations between more than a pair of entities?
|
First, we would like to reiterate the importance of high-order edges: even if the input graph contains only binary relationships, incorporating high-order edges in intermediate features is still helpful. See also our general response.
Per request, we add two new synthetic family tree reasoning tasks with input and output ternary predicates to address this point: family-of-three and three-generations reasoning. Our model still achieves perfect accuracy on both tasks, while existing models except NLM cannot handle them.
|
null | false
| null |
How many planets are there?
|
There are eight planets in our solar system. We used to consider there to be nine, but then scientists decided that Pluto doesn't fit the requirements to be a planet. I don't really know what that is, but that's what they decided. Mercury is the closest planet to the sun, and doesn't really have an atmosphere. Venus is subject to a runaway greenhouse effect. Earth is where we live. Mars is red, and slightly smaller than Earth. We've recently found evidence of water there. The asteroid belt comes next, which isn't really a planet. Following the asteroid belt are the giant planets Jupiter and Saturn, and then the ice giants Uranus and Neptune; all four have rings. Other solar systems seem to have planets too - in fact this appears to be far more common than we would have thought.
|
|
1910.04269
| false
| null |
In Table TABREF1, we summarize the quantitative results of the above previous studies. It includes the model basis, feature description, languages classified and the used dataset along with accuracy obtained. The table also lists the overall results of our proposed models (at the top). The languages used by various authors along with their acronyms are English (En), Spanish (Es), French (Fr), German (De), Russian (Ru), Italian (It), Bengali (Ben), Hindi (Hi) and Telugu (Tel).
In Table TABREF1, we summarize the quantitative results of the above previous studies.
|
What is the accuracy reported by state-of-the-art methods?
|
Answer with content missing: (Table 1)
Previous state-of-the art on same dataset: ResNet50 89% (6 languages), SVM-HMM 70% (4 languages)
|
null | false
| null |
In the series A Song of Ice and Fire, who is the founder of House Hightower?
|
Uthor of the High Tower
|
|
null | false
| null |
Chennai has a dry-summer tropical wet and dry climate under the (Köppen climate classification). The city lies on the thermal equator and is also on the coast, which prevents extreme variation in seasonal temperature. The hottest part of the year is late May to early June, known regionally as Agni Nakshatram ("fire star") or as Kathiri Veyyil, with maximum temperatures around 35–40 °C (95–104 °F). The coolest part of the year is January, with minimum temperatures around 19–25 °C (66–77 °F). The lowest recorded temperature was 13.9 °C (57.0 °F) on 11 December 1895 and 29 January 1905. The highest recorded temperature was 45 °C (113 °F) on 31 May 2003. The average annual rainfall is about 140 cm (55 in).
|
From the passage provided, extract which month is the coolest in Chennai
|
The coolest part of the year is January
|
null | false
| null |
Brandon Winn Sanderson (born December 19, 1975) is an American author of high fantasy and science fiction. He is best known for the Cosmere fictional universe, in which most of his fantasy novels, most notably the Mistborn series and The Stormlight Archive, are set. Outside of the Cosmere, he has written several young adult and juvenile series including The Reckoners, the Skyward series, and the Alcatraz series. He is also known for finishing Robert Jordan's high fantasy series The Wheel of Time. Sanderson has created several graphic novel fantasy series, including White Sand and Dark One.
He created Sanderson's Laws of Magic and popularized the idea of "hard magic" and "soft magic" systems. In 2008, Sanderson started a podcast with author Dan Wells and cartoonist Howard Tayler called Writing Excuses, involving topics about creating genre writing and webcomics. In 2016, the American media company DMG Entertainment licensed the movie rights to Sanderson's entire Cosmere universe, but the rights have since reverted back to Sanderson. Sanderson's March 2022 Kickstarter campaign became the most successful in history, finishing with 185,341 backers pledging $41,754,153.
|
Who is Brandon Sanderson?
|
Brandon Sanderson is an American author who primarily writes high fantasy and science fiction. He is known for creating the Cosmere, a fantasy universe in which many of his books are set. Among his notable Cosmere works are The Stormlight Archive and the Mistborn series. Beyond the Cosmere, he is also known for writing YA fiction, such as The Reckoners and Skyward series. Additionally, he is known for popularizing the notions of "hard" and "soft" magic systems in fantasy worlds.
|
null | false
| null |
What is Arsenal FC?
|
Arsenal Football Club is an English professional football club based in Islington, London. Arsenal plays in the Premier League, the top flight of English football. The club has won 13 league titles (including one unbeaten title), a record 14 FA Cups, two League Cups, 16 FA Community Shields, one European Cup Winners' Cup, and one Inter-Cities Fairs Cup. In terms of trophies won, it is the third-most successful club in English football.
Arsenal was the first club from the South of England to join the Football League in 1893, and they reached the First Division in 1904. Relegated only once, in 1913, they continue the longest streak in the top division, and have won the second-most top-flight matches in English football history. In the 1930s, Arsenal won five League Championships and two FA Cups, and another FA Cup and two Championships after the war. In 1970–71, they won their first League and FA Cup Double. Between 1989 and 2005, they won five League titles and five FA Cups, including two more Doubles. They completed the 20th century with the highest average league position. Between 1998 and 2017, Arsenal qualified for the UEFA Champions League for nineteen consecutive seasons.
Herbert Chapman, who changed the fortunes of Arsenal forever, won the club its first silverware, and his legacy led the club to dominate the 1930s decade; Chapman, however, died of pneumonia in 1934, aged 55. He helped introduce the WM formation, floodlights, and shirt numbers; he also added the white sleeves and brighter red to the club's jersey. Arsène Wenger is the longest-serving manager and won the most trophies. He won a record seven FA Cups, and his title-winning team set an English record for the longest top-flight unbeaten league run at 49 games between 2003 and 2004, receiving the nickname The Invincibles.
In 1886, munitions workers at the Royal Arsenal in Woolwich founded the club as Dial Square. In 1913 the club crossed the city to Arsenal Stadium in Highbury, becoming close neighbours of Tottenham Hotspur, and creating the North London derby. In 2006, they moved to the nearby Emirates Stadium. With an annual revenue of £340.3m in the 2019–20 season, Arsenal was estimated to be worth US$2.68 billion by Forbes, making it the world's eighth most valuable club, while it is one of the most followed on social media. The motto of the club has long been Victoria Concordia Crescit, Latin for "Victory Through Harmony".
|
|
null | false
| null |
Cerro Sarisariñama is a tepui, a flat-topped mountain in Jaua-Sarisariñama National Park at the far south-west of Bolívar State, Venezuela, near the border with Brazil. Its altitude range is between 300 m (980 ft) and 2,350 m (7,710 ft). The name of the mountain originates from the tale of local Ye'kuana Indians about an evil spirit living in caves up in the mountain and devouring human flesh with a sound "Sari... sari...".
|
What is Sarisariñama?
|
Cerro Sarisariñama is a tepui, a flat-topped mountain in Jaua-Sarisariñama National Park at the far south-west of Bolívar State, Venezuela, near the border with Brazil. Its altitude range is between 300 m (980 ft) and 2,350 m (7,710 ft). The name of the mountain originates from the tale of local Ye'kuana Indians about an evil spirit living in caves up in the mountain and devouring human flesh with a sound "Sari... sari...".
The tepui is in one of the most remote areas in the country, with the closest road being 35 miles to the northeast.
|
null | false
| null |
What are some kinds of embroidery stitches for writing letters?
|
Some embroidery stitches that can be used to embroider letters are the back stitch, the split stitch, the stem stitch, the chain stitch, the couching stitch, and the satin stitch.
|
|
null | false
| null |
How does a person get new hobbies once they are older?
|
Often times, once people reach adulthood they struggle to pick up new hobbies. They may think they are too old to learn something new, too busy, or it's just out of their comfort zone. However, it is important to continue to pursue things you are passionate about, and we also need to recognize it's likely these things will change over time. This creates the need to continue to be willing to explore and learn new things. While there are many types of hobbies and one is not better than another per se, there is great value in having hobbies that you engage in physically yourself.
|
|
null | false
| null |
If I was trying to build a garden shed, categorise each of the following as either "useful" or "not useful". A banana, a hammer, nails, screws, personal protective equipment, an assistant, wood, kitchen knife, spoon, ice, magnifying glass
|
The following items would be "useful" to help you build a garden shed:
A hammer
Nails
Screws
Personal Protective Equipment
An assistant
Wood
The following items are "not useful" to help you build a garden shed:
Banana
Kitchen knife
Spoon
Ice
Magnifying glass
|
|
null | false
| null |
Identify which car manufacturer is German or American: Artega, Brammo, Vaydor
|
Brammo and Vaydor are American, Artega is German
|
|
null | false
| 69
|
A major proportion of a text summary includes important entities found in the original text. These entities build up the topic of the summary. Moreover, they hold commonsense information once they are linked to a knowledge base. Based on these observations, this paper investigates the usage of linked entities to guide the decoder of a neural text summarizer to generate concise and better summaries. To this end, we leverage an off-the-shelf entity linking system (ELS) to extract linked entities and propose Entity2Topic (E2T), a module easily attachable to a sequence-to-sequence model that transforms a list of entities into a vector representation of the topic of the summary. Currently available ELS's are still not sufficiently effective, possibly introducing unresolved ambiguities and irrelevant entities. We resolve the imperfections of the ELS by (a) encoding entities with selective disambiguation, and (b) pooling entity vectors using firm attention. By applying E2T to a simple sequence-to-sequence model with attention mechanism as base model, we see significant improvements of the performance in the Gigaword (sentence to title) and CNN (long document to multi-sentence highlights) summarization datasets by at least 2 ROUGE points.
Based on these observations, this paper investigates the usage of linked entities to guide the decoder of a neural text summarizer to generate concise and better summaries.
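As a rough sketch of pooling linked-entity embeddings into a single topic vector, the module below uses generic soft attention; it is a stand-in for, not a reproduction of, the paper's firm-attention pooling.
```python
# Attention-pool a list of entity embeddings into one topic vector.
import torch
import torch.nn as nn

class EntityPool(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, entity_vecs):                  # (n_entities, dim)
        weights = torch.softmax(self.score(entity_vecs), dim=0)
        return (weights * entity_vecs).sum(dim=0)    # topic vector, (dim,)
```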
|
What do the authors investigate in the paper?
|
The usage of linked entities to guide the decoder of a neural text summarizer to generate concise and better summaries.
|
null | false
| null |
What is intermittent fasting and can it help?
|
Intermittent fasting is the idea of cycling between periods of fasting and eating, usually on a daily schedule. For example, many people fast for 16 hours in a day, then give themselves 8 hours to eat before starting the cycle again. Twelve hours or more of fasting are needed, since 12 hours is roughly the amount of time it takes your body to switch from using glucose for energy to using fat for energy.
The main benefit of intermittent fasting is that it can help you lose weight, although many also believe that it can boost your immune system and help you live a longer life.
|
|
null | false
| null |
The Old English Bible translations are the partial translations of the Bible prepared in medieval England into the Old English language. The translations are from Latin texts, not the original languages.
Many of these translations were in fact Bible glosses, prepared to assist clerics whose grasp of Latin was imperfect and circulated in connection with the Vulgate Latin Bible that was standard in Western Christianity at the time. Old English was one of very few early medieval vernacular languages the Bible was translated into, and featured a number of incomplete Bible translations, some of which were meant to be circulated, like the Paris Psalter or Ælfric's Hexateuch.
|
Why were the original Old English Bible translations created?
|
The Old English Bible translations were created to assist clerics who didn't know much Latin.
|
null | false
| 146
|
Our first aim is to answer the following questions: 1) How much answer diversity arises for visual questions? and 2) Why do people disagree?
We establish unique answers by pre-processing each answer to eliminate cosmetic differences and then applying exact string matching to identify the number of different answers.
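A minimal sketch of that procedure: normalize away cosmetic differences, then count exact matches. The specific clean-ups (case, whitespace, punctuation) are assumptions about what the pre-processing removed.
```python
import string

def count_unique_answers(answers):
    # Normalize cosmetic differences, then exact-match the results.
    norm = [a.lower().strip().translate(str.maketrans("", "", string.punctuation))
            for a in answers]
    return len(set(norm))

print(count_unique_answers(["Red.", "red", "crimson"]))  # -> 2
```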
|
How do they establish unique answers?
|
By pre-processing each answer to eliminate cosmetic differences and then applying exact string matching to identify the number of different answers.
|
null | false
| null |
Richard Brandon Morris (July 24, 1904 – March 3, 1989) was an American historian best known for his pioneering work in colonial American legal history and the early history of American labor. In later years, he shifted his research interests to the constitutional, diplomatic, and political history of the American Revolution and the making of the United States Constitution.
Background
Richard Brandon Morris was born on July 24, 1904, in New York City. He attended high school at Townsend Harris Hall in New York City. In 1924, he received a BA degree from City College. In 1925, he received an MA from Columbia University, and in 1930 he received a PhD in history at the university with Evarts Boutell Greene as his dissertation advisor. Morris' dissertation, published by Columbia University Press as Studies in the History of American Law, with Special Reference to the Seventeenth and Eighteenth Centuries (1930), still defines the research agenda for historians working on early American law, though at the time it attracted bitter denunciations from law school practitioners of legal history, including Julius Goebel, Jr. and Karl Llewellyn, both then Columbia Law School faculty members.
|
Given a reference text about Richard Brandon Morris, tell me when and where he was born as well as the different colleges and universities he attended.
|
Richard Brandon Morris was born on July 24, 1904, in New York City. He attended City College and Columbia University.
|
null | false
| null |
Why do people like the game Diplomacy?
|
Diplomacy is a strategic board game that does not rely on dice or other game elements to introduce randomness into the results of gameplay. In the game Diplomacy, players negotiate, form alliances, and betray each other in an effort to occupy geographic provinces and control "supply centers" to win the game.
People like playing the game Diplomacy because it is unique compared to other strategic board games. In Diplomacy, all players submit their "orders" to move their pieces around the board into a box and those orders are adjudicated at the same time, making it difficult to gain an advantage over another player without communicating and planning out moves with other players. The only variable in the game is each player's ability to convince the other players to do what they want. Some people find this aspect of negotiation-based game play compelling, while others find the "anything goes" nature of these negotiations repelling. Therefore, Diplomacy is sometimes referred to as the "game that ruins friendships."
|
|
null | false
| null |
In Greek mythology, Nike (/ˈnaɪki/ (listen); Ancient Greek: Νίκη, lit. 'victory', ancient: [nǐː.kɛː], modern: [ˈni.ci]) was a goddess who personified victory in any field including art, music, war, and athletics. She is often portrayed in Greek art as Winged Victory in the motion of flight; however, she can also appear without wings as "Wingless Victory" when she is being portrayed as an attribute of another deity such as Athena. In Greek literature Nike is described as both an attribute and attendant to the gods Zeus and Athena. Nike gained this honored role beside Zeus during the Titanomachy where she was one of the first gods to offer her allegiance to Zeus. At Athens, Nike became a servant to Athena as well as an attribute of her due to the prominent status Athena held in her patron city. The fusion of the two goddesses at Athens has contributed to the ambiguity surrounding Nike's origins. It is unclear whether she originated from a character trait of the Greek goddess Athena or has always existed as an independent deity. Her origin story in Greek mythology is also slightly ambiguous, with the Theogony claiming Nike to be the daughter of Styx and Pallas while the Homeric Hymns describe Ares as being Nike's father. Her Roman equivalent was Victoria.
|
Who was Nike in Greek Mythology?
|
Nike is the goddess of victory in Greek mythology, and is often portrayed in Greek art as Winged Victory in the motion of flight.
|
null | false
| null |
Give me a bulleted list of the 5 most recent Pakistani Prime Ministers
|
- Mian Muhammad Shehbaz Sharif
- Mr. Imran Khan
- Justice (Retd) Nasir-ul-Mulk (Caretaker)
- Mr. Shahid Khaqan Abbasi
- Mr. Muhammad Nawaz Sharif
|
|
null | false
| null |
What should you do if a black bear approaches you while you are hiking?
|
If a black bear approaches but does not see you, back away slowly. If the bear does see you, stand your ground and do not run away. Wave your arms and talk to the bear in a calm voice.
|
|
null | false
| null |
Why is my baby crying right now?
|
The first thing to check is if your baby is hungry. It’s by far the most common reason they cry. You can tell if they’re hungry by lightly touching their cheek and seeing if they have a “rooting” reflex. This is where they open their mouth and turn towards your touch, as if they were looking for the source of food.
If they’re not hungry, they’re possibly tired. Babies can become over tired and cry and fuss, even when you are just trying to get them to sleep. If this is the case you’ll need to calm them down first, and then you can get them to sleep.
Finally sometimes babies just cry for no reason. Isn’t parenting fun?
|
|
null | false
| 478
|
Previous time series contrastive learning methods generate augmentations with either rules of thumb guided by prefabricated human priors or tedious trial and error, which are designed for specific datasets and learning tasks. In this part, we discuss how to adaptively select the optimal augmentations with a meta-learner network based on the proposed information-aware criteria. We can regard its choice of optimal augmentation as a kind of prior selection. We first choose a set of candidate transformations $T$, such as jittering and time warping. Each candidate transformation $t_i \in T$ is associated with an importance weight $p_i \in (0, 1)$, inferring the probability of selecting transformation $t_i$. For an instance $x$, the augmented instance $v_i$ through transformation $t_i$ can be computed by:
Considering multiple transformations, we pad all $v_i$ to be of the same length. Then, the adaptive augmented instance can be achieved by combining the candidate ones, $v = \frac{1}{|T|} \sum_i v_i$. To enable efficient optimization with gradient-based methods, we approximate the discrete Bernoulli processes with binary concrete distributions. Specifically, we approximate $a_i$ in Eq. () with $\epsilon \sim \mathrm{Uniform}(0, 1)$ as $a_i = \sigma \left( \left( \log p_i - \log (1 - p_i) + \log \epsilon - \log (1 - \epsilon) \right) / \tau \right)$,
where $\sigma(\cdot)$ is the sigmoid function and $\tau$ is the temperature controlling the approximation. The rationale for this approximation is given in Appendix A. Moreover, with temperature $\tau > 0$, the gradient $\partial v / \partial p_i$ is well-defined. Therefore, our meta-network is end-to-end differentiable. The detailed algorithm is shown in Appendix B.
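As a sanity check of the relaxation, here is a small numpy sketch of a binary-concrete sample for one augmentation weight; it implements the standard binary-concrete form as reconstructed above, not necessarily the paper's exact equation.
```python
# Binary-concrete (relaxed Bernoulli) sample for an augmentation weight p_i.
import numpy as np

def binary_concrete(p, tau, rng=np.random.default_rng()):
    eps = rng.uniform(1e-6, 1 - 1e-6)               # epsilon ~ Uniform(0, 1)
    logit = np.log(p / (1 - p)) + np.log(eps / (1 - eps))
    return 1.0 / (1.0 + np.exp(-logit / tau))       # sigmoid with temperature

# As tau -> 0 the sample approaches a hard Bernoulli(p) draw.
print(binary_concrete(p=0.8, tau=0.5))
```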
Considering multiple transformations, we pad all $v_i$ to be of the same length. Then, the adaptive augmented instance can be achieved by combining candidate ones.****We compare performances of variants that each applies a single transformation to generate augmented instances in Table 5. From the table, we know that augmentation with subsequence benefits the most for the Electricity dataset. We visualize the weight updating process of InfoTS in Figure 8, with each line representing the normalized importance score of the corresponding transformation. The weight for subsequence increases with the epoch, showing that InfoTS tends to adopt subsequence as the optimal transformation. Consistency between accuracy performance and the weight updating process demonstrates the effectiveness of InfoTS in adaptively selecting feasible transformations. Besides, as shown in Table 5, InfoTS outperforms the variant that uses subsequence only. This comparison shows that the meta-network learns to consider combinations, which is better than any (single) candidate augmentation.
|
In fact, from Fig. 7, does it seem that the model just learns to select one best augmentation, rather than a best combination of all of them?
|
Figure 7 shows the weight updating process of the meta-network on the Electricity dataset. In the initial version, we didn't normalize weights. We have updated the figure with normalized weights in the revised version. The most important conclusion from the figure is that the optimal augmentation selected by our meta-network is consistent with the one selected with testing accuracy performance. As explained in Section 2.3.2, each candidate augmentation is associated with a weight, and the *"adaptive augmented instance can be achieved by combining candidate ones" (Page 6)*. In other words, theoretically, the meta-network considers the combination instead of just selecting the best one. In addition, we have empirical observations in Table 5: InfoTS outperforms the variant that uses subsequence only (the last column). This comparison empirically shows that the meta-network learns to consider the combinations, which is better than any (single) candidate augmentation.
|
null | false
| null |
An aurora (pl: auroras or aurorae), also commonly known as the polar lights, is a natural light display in Earth's sky, predominantly seen in high-latitude regions (around the Arctic and Antarctic). Auroras display dynamic patterns of brilliant lights that appear as curtains, rays, spirals, or dynamic flickers covering the entire sky.
Auroras are the result of disturbances in the magnetosphere caused by the solar wind. Major disturbances result from enhancements in the speed of the solar wind from coronal holes and coronal mass ejections. These disturbances alter the trajectories of charged particles in the magnetospheric plasma. These particles, mainly electrons and protons, precipitate into the upper atmosphere (thermosphere/exosphere). The resulting ionization and excitation of atmospheric constituents emit light of varying colour and complexity. The form of the aurora, occurring within bands around both polar regions, is also dependent on the amount of acceleration imparted to the precipitating particles.
In northern latitudes, the effect is known as the aurora borealis or the northern lights. The former term was coined by Galileo in 1619, from the Roman goddess of the dawn and the Greek name for the north wind. The southern counterpart, the aurora australis or the southern lights, has features almost identical to the aurora borealis and changes simultaneously with changes in the northern auroral zone. The aurora australis is visible from high southern latitudes in Antarctica, Chile, Argentina, South Africa, New Zealand and Australia. The aurora borealis is visible from areas around the Arctic such as Alaska, the Canadian Territories, Iceland, Greenland, Norway, Sweden, Finland and Siberia. On rare occasions the aurora borealis can be seen as far south as the Mediterranean and the southern states of the US.
|
From which countries is the aurora australis visible?
|
The aurora australis is visible from high southern latitudes in Antarctica, Chile, Argentina, South Africa, New Zealand and Australia.
|
null | false
| null |
Anime differs from other forms of animation by its art styles, methods of animation, its production, and its process. Visually, anime works exhibit a wide variety of art styles, differing between creators, artists, and studios. While no single art style predominates anime as a whole, they do share some similar attributes in terms of animation technique and character design.
Anime is fundamentally characterized by the use of limited animation, flat expression, the suspension of time, its thematic range, the presence of historical figures, its complex narrative lines and, above all, a distinctive drawing style, with characters marked by large, oval eyes, well-defined lines, bright colors, and reduced movement of the lips.
The cinematic effects of anime differentiate it from the stage-play style found in American animation. Anime is shot cinematically, as if by camera, including panning, zooming, distance and angle shots, up to more complex dynamic shots that would be difficult to produce in reality. In anime, the animation is produced before the voice acting, contrary to American animation, where the voice acting is done first.
|
Given this passage from wikipedia about the attributes of Japanese Anime, summarize the key differences between Japanese anime and other forms of animation
|
Anime separates itself from other forms of animation through its depiction of characters as well as its portrayal of cinematic effects. Characters in anime are often drawn with well-defined features, highlighted by bold lines and bright colors, along with large oval eyes. Cinematic effects are often rendered through dynamic shots, including panning, zooming, and angle shots, as if the scene were shot on a camera.
|
1908.07816
| false
| null |
We compared our multi-turn emotionally engaging dialog model (denoted as MEED) with two baselines—the vanilla sequence-to-sequence model (denoted as S2S) and HRAN. We chose S2S and HRAN as baselines because we would like to evaluate our model's capability to keep track of the multi-turn context and to produce emotionally more appropriate responses, respectively. In order to adapt S2S to the multi-turn setting, we concatenate all the history utterances in the context into one.
We compared our multi-turn emotionally engaging dialog model (denoted as MEED) with two baselines—the vanilla sequence-to-sequence model (denoted as S2S) and HRAN.
|
What two baseline models are used?
|
The answers are shown as follows:
* sequence-to-sequence model (denoted as S2S)
* HRAN
|
2001.08051
| false
| null |
FLOAT SELECTED: Table 3: Spoken data collected during different evaluation campaigns. Column “#Q” indicates the total number of different (written) questions presented to the pupils.
The table reports some statistics extracted from the acquired spoken data. Speech was recorded in classrooms, whose equipment varied from school to school. In general, around 20 students took the test together, at the same time and in the same classroom, so it is quite common that the speech of classmates or teachers overlaps with the speech of the student speaking into her/his microphone. The type of microphone also depends on the equipment of the school. On average, the audio signal quality is fairly good, while the main problem is a high percentage of extraneous speech. This is due to the fact that the organisers decided to use a fixed duration - which depends on the question - for recording spoken utterances, so that all the recordings for a given question have the same length. However, while it is rare that a speaker does not have enough time to answer, it is quite common that, especially after the end of the utterance, some other speech (e.g. comments, jokes with classmates, indications from the teachers, etc.) is captured. In addition, background noise is often present due to several sources (doors, steps, keyboard typing, background voices, street noise if the windows are open, etc.). Finally, it has to be pointed out that many answers are whispered and difficult to understand.
FLOAT SELECTED: Table 3: Spoken data collected during different evaluation campaigns. Column “#Q” indicates the total number of different (written) questions presented to the pupils.
The table reports some statistics extracted from the acquired spoken data.
|
How many utterances are in the corpus?
|
The total number of utterances available is 70607 (37344 ENG + 33263 GER).
|
null | false
| null |
How many hands do humans have?
|
Humans without any genetic defects or major limb injuries have two hands. However, a very small number of people have zero, one, or three hands.
|
|
null | false
| null |
Assume the email format is {first_initial}{last_name}@somecompany.com
What would the email addresses be for the following people?
Jon Doe
Richard Smith
Tom Jenkins
Nick Parsons
|
Following the {first_initial}{last_name}@somecompany.com format, the emails for those people would be (see the sketch after this list for a scripted version):
Jon Doe: jdoe@somecompany.com
Richard Smith: rsmith@somecompany.com
Tom Jenkins: tjenkins@somecompany.com
Nick Parsons: nparsons@somecompany.com
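
A small sketch that mechanizes the same rule; the function name and the hard-coded domain are assumptions taken directly from the stated format:

```python
def email_for(full_name: str, domain: str = "somecompany.com") -> str:
    """Build {first_initial}{last_name}@{domain} from a 'First Last' name."""
    first, last = full_name.strip().split(maxsplit=1)
    # Lowercase everything and drop internal spaces from multi-word last names.
    return f"{first[0].lower()}{last.lower().replace(' ', '')}@{domain}"

for name in ["Jon Doe", "Richard Smith", "Tom Jenkins", "Nick Parsons"]:
    print(email_for(name))
# jdoe@somecompany.com, rsmith@somecompany.com,
# tjenkins@somecompany.com, nparsons@somecompany.com
```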
|
|
null | false
| 183
|
If the translation of a pronoun requires context beyond the current sentence (see the example in Figure FIGREF1 ), a natural extension of sentence-level NMT models is to condition the model prediction on this necessary context. In the following, we describe a number of existing approaches to making models “aware” of additional context.
The simplest possible extension is to translate units larger than sentences. BIBREF5 concatenate each sentence with the sentence that precedes it, for the source side of the corpus or both sides. All of their models are standard sequence-to-sequence models built with recurrent neural networks (RNNs), since the method does not require any architectural change. BIBREF11 use the same concatenation technique with a Transformer architecture BIBREF2 , and experiment with wider context.
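As a rough illustration of this concatenation approach, here is a sketch of the preprocessing step; the separator token, window size, and function name are assumptions, and the cited works differ in such details:

```python
def concat_context(sentences, window=1, sep="<CONCAT>"):
    """Prefix each source sentence with its `window` preceding sentences,
    joined by a separator token, so a standard sequence-to-sequence model
    can be trained on the extended inputs without architectural changes."""
    extended = []
    for i, sent in enumerate(sentences):
        context = sentences[max(0, i - window):i]
        extended.append(f" {sep} ".join(context + [sent]))
    return extended

# e.g. concat_context(["I saw it.", "It was red."])
# -> ["I saw it.", "I saw it. <CONCAT> It was red."]
```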
A number of works do propose changes to the NMT architecture. A common technique is to extend a standard encoder-decoder model by additional encoders for the context sentence(s), with a modified attention mechanism BIBREF6 , BIBREF9 , BIBREF8 . One aspect that differs between these works is the architecture of the encoder and attention. While BIBREF6 , BIBREF9 extend an RNN encoder-decoder with a second encoder that the decoder attends to, BIBREF8 extend the Transformer architecture with an encoder that is attended to by the main encoder. BIBREF8 also introduce parameter sharing between the main encoder and the context encoder, but do not empirically demonstrate its importance.
While the number of encoded sentences in the previous work is fixed, BIBREF7 , BIBREF10 explore the integration of variable-size context through a hierarchical architecture, where a first-level RNN reads in words to produce sentence vectors, which are then fed into a second-level RNN to produce a document summary.
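A bare-bones sketch of that hierarchical idea, assuming a GRU at both levels; layer sizes, cell type, and names are illustrative and do not reproduce the cited architectures:

```python
import torch
import torch.nn as nn

class HierarchicalContextEncoder(nn.Module):
    """First-level RNN turns each context sentence into a vector; a
    second-level RNN summarizes the sequence of sentence vectors into a
    single context summary, regardless of how many sentences are given."""
    def __init__(self, emb_dim=256, hid_dim=512):
        super().__init__()
        self.word_rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.sent_rnn = nn.GRU(hid_dim, hid_dim, batch_first=True)

    def forward(self, sentences):
        # sentences: list of (1, num_words, emb_dim) embedded context sentences
        sent_vecs = [self.word_rnn(s)[1][-1] for s in sentences]  # each (1, hid_dim)
        sent_seq = torch.stack(sent_vecs, dim=1)                  # (1, num_sents, hid_dim)
        _, summary = self.sent_rnn(sent_seq)
        return summary[-1]                                        # (1, hid_dim)
```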
Apart from differences in the architectures, related work varies in whether it considers source context, target context, or both (see Table TABREF9 for an overview of language arcs and context types). Some work considers only source context, but for pronoun translation, target-side context is intuitively important for disambiguation, especially if the antecedent itself is ambiguous. In our evaluation, we therefore emphasize models that take into account both source and target context.
Our experiments are based on models from BIBREF9 , who have released their source code. We extend their models with parameter sharing, which was shown to be beneficial by BIBREF8 . Additionally, we consider a concatenative baseline, similar to BIBREF5 , and Transformer-based models BIBREF8 .
This section describes several context-aware NMT models that we use in our experiments. They fall into two major categories: models based on RNNs and models based on the Transformer architecture BIBREF2 . We experiment with additional context on the source side and target side.
This section describes several context-aware NMT models that we use in our experiments. They fall into two major categories: models based on RNNs and models based on the Transformer architecture (Vaswani et al., 2017). We experiment with additional context on the source side and target side.
|
What are the two major categories of context-aware NMT models that we use in the experiments?
|
They fall into two major categories: models based on RNNs and models based on the Transformer architecture.
|