| paper_id | yes_no | paper_index | evidence | question | answer |
|---|---|---|---|---|---|
null | false
| null |
What are the best rock bands that were part of the British invasion?
|
While tastes may differ, the top answers are generally the Rolling Stones, The Beatles and The Who. There are others on many lists, for example The Kinks.
|
|
null | false
| null |
Classify each of the following as either a television show or a movie: The Wire, The Sopranos, Game of Thrones, Mad Men, Harry Potter, Westworld, Iron Man.
|
The Wire is a television show that aired on HBO.
The Sopranos is a television show that aired on HBO.
Game of Thrones is a television show that aired on HBO.
Mad Men is a television show that aired on AMC.
Harry Potter is a movie that was released by Warner Brothers Studios.
Westworld is a television show that aired on HBO.
Iron Man is a movie that was released by Marvel Studios.
|
|
null | false
| 80
|
Rouge has been the most widely used family of metrics in summarization evaluation. In the following, we briefly describe the different variants of Rouge:
Rouge-L, Rouge-W, Rouge-S and Rouge-SU were later extended to consider both the recall and precision. In calculating Rouge, stopword removal or stemming can also be considered, resulting in more variants.
In the summarization literature, despite the large number of variants of Rouge, only one or a few of these variants are typically chosen (often arbitrarily) to evaluate the quality of summarization approaches. When Rouge was proposed, the original variants were only recall-oriented, hence the reported correlation results BIBREF1. The later extension of the Rouge family with precision was only reflected in later versions of the Rouge toolkit, and no additional evaluation of its effectiveness was reported. Nevertheless, later published work in summarization adopted this toolkit for its ready implementation and relatively efficient performance.
The original Rouge metrics show high correlations with human judgments of summary quality on the DUC 2001-2003 benchmarks. However, these benchmarks consist of newswire data and are intrinsically very different from other summarization tasks such as summarization of scientific papers. We argue that Rouge is not the best metric for all summarization tasks, and we propose an alternative metric for evaluating scientific summarization. The proposed metric shows much higher and more consistent correlations with manual judgments than the well-established Rouge.
ROUGE has been the most widely used family of metrics in summarization evaluation. In the following, we briefly describe the different variants of ROUGE:
– ROUGE-N: ROUGE-N was originally a recall-oriented metric that considered N-gram recall between a system-generated summary and the corresponding gold human summaries. In later versions, in addition to the recall, precision was also considered in ROUGE-N, which is the precision of N-grams in the system-generated summary with respect to the gold human summary. To combine both precision and recall, F1 scores are often reported. Common values of N range from 1 to 4.
– ROUGE-L: This variant of ROUGE compares the system-generated summary and the human-generated summary based on the Longest Common Subsequence (LCS) between them. The premise is that a longer LCS between the system and human summaries shows more similarity and therefore higher quality of the system summary.
– ROUGE-W: One problem with ROUGE-L is that all LCSs of the same length are rewarded equally. An LCS can correspond either to a consecutive set of words or to a long sequence with many gaps. While ROUGE-L treats all sequence matches equally, it makes sense that sequences with many gaps receive lower scores than consecutive matches. ROUGE-W adds a weighting function that rewards consecutive matches more than non-consecutive ones.
– ROUGE-S: ROUGE-S computes skip-bigram co-occurrence statistics between the two summaries. It is similar to ROUGE-2 except that it allows gaps between the bigrams by skipping middle tokens.
– ROUGE-SU: ROUGE-S does not give any credit to a system-generated sentence if the sentence does not have any word pair co-occurring in the reference sentence. To solve this potential problem, ROUGE-SU was proposed, an extension of ROUGE-S that also considers unigram matches between the two summaries.
ROUGE-L, ROUGE-W, ROUGE-S and ROUGE-SU were later extended to consider both recall and precision.
In calculating ROUGE, stopword removal or stemming can also be considered, resulting in more variants. In the summarization literature, despite the large number of variants of ROUGE, only one or very few of these variants are often chosen (arbitrarily) for evaluation of the quality of the summarization approaches. When ROUGE was proposed, the original variants were only recall-oriented and hence the reported correlation results (Lin, 2004).
|
Are there any variants of ROUGE?
|
Yes, for example ROUGE-N, ROUGE-L, ROUGE-W, ROUGE-S, ROUGE-2, and ROUGE-SU.
|
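The LCS-based scoring behind ROUGE-L described in the evidence above can be sketched in a few lines. This is a minimal illustration, not the official ROUGE toolkit: it assumes plain whitespace tokenization, no stemming or stopword removal, and the function names are my own.

```python
def lcs_len(a, b):
    """Length of the longest common subsequence, by classic dynamic programming."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def rouge_l(system, reference):
    """ROUGE-L precision, recall and F1 over whitespace tokens."""
    sys_toks, ref_toks = system.split(), reference.split()
    lcs = lcs_len(sys_toks, ref_toks)
    precision = lcs / len(sys_toks) if sys_toks else 0.0
    recall = lcs / len(ref_toks) if ref_toks else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1

# e.g. rouge_l("the cat sat on the mat", "the cat is on the mat")
# shares the 5-token subsequence "the cat on the mat"
```

The extension of the recall-only original to precision and F1, as noted in the evidence, is exactly the `precision`/`f1` computation above.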
null | false
| 43
|
We present GluonCV and GluonNLP, the deep learning toolkits for computer vision and natural language processing based on Apache MXNet (incubating). These toolkits provide state-of-the-art pre-trained models, training scripts, and training logs, to facilitate rapid prototyping and promote reproducible research. We also provide modular APIs with flexible building blocks to enable efficient customization. Leveraging the MXNet ecosystem, the deep learning models in GluonCV and GluonNLP can be deployed onto a variety of platforms with different programming languages. Benefiting from open source under the Apache 2.0 license, GluonCV and GluonNLP have attracted 100 contributors worldwide on GitHub. Models of GluonCV and GluonNLP have been downloaded more than 1.6 million times in fewer than 10 months.
We also provide modular APIs with flexible building blocks to enable efficient customization
|
Is the customization function available in the toolkit?
|
Yes.
|
null | false
| null |
Which of these animals is not a bear: brown bear, black bear, polar bear, koala bear, panda bear.
|
The answer is koala bear. Despite the name, the “koala bear” is actually a marsupial, not a bear.
|
|
null | false
| null |
What are five totally distinct, creative ways to describe the usefulness of an idle mind?
|
1. An idle mind is like a garden waiting to be planted with the seeds of creative ideas.
2. An idle mind is like a blank canvas waiting to be filled with fresh and original thoughts.
3. An idle mind is like a time machine that can transport us to a different world of possibilities.
4. An idle mind is like a treasure chest full of hidden riches waiting to be unearthed.
5. An idle mind is like a garden hose that can irrigate us with ideas and solutions.
|
|
null | false
| null |
Classify each of the numbers as prime or composite - 1, 2, 3, 10, 15
|
Modern mathematicians define a number as prime if it has exactly two distinct divisors: 1 and itself. Since 1 has only one divisor (1 itself), it is neither prime nor composite. By this definition, 2 and 3 are prime numbers. 10 and 15 are composite numbers, as each can be divided by four numbers: 10 can be divided by 1, 2, 5 and 10; 15 can be divided by 1, 3, 5 and 15.
|
|
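The prime/composite classification above can be checked mechanically. A small sketch (the function name is illustrative) using the standard definition that a prime has exactly two distinct divisors, which makes 1 neither prime nor composite:

```python
def classify(n):
    """Classify n as 'prime', 'composite', or 'neither' (for n < 2)."""
    if n < 2:
        # 1 has only one divisor, so it is neither prime nor composite
        return "neither"
    # trial division up to sqrt(n) is enough to find a nontrivial divisor
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return "composite"
    return "prime"

labels = {n: classify(n) for n in (1, 2, 3, 10, 15)}
```

Trial division only needs to go up to the square root, because any divisor above it pairs with one below it.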
null | false
| 109
|
In this work, we developed pre-trained sentiment, emotion and personality models for identifying sarcastic text using CNN, which are found to be very effective for sarcasm detection. In the future, we plan to evaluate the performance of the proposed method on a large corpus and other domain-dependent corpora. Future work will also focus on analyzing past tweets and activities of users in order to better understand their personality and profile and, hence, further improve the disambiguation between sarcastic and non-sarcastic text.
In this work, we developed pre-trained sentiment, emotion and personality models for identifying sarcastic text using CNN, which are found to be very effective for sarcasm detection.
|
Whether features extracted using the pre-trained sentiment, emotion and personality models improve sarcasm detection performance?
|
Yes.
|
null | false
| null |
What is ADSL?
|
ADSL (Asymmetric Digital Subscriber Line) is a high-bandwidth digital transmission technology that uses existing telephone lines and also allows voice transmissions over the same lines.
|
|
null | false
| null |
What are some foods to cook to celebrate St. Patrick's Day?
|
Most people in Ireland and elsewhere like to celebrate St. Patrick's Day with traditional (or regionally adapted) Irish foods. These can include things like: corned beef and cabbage, potatoes, Scotch eggs, shoulder bacon, soda bread, shepherd's pie, beef stew, corned beef hash, and colcannon.
|
|
null | false
| null |
Pink Floyd are an English rock band formed in London in 1965. Gaining an early following as one of the first British psychedelic groups, they were distinguished by their extended compositions, sonic experimentation, philosophical lyrics and elaborate live shows. They became a leading band of the progressive rock genre, cited by some as the greatest progressive rock band of all time.
Pink Floyd were founded in 1965 by Syd Barrett (guitar, lead vocals), Nick Mason (drums), Roger Waters (bass guitar, vocals), and Richard Wright (keyboards, vocals). Under Barrett's leadership, they released two charting singles and the successful debut album The Piper at the Gates of Dawn (1967). The guitarist and vocalist David Gilmour joined in January 1968; Barrett left in April 1968 due to deteriorating mental health. Waters became the primary lyricist and thematic leader, devising the concepts behind Pink Floyd's most successful albums, The Dark Side of the Moon (1973), Wish You Were Here (1975), Animals (1977) and The Wall (1979). The musical film based on The Wall, Pink Floyd – The Wall (1982), won two BAFTA Awards. Pink Floyd also composed several film scores.
Following personal tensions, Wright left Pink Floyd in 1981, followed by Waters in 1985. Gilmour and Mason continued as Pink Floyd, rejoined later by Wright. They produced the albums A Momentary Lapse of Reason (1987) and The Division Bell (1994), backed by major tours, before entering a long hiatus. In 2005, all but Barrett reunited for a performance at the global awareness event Live 8. Barrett died in 2006, and Wright in 2008. The last Pink Floyd studio album, The Endless River (2014), was based on unreleased material from the Division Bell recording sessions. In 2022, Gilmour and Mason reformed Pink Floyd to release the song "Hey, Hey, Rise Up!" in protest of the Russo-Ukrainian War.
By 2013, Pink Floyd had sold more than 250 million records worldwide, making them one of the best-selling music artists of all time. The Dark Side of the Moon and The Wall were inducted into the Grammy Hall of Fame, and these albums and Wish You Were Here are among the best-selling albums of all time. Four Pink Floyd albums topped the US Billboard 200, and five topped the UK Albums Chart. Pink Floyd's hit singles include "See Emily Play" (1967), "Money" (1973), "Another Brick in the Wall, Part 2" (1979), "Not Now John" (1983), "On the Turning Away" (1987) and "High Hopes" (1994). They were inducted into the US Rock and Roll Hall of Fame in 1996 and the UK Music Hall of Fame in 2005. In 2008, Pink Floyd were awarded the Polar Music Prize in Sweden for their contribution to modern music.
|
Given a reference text about the rock band Pink Floyd: What was the name of Pink Floyd's last album and when was it released?
|
The last album of Pink Floyd was called "The Endless River" and was released in 2014.
|
1910.08210
| false
| null |
FLOAT SELECTED: Table 1: Final win rate on simplest variant of RTFM. The models are trained on one set of dynamics (e.g. training set) and evaluated on another set of dynamics (e.g. evaluation set). “Train” and “Eval” show final win rates on training and eval environments.
FLOAT SELECTED: Table 1: Final win rate on simplest variant of RTFM. The models are trained on one set of dynamics (e.g. training set) and evaluated on another set of dynamics (e.g. evaluation set). “Train” and “Eval” show final win rates on training and eval environments.
|
How better is performance of proposed model compared to baselines?
|
The proposed model achieves a 66±22 win rate, compared with 13±1 for the CNN baseline and 32±3 for the FiLM baseline.
|
null | false
| null |
The symptoms of COVID-19 are variable depending on the type of variant contracted, ranging from mild symptoms to a potentially fatal illness. Common symptoms include coughing, fever, loss of smell (anosmia) and taste (ageusia), with less common ones including headaches, nasal congestion and runny nose, muscle pain, sore throat, diarrhea, eye irritation, and toes swelling or turning purple, and in moderate to severe cases, breathing difficulties. People with the COVID-19 infection may have different symptoms, and their symptoms may change over time. Three common clusters of symptoms have been identified: one respiratory symptom cluster with cough, sputum, shortness of breath, and fever; a musculoskeletal symptom cluster with muscle and joint pain, headache, and fatigue; and a cluster of digestive symptoms with abdominal pain, vomiting, and diarrhea. In people without prior ear, nose, or throat disorders, loss of taste combined with loss of smell is associated with COVID-19 and is reported in as many as 88% of symptomatic cases.
|
According to the text what are the most common symptoms of COVID-19?
|
coughing, fever, loss of smell (anosmia), loss of taste (ageusia)
|
1911.03562
| false
| null |
A. As of June 2019, AA had $\sim $50K entries, however, this includes some number of entries that are not truly research publications (for example, forewords, prefaces, table of contents, programs, schedules, indexes, calls for papers/participation, lists of reviewers, lists of tutorial abstracts, invited talks, appendices, session information, obituaries, book reviews, newsletters, lists of proceedings, lifetime achievement awards, erratum, and notes). We discard them for the analyses here. (Note: CL journal includes position papers like squibs, letter to editor, opinion, etc. We do not discard them.) We are then left with 44,896 articles. Figure FIGREF6 shows a graph of the number of papers published in each of the years from 1965 to 2018.
As of June 2019, AA had $\sim $50K entries, however, this includes some number of entries that are not truly research publications (for example, forewords, prefaces, table of contents, programs, schedules, indexes, calls for papers/participation, lists of reviewers, lists of tutorial abstracts, invited talks, appendices, session information, obituaries, book reviews, newsletters, lists of proceedings, lifetime achievement awards, erratum, and notes). We discard them for the analyses here. (Note: CL journal includes position papers like squibs, letter to editor, opinion, etc. We do not discard them.) We are then left with 44,896 articles.
|
How many papers are used in experiment?
|
The answers are shown as follows:
* 44,896 articles
|
null | false
| 192
|
To estimate the quality of topic models in a real task, we chose Islam informational portal "Golos Islama" (Islam Voice) (in Russian). This portal contains both news articles related to Islam and articles discussing Islam basics. We supposed that the thematic analysis of this specialized site can be significantly improved with domain-specific knowledge described in the thesaurus form. We extracted the site contents using Open Web Spider and obtained 26,839 pages.
To combine knowledge with a topic model, we used RuThes thesaurus together with the additional block of the Islam thesaurus. The Islam thesaurus contains more than 5 thousand Islam-related terms including single words and expressions.
For each combined model, we ran two experiments with 100 topics and with 200 topics. The generated topics were evaluated by two linguists, who had previously worked on the Islam thesaurus. The evaluation task was formulated as follows: the experts should read the top elements of the generated topics and try to formulate labels of these topics. The labels should be different for each topic in the set generated with a specific model. The experts should also assign scores to the topics' labels:
Then we can sum up all the scores for each model under consideration and compare the total scores. Thus, the maximum values of the topic score are 200 for a 100-topic model and 400 for a 200-topic model. In this experiment we do not measure inter-annotator agreement for each topic, but try to capture the experts' general impression.
Due to the complicated character of the Islam portal contents for automatic extraction (numerous words and names difficult for Russian morphological analyzers), we did not use automatic extraction of multiword expressions and exploited only phrases described in RuThes or in the Islam Thesaurus. We added thesaurus phrases in two ways: most frequent 1000 phrases (as in BIBREF14 , BIBREF16 ) and phrases with frequency more than 10 (More10phrases): the number of such phrases is 9351.
The results of the evaluation are shown in Table 5. The table contains the overall expert scores for a topic model (Score), kernel uniqueness as in the previous section (KernU), and perplexity (Prpl). Also, for each model's kernel, we calculated the average number of known relations between topics' elements: thesaurus relations (synonyms and direct relations between concepts) and component-based relations between phrases (Relc).
It can be seen that if we add phrases without accounting for component similarity (Runs 2, 3), the quality of topics decreases: the more phrases are added, the more the quality degrades. The human scores also confirm this fact. But if the similarity between phrase components is considered, then the quality of topics significantly improves and becomes better than for the unigram models (Runs 4, 5). All measures are better. Relational coherence between kernel elements also grows. The number of added phrases is not very essential.
Adding unary synonyms decreases the quality of the models (Run 6) according to human scores. But all other measures behave differently: kernel uniqueness is high, perplexity decreases, relational coherence grows. The problem with this model is that non-topical, general words are grouped together and reinforce one another, but do not look related to any topic. Adding all thesaurus relations is not very beneficial (Runs 7, 8). If we consider all relations except hyponyms, the human scores are better for the corresponding runs (Runs 9, 10). Relational coherence in topics' kernels reaches very high values: a quarter of all elements have some relation to each other, but this does not help to improve topics. The explanation is the same: general words can be grouped together.
At last, we removed General Lexicon concepts from the RuThes data, which are top-level, non-thematic concepts that can be met in arbitrary domains BIBREF19 and considered all-relations and without-hyponyms variants (Runs 11, 12). These last variants achieved maximal human scores because they add thematic knowledge and avoid general knowledge, which can distort topics. Kernel uniqueness is also maximal.
Table 6 shows similar topics obtained with the unigram, phrase-enriched (Run 5) and the thesaurus-enriched topic model (Run 12). The Run-5 model adds thesaurus phrases with frequency more than 10 and accounts for the component similarity between phrases. The Run-12 model accounts both component relations and hypernym thesaurus relations. All topics are of high quality, quite understandable. The experts evaluated them with the same high scores.
Phrase-enriched and thesaurus-enriched topics convey the content using both single words and phrases. It can be seen that phrase-enriched topics contain more phrases. Sometimes the phrases can create not very convincing relations such as Russian church - Russian language. It is explainable but does not seem much topical in this case.
The thesaurus topics seem to convey the contents in the most concentrated way. In the Syrian topic general word country is absent; instead of UN (United Nations), it contains word rebel, which is closer to the Syrian situation. In the Orthodox church topic, the unigram variant contains extra word year, relations of words Moscow and Kirill to other words in the topic can be inferred only from the encyclopedic knowledge.
Also, for each model's kernel, we calculated the average number of known relations between topics' elements: thesaurus relations (synonyms and direct relations between concepts) and component-based relations between phrases (Relc). It can be seen that if we add phrases without accounting for component similarity (Runs 2, 3), the quality of topics decreases: the more phrases are added, the more the quality degrades.
|
What conclusion do they get after calculating the average number of known relations between topics’ elements?
|
If phrases are added without accounting for component similarity (Runs 2, 3), the quality of topics decreases: the more phrases are added, the more the quality degrades.
|
null | false
| null |
How many rings on the Olympic flag
|
Five
|
|
null | false
| null |
Why is the Alabama's football team called the Crimson Tide?
|
The football team was originally called the Thin Red Line, and some knew it as the Crimson White. In 1907, the University of Alabama played their rival Auburn in Birmingham, AL, where iron-rich soil turned into red mud and stained the Alabama football players' white jerseys. Sports editor Hugh Roberts wrote that the team played like a "Crimson Tide," and the name has remained ever since.
|
|
null | false
| null |
What is Python?
|
Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation via the off-side rule.
Python is dynamically typed and garbage-collected. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented and functional programming. It is often described as a "batteries included" language due to its comprehensive standard library.
|
|
null | false
| null |
Who are ''The Lumières" ?
|
The Lumières (literally in English: The Lights) was a cultural, philosophical, literary and intellectual movement beginning in the second half of the 17th century, originating in western Europe and spreading throughout the rest of Europe. It included philosophers such as Baruch Spinoza, David Hume, John Locke, Edward Gibbon, Voltaire, Jean-Jacques Rousseau, Denis Diderot, Pierre Bayle and Isaac Newton. This movement was influenced by the scientific revolution in southern Europe arising directly from the Italian Renaissance, with figures like Galileo Galilei. Over time it came to mean the Siècle des Lumières, in English the Age of Enlightenment.
Members of the movement saw themselves as a progressive élite, and battled against religious and political persecution, fighting against what they saw as the irrationality, arbitrariness, obscurantism and superstition of the previous centuries. They redefined the study of knowledge to fit the ethics and aesthetics of their time. Their works had great influence at the end of the 18th century, in the American Declaration of Independence and the French Revolution.
This intellectual and cultural renewal by the Lumières movement was, in its strictest sense, limited to Europe. These ideas were well understood in Europe, but beyond France the idea of "enlightenment" had generally meant a light from outside, whereas in France it meant a light coming from within oneself.
In the most general terms, in science and philosophy, the Enlightenment aimed for the triumph of reason over faith and belief; in politics and economics, the triumph of the bourgeois over nobility and clergy.
|
|
null | false
| null |
Identify which animal species is alive or extinct: Iguanodon, Irrawaddy Dolphin
|
Irrawaddy Dolphin is alive, Iguanodon is extinct.
|
|
null | false
| 100
|
Apart from term frequency methods, question classification methods developed on one dataset generally do not exhibit strong transfer performance to other datasets BIBREF3 . While BERT-QC achieves large gains over existing methods on the ARC dataset, here we demonstrate that BERT-QC also matches state-of-the-art performance on TREC BIBREF6 , while surpassing state-of-the-art performance on the GARD corpus of consumer health questions BIBREF3 and MLBioMedLAT corpus of biomedical questions BIBREF4 . As such, BERT-QC is the first model to achieve strong performance across more than one question classification dataset.
TREC question classification is divided into separate coarse and fine-grained tasks centered around inferring the expected answer types of short open-domain factoid questions. TREC-6 includes 6 coarse question classes (abbreviation, entity, description, human, location, numeric), while TREC-50 expands these into 50 more fine-grained types. TREC question classification methods can be divided into those that learn the question classification task, and those that make use of either hand-crafted or semi-automated syntactic or semantic extraction rules to infer question classes. To date, the best reported accuracy for learned methods is 98.0% by Xia et al. BIBREF8 for TREC-6, and 91.6% by Van-tu et al. BIBREF24 for TREC-50. Madabushi et al. BIBREF7 achieve the highest to-date performance on TREC-50 at 97.2%, using rules that leverage the strong syntactic regularities in the short TREC factoid questions.
We compare the performance of BERT-QC with recently reported performance on this dataset in Table TABREF11 . BERT-QC achieves state-of-the-art performance on fine-grained classification (TREC-50) for a learned model at 92.0% accuracy, and near state-of-the-art performance on coarse classification (TREC-6) at 96.2% accuracy.
Because of the challenges with collecting biomedical questions, the datasets and classification taxonomies tend to be small, and rule-based methods often achieve strong results BIBREF45 . Roberts et al. BIBREF3 created the largest biomedical question classification dataset to date, annotating 2,937 consumer health questions drawn from the Genetic and Rare Diseases (GARD) question database with 13 question types, such as anatomy, disease cause, diagnosis, disease management, and prognoses. Roberts et al. BIBREF3 found these questions largely resistant to learning-based methods developed for TREC questions. Their best model (CPT2), shown in Table TABREF17 , makes use of stemming and lists of semantically related words and cue phrases to achieve 80.4% accuracy. BERT-QC reaches 84.9% accuracy on this dataset, an increase of +4.5% over the best previous model. We also compare performance on the recently released MLBioMedLAT dataset BIBREF4 , a multi-label biomedical question classification dataset with 780 questions labeled using 88 classification types drawn from 133 Unified Medical Language System (UMLS) categories. Table TABREF18 shows BERT-QC exceeds their best model, focus-driven semantic features (FDSF), by +0.05 Micro-F1 and +3% accuracy.
While BERT-QC achieves large gains over existing methods on the ARC dataset, here we demonstrate that BERT-QC also matches state-of-the-art performance on TREC (Li and Roth, 2002), while surpassing state-of-the-art performance on the GARD corpus of consumer health questions (Roberts et al., 2014) and MLBioMedLAT corpus of biomedical questions (Wasim et al., 2019).
|
What conclusions can be drawn from the comparison between Bert-QC and ARC data sets?
|
While BERT-QC achieves large gains over existing methods on the ARC dataset, here they demonstrate that BERT-QC also matches state-of-the-art performance on TREC, while surpassing state-of-the-art performance on the GARD corpus of consumer health questions and MLBioMedLAT corpus of biomedical questions.
|
null | false
| null |
Keanu Charles Reeves was born in Beirut, Lebanon, on September 2, 1964, the son of Patricia (née Taylor), a costume designer and performer, and Samuel Nowlin Reeves Jr. His mother is English, originating from Essex. His American father is from Hawaii, and is of Native Hawaiian, Chinese, English, Irish, and Portuguese descent. His grandmother from his father's side is Chinese Hawaiian. His mother was working in Beirut when she met his father, who abandoned his wife and family when Reeves was three years old. Reeves last met his father on the Hawaiian island of Kauai when he was 13.
|
Where are Keanu Reeves and his parents from?
|
Keanu himself is from Lebanon, his mother is from the United Kingdom, and his father is from the United States.
|
null | false
| null |
Folk music is a music genre that includes traditional folk music and the contemporary genre that evolved from the former during the 20th-century folk revival. Some types of folk music may be called world music. Traditional folk music has been defined in several ways: as music transmitted orally, music with unknown composers, music that is played on traditional instruments, music about cultural or national identity, music that changes between generations (folk process), music associated with a people's folklore, or music performed by custom over a long period of time. It has been contrasted with commercial and classical styles. The term originated in the 19th century, but folk music extends beyond that.
|
Tell me some points about Folk music using the given paragraph as a reference
|
1. During the 20th century's folk revival, traditional folk music and the contemporary genre that resulted from it both fell under the umbrella term of folk music.
2. Some folk music subgenres may be referred to as world music. Traditional folk music has been described in a variety of ways, including as music that has been passed down orally, has unknown composers, is played on traditional instruments, is about cultural or national identity, changes between generations (folk process), is connected to a people's folklore, or is performed as part of long-standing custom.
3. It has been put in opposition to classical and commercial styles.
|
null | false
| null |
Tell me whether these athletes play golf, soccer, or football: Tiger Woods, Lionel Messi, Lavonte David, Russell Wilson, Derek Carr, Rafael Leao, Tyler Adams, Dustin Johnson, Jordan Spieth
|
Tiger Woods: Golf
Lionel Messi: Soccer
Lavonte David: Football
Russell Wilson: Football
Derek Carr: Football
Rafael Leao: Soccer
Tyler Adams: Soccer
Dustin Johnson: Golf
Jordan Spieth: Golf
|
|
null | false
| null |
What is the difference between an evergreen and deciduous tree?
|
Evergreen trees keep their leaves year around, whereas deciduous trees lose their leaves during the fall and winter and regrow them in the spring and summer
|
|
null | false
| null |
In 1993, Adam Sandler appeared in the film Coneheads with Farley, David Spade, Dan Aykroyd, Phil Hartman, and Jane Curtin. In 1994, he co-starred in Airheads with Brendan Fraser and Steve Buscemi.
Sandler starred in Billy Madison (1995) playing a grown man repeating grades 1–12 to earn back his father's respect and the right to inherit his father's multimillion-dollar hotel empire. The film was successful at the box office despite negative reviews. He followed this film with Bulletproof (1996), and the financially successful comedies Happy Gilmore (1996) and The Wedding Singer (1998). He was initially cast in the bachelor–party–themed comedy/thriller Very Bad Things (1998) but had to back out due to his involvement in The Waterboy (1998), one of his first hits.
|
Please extract the names of the 1990s films Adam Sandler was in separating them with commas.
|
The 1990s films Adam Sandler was in include: Coneheads, Airheads, Billy Madison, Bulletproof, Happy Gilmore, The Wedding Singer, The Waterboy.
|
null | false
| null |
Classify the cities based on whether they are located in India or Syria.
Kolkāta, Mumbai, Aleppo, Delhi, Homs, Damascus
|
India - Delhi, Mumbai, Kolkāta
Syria - Aleppo, Damascus, Homs
|
|
null | false
| 137
|
Pre-trained models BIBREF0, BIBREF1 have received much attention recently thanks to their impressive results in many downstream NLP tasks. Additionally, multilingual pre-trained models enable many NLP applications for other languages via zero-shot cross-lingual transfer. Zero-shot cross-lingual transfer has shown promising results for rapidly building applications for low-resource languages. BIBREF2 show the potential of multilingual-BERT BIBREF0 in zero-shot transfer for a large number of languages from different language families on five NLP tasks, namely, natural language inference, document classification, named entity recognition, part-of-speech tagging, and dependency parsing.
Although multilingual models are an important ingredient for enhancing language technology in many languages, recent research on improving pre-trained models puts much emphasis on English BIBREF3, BIBREF4, BIBREF5. The current state of affairs makes it difficult to translate advancements in pre-training from English to non-English languages. To our best knowledge, there are only three available multilingual pre-trained models to date: (1) the multilingual-BERT (mBERT) that supports 104 languages, (2) cross-lingual language model BIBREF6 that supports 100 languages, and (3) Language Agnostic SEntence Representations (LASER) BIBREF7 that supports 93 languages. Among the three models, LASER is based on a neural machine translation approach and strictly requires parallel data to train.
Do multilingual models always need to be trained from scratch? Can we transfer linguistic knowledge learned by English pre-trained models to other languages? In this work, we develop a technique to rapidly transfer an existing pre-trained model from English to other languages in an energy efficient way BIBREF8. As the first step, we focus on building a bilingual language model (LM) of English and a target language. Starting from a pre-trained English LM, we learn the target language specific parameters (i.e., word embeddings), while keeping the encoder layers of the pre-trained English LM fixed. We then fine-tune both English and target model to obtain the bilingual LM. We apply our approach to autoencoding language models with masked language model objective and show the advantage of the proposed approach in zero-shot transfer. Our main contributions in this work are:
We propose a fast adaptation method for obtaining a bilingual BERT$_{\textsc {base}}$ of English and a target language within a day using one Tesla V100 16GB GPU.
We evaluate our bilingual LMs for six languages on two zero-shot cross-lingual transfer tasks, namely natural language inference BIBREF9 and universal dependency parsing. We show that our models offer performance competitive with, or even better than, mBERT.
We illustrate that our bilingual LMs can serve as an excellent feature extractor in supervised dependency parsing task.
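The two-phase adaptation described above can be sketched as follows. This is a hedged illustration, not the authors' code: the parameter-group names and the dict-based stand-in for a PyTorch-style model are assumptions made for clarity. Phase 1 learns only the new target-language word embeddings while the pre-trained English encoder stays frozen; phase 2 fine-tunes everything jointly.

```python
def set_trainable(params, phase):
    """Flag which parameter groups receive gradients in each phase.

    `params` maps group names to dicts with a 'requires_grad' flag; the
    names used here are hypothetical, not the paper's identifiers.
    """
    for name, group in params.items():
        if phase == 1:
            # Learn only the new target-language embeddings.
            group["requires_grad"] = (name == "target_embeddings")
        else:
            # Phase 2: jointly fine-tune encoder and both embedding tables.
            group["requires_grad"] = True
    return params

params = {
    "target_embeddings": {"requires_grad": False},
    "english_embeddings": {"requires_grad": False},
    "encoder.layer_0": {"requires_grad": False},
    "encoder.layer_11": {"requires_grad": False},
}

# Phase 1: only the target-language embeddings are trainable.
phase1 = set_trainable(params, phase=1)
trainable = sorted(n for n, g in phase1.items() if g["requires_grad"])
```

In an actual PyTorch model the same effect would be achieved by toggling each parameter tensor's `requires_grad` attribute before building the optimizer.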
|
In which kind of task the authors' bilingual LMs can serve as an excellent feature extractor?
|
In the supervised dependency parsing task.
|
1610.00879
| true
| null |
FLOAT SELECTED: Figure 1: Word cloud for drunk tweets
|
Do they report results only on English data?
|
Yes.
|
1602.01595
| false
| null |
mcdonald:11 established that, when no treebank annotations are available in the target language, training on multiple source languages outperforms training on one (i.e., multi-source model transfer outperforms single-source model transfer). In this section, we evaluate the performance of our parser in this setup. We use two strong baseline multi-source model transfer parsers with no supervision in the target language:
Following guo:16, for each target language, we train the parser on six other languages in the Google universal dependency treebanks version 2.0 (de, en, es, fr, it, pt, sv, excluding whichever is the target language), and we use gold coarse POS tags. Our parser uses the same word embeddings and word clusters used in guo:16, and does not use any typology information.
|
How does the model work if no treebank is available?
|
The answers are shown as follows:
* train the parser on six other languages in the Google universal dependency treebanks version 2.0 (de, en, es, fr, it, pt, sv, excluding whichever is the target language), and we use gold coarse POS tags
|
null | false
| 219
|
We compared our full method (TextTiling with heuristic-max based on embeddings trained by virtual sentences) with several baselines:
Random. We randomly segmented conversation sessions. In this baseline, we were equipped with the prior probability of segmentation.
MMD. We applied the MinMax-Dotplotting (MMD) approach proposed by Ye et al. BIBREF24 . We ran the executable program provided by the authors.
TextTiling w/ tf INLINEFORM0 idf features. We implemented TextTiling ourselves according to BIBREF12 .
We tuned the hyperparameter INLINEFORM0 in Equation () on the validation set to make the number of segmentations close to that of the manual annotation, and reported precision, recall, and the F-score on the test set in Table TABREF18 . As seen, our approach significantly outperforms the baselines by a large margin in terms of both precision and recall. Besides, we can see that MMD obtains low performance, mainly because the approach cannot be easily adapted to other datasets, such as the short sentences of conversation utterances. In summary, we achieve an INLINEFORM1 -score higher than the baseline methods by more than 20%, showing the effectiveness of enhancing TextTiling with modern word embeddings.
We further conducted in-depth analysis of different strategies of training word-embeddings and matching heuristics in Table TABREF21 . For word embeddings, we trained them on the 3M-sentence dataset with three strategies: (1) virtual-sentence context proposed in our paper; (2) within-sentence context, where all words (except the current one) within a sentence (either a query or reply) are regarded as the context; (3) window-based context, which is the original form of BIBREF25 : the context is the words in a window (previous 2 words and future 2 words in the sentence). We observe that our virtual-sentence strategy consistently outperforms the other two in all three matching heuristics. The results suggest that combining a query and a reply does provide more information in learning dialogue-specific word embeddings.
Regarding matching heuristics, we find that in the second and third strategies of training word embeddings, the complicated heuristic-max method yields higher INLINEFORM0 -scores than simple sum pooling by 2–3%. However, for the virtual-sentence strategy, heuristic-max is slightly worse than the sum pooling. (The degradation is only 0.1% and not significant.) This is probably because both heuristic-max and virtual sentences emphasize the rich interaction between a query and its corresponding reply; combining them does not result in further gain.
We also notice that heuristic-avg is worse than other similarity measures. As this method is mathematically equivalent to the average of word-by-word similarity, it may have an undesirable blurring effect.
To sum up, our experiments show that both the proposed embedding learning approach and the similarity heuristic are effective for session segmentation. The embedding-enhanced TextTiling approach largely outperforms baselines.
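The matching heuristics compared above can be sketched with toy word embeddings. This is an illustrative reconstruction, not the paper's implementation: the function names, the 2-D vectors, and the exact form of heuristic-max (best match per word in A, averaged) are assumptions.

```python
import math

def cos(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def sum_pool_sim(sent_a, sent_b):
    """Cosine of the sum-pooled sentence vectors."""
    pool = lambda s: [sum(dims) for dims in zip(*s)]
    return cos(pool(sent_a), pool(sent_b))

def heuristic_max_sim(sent_a, sent_b):
    """For each word in A, take its best-matching word in B, then average."""
    return sum(max(cos(wa, wb) for wb in sent_b) for wa in sent_a) / len(sent_a)

def heuristic_avg_sim(sent_a, sent_b):
    """Average of all word-by-word cosines (the blurring-prone variant)."""
    sims = [cos(wa, wb) for wa in sent_a for wb in sent_b]
    return sum(sims) / len(sims)
```

Each sentence is a list of word vectors; note how heuristic-avg mixes every pair, which is the blurring effect mentioned above, while heuristic-max keeps only the strongest matches.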
We conducted an external experiment to show the effect of session segmentation in dialogue systems. We integrated the segmentation mechanism into a state-of-the-practice retrieval-based system and evaluated the results by manual annotation, similar to our previous work BIBREF27 , BIBREF31 , BIBREF32 .
Concretely, we compared our session segmentation with fixed-length context, used in BIBREF11 . That is to say, the competing method always regards two previous utterances as context. We hired three workers to annotate the results with three integer scores (0–2 points, indicating bad, borderline, and good replies, respectively.) We sampled 30 queries from the test set of 100 sessions. For each query, we retrieved 10 candidates and computed p@1 and nDCG scores BIBREF33 (averaged over three annotators). Provided with previous utterances as context, each worker had up to 1000 sentences to read during annotation.
Table TABREF26 presents the results of the dialogue system with session segmentation. As demonstrated, our method outperforms the simple fixed-context approach in terms of both metrics. We computed the inner-annotator agreement: std INLINEFORM0 0.309; 3-discrete-class Fleiss' kappa score INLINEFORM1 0.411, indicating moderate agreement BIBREF34 .
Case Study. We present a case study on our website: https://sites.google.com/site/sessionsegmentation/. From the case study, we see that the proposed approach is able to segment the dialogue session appropriately, so as to better utilize background information from a conversation session.
In this paper, we addressed the problem of session segmentation for open-domain dialogue systems. We proposed an embedding-enhanced TextTiling approach, where we trained embeddings with the novel notion of virtual sentences; we also proposed several heuristics for similarity measure. Experimental results show that both our embedding learning and similarity measuring are effective in session segmentation, and that with our approach, we can improve the performance of a retrieval-based dialogue system.
We thank anonymous reviewers for useful comments and Jingbo Zhu for sharing the MMD executable program. This paper is partially supported by the National Natural Science Foundation of China (NSFC Grant Nos. 61272343 and 61472006), the Doctoral Program of Higher Education of China (Grant No. 20130001110032), and the National Basic Research Program (973 Program No. 2014CB340405).
We compared our full method (TextTiling with heuristic-max based on embeddings trained by virtual sentences) with several baselines: • Random. We randomly segmented conversation sessions. In this baseline, we were equipped with the prior probability of segmentation. • MMD. We applied the MinMax-Dotplotting (MMD) approach proposed by Ye et al. [25]. We ran the executable program provided by the authors. • TextTiling. We implemented TextTiling ourselves according to [13].
|
What are their baselines?
|
Random, MMD, and TextTiling.
|
null | false
| null |
Name the nation that was hit badly by Typhoon Hagibis in the last 60 years?
|
Japan was hit badly by Typhoon Hagibis in the last 60 years
|
|
null | false
| null |
Classify each of the following as an "Indoor Activity" or an "Outdoor Activity": Hiking, Watching TV, Cooking, Surfing, Gardening
|
Hiking: Outdoor Activity
Watching TV: Indoor Activity
Cooking: Indoor Activity
Surfing: Outdoor Activity
Gardening: Outdoor Activity
|
|
null | false
| 30
|
Recently, neural machine translation (NMT) has gained popularity in the field of machine translation. The conventional encoder-decoder NMT proposed by Cho2014 uses two recurrent neural networks (RNN): one is an encoder, which encodes a source sequence into a fixed-length vector, and the other is a decoder, which decodes the vector into a target sequence. A newly proposed attention-based NMT by DzmitryBahdana2014 can predict output words using the weights of each hidden state of the encoder by the attention mechanism, improving the adequacy of translation.
Even with the success of attention-based models, a number of open questions remain in NMT. Tu2016 argued that two of the common problems are over-translation, where some words are repeatedly translated unnecessarily, and under-translation, where some words are mistakenly left untranslated. This is due to the fact that NMT cannot completely convert the information from the source sentence to the target sentence. Mi2016a and Feng2016 pointed out that NMT lacks the notion of the coverage vector used in phrase-based statistical machine translation (PBSMT), so unless otherwise specified, there is no way to prevent missing translations.
Another problem in NMT is an objective function. NMT is optimized by cross-entropy; therefore, it does not directly maximize the translation accuracy. Shen2016 pointed out that optimization by cross-entropy is not appropriate and proposed a method of optimization based on a translation accuracy score, such as expected BLEU, which led to improvement of translation accuracy. However, BLEU is an evaluation metric based on n-gram precision; therefore, repetition of some words may be present in the translation even though the BLEU score is improved.
To address the problem of repeated and missing words in the translation, tu2016neural introduce an encoder-decoder-reconstructor framework that optimizes NMT by back-translation from the output sentences into the original source sentences. In their method, after training the forward translation in a manner similar to the conventional attention-based NMT, they train a back-translation model from the hidden state of the decoder into the source sequence by a new decoder to enforce agreement between source and target sentences.
In order to confirm the language independence of the framework, we experiment on two parallel corpora of English-Japanese and Japanese-English translation tasks using the encoder-decoder-reconstructor. Our experiments show that their method offers significant improvement in BLEU scores and alleviates the problem of repeated and missing words in the translation on the English-Japanese translation task, though the difference is not significant on the Japanese-English translation task.
In addition, we jointly train a model of forward translation and back-translation without pre-training, and then evaluate this model. As a result, the encoder-decoder-reconstructor can not be trained well without pre-training, so it proves that we have to train the forward translation model in a manner similar to the conventional attention-based NMT as pre-training.
The main contributions of this paper are as follows:
The main contributions of this paper are as follows: • Experimental results show that encoder-decoder-reconstructor framework achieves significant improvements in BLEU scores (1.0-1.4) for English-Japanese translation task. • Experimental results show that encoder-decoder-reconstructor framework has to train the forward translation model in a manner similar to the conventional attention-based NMT as pre-training.
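The joint objective behind the reconstructor framework can be sketched as a forward cross-entropy term plus a weighted reconstruction term. This is a hedged simplification: the `lam` weight and the list-of-token-probabilities interface are assumptions for illustration, not the paper's exact formulation (which computes the reconstruction loss from decoder hidden states).

```python
import math

def cross_entropy(token_probs):
    """Negative log-likelihood of the reference tokens, given the model's
    probability assigned to each reference token."""
    return -sum(math.log(p) for p in token_probs)

def reconstructor_loss(forward_probs, reconstruction_probs, lam=1.0):
    """Forward translation loss plus a weighted back-translation
    (reconstruction) loss into the original source sentence."""
    return cross_entropy(forward_probs) + lam * cross_entropy(reconstruction_probs)
```

The reconstruction term penalizes outputs from which the source cannot be recovered, which is what discourages repeated and missing words.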
|
What are the main contributions of this paper?
|
• Experimental results show that encoder-decoder-reconstructor framework achieves significant improvements in BLEU scores (1.0-1.4) for English-Japanese translation task. • Experimental results show that encoder-decoder-reconstructor framework has to train the forward translation model in a manner similar to the conventional attention-based NMT as pre-training.
|
null | false
| null |
What is the best way to travel from Philadelphia to Raleigh?
|
The quickest way to travel from Philadelphia, PA to Raleigh, NC is by airplane, which is an average travel time of 1h and 25m. However, this option can be costly and, as a result, many people prefer to drive. Driving from Philadelphia to Raleigh is an average travel time of 6h and 43m.
|
|
null | false
| null |
Why does treasury bond price drop when bond yield increases?
|
First, let's look at the definitions of a bond's price and yield. A bond's price is what investors are willing to pay for an existing bond. A bond's yield is the return to an investor from the bond's interest, or coupon, payments. The typical treasury bond has a fixed yield for X years (X = 5, 10, etc.) from the time it is purchased. When we say the bond yield increases, it means that newly issued bonds will have a higher yield than previously issued bonds. Because those new bonds provide a higher yield, the existing bonds need to drop in price; otherwise people won't buy them, they would just buy the newly issued bonds.
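This inverse price-yield relationship can be checked with a small present-value calculation. The sketch below is generic, not tied to any particular Treasury issue, and assumes annual coupons with annual compounding for simplicity.

```python
def bond_price(face, coupon_rate, market_yield, years):
    """Price = present value of coupons plus present value of the face amount,
    discounted at the prevailing market yield."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + market_yield) ** t for t in range(1, years + 1))
    pv_face = face / (1 + market_yield) ** years
    return pv_coupons + pv_face

# A 10-year 3% coupon bond, priced when new bonds also yield 3% vs. 5%:
at_par = bond_price(1000, 0.03, 0.03, 10)    # yield equals coupon -> priced at par
repriced = bond_price(1000, 0.03, 0.05, 10)  # market yield rose -> price drops below par
```

When the market yield equals the coupon rate the bond trades at face value; raising the discount rate lowers every cash flow's present value, so the price falls.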
|
|
null | false
| 120
|
NLP approaches for information extraction within the biomedical research areas range from rule-based systems BIBREF3 , to domain-specific systems using feature-based classification BIBREF1 , to the recent deep networks for end-to-end feature extraction and classification BIBREF0 . NLP has had a varied degree of success with free-text pathology reports BIBREF4 . Various studies have acknowledged the success of NLP in interpreting pathology reports, especially for classification tasks or extracting a single attribute from a report BIBREF4 , BIBREF5 .
The Cancer Text Information Extraction System (caTIES) BIBREF6 is a framework developed in a caBIG project focuses on information extraction from pathology reports. Specifically, caTIES extracts information from surgical pathology reports (SPR) with good precision as well as recall.
Another system known as Open Registry BIBREF7 is capable of filtering the reports with disease codes containing cancer. In BIBREF8 , an approach called Automated Retrieval Console (ARC) is proposed which uses machine learning models to predict the degree of association of a given pathology or radiology report with the cancer. The performance ranges from an F-measure of 0.75 for lung cancer to 0.94 for colorectal cancer. However, ARC uses domain-specific rules which hinder the generalization of the approach to a variety of pathology reports.
This research work is inspired by themes emerging in many of the above studies. Specifically, we evaluate the task of predicting the primary diagnosis from the pathology report. Unlike previous approaches, the system does not rely on custom rule-based knowledge, domain-specific features, or a balanced dataset with a small number of classes.
NLP approaches for information extraction within the biomedical research areas range from rule-based systems, to domain-specific systems using feature-based classification, to the recent deep networks for end-to-end feature extraction and classification. NLP has had a varied degree of success with free-text pathology reports. Various studies have acknowledged the success of NLP in interpreting pathology reports, especially for classification tasks or extracting a single attribute from a report.
|
What is the research background of this paper?
|
NLP has had a varied degree of success with free-text pathology reports. Various studies have acknowledged the success of NLP in interpreting pathology reports, especially for classification tasks or extracting a single attribute from a report.
|
null | false
| null |
What are the advantages and disadvantages of allowing people to perform delivery tasks by walking instead of requiring the use of a bike or car? When do you think it is more pragmatic for deliveries to be fulfilled by walking versus biking or driving?
|
There are both advantages and disadvantages to allowing delivery tasks to be completed by walking instead of biking or driving. The following is a list of advantages and disadvantages:
Advantages:
- Additional supply of individuals capable of performing deliveries (i.e., people without a bike or car)
- Potential for shorter delivery times in densely populated metropolitan areas
- Positive impact on the environment and sustainability
- Stronger contribution margins for shorter deliveries
Disadvantages:
- Weather can heavily impact if items are delivered or not as well as the quality of items when delivered (e.g., wet paper towels)
- Limitations on the number and weight of items being delivered
- Fewer deliveries can be performed together (i.e., less batching of deliveries)
- Distance items can be delivered is substantially shorter
Overall, permitting deliveries to be fulfilled by walking likely provides incremental benefits in certain situations, especially in densely populated urban centers. However, it is not feasible to deliver items by walking in suburban or rural areas given the distance that needs to be covered to successfully deliver items.
|
|
null | false
| null |
Is Leonidas from Sparta a real man? What was he famous for?
|
Yes, Leonidas from Sparta was a real man. He was famous for his courage and leadership during the Battle of Thermopylae in 480 BC.
|
|
null | false
| null |
In late 2019, Fincher began developing a television prequel to the 1974 film Chinatown with its screenwriter Robert Towne, and he has also expressed interest in eventually making a third season of Mindhunter, which was put on indefinite hold in 2020. In 2023, Fincher confirmed that Netflix will not be making a third season of Mindhunter, saying "I’m very proud of the first two seasons. But it’s a very expensive show and, in the eyes of Netflix, we didn’t attract enough of an audience to justify such an investment [for Season 3]."
In February 2021, it was reported that Fincher will direct an adaptation of the graphic novel The Killer for Netflix, with Andrew Kevin Walker writing the screenplay and Michael Fassbender attached to star.
|
GIven this paragraph about American film director David Fincher, what are his reported upcoming projects.
|
David Fincher has begun development on a prequel to the 1974 film Chinatown. It has also been reported that he will direct an adaptation of the graphic novel The Killer.
|
null | false
| null |
What is veganism?
|
Veganism is a lifestyle that abstains from using animal products and other commodities derived from animals. The Vegan Society defines veganism as "a philosophy and way of living which seeks to exclude—as far as is possible and practicable—all forms of exploitation of, and cruelty to, animals for food, clothing or any other purpose." This means that vegans may still use animal products for critical purposes, such as required medicine and vaccines.
Veganism differentiates from the term "plant-based," which typically is used to describe someone who follows a vegan lifestyle only in the form of diet.
|
|
null | false
| null |
It really depends on where you will ride the bicycle, how often you will use it, whether the bicycle will be used for competing or for pleasure, and your body type. One more thing to consider is the budget you want to invest in the bicycle.
|
Beyond the budget, you should also consider the purpose of the bicycle - pleasure or competition - and your body size. If it is for pleasure, my suggestion would be to research the different models online, have a look at them in a shop, try the bicycle, and then find the best price.
|
|
1909.08191
| false
| null |
Task processing: converting data exploration tasks to algebraic operations on the embedding space by following task-specific conversion templates. Some important tasks and their conversion templates are discussed in Section SECREF5.
Query processing: executing the semantic query on the embedding space and returning the results. Note that the algebraic operations on embedding vectors are linear and can be performed in parallel. Therefore, the semantic query is efficient.
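As a hedged illustration of such linear operations on an embedding space, an analogy-style query can be composed by vector arithmetic and answered by nearest-neighbour search. The toy vocabulary and 2-D vectors below are invented for the example and are not from the paper.

```python
import math

def cos(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.hypot(*u) * math.hypot(*v)
    return num / den if den else 0.0

# Toy 2-D embeddings (axes: gender, royalty); invented for the example.
emb = {
    "man": [1.0, 0.0], "woman": [-1.0, 0.0],
    "king": [1.0, 1.0], "queen": [-1.0, 1.0],
    "apple": [0.0, -1.0],
}

def analogy(a, b, c):
    """Answer 'a is to b as c is to ?' via q = emb[b] - emb[a] + emb[c],
    then return the nearest remaining word by cosine similarity."""
    q = [emb[b][i] - emb[a][i] + emb[c][i] for i in range(2)]
    candidates = {w: v for w, v in emb.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cos(q, candidates[w]))
```

Because the query is a linear combination of embedding vectors, it can be evaluated over many candidates in parallel, which is the efficiency point made above.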
|
What data exploration is supported by the analysis of these semantic structures?
|
The answers are shown as follows:
* Task processing: converting data exploration tasks to algebraic operations on the embedding space
* Query processing: executing semantic query on the embedding space and return results
|
1701.04653
| false
| null |
Social media data has been used in many domains to find links to the real-world attributes. Data generated on QA platforms, however, has not been used in the past for predicting such attributes. In this paper, we use discussions on Yahoo! Answers QA platform to make predictions of demographic attribute of city neighbourhoods. Previous work in this domain has mainly focused on predicting the deprivation index of areas BIBREF4 . In this work, we look at a wide range of attributes and report prediction results on 62 demographic attributes. Additionally, work in urban prediction uses geolocation-based platforms such as Twitter. QA data that has been utilised in this paper does not include geolocation information. Utilising such data presents its own challenges.
In this work, we look at a wide range of attributes and report prediction results on 62 demographic attributes.
|
How many demographic attributes they try to predict?
|
The answers are shown as follows:
* 62
|
null | false
| null |
What are the benefits of ranked voting?
|
In elections for political office, ranked voting allows second, third and lower preference choices to be given weight in the event a voter's higher ranked choices do not show the electoral strength necessary to win a given election. In this way, overall social utility is maximized in the sense that the optimal level of voter happiness will be achieved. Ranked voting will give centrist and more moderate candidates and parties a greater chance to succeed in elections, reducing the power of the extreme elements on both sides of the political spectrum. It will encourage more moderate candidates to step forward and more moderate third parties to form. The extreme partisanship currently existing in politics will be mitigated.

Donald Trump would likely not have won the Republican presidential nomination in 2016 because, although he had a very committed backing of a large minority of Republican voters, he faced a divided field of more moderate and traditional candidates who split the more rational majority of the party. Trump would likely not have been the second choice for many of the backers of other candidates. A more moderate candidate would likely have emerged as the eventual nominee.

Opponents of ranked voting cite possible complexity and greater threat of computerized hacking or errors impacting election results. Entrenched politicians of both major parties oppose ranked voting because they do not want to encourage the formation of centrist third parties that could jeopardize the standing of their parties and their own political careers.
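One common ranked-voting rule, instant-runoff, can be sketched as follows. This is a simplified illustration of the mechanics described above (lower preferences gain weight as weaker candidates are eliminated); real election rules add tie-breaking and exhausted-ballot handling that are omitted here.

```python
from collections import Counter

def instant_runoff(ballots):
    """Each ballot is a preference-ordered list of candidate names.
    Repeatedly eliminate the weakest candidate until someone holds a
    strict majority of the remaining first preferences."""
    ballots = [list(b) for b in ballots]
    while True:
        tallies = Counter(b[0] for b in ballots if b)
        total = sum(tallies.values())
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > total:  # strict majority
            return leader
        # Eliminate the weakest candidate; those ballots transfer to
        # each voter's next-ranked choice.
        loser = min(tallies, key=tallies.get)
        ballots = [[c for c in b if c != loser] for b in ballots]
```

For example, a candidate who trails on first preferences can still lose gracefully while their supporters' second choices decide the race.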
|
|
null | false
| null |
Boston College (BC) is a private Jesuit research university in Chestnut Hill, Massachusetts. Founded in 1863, the university has more than 9,300 full-time undergraduates and nearly 5,000 graduate students. Although Boston College is classified as an R1 research university, it still uses the word "college" in its name to reflect its historical position as a small liberal arts college. Its main campus is a historic district and features some of the earliest examples of collegiate gothic architecture in North America. In accordance with its Jesuit heritage, the university offers a liberal arts curriculum with a distinct emphasis on formative education and service to others.
Undergraduate admission is highly selective. The university offers bachelor's degrees, master's degrees, and doctoral degrees through its eight colleges and schools: Morrissey College of Arts & Sciences, Carroll School of Management, Lynch School of Education and Human Development, Connell School of Nursing, Graduate School of Social Work, Boston College Law School, Boston College School of Theology and Ministry, Woods College of Advancing Studies.
Boston College athletic teams are the Eagles. Their colors are maroon and gold and their mascot is Baldwin the Eagle. The Eagles compete in NCAA Division I as members of the Atlantic Coast Conference in all sports offered by the ACC. The men's and women's ice hockey teams compete in Hockey East. Boston College's men's ice hockey team has won five national championships.
Alumni and affiliates of the university include governors, ambassadors, members of Congress, scholars, writers, medical researchers, Hollywood actors, and professional athletes. Boston College has graduated 3 Rhodes, 22 Truman, and more than 171 Fulbright scholars. Other notable alumni include a U.S. Speaker of the House, a U.S. Secretary of State, and chief executives of Fortune 500 companies.
|
What is the best private Jesuit research university in Boston ?
|
Boston College is the best Jesuit private university in Boston.
|
null | false
| null |
Who is Sergey or Sergiy Korsunsky
|
Sergiy Korsunsky (born August 10, 1962 in Kyiv, Ukrainian SSR) is a Ukrainian diplomat. He previously served as Chargé d'affaires of Ukraine to the United States during 2005 and Ambassador Extraordinary and Plenipotentiary of Ukraine to Turkey (2008-2016). Director of the Hennadii Udovenko Diplomatic Academy of Ukraine in 2017-2020.
Ambassador of Ukraine to Japan (since April 2020).
|
|
null | false
| null |
What are three popular movies with Steve Carell?
|
Three popular movies with Steve Carell are:
1. The 40-Year-Old Virgin
2. Crazy Stupid Love
3. The Big Short
|
|
null | false
| null |
Which dynasties rules in Morocco ?
|
Here is the list of the several dynasties that ruled Morocco over the years: Idrisside, Almoravid, Almohad, Merinide, Wattassides, Saadian, and the Alaouite.
|
|
null | false
| 92
|
For the first round, we used a BERT-Large model BIBREF10 trained on a concatenation of SNLI BIBREF1 and MNLI BIBREF22, and selected the best-performing model we could train as the starting point for our dataset collection procedure. For Round 1 contexts, we randomly sampled short multi-sentence passages from Wikipedia (of 250-600 characters) from the manually curated HotpotQA training set BIBREF23. Contexts are either ground-truth contexts from that dataset, or they are Wikipedia passages retrieved using TF-IDF BIBREF24 based on a HotpotQA question.
For the first round, we used a BERT-Large model (Devlin et al., 2018) trained on a concatenation of SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2017), and selected the best-performing model we could train as the starting point for our dataset collection procedure.
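The TF-IDF retrieval step mentioned above can be sketched as a bag-of-words cosine ranking. This is a generic illustration, not the project's actual retriever: the tokenization, the `+1` IDF smoothing, and the toy passages are all assumptions made for the example.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    df = Counter(w for toks in tokenized for w in set(toks))
    idf = {w: math.log(n / df[w]) + 1.0 for w in df}  # +1 keeps shared words nonzero
    vecs = [{w: tf * idf[w] for w, tf in Counter(toks).items()} for toks in tokenized]
    return vecs, idf

def cosine(a, b):
    num = sum(v * b.get(w, 0.0) for w, v in a.items())
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, docs):
    """Return the index of the passage most similar to the query."""
    vecs, idf = tfidf_vectors(docs)
    q = {w: tf * idf.get(w, 0.0) for w, tf in Counter(query.lower().split()).items()}
    return max(range(len(docs)), key=lambda i: cosine(q, vecs[i]))

docs = [
    "the eiffel tower is in paris",
    "penguins are flightless birds of the southern hemisphere",
    "bert is a pretrained language model",
]
best = retrieve("which birds are flightless", docs)
```

Rare query terms get large IDF weights, so the passage sharing the distinctive words wins even when common words overlap elsewhere.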
|
What did the authors use to train on a concatenation of SNLI and MNLI in round 1?
|
A BERT-Large model.
|
null | false
| null |
What is Sleet?
|
Sleet occurs when snowflakes only partially melt when they fall through a shallow layer of warm air. These slushy drops refreeze as they next fall through a deep layer of freezing air above the surface, and eventually reach the ground as frozen rain drops that bounce on impact. Depending on the intensity and duration, sleet can accumulate on the ground much like snow.
|
|
null | false
| null |
Penguins (order Sphenisciformes /sfɪˈnɪsəfɔːrmiːz/, family Spheniscidae /sfɪˈnɪsɪdiː, -daɪ/) are a group of aquatic flightless birds. They live almost exclusively in the Southern Hemisphere: only one species, the Galápagos penguin, is found north of the Equator. Highly adapted for life in the water, penguins have countershaded dark and white plumage and flippers for swimming. Most penguins feed on krill, fish, squid and other forms of sea life which they catch with their bills and swallow whole while swimming. A penguin has a spiny tongue and powerful jaws to grip slippery prey.
|
Can Penguins fly?
|
Penguins are a group of aquatic flightless birds.
|
null | false
| null |
Identify which instrument is string or percussion: Ranat thum, Tumbi
|
Tumbi is string, Ranat thum is percussion.
|
|
null | false
| 468
|
Our method utilizes the target node vtarget from all classes, not only from the identical minor class of vminor. The intention of constructing the target node pool from entire classes is to guarantee sufficient diversity of synthesized ego networks. If the target nodes are restricted to a minor class in highly imbalanced scenarios, identical neighbor nodes would be redundantly duplicated. This issue makes it hard to mitigate the neighbor memorization problem. To demonstrate this hypothesis, we conduct an experiment and compare our design choice (all classes) to selecting a target from the identical class as the minor node vminor. In Figure 3, we confirm that exploiting entire classes for the target node achieves superior performance. From this design choice of the target node, our method can utilize the manifold assumption that ‘similar predictions of neural networks indicate close proximity in the manifold’, which is commonly utilized in semi-supervised learning (Van Engelen & Hoos, 2020). We aim to enlarge and smooth the decision boundary of the minor class by interpolating the minor nodes and the target node of the entire classes (but excluding target-specific attributes using saliency) as investigated in Verma et al. (2019). Moreover, our method considers the prediction of the ego network (not a single node) to reflect the structural aspects of the graph.
|
Could you please explain what’s the target node and its difference to the nodes with class?
|
We first apologize for the ambiguous notations. We will keep your comments in mind and carefully revise the notations of node types in alternative ways for better presentation in our final revision.
Meanwhile, we here reiterate our notations below for you: The key concept of our method is to generate an ego network for the minor class using two nodes: vminor and vtarget. Node features of these two nodes are mixed to determine a central node of the ego network via convex combination. vminor is the node sampled from the minor class to be augmented (oversampled). That is, the central node of a synthesized ego network acquires an identical label of vminor. On the other hand, there is no restriction for vtarget: vtarget is sampled from the entire class and exploited to synthesize the ego network of generated/augmented node. Thus, the target node vtarget does not have to be a minor node. Detailed justification of this strategy for target node selection is discussed in Section 5.2 and our answers to Q2 below.
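The convex combination of vminor and vtarget features described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' code, and it omits the saliency-based masking of target-specific attributes:

```python
import random

def mix_node_features(x_minor, x_target, lam=None):
    """Convex combination of two node feature vectors.

    Hypothetical sketch: lam is sampled so the mix stays closer to the
    minor node, whose label the synthesized central node inherits.
    """
    if lam is None:
        lam = random.uniform(0.5, 1.0)
    return [lam * a + (1 - lam) * b for a, b in zip(x_minor, x_target)]
```

For example, `mix_node_features([1.0, 0.0], [0.0, 1.0], lam=0.7)` yields a feature vector weighted 70% toward the minor node.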
|
null | false
| null |
Dried fruit is fruit from which the majority of the original water content has been removed either naturally, through sun drying, or through the use of specialized dryers or dehydrators. Dried fruit has a long tradition of use dating back to the fourth millennium BC in Mesopotamia, and is prized because of its sweet taste, nutritive value and long shelf life.
Today, dried fruit consumption is widespread. Nearly half of the dried fruits sold are raisins, followed by dates, prunes, figs, apricots, peaches, apples, and pears. These are referred to as "conventional" or "traditional" dried fruits: fruits that have been dried in the sun or in heated wind tunnel dryers. Many fruits such as cranberries, blueberries, cherries, strawberries, and mango are infused with a sweetener (e.g. sucrose syrup) prior to drying. Some products sold as dried fruit, like papaya, kiwifruit and pineapple, are most often candied fruit.
Dried fruits retain most of the nutritional value of fresh fruits. The specific nutrient content of the different dried fruits reflects their fresh counterpart and the processing method.
|
What techniques are used to make dried fruit?
|
Dried fruit is fruit from which the majority of the original water content has been removed either naturally, through sun drying, or through the use of specialized dryers or dehydrators.
|
null | false
| null |
How many more keys does a standard piano have over a harpsichord?
|
28 keys
|
|
null | false
| null |
Consequences
The most obvious consequence of overfitting is poor performance on the validation dataset. Other negative consequences include:
A function that is overfitted is likely to request more information about each item in the validation dataset than does the optimal function; gathering this additional unneeded data can be expensive or error-prone, especially if each individual piece of information must be gathered by human observation and manual data-entry.
A more complex, overfitted function is likely to be less portable than a simple one. At one extreme, a one-variable linear regression is so portable that, if necessary, it could even be done by hand. At the other extreme are models that can be reproduced only by exactly duplicating the original modeler's entire setup, making reuse or scientific reproduction difficult.
It may be possible to reconstruct details of individual training instances from an overfitted machine learning model's training set. This may be undesirable if, for example, the training data includes sensitive personally identifiable information (PII). This phenomenon also presents problems in the area of artificial intelligence and copyright, with the developers of some generative deep learning models such as Stable Diffusion and GitHub Copilot being sued for copyright infringement because these models have been found to be capable of reproducing certain copyrighted items from their training data.
|
Please give me the potential implications of overfitting described in the following text
|
The potential implications of overfitting are:
- Poor performance on the validation dataset.
- An overfitted function is likely to request more information about each item in the validation dataset.
- An overfitted function is likely to be less portable than a simple one.
- It may be possible to reconstruct details of individual training instances from an overfitted machine learning model's training set.
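The contrast between an overfitted and a simple model can be shown with a toy sketch (a hypothetical example, not taken from the text): a lookup table that memorises every training pair reaches zero training error but generalises poorly, while a least-squares line does far better on held-out points.

```python
def train_lookup(train):
    """An extreme overfit: memorise every training pair exactly."""
    table = dict(train)
    fallback = sum(y for _, y in train) / len(train)  # mean for unseen inputs
    return lambda x: table.get(x, fallback)

def train_linear(train):
    """A simple least-squares line y = a*x + b (closed form)."""
    n = len(train)
    sx = sum(x for x, _ in train)
    sy = sum(y for _, y in train)
    sxx = sum(x * x for x, _ in train)
    sxy = sum(x * y for x, y in train)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

def mse(model, data):
    """Mean squared error of a model over (x, y) pairs."""
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)
```

On noisy data drawn from y ≈ 2x, the lookup model has zero training error yet a far larger validation error than the fitted line, mirroring the "poor performance on the validation dataset" consequence above.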
|
null | false
| 139
|
It is well known that language has certain structural properties which allows natural language speakers to make “infinite use of finite means" BIBREF3 . This structure allows us to generalize beyond the typical machine learning definition of generalization BIBREF4 (which considers performance on the distribution that generated the training set), permitting the understanding of any utterance sharing the same structure, regardless of probability. For example, sentences of length 100 typically do not appear in natural text or speech (our personal 'training set'), but can be understood regardless due to their structure. We refer to this notion as linguistic generalization .
Many problems in NLP are treated as sequence to sequence tasks with solutions built on seq2seq-attention based models. While these models perform very well on standard datasets and also appear to capture some linguistic structure BIBREF5 , BIBREF6 , BIBREF7 , they also can be quite brittle, typically breaking on uncharacteristic inputs BIBREF8 , BIBREF1 , indicating that the extent of linguistic generalization these models achieve is still somewhat lacking.
Due to the high capacity of these models, it is not unreasonable to expect them to learn some structure from the data. However, learning structure is not a sufficient condition to achieving linguistic generalization. If this structure is to be usable on data outside the training distribution, the model must learn the structure without additionally learning (overfitting on) patterns specific to the training data. One may hope, given the right hyperparameter configuration and regularization, that a model converges to a solution that captures the reusable structure without overfitting too much on the training set. While this solution exists in theory, in practice, it may be difficult to find.
In this work, we look at the feasibility of training and tuning seq2seq-attention models towards a solution that generalizes in this linguistic sense. In particular, we train models on a symbol replacement task with a well defined generalizable structure. The task is simple enough that all models achieve near perfect accuracy on the standard test set, i.e., where the inputs are drawn from the same distribution as that of the training set. We then test these models for linguistic generalization by creating test sets of uncharacteristic inputs, i.e., inputs that are not typical in the training distribution but still solvable given that the generalizable structure was learned. Our results indicate that generalization is highly sensitive; such that even changes in the random seed can drastically affect the ability to generalize. This dependence on an element that is not (or ideally should not be) a hyperparameter suggests that the line between generalization and failure is quite fine, and may not be feasible to reach simply by hyperparameter tuning alone.
In this work, we look at the feasibility of training and tuning seq2seq-attention models towards a solution that generalizes in this linguistic sense.
|
Why do the authors examine the feasibility of training and tuning seq2seq-attention models?
|
They want to find a solution that generalizes in this linguistic sense.
|
null | false
| null |
Which is a species of fish? Wahoo or Yahoo
|
Wahoo
|
|
null | false
| null |
Classify each of these as a red fruit or an orange fruit: strawberry, cherry, orange, kumquat, tangerine, raspberry
|
Red fruit: strawberry, cherry, raspberry
Orange fruit: Orange, kumquat, tangerine
|
|
null | false
| 272
|
With the rapid growth of the internet, huge amounts of text data are generated in social networks, online shopping and news websites, etc. These data create demand for powerful and efficient text analysis techniques. Probabilistic topic models such as Latent Dirichlet Allocation (LDA) BIBREF0 are popular approaches for this task, by discovering latent topics from text collections. Many conventional topic models discover topics purely based on the word-occurrences, ignoring the meta information (a.k.a., side information) associated with the content. In contrast, when we humans read text it is natural to leverage meta information to improve our comprehension, which includes categories, authors, timestamps, the semantic meanings of the words, etc. Therefore, topic models capable of using meta information should yield improved modelling accuracy and topic quality.
In practice, various kinds of meta information are available at the document level and the word level in many corpora. At the document level, labels of documents can be used to guide topic learning so that more meaningful topics can be discovered. Moreover, it is highly likely that documents with common labels discuss similar topics, which could further result in similar topic distributions. For example, if we use authors as labels for scientific papers, the topics of the papers published by the same researcher can be closely related.
At the word level, different semantic/syntactic features are also accessible. For example, there are features regarding word relationships, such as synonyms obtained from WordNet BIBREF1 , word co-occurrence patterns obtained from a large corpus, and linked concepts from knowledge graphs. It is preferable that words having similar meaning but different morphological forms, like “dog” and “puppy”, are assigned to the same topic, even if they barely co-occur in the modelled corpus. Recently, word embeddings generated by GloVe BIBREF2 and word2vec BIBREF3 , have attracted a lot of attention in natural language processing and related fields. It has been shown that the word embeddings can capture both the semantic and syntactic features of words so that similar words are close to each other in the embedding space. It seems reasonable to expect that these word embedding will improve topic modelling BIBREF4 , BIBREF5 .
Conventional topic models can suffer from a large performance degradation over short texts (e.g., tweets and news headlines) because of insufficient word co-occurrence information. In such cases, meta information of documents and words can play an important role in analysing short texts by compensating the lost information in word co-occurrences. At the document level, for example, tweets are usually associated with hashtags, users, locations, and timestamps, which can be used to alleviate the data sparsity problem. At the word level, word semantic similarity and embeddings obtained or trained on large external corpus (e.g., Google News or Wikipedia) have been proven useful in learning meaningful topics from short texts BIBREF6 , BIBREF7 .
The benefit of using document and word meta information separately is shown in several models such as BIBREF8 , BIBREF9 , BIBREF5 . However, in existing models this is usually not efficient enough due to non-conjugacy and/or complex model structures. Moreover, only one kind of meta information (either at document level or at word level) is used in most existing models. In this paper, we propose MetaLDA, a topic model that can effectively and efficiently leverage arbitrary document and word meta information encoded in binary form. Specifically, the labels of a document in MetaLDA are incorporated in the prior of the per-document topic distributions. If two documents have similar labels, their topic distributions should be generated with similar Dirichlet priors. Analogously, at the word level, the features of a word are incorporated in the prior of the per-topic word distributions, which encourages words with similar features to have similar weights across topics. Therefore, both document and word meta information, if and when they are available, can be flexibly and simultaneously incorporated using MetaLDA. MetaLDA has the following key properties:
We conduct extensive experiments with several real datasets including regular and short texts in various domains. The experimental results demonstrate that MetaLDA achieves improved performance in terms of perplexity, topic coherence, and running time.
The experimental results demonstrate that MetaLDA achieves improved performance in terms of perplexity, topic coherence, and running time.
|
In which terms does the MetaLDA achieve improved performance?
|
The experimental results demonstrate that MetaLDA achieves improved performance in terms of perplexity, topic coherence, and running time.
|
null | false
| null |
Give me a list of the most recent Men's Indian Cricket Team captains in ODIs
|
- Hardik Pandya
- KL Rahul
- Shikhar Dhawan
- Rohit Sharma
- Ajinkya Rahane
- Virat Kohli
- Gautam Gambhir
- Suresh Raina
- MS Dhoni
|
|
null | false
| null |
What us foosball?
|
Foosball is also known as table football. It's a table game where two players use rods that can be pulled and pushed to control plastic figures that kick a small plastic soccer ball. It is a great table game that is fun at parties and social events.
|
|
null | false
| 306
|
BERT (Bidirectional Encoder Representations from Transformers) BIBREF14 is a new language representation model, which uses bidirectional transformers to pre-train a large unlabeled corpus, and fine-tunes the pre-trained model on other tasks. BERT has been widely used and shows great improvement on various natural language processing tasks, e.g., word segmentation, named entity recognition, sentiment analysis, and question answering. We use BERT to extract contextual feature for each character instead of BiLSTM in the original work BIBREF13. To further improve the performance, we optimize the pre-training process of BERT by introducing a semantic-enhanced task.
The original Google BERT is pre-trained using two unsupervised tasks, masked language model (MLM) and next sentence prediction (NSP). The MLM task enables the model to capture discriminative contextual features. The NSP task makes it possible to understand the relationship between sentence pairs, which is not directly captured by language modeling. We further design a semantic-enhanced task to enhance the performance of BERT. It incorporates previous sentence prediction and document-level prediction. We pre-train BERT by combining MLM, NSP and the semantic-enhanced task together.
We pre-train BERT by combining MLM, NSP and the semantic-enhanced task together [19].
|
How to pre-train BERT?
|
The authors pre-train BERT by combining MLM, NSP and the semantic-enhanced task together.
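As a rough illustration of the MLM corruption step, here is a sketch under the 80/10/10 replacement scheme from the original BERT paper, which this record does not spell out; treat the rates and the `[MASK]` handling as assumptions:

```python
import random

def mask_tokens(tokens, vocab, mask_rate=0.15, seed=0):
    """Minimal MLM-style corruption sketch.

    Picks ~mask_rate of positions as prediction targets; of those,
    replaces 80% with [MASK], 10% with a random vocab token, and
    leaves 10% unchanged (assumed 80/10/10 scheme).
    """
    rng = random.Random(seed)
    out, targets = list(tokens), {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            targets[i] = tok          # the model must predict this token
            r = rng.random()
            if r < 0.8:
                out[i] = "[MASK]"
            elif r < 0.9:
                out[i] = rng.choice(vocab)
            # else: keep the original token as-is
    return out, targets
```

The model is then trained to recover `targets` from the corrupted sequence `out`.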
|
null | false
| 146
|
What would be possible if a person had an oracle that could immediately provide the answer to any question about the visual world? Sight-impaired users could quickly and reliably figure out the denomination of their currency and so whether they spent the appropriate amount for a product BIBREF0 . Hikers could immediately learn about their bug bites and whether to seek out emergency medical care. Pilots could learn how many birds are in their path to decide whether to change course and so avoid costly, life-threatening collisions. These examples illustrate several of the interests from a visual question answering (VQA) system, including tackling problems that involve classification, detection, and counting. More generally, the goal for VQA is to have a single system that can accurately answer any natural language question about an image or video BIBREF1 , BIBREF2 , BIBREF3 .
Entangled in the dream of a VQA system is an unavoidable issue that, when asking multiple people a visual question, sometimes they all agree on a single answer while other times they offer different answers (Figure FIGREF1 ). In fact, as we show in the paper, these two outcomes arise in approximately equal proportions in today's largest publicly-shared VQA benchmark that contains over 450,000 visual questions. Figure FIGREF1 illustrates that human disagreements arise for a variety of reasons including different descriptions of the same concept (e.g., “minor" and “underage"), different concepts (e.g., “ghost" and “photoshop"), and irrelevant responses (e.g., “no").
Our goal is to account for whether different people would agree on a single answer to a visual question to improve upon today's VQA systems. We propose multiple prediction systems to automatically decide whether a visual question will lead to human agreement and demonstrate the value of these predictions for a new task of capturing the diversity of all plausible answers with less human effort.
Our work is partially inspired by the goal to improve how to employ crowds as the computing power at run-time. Towards satisfying existing users, gaining new users, and supporting a wide range of applications, a crowd-powered VQA system should be low cost, have fast response times, and yield high quality answers. Today's status quo is to assume a fixed number of human responses per visual question and so a fixed cost, delay, and potential diversity of answers for every visual question BIBREF2 , BIBREF0 , BIBREF4 . We instead propose to dynamically solicit the number of human responses based on each visual question. In particular, we aim to accrue additional costs and delays from collecting extra answers only when extra responses are needed to discover all plausible answers. We show in our experiments that our system saves 19 40-hour work weeks and $1800 to answer 121,512 visual questions, compared to today's status quo approach BIBREF0 .
Our work is also inspired by the goal to improve how to employ crowds to produce the information needed to train and evaluate automated methods. Specifically, researchers in fields as diverse as computer vision BIBREF2 , computational linguistics BIBREF1 , and machine learning BIBREF3 rely on large datasets to improve their VQA algorithms. These datasets include visual questions and human-supplied answers. Such data is critical for teaching machine learning algorithms how to answer questions by example. Such data is also critical for evaluating how well VQA algorithms perform. In general, “bigger" data is better. Current methods to create these datasets assume a fixed number of human answers per visual question BIBREF2 , BIBREF4 , thereby either compromising on quality by not collecting all plausible answers or cost by collecting additional answers when they are redundant. We offer an economical way to spend a human budget to collect answers from crowd workers. In particular, we aim to actively allocate additional answers only to visual questions likely to have multiple answers.
The key contributions of our work are as follows:
We show in our experiments that our system saves 19 40-hour work weeks and $1800 to answer 121,512 visual questions, compared to today's status quo approach.
|
Is the new system better than today's status quo approach?
|
Yes.
|
null | false
| null |
Write a paragraph to refute a claim by a colleague that ancient structures such as Stonehenge, the Great Pyramid are evidence of UFO activities on Earth
|
There are several strong arguments against concluding that alien races built structures like Stonehenge or the Pyramids. First, there is evidence that these magnificent structures were built by humans. For example, tombs of the pyramid builders were found in the vicinity of the Great Pyramid (with human remains!). The technology to move and assemble the massive stones of Stonehenge has been shown to have been available to humans at that time. Second, given that space is vast, the chance of finding one tiny inhabited planet among billions is infinitesimal. If alien intelligence had indeed found our one inhabited planet, they are unlikely to have just created a couple of structures and avoided any further contact with Earth and its inhabitants. In science, Occam’s Razor (“the best explanation is the simplest one”) is a guiding principle, and until there is much more evidence of alien visitation it would be inadvisable to jump to the conclusion that aliens built these structures.
|
|
null | false
| null |
Which is a species of fish? Nurse or Nurse shark
|
Nurse shark
|
|
null | false
| 222
|
Negotiations, either between individuals or entities, are ubiquitous in everyday human interactions ranging from sales to legal proceedings. Being a good negotiator is a complex skill, requiring the ability to understand the partner's motives, ability to reason and to communicate effectively, making it a challenging task for an automated system. While research in building automatically negotiating agents has primarily focused on agent-agent negotiations BIBREF0, BIBREF1, there is a recent interest in agent-human negotiations BIBREF2 as well. Such agents may act as mediators or can be helpful for pedagogical purposes BIBREF3.
Efforts in agent-human negotiations involving free-form natural language as a means of communication are rather sparse. Researchers BIBREF4 recently studied natural language negotiations in buyer-seller bargaining setup, which is comparatively less restricted than previously studied game environments BIBREF5, BIBREF6. Lack of a well-defined structure in such negotiations allows humans or agents to express themselves more freely, which better emulates a realistic scenario. Interestingly, this also provides an exciting research opportunity: how can an agent leverage the behavioral cues in natural language to direct its negotiation strategies? Understanding the impact of natural language on negotiation outcomes through a data-driven neural framework is the primary objective of this work.
We focus on buyer-seller negotiations BIBREF4 where two individuals negotiate the price of a given product. Leveraging the recent advancements BIBREF7, BIBREF8 in pre-trained language encoders, we attempt to predict negotiation outcomes early on in the conversation, in a completely data-driven manner (Figure FIGREF3). Early prediction of outcomes is essential for effective planning of an automatically negotiating agent. Although there have been attempts to gain insights into negotiations BIBREF9, BIBREF10, to the best of our knowledge, we are the first to study early natural language cues through a data-driven neural system (Section SECREF3). Our evaluations show that natural language allows the models to make better predictions by looking at only a fraction of the negotiation. Rather than just realizing the strategy in natural language, our empirical results suggest that language can be crucial in the planning as well. We provide a sample negotiation from the test set BIBREF4 along with our model predictions in Table TABREF1.
We design ablation studies and a probing technique for BERT to pinpoint which linguistic features help more than others for early outcome prediction task.
|
What do the authors design to pinpoint which linguistic features help more than others for early outcome prediction tasks?
|
The authors design ablation studies and a probing technique for BERT to pinpoint which linguistic features help more than others for early outcome prediction task.
|
null | false
| 303
|
Online social networks have become one of the most disruptive communication platforms, as everyday billions of individuals use them to interact with each other. Their penetration in our everyday lives seems ever-growing and has in turn generated a massive volume of publicly available data open to analysis. The digital footprints left across these multiple media platforms provide us with a unique source to study and understand how the linguistic phenotype of a given user is related to social attributes such as socioeconomic status (SES).
The quantification and inference of SES of individuals is a long lasting question in the social sciences. It is a rather difficult problem as it may depend on a combination of individual characteristics and environmental variables BIBREF0 . Some of these features can be easier to assess like income, gender, or age whereas others, relying to some degree on self-definition and sometimes entangled with privacy issues, are harder to assign like ethnicity, occupation, education level or home location. Furthermore, individual SES correlates with other individual or network attributes, as users tend to build social links with others of similar SES, a phenomenon known as status homophily BIBREF1 , arguably driving the observed stratification of society BIBREF2 . At the same time, shared social environment, similar education level, and social influence have been shown to jointly lead socioeconomic groups to exhibit stereotypical behavioral patterns, such as shared political opinion BIBREF3 or similar linguistic patterns BIBREF4 . Although these features are entangled and causal relation between them is far from understood, they appear as correlations in the data.
Datasets recording multiple characteristics of human behaviour are more and more available due to recent developments in data collection technologies and increasingly popular online platforms and personal digital devices. The automatic tracking of online activities, commonly associated with profile data and meta-information; the precise recording of daily activities, interaction dynamics and mobility patterns collected through mobile personal devices; together with the detailed and expert annotated census data all provide new grounds for the inference of individual features or behavioral patterns BIBREF5 . The exploitation of these data sources has already been proven to be fruitful as cutting edge recommendation systems, advanced methods for health record analysis, or successful prediction tools for social behaviour heavily rely on them BIBREF6 . Nevertheless, despite the available data, some inference tasks, like individual SES prediction, remain an open challenge.
The precise inference of SES would contribute to overcome several scientific challenges and could potentially have several commercial applications BIBREF7 . Further, robust SES inference would provide unique opportunities to gain deeper insights on socioeconomic inequalities BIBREF8 , social stratification BIBREF2 , and on the driving mechanisms of network evolution, such as status homophily or social segregation.
In this work, we take a horizontal approach to this problem and explore various ways to infer the SES of a large sample of social media users. We propose different data collection and combination strategies using open, crawlable, or expert annotated socioeconomic data for the prediction task. Specifically, we use an extensive Twitter dataset of 1.3M users located in France, all associated with their tweets and profile information; 32,053 of them having inferred home locations. Individual SES is estimated by relying on three separate datasets, namely socioeconomic census data; crawled profession information and expert annotated Google Street View images of users' home locations. Each of these datasets is then used as ground-truth to infer the SES of Twitter users from profile and semantic features similar to BIBREF9 . We aim to explore and assess how the SES of social media users can be obtained and how much the inference problem depends on annotation and the user's individual and linguistic attributes.
We provide in Section SECREF2 an overview of the related literature to contextualize the novelty of our work. In Section SECREF3 we provide a detailed description of the data collection and combination methods. In Section SECREF4 we introduce the features extracted to solve the SES inference problem, with results summarized in Section SECREF5 . Finally, in Section SECREF6 and SECREF7 we conclude our paper with a brief discussion of the limitations and perspectives of our methods.
We aim to explore and assess how the SES of social media users can be obtained and how much the inference problem depends on annotation and the user's individual and linguistic attributes.
|
What do the authors aim to explore and assess in this paper?
|
The purpose of this paper is to explore and assess how the socioeconomic status (SES) of a large sample of social media users can be obtained and how much the inference problem depends on annotation and the user's individual and linguistic attributes.
|
null | false
| null |
What is a pronoun?
|
A pronoun is a word which is used to refer to something when a noun is not used. Examples of pronouns are he, she, it, myself, herself, and himself. In a sentence you will see a pronoun used like this: "John went to the store to buy himself an apple and crackers." In this context, John is a proper noun and the pronoun is himself, which refers to John. Another way to think of a pronoun is as a less explicit or indirect reference to something.
|
|
null | false
| null |
What is the value of x if x^2 = 4?
|
The value of x is either 2 or -2. This is because the square of both those numbers is 4 as shown below:
2 * 2 = 4
-2 * -2 = 4
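A quick brute-force check of the two roots (illustrative only):

```python
# Both integer roots of x**2 == 4 found by direct search:
roots = [x for x in range(-10, 11) if x * x == 4]
print(roots)  # [-2, 2]
```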
|
|
null | false
| 82
|
Automatic TS is a complicated natural language processing (NLP) task, which consists of lexical and syntactic simplification levels BIBREF12 . It has attracted much attention recently as it could make texts more accessible to wider audiences, and used as a pre-processing step, improve performances of various NLP tasks and systems BIBREF13 , BIBREF14 , BIBREF15 . Usually, hand-crafted, supervised, and unsupervised methods based on resources like English Wikipedia and Simple English Wikipedia (EW-SEW) BIBREF10 are utilized for extracting simplification rules. It is very easy to mix up the automatic TS task and the automatic summarization task BIBREF3 , BIBREF16 , BIBREF6 . TS is different from text summarization as the focus of text summarization is to reduce the length and redundant content.
At the lexical level, lexical simplification systems often substitute difficult words using more common words, which only require a large corpus of regular text to obtain word embeddings to get words similar to the complex word BIBREF1 , BIBREF9 . Biran et al. BIBREF0 adopted an unsupervised method for learning pairs of complex and simpler synonyms from a corpus consisting of Wikipedia and Simple Wikipedia. At the sentence level, a sentence simplification model was proposed by tree transformation based on statistical machine translation (SMT) BIBREF3 . Woodsend and Lapata BIBREF17 presented a data-driven model based on a quasi-synchronous grammar, a formalism that can naturally capture structural mismatches and complex rewrite operations. Wubben et al. BIBREF18 proposed a phrase-based machine translation (PBMT) model that is trained on ordinary-simplified sentence pairs. Xu et al. BIBREF19 proposed a syntax-based machine translation model using simplification-specific objective functions and features to encourage simpler output.
Compared with SMT, neural machine translation (NMT) has shown to produce state-of-the-art results BIBREF5 , BIBREF7 . The central approach of NMT is an encoder-decoder architecture implemented with recurrent neural networks, which represents the input sequence as a vector and then decodes that vector into an output sequence. NMT models have therefore been applied to the text simplification task, and achieved good results BIBREF8 , BIBREF4 , BIBREF20 . The main limitation of the aforementioned NMT models for text simplification is their dependence on parallel ordinary-simplified sentence pairs. Because such pairs are expensive and time-consuming to build, the largest available dataset is EW-SEW, which has only 296,402 sentence pairs. This is insufficient for an NMT model to obtain its best parameters. Considering that simplified data plays an important role in boosting fluency for phrase-based text simplification, we investigate the use of simplified data for text simplification. We are the first to show that neural translation models can be effectively adapted for text simplification with simplified corpora.
Compared with SMT, neural machine translation (NMT) has shown to produce state-of-the-art results (Bahdanau et al., 2014; Sutskever et al., 2014).
|
Which is more advanced as the basis of text simplification, statistical machine translation (SMT), or neural machine translation (NMT)?
|
Neural machine translation (NMT).
|
null | false
| null |
Identify which instrument is string or percussion: Wobble board, Scheitholt
|
Scheitholt is string, Wobble board is percussion.
|
|
null | false
| null |
What is the best place to go downhill mountain biking
|
Downhill mountain biking is a sport that you typically practice in an area that is also used as a ski resort in the winter. Not all ski resorts are suitable and developed to be great mountain bike locations; the ones that are, are called bike parks. Some of the most famous are Whistler, Châtel, Fort William, Vallnord, and Val Di Sole. Sometimes you can find a bike park in smaller mountains where there are no ski resorts; those are typically in hilly regions like Wales.
|
|
null | false
| null |
SR22s and SR20s built before 2003 were equipped with traditional analog instruments and a 10" (later 12") Multi-function display (MFD). In February 2003, Cirrus began offering SR22s with the Avidyne Entegra primary flight display (PFD), making the plane the first of its kind to come with a glass cockpit. Later that year, this instrumentation became standard equipment on all SR-series aircraft and sparked a major transition in general aviation, whereby over 90% of all new light aircraft by the year 2006 were equipped with glass cockpits. Retrofits are available for the older SR aircraft that replace the analog instrument panels with one that includes a PFD, a new MFD and the installation of back-up mechanical instruments.
On 22 May 2008, Cirrus revealed the "Cirrus Perspective" glass cockpit (by Garmin). Both cockpits were available for a while (the Avidyne cockpit was initially standard equipment) and after 2008 the SR22 was sold with only the Perspective panel.
In 2009, the third-generation Cirrus SR22 GTS came equipped with a new enhanced vision system (EVS), a sophisticated dual-wavelength instrument that offers both infrared and synthetic vision.
At the 2010 EAA AirVenture, Cirrus announced its plans to certify Garmin's ESP system (Electronic Stability and Protection) on the Cirrus SR22. It included advanced flight envelope protection that could stabilize the aircraft with the push of a button, to prevent a spiral from developing.
The Cirrus Perspective-Plus avionics flight deck was introduced in 2017, with a faster processing speed, animated datalink weather, payload management, visual approach capabilities, wireless database uploads, glass back-up instruments, and more.
In 2020, the Perspective-Plus flight deck included a new stabilized approach advisory system which provides alerts to the pilot of unstable conditions during approach.
|
Since when did the SR22 include a glass cockpit
|
Since February 2003, Cirrus has offered a glass panel for the SR22. First the Avidyne Entegra was offered to customers, and retrofits became available for older SR versions. In May 2008, Cirrus revealed the "Cirrus Perspective" glass cockpit, which was based on instruments created by Garmin.
To this day, Cirrus includes Garmin glass cockpits in its aircraft.
|
null | false
| null |
How many toes do polydactyl cats have?
|
Polydactyl cats can have more than five toes on each paw; a typical cat has five (four on its paw pad and a dewclaw). Polydactyl cats will have six or more toes on each paw.
|
|
1802.05322
| false
| null |
The second source of data was the genres for all reviews which were scraped from the IMDb site. A total of 27 different genres were scraped. A list of all genres can be found in Appendix SECREF8 . A review can have one genre or multiple genres. For example, a review can be for a movie that is Action, Drama and Thriller at the same time, while another movie only falls into Drama.
A total of 27 different genres were scraped.
|
how many movie genres do they explore?
|
The answers are shown as follows:
* 27
|
null | false
| null |
Who is Manchester United's current manager?
|
Manchester United's current manager is Eric Ten Hag
|
|
null | false
| null |
Why do people work?
|
People work for a number of reasons. To earn an income and to pay bills are a few reasons that come to mind. But some people do it simply because they enjoy it and are passionate about the career they are pursuing.
|
|
null | false
| 432
|
We first train five different NN architectures, LeNet-5 (Lecun et al., 1998), VGG13 (Simonyan & Zisserman, 2015), AlexNet (Krizhevsky et al., 2012), ResNet-32 and ResNet-44 (He et al., 2016) , on CIFAR-10 dataset and three architectures, LeNet-5, AlexNet and ResNet-50, on ImageNet. During training, the NN parameters are updated using stochastic gradient descent. On each dataset, we train NNs by combining 48 learning rates {1, 1.5, . . . , 4.5} × {10−6 , . . . , 10−1}, and 10 dropout rates {0, 0.1, . . . , 0.9}. All the models are trained for 50 epochs with batch size 128 and an early stopping method by which the training stops when the testing accuracy does not increase for 10 consecutive epochs. Consequently, we obtain 480 NNs per architecture and dataset, with diverse performance. Information for the generated NNs is summarized in Table 2: the range of epochs at which the training stops early, the range of final testing accuracy for the trained NNs, and the accuracy threshold used to map actual NN performance to ‘low/high accuracy’ labels for the classification task. We trained the NNs on an Nvidia 1080Ti GPU with 11G memory, and we conducted all the other experiments on 2.60GHz Intel Xeon E5-2697 v3 platform with 1024G memory. For computational efficiency and to avoid having a largely imbalanced dataset, out of the 480 NNs, we randomly sampled 250 NNs with an even split of high- and low-accuracy networks. For this sample, the first t epochs of training for each configuration were saved as checkpoints to be converted to time-evolving graphs (step (S1)).
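The hyperparameter grid described above can be reproduced with a short sketch (variable names are ours, not the authors'):

```python
import itertools

# 8 multipliers x 6 powers of ten = 48 learning rates, as in the paper.
learning_rates = [m * 10 ** e
                  for e in range(-6, 0)
                  for m in (1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5)]
# 10 dropout rates: 0.0, 0.1, ..., 0.9.
dropout_rates = [round(0.1 * i, 1) for i in range(10)]

# The cross product gives the 480 NN configurations trained per architecture.
configs = list(itertools.product(learning_rates, dropout_rates))
print(len(configs))  # -> 480
```

Each (learning rate, dropout) pair corresponds to one trained network in the reported 480.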
|
How do you define the training set for a specific architecture? For one architecture, and one dataset, you have a single time-varying graphs, no?
|
Due to the space limitation we moved the experimental configuration to Appendix A.1. For each architecture and dataset, we train 480 NNs (48 learning rates and 10 dropout values) and then we construct the time-evolving graphs corresponding to those different NN configurations.
|
null | false
| null |
What is room temperature?
|
Room temperature is a range of air temperatures that most people prefer for indoor settings. This range is generally accepted to be around 20–22 °C (68–72 °F). While it is a matter of personal preference, keeping temperatures outside of these ranges can lead to discomfort and even aggression.
|
|
null | false
| null |
Retirement age
Country Men Women Year Notes Ref
Albania 65 61 2020
Argentina 65 60
Armenia 63 2011
Australia 67 2021 In Australia the retirement age is being increased gradually to 67 years by July 2023.
Austria 65 60 2015 In Austria the retirement age for women is to be equalized to the retirement age for men (65) by 2033.
Azerbaijan 65 60 2017 In Azerbaijan the retirement age is to be increased gradually to 65 years by 2021 (for men) and by 2027 (for women)
Belarus 63 58 2021 By 2022, the age will be 63 for men and 58 for women.
Bangladesh 59 2013
Belgium 60–65 2019 The legal retirement age (the age at which one can retire, regardless of career length) in Belgium is 65 in 2019. in 2025 it will be 66 and in 2030 it will be 67, both for women and men.
Early retirement is possible from 60 onwards with a career of at least 44 years, from 61 onwards with at least 43 years, or from 63 onwards with a career of at least 42 years. Some exceptions exist, mainly in the required number of years.
A career year is considered if it contains at least 104 days (in full time equivalent).
Bosnia and Herzegovina 65 2011
Brazil 65 62 2019 Certain individuals, such as rural workers, teachers and police officers, have a lower minimum age.
Brazil also requires workers to have contributed to social security for a minimum amount of time before they become eligible to claim benefits. To start receiving partial benefits, all private-sector workers are required have contributed for at least 20 years (for men) or 15 years (for women). Public-sector workers are required to have contributed for at least 25 years. To receive full benefits all workers must have contributed for at least 40 years (for men) or 35 years (for women).
British Virgin Islands 65 2017
Bulgaria 64.083 61.167 2018 In Bulgaria the retirement age is to be increased gradually and reach 65 years by 2029 for men and by 2037 for women.
Cameroon 50–60 2019 The legal retirement age at which one (men or women) can retire is 60 with at least 20 years of coverage and at least 180 months of contributions, including 60 months in the last 10 years. Employment must cease.
Early retirement age is 50 with at least 20 years of coverage and at least 180 months of contributions, including 60 months in the last 10 years. The pension is payable abroad only under reciprocal agreement.
Canada 60–65
Further information: Pensions in Canada
The standard age to begin receiving a CPP retirement pension is when one attains age 65 (the month following the 65th birthday). However, one may receive a reduced CPP retirement pension as early as the month following the 60th birthday. Alternatively, one may receive an increased pension after reaching age 65. Canada also has a pension supplement with different rules called Old Age Security (OAS).
Chile 65 60
China 60 50–55 2011 The retirement age in China currently is 60 for men and 55 for female civil servants and 50 for female workers.
Colombia 62 57 2014
Croatia 65 62.75 2021 By 2030 there will be an equal age for women and men set at 65.
Cuba 65 60 2015 The retirement age threshold was increased by 5 years in 2015
Cyprus 65 2011
Czech Republic 62.833 58–62 2015 In the Czech Republic, in the year 2015, men had the retirement age of 62 years 10 months and women had it between 58 and 62, depending on number of children. In Czech Republic, the retirement age is in the process of being increased, and therefore depends on year of birth (for individuals born after 1977 it may exceed even 67, e.g. a person born in year 1995 must be at least 70 years old.) For women the retirement age depends on the number of raised children as well. For people born in 1975, the retirement age will be the same (66y8m) regardless of sex and number of children raised; and this age will reach 67 for people born in 1977.
Denmark 67 2021 In Denmark, the retirement age will be increased gradually to reach 67 years by 2022. From 2030 onwards, it will be increased a maximum of one year every five years depending on increases in average lifespan.
See also: Pensions in Denmark
Egypt 60 2015
Estonia 64 2021 In Estonia the retirement age is to be increased gradually to 65 years by 2026. After 2026, it will be linked to the average life expectancy
Finland 63.75–68 2021
France 62–67 2018 The minimal retirement age has gradually increased from 60 to 62 years by 2018.
See also: Pensions in France
Georgia 65 60 2011
Germany 60–67 2021 In Germany the retirement age is to be increased gradually and reach 67 years by 2029. For a long time the most common mandatory retirement age was 65, although in East Germany it was 60.
See also: Pensions in Germany
Greece 67 2021
Hong Kong 60–65 2017 Retirement age 65. Early retirement possible between the ages of 60 and 64. Some disciplined services staff of the government have lower retirement age.
Hungary 65 2021 The age was 63 in 2018, but was raised to 65 by 2022. Women with 40 years of insurance can retire at any age.
Iceland 67 2007
India 60–65 2014 In the public sector, the retirement age is 62 while in the private sector it depends on the individual company and the maximum being 65.
Indonesia 58 2022 In Indonesia, provisions relating to pensions are regulated in Government Regulation Number 45 of 2015 Article 15 concerning the Implementation of the Pension Guarantee Program, in PP 45/2015 the following matters are regulated:
For the first time the Retirement Age is set at 56 (fifty six years). Starting January 1, 2019, the retirement age as referred to in paragraph (1) will be 57 (fifty seven) years. The Retirement Age as referred to in paragraph (2) is further increased by 1 (one) year for every subsequent 3 (three) years until it reaches the Retirement Age of 65 (sixty five) years. By referring to the regulation, the retirement age limit in Indonesia is 58 years in 2022 and will reach the maximum retirement age limit, which is 65 years in 2043.
Iran 60 55 2018
Ireland 66 2021 In Ireland the retirement age is to be increased gradually and reach 68 years by 2028.
Israel 67 62 2011
Italy 62–67 2021 Must have paid contributions for at least 20 years (At 67 years and 3 months).
Those who have paid contributions for at least 41 years can retire at 62.
Those who have paid contributions for at least 41 years and 10 months (women) or 42 years and 10 months (men) can retire regardless of age.
Japan 64 62 2022
See also: Pensions in Japan and Elderly people in Japan
While the government is at it with early retirement prevention, the age is expected to increase gradually to 65 years of age by 2025.
Kazakhstan 63 58 2015 From 2017 the retirement age for women is to be increased gradually and reach 63 years in 2027
Kosovo 65 65 2014
North Korea 60 55 1999
South Korea 60 2016 Employers with more than 300 employees are required to extend the retiring age to 60. From 1 January 2017, it will be mandatory for all employers nationwide.
Kyrgyzstan 63 58 2011
Latvia 64 2021 The age will be 65 by 2025.
Libya 65 60 2017
Liechtenstein 64 2007
Lithuania 64.167 63.333 2021 In Lithuania, the retirement age will be raised to 65 for both men and women by 2026.
Luxembourg 65 2011
Malaysia 60 2013 In Malaysia, The Congress of Unions of Employees in the Public and Civil Services (Cuepacs) wants the government to consider extending the retirement age for civil servants from 60 to 62, but the government has no immediate plan to extend it as the current retirement age is deemed as sufficient.
Malta 63 2021 In Malta the retirement age is being increased gradually to 65 years by 2027.
Mexico 65 2015 Retirement age is expected to be increased in the coming years.
Moldova 63 60 2022 Retirement age for women is increasing every 6 months until it reaches 63 years in 2028
Montenegro 66 64 2022
Morocco 63 2014 Abdelilah Benkirane increased the retirement age to 63 since 2015.
Namibia 50–60 2015 The early retirement age for public employees is 55 years, but will be reduced to 50 years (in 2016).
Nepal 65 2021
Netherlands 66–68 2018 Although official retirement age is 68, AOW (Algemene Ouderdomswet, meaning General Old Age Law) state pension will be received starting at the age of 66. AOW eligibility is tied to life expectancy and will gradually increase to 67 in 2021.
New Zealand 65 2019
North Macedonia 64 62 2011
Norway 62–67 2018
See also: Pensions in Norway
The general retirement age is currently set to age 67 however, given sufficient pension contributions it is possible to retire as early as at age 62. The longer an individual postpones withdrawing a pension, the greater the government pension provision becomes.
Oman 65 2013 The age is 60 if in hazardous or unhealthy occupations.
Pakistan 60 2012
Peru 60 2018
Philippines 60 1990 The retirement age for an employee depends on the employment contract. Upon retirement, the retired employee should be given his/her benefits according to the agreement or contract between the employer and the employee. However, if there is no existing retirement plan or agreement for the employee, he/she may retire at the age of 60, given that he/she has served the employer for 5 years, and shall be given a retirement pay of at least half a month's salary for every year of service (6 months of work given is considered as 1 whole year for the retirement pay).
Poland 65 60 2016
Portugal 66.5 2021
Romania 65 61.75 2019 The age for women is being increased gradually. It will reach 63 by 2030.
Russia 61.5 56.5 2021 From 2019 the retirement age for men (women) would gradually increase from 60 (55) to 65 (60) years by 2028; first it was intended to hike the age for women to 63 but later the plan was softened.
Saudi Arabia 60 2014 In Saudi Arabia, the retirement age is based on the Hijiri (lunar) calendar.
Serbia 65 63.5 2023 Both men and women can retire when they reach their respective retirement age and have at least 15 years of insurance coverage. Or, have 45 years of insurance coverage regardless of age. The retirement age for women is increased by 2 months every year and it will be the same as for men, 65 in 2032.
Singapore 62–65 2012 In Singapore, the Retirement Age Act (RAA) has been replaced by the Retirement and Re-employment Act (RRA) in 2012. Under the RRA, the statutory minimum retirement age is still 62, but employers are now required to offer re-employment to eligible employees who turn 62, up to the age of 65. The bill will gradually increase the retirement and re-employment ages for Singapore employees to 65 and 70 years old, respectively. By 2030, the retirement age will have been increased to 65 and the re-employment age will have been raised to 70, in a step-by-step approach.
Slovakia 62 2017 In Slovakia the retirement age for women depends on the number of children. The retirement age will be equalized for men and women at 62 in 2017. The retirement age as of October 2022 is 63 years with the conditions.
Slovenia 65 2021
Spain 65 2021 The age will be 67 by 2027.
See also: Pensions in Spain
Sri Lanka 55 2014
Sweden 61–67 2021
Switzerland 65 64 2021 65 for women by January 2025.
Taiwan 66 2015 In Taiwan the retirement age is to be increased gradually and reach 68 years by 2028.
Tajikistan 63 58 2011
Thailand 60 2015 except a president of a university can work beyond 60 years
Trinidad and Tobago 60–65 2015
Tunisia 62–65 2019
Turkmenistan 62 57 2011
Turkey 60 58 2014 Retirement age was gradually increased since 1980s, from 45 for men and 40 for women Current ages will increase to 65 for both genders by 2048. Additionally, various minimum days of service is required, which is currently 7000 days. It will become 7200 days (20 years) by 2048. One is subject to the laws on the day he/she started working.
Ukraine 60 2021
United Arab Emirates 65 2010 In the United Arab Emirates the mandatory retirement age was raised from 60 to 65 in 2010, enabling the UAE to retain its needed expat skilled work force longer for key construction projects.
United Kingdom 66–68 2019 State pension age equalized at 65 in 2018. It increased to 66 on 6 October 2020, and will go up to 67 by 2028 and 68 by 2037.
See also: Pensions in the United Kingdom
United States 62–67 2023
Main article: Pensions in the United States
Retirees are eligible to receive reduced Social Security payments at the age of 62. People 65 and over are eligible to receive Medicare benefits if they paid Medicare taxes for at least 10 years. The full retirement age is 67 for everyone born in 1960 or later. If a prospective retiree delays taking out benefits from 67 up to age 70, benefit amount will increase.
Uruguay 60–70 2009 60 years and 30 working years minimum (1995), or 65 years and 25 working years and progressive to 70 in age and 15 working years (2009).
Uzbekistan 60 55 2011
Venezuela 60 55 2015
Vietnam 60.5 55.67 2022 The retirement age will gradually increase to 62 for males by 2028 and 60 for females by 2035. In 2021, the retirement age is 60.25 (age 60 and 3 months) for males and 50.33 (age 50 and 4 months) for women, the age will be increased by 3 months each year following for males and 4 months for women.
|
Extract the retirement age in France, Germany, Sweden, Austria, and Poland for men and women.
|
France:{"Men":"62-67", "Women":"62-67"}
Germany:{"Men":"60-67", "Women":"60-67"}
Sweden:{"Men":"61-67", "Women":"61-67"}
Austria:{"Men":"65", "Women":"60"}
Poland:{"Men":"65", "Women":"60"}
|
null | false
| null |
The 1986 World Series was the championship series of Major League Baseball's (MLB) 1986 season. The 83rd edition of the World Series, it was a best-of-seven playoff played between the National League (NL) champion New York Mets and the American League (AL) champion Boston Red Sox. The Mets won the series four games to three to claim their second World Series title and first since 1969. The series is best remembered for its Game 6, which saw the Mets rally from a two run-deficit in the bottom of the 10th inning, despite having two outs and no one on base. The Red Sox, who held a 3–2 series lead, were twice one strike away from securing the championship, but failed to close out the inning as the Mets won off an error by Boston first baseman Bill Buckner. Due to the Mets claiming the series in Game 7, the Game 6 collapse entered baseball lore as part of the Curse of the Bambino superstition used to explain the Red Sox's championship drought after the 1918 World Series.
|
Who won the 1986 Major League Baseball World Series and in how many games?
|
The New York Mets beat the Boston Red Sox, 4 games to 3, to win the World Series. The Red Sox were a strike away from winning the series in Game 6, but the Mets won the game after an error by first baseman Bill Buckner. The Mets went on to win Game 7. This became another data point in the "Curse of the Bambino" superstition that started in 1918, when the Red Sox traded Babe Ruth to the New York Yankees and had not won a World Series since (up to that point in time).
|
null | false
| null |
What is the Role of the Fourth Official in Football?
|
The main role of the Fourth Official in football is to assist the referee in his control of the match. All the officials should do so under the FIFA Football Laws of the Game.
The 4th Official would officiate if any of the other three match officials (referee or assistant referees) are unable to continue to administer the rules and regulations.
|
|
null | false
| null |
What is 75 hard and what are the tasks?
|
75 hard is a mental toughness challenge created by Andy Frisella. For 75 consecutive days, you follow the daily tasks as prescribed, no shortcuts. If you miss a task, you reset to day 1 and start all over again.
The tasks include:
- Take a daily progress picture.
- Follow a diet. No cheating and no alcohol.
- Perform TWO separate 45 minute workouts, one of which needs to be outside.
- Drink 1 gallon of water.
- Read 10 pages of a non-fiction book. Audiobooks don't count.
|
|
null | false
| null |
What are the top five brands of guitars?
|
Fender
Gibson
Martin
Gretsch
Taylor
|
|
null | false
| null |
Tell me whether these are cheeses or not: Stinking Bishop, Gouda, Gonzo, Brie, Partisano
|
Stinking Bishop, Gouda, and Brie are cheeses. Gonzo and Partisano are not.
|
|
null | false
| null |
Identify which instrument is string or percussion: Tassa, Sitar
|
Sitar is string, Tassa is percussion.
|
|
null | false
| null |
Stingrays are a group of sea rays, which are cartilaginous fish related to sharks. They are classified in the suborder Myliobatoidei of the order Myliobatiformes and consist of eight families: Hexatrygonidae (sixgill stingray), Plesiobatidae (deepwater stingray), Urolophidae (stingarees), Urotrygonidae (round rays), Dasyatidae (whiptail stingrays), Potamotrygonidae (river stingrays), Gymnuridae (butterfly rays) and Myliobatidae (eagle rays). There are about 220 known stingray species organized into 29 genera.
|
How many species of Stingrays are there?
|
There are about 220 known stingray species organized into 29 genera.
|
null | false
| null |
What are the five best sports cars to buy right now, under $200K USD?
|
Assuming no authorized dealer markups (ADMs), the following cars are often considered among the best sports cars under $200K:
1. Porsche 992 GT3
2. Audi R8 V10
3. Chevrolet C8 Z06 Corvette
4. Porsche 718 GT4
5. Acura NSX
|
|
null | false
| null |
What are some fashion ideas that haven't been done before?
|
- Tiny shoes for your ears
- Giant wrestling belt worn with a suit
- Sideways pants (both legs in one pant, the other pant just hanging behind you)
- Shave leg hair into tiger stripe pattern
- Fashionable hazmat suit
|
|
null | false
| null |
Why are dogs so good at smelling?
|
Dogs have approximately 300 million olfactory sensors as opposed to 6 million in humans. Their sense of smell is about 40 times better than humans. They are also neophilic which means they are attracted to new and interesting smells. They also have the ability to discern which nostril a smell emanated from.
|
|
2004.02083
| false
| null |
Morphological inflection has been thoroughly studied in monolingual high resource settings, especially through the recent SIGMORPHON challenges BIBREF8, BIBREF9, BIBREF10, with the latest iteration focusing more on low-resource settings, utilizing cross-lingual transfer BIBREF11. We use the guidelines of the state-of-the-art approach of BIBREF12 that achieved the highest inflection accuracy in the latest SIGMORPHON 2019 morphological inflection shared task. Our models are implemented in DyNet BIBREF13.
Morphological analysis is the task of creating a morphosyntactic description for a given word. It can be framed in a context-agnostic manner (as in our case) or within a given context, as for instance for the SIGMORPHON 2019 second shared task BIBREF11. We trained an encoder-decoder model that receives the form as character-level input, encodes it with a BiLSTM encoder, and then an attention enhanced decoder BIBREF14 outputs the corresponding sequence of morphological tags, implemented in DyNet. The baseline results are shown in Table . The exact-match accuracy of 67% is lower than the average accuracy that context-aware systems can achieve, and it highlights the challenge that the complexity of the tonal system of SJQ Chatino can pose.
Lemmatization is the task of retrieving the underlying lemma from which an inflected form was derived. Although in some languages the lemma is distinct from all forms, in SJQ Chatino the lemma is defined as the completive third-person singular form. As a computational task, lemmatization entails producing the lemma given an inflected form (and possibly, given a set of morphological tags describing the input form). Popular approaches tackle it as a character-level edit sequence generation task BIBREF15, or as a character-level sequence-to-sequence task BIBREF16. For our baseline lemmatization systems we follow the latter approach. We trained a character level encoder-decoder model, similar to the above-mentioned inflection system, implemented in DyNet.
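As an illustration of this framing (the helper and example are ours, not from the paper), the inflected form is consumed character by character, optionally prefixed by its morphological tags, and the lemma is produced character by character:

```python
# Hypothetical helper showing how a (form, tags, lemma) triple becomes a
# character-level sequence-to-sequence training example.
def make_seq2seq_example(form, tags, lemma):
    source = list(tags) + list(form)  # tag tokens, then the form's characters
    target = list(lemma)              # the lemma's characters
    return source, target

src, tgt = make_seq2seq_example("running", ["V", "PRS", "PTCP"], "run")
# src: ['V', 'PRS', 'PTCP', 'r', 'u', 'n', 'n', 'i', 'n', 'g']
# tgt: ['r', 'u', 'n']
```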
We use the guidelines of the state-of-the-art approach of BIBREF12 that achieved the highest inflection accuracy in the latest SIGMORPHON 2019 morphological inflection shared task. Our models are implemented in DyNet BIBREF13.
We trained an encoder-decoder model that receives the form as character-level input, encodes it with a BiLSTM encoder, and then an attention enhanced decoder BIBREF14 outputs the corresponding sequence of morphological tags, implemented in DyNet.
We trained a character level encoder-decoder model, similar to the above-mentioned inflection system, implemented in DyNet.
|
What are the architectures used for the three tasks?
|
The answers are shown as follows:
* DyNet
|
null | false
| 160
|
Learning the distributed representation for long spans of text from its constituents has been a key step for various natural language processing (NLP) tasks, such as text classification BIBREF0 , BIBREF1 , semantic matching BIBREF2 , BIBREF3 , and machine translation BIBREF4 . Existing deep learning approaches take a compositional function with different forms to compose word vectors recursively until obtaining a sentential representation. Typically, these compositional functions involve recurrent neural networks BIBREF5 , BIBREF6 , convolutional neural networks BIBREF7 , BIBREF8 , and tree-structured neural networks BIBREF9 , BIBREF10 .
Among these methods, tree-structured neural networks (Tree-NNs) show their superior performance in many NLP tasks BIBREF11 , BIBREF12 . Following the syntactic tree structure, Tree-NNs assign a fixed-length vector to each word at the leaves of the tree, and combine word and phrase pairs recursively to create intermediate node vectors, eventually obtaining one final vector to represent the whole sentence.
However, these models have a major limitation in their inability to fully capture the richness of compositionality BIBREF13 . The same parameters are used for all kinds of semantic compositions, even though the compositions have different characteristics in nature. For example, the composition of the adjective and the noun differs significantly from the composition of the verb and the noun. Moreover, many semantic phenomena, such as semantic idiomaticity or transparency, call for more powerful compositional mechanisms BIBREF14 . Therefore, Tree-NNs suffer from the underfitting problem.
To alleviate this problem, some researchers propose to use multiple compositional functions, which are arranged beforehand according to some partition criterion BIBREF11 , BIBREF13 , BIBREF15 . Intuitively, using different parameters for different types of compositions has the potential to greatly reduce underfitting. BIBREF13 [ BIBREF13 ] defined different compositional functions in terms of syntactic categories, and a suitable compositional function is selected based on the syntactic categories. BIBREF15 [ BIBREF15 ] introduced multiple compositional functions and during compositional phase, a proper one is selected based on the input information. Although these models accomplished their mission to a certain extent, they still suffer from the following three challenges. First, the predefined compositional functions cannot cover all the compositional rules; Second, they require more learnable parameters, suffering from the problem of overfitting; Third, it is difficult to determine a universal criterion for semantic composition based solely on syntactic categories.
In this paper, we propose dynamic compositional neural networks over tree structure, in which a meta network is used to generate the context-specific parameters of a dynamic compositional network. Specifically, we construct our models based on two kinds of tree-structured neural networks: the recursive neural network (Tree-RecNN) BIBREF11 and the tree-structured long short-term memory neural network (Tree-LSTM) BIBREF9 . Our work is inspired by recent work on dynamic parameter prediction BIBREF16 , BIBREF17 , BIBREF18 . The meta network is used to extract the meta-knowledge shared across different compositional rules and to dynamically generate the context-specific compositional function. Thus, the compositional function of our models varies with positions, contexts and samples. The dynamic compositional network then applies those context-specific parameters to the current input information. Both the meta and dynamic networks are differentiable, so the overall network can be trained in an end-to-end fashion. Additionally, to reduce the complexity of the whole network, we define the dynamic weight matrix in a manner simulating low-rank matrix decomposition.
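The low-rank idea can be sketched as follows (dimensions, nonlinearity, and names are illustrative assumptions, not the paper's exact formulation): the meta network only emits a small core vector, and the full dynamic matrix is assembled from shared factors.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden, rank = 8, 3

# Shared factors, reused for every composition.
U = rng.standard_normal((hidden, rank))
V = rng.standard_normal((rank, hidden))
# Meta-network weights mapping a context vector to the small dynamic core.
M = rng.standard_normal((rank, hidden))

def dynamic_weight(context):
    d = np.tanh(M @ context)      # context-specific core: only `rank` numbers
    return U @ np.diag(d) @ V     # full matrix, rank at most `rank`

W = dynamic_weight(rng.standard_normal(hidden))
print(W.shape)  # -> (8, 8)
```

Because only the core vector depends on the context, the number of context-specific values per composition stays small while the composed matrix still varies with positions, contexts and samples.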
We evaluate our models on two typical tasks: text classification and text semantic matching. The results show that our models are more expressive due to their learning-to-learn nature, yet without increasing the number of model parameters. Moreover, we find that certain composition operations, such as the composition of noun phrases and verb phrases, can be learned implicitly by the meta TreeNN.
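The core idea described above can be illustrated with a toy numerical sketch: a meta network reads the two child vectors and emits a context-specific, low-rank weight matrix that is then used to compose them. The shapes, the random static parameters `P` and `Q`, and the function name `dynamic_compose` are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2  # hidden size, rank of the dynamic weight (toy values)
# Static meta-network parameters, shared across all compositions.
P = rng.standard_normal((d * r, 2 * d)) * 0.1      # generates left factor U
Q = rng.standard_normal((r * 2 * d, 2 * d)) * 0.1  # generates right factor V

def dynamic_compose(h_left, h_right):
    """Compose two child vectors with a context-generated weight matrix.

    The meta network reads the children and emits the parameters of the
    compositional function as a low-rank product U @ V, so the effective
    (d x 2d) weight matrix varies with the input instead of being fixed.
    """
    z = np.concatenate([h_left, h_right])  # meta-network input, shape (2d,)
    U = (P @ z).reshape(d, r)              # context-specific left factor
    V = (Q @ z).reshape(r, 2 * d)          # context-specific right factor
    W = U @ V                              # dynamic weight, rank <= r
    return np.tanh(W @ z)                  # parent representation, shape (d,)

h_l = rng.standard_normal(d)
h_r = rng.standard_normal(d)
parent = dynamic_compose(h_l, h_r)
```

Because `W` is built from rank-`r` factors, the meta network only has to emit `d*r + r*2d` numbers per composition rather than a full `d x 2d` matrix, which is the complexity reduction the low-rank decomposition buys.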
The contributions of the paper can be summed up as follows.
In this paper, we propose dynamic compositional neural networks over tree structure, in which a meta network is used to generate the context-specific parameters of a dynamic compositional network.
|
What method does the author propose in this paper?
|
The authors propose dynamic compositional neural networks over tree structure, in which a meta network is used to generate the context-specific parameters of a dynamic compositional network.
|
null | false
| null |
What do you know about the Fatimid Empire?
|
The Fatimid dynasty was a Shia Ismaili empire that existed from the tenth to the twelfth centuries AD (909 - 1171 CE). The Fatimids were of Arab origin, descending from the Alids: they traced their lineage to Ali ibn Abi Talib, the first Shia Imam, and took their name from Fatima al-Zahra bint Mohammed, the wife of Ali ibn Abi Talib. At its height, their reign spanned from North Africa to the Middle East and parts of Europe and Asia, covering the present-day areas of North Africa, Egypt, Sudan, Sicily, the Levant, Yemen, Multan and the Hijaz.
Abdullah al-Mahdi Billah was the founder of the Fatimid dynasty. Today, several Ismaili Shia sects claim descent from the Fatimids, including the Druze, the Nizaris, the Taiyabis and the Dawoodi Bohras.
|
|
null | false
| null |
Tell me whether these styles are related to facial hair or head hair: mohawk, buzz cut, goatee, soul patch, mustache, chin strap, bowl cut
|
Mohawk: Head Hair
Buzz Cut: Head Hair
Goatee: Facial Hair
Soul Patch: Facial Hair
Mustache: Facial Hair
Chin Strap: Facial Hair
Bowl Cut: Head Hair
|
|
null | false
| null |
How many slide positions are there on a trombone
|
There are 7 slide positions on a trombone, each a half step (semitone) apart from the next. In combination with changes in air pressure, a trombone player adjusts the slide position to produce various tones at various octaves.
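As a quick arithmetic sketch of what "each position is a half step apart" means: extending the slide by one position divides the sounding frequency by the twelfth root of two. The 440 Hz starting value below is an arbitrary reference pitch for illustration, not the trombone's actual first-position fundamental.

```python
# Each successive slide position lowers the pitch by one semitone,
# i.e. multiplies the frequency by 2 ** (-1/12).
base_hz = 440.0  # illustrative first-position pitch
positions = {n: round(base_hz * 2 ** (-(n - 1) / 12), 2) for n in range(1, 8)}
# positions[1] is 440.0; positions[7], six semitones lower, is about 311.13.
```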
|
|
null | false
| null |
Imagine you are the last person on Earth. Write a diary entry describing your thoughts and feelings.
|
Dear Diary,
As warned a few days back, all the humans on planet Earth have disappeared, and I am now the last person on Earth. There are a lot of positives and, at the same time, many negatives.

I now have the freedom to do what I want without anyone standing in my way. I can live my life as I wish and no longer have to follow the rules and restrictions of society. For now, I can live the “Hakuna Matata” life without worrying about my future.

Alas! There is also a negative side to this heaven on Earth. Now that the world’s farmers have disappeared, perishable food will last only a few days and will start to spoil if not refrigerated. All that will be left are canned and packaged foods, and even those will expire after a few years and eventually start to decompose.

Soon, loneliness will also set in, for man has evolved into a communal organism. Humans are born into an interdependent society where everyone depends on everyone else for their needs and wants; without a community, the human race would probably perish. Eventually, with no edible food remaining, I may have to go out hunting for my meals like the early hunter-gatherers, or farm the land for three meals a day, and I will have to forget about the worldly wants that human society created, like electronic devices and automobiles.
These devices would be of no use any more, as within a few months the power reserves will run out, leaving no energy to put them to good use.
Finally, I would miss my loved ones, friends, family, etc.
The memories, as I predict, may start to haunt me, and they may also begin to give me nightmares.
Memories would eventually turn to sadness, transforming into depression, but I would learn to live with it. Except for these changes, I don't think any other change will take place in my life.
With this, I end it here.
Yours,
(Name)
|
|
null | false
| 137
|
Pre-trained models have demonstrated their effectiveness in many downstream natural language processing (NLP) tasks. The availability of multilingual pre-trained models enables zero-shot transfer of NLP tasks from high resource languages to low resource ones. However, recent research in improving pre-trained models focuses heavily on English. While it is possible to train the latest neural architectures for other languages from scratch, it is undesirable due to the required amount of compute. In this work, we tackle the problem of transferring an existing pre-trained model from English to other languages under a limited computational budget. With a single GPU, our approach can obtain a foreign BERT base model within a day and a foreign BERT large within two days. Furthermore, evaluating our models on six languages, we demonstrate that our models are better than multilingual BERT on two zero-shot tasks: natural language inference and dependency parsing.
In this work, we tackle the problem of transferring an existing pretrained model from English to other languages under a limited computational budget.
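One common ingredient in this kind of cross-lingual transfer is reusing the pretrained transformer body and only re-initializing the input embedding table for the foreign vocabulary, seeding it from English embeddings through a bilingual dictionary. The sketch below is a hedged, toy illustration of that idea; the vocabularies, the dictionary, and the helper `init_foreign_embeddings` are invented for the example and are not the paper's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy English embedding table (vocab 6, dim 4) standing in for the
# pretrained model's input embeddings; everything here is illustrative.
en_vocab = {"the": 0, "cat": 1, "dog": 2, "house": 3, "[UNK]": 4, "[PAD]": 5}
en_emb = rng.standard_normal((len(en_vocab), 4))

# A small bilingual dictionary mapping foreign word -> English word.
fr_to_en = {"le": "the", "chat": "cat", "chien": "dog", "maison": "house"}
fr_vocab = {w: i for i, w in enumerate(["le", "chat", "chien", "maison", "voiture"])}

def init_foreign_embeddings(fr_vocab, fr_to_en, en_vocab, en_emb):
    """Initialize foreign embeddings from aligned English ones.

    Words with a dictionary translation copy the English vector; the rest
    start from the English [UNK] vector plus small noise. The transformer
    body is reused unchanged, so only this table needs further training.
    """
    out = np.empty((len(fr_vocab), en_emb.shape[1]))
    for word, idx in fr_vocab.items():
        if word in fr_to_en:
            out[idx] = en_emb[en_vocab[fr_to_en[word]]]
        else:
            out[idx] = en_emb[en_vocab["[UNK]"]] + 0.01 * rng.standard_normal(en_emb.shape[1])
    return out

fr_emb = init_foreign_embeddings(fr_vocab, fr_to_en, en_vocab, en_emb)
```

Seeding from translated embeddings gives the new language a warm start, which is one way such transfers can fit within a small compute budget compared to pretraining from scratch.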
|
What problem does the work tackle?
|
The problem of transferring an existing pretrained model from English to other languages under a limited computational budget.
|
null | false
| null |
As an expert skier, what are the top resorts in the US to go skiing?
|
There are many ski resorts in the United States that offer challenging terrain for expert-level skiers. Some of the top resorts are Big Sky, Jackson Hole, Palisades, Crested Butte, Telluride, and Silverthorn.
|