Dataset schema (column: type and value range):
paper_id: string (length 10)
yes_no: bool (2 classes)
paper_index: int64 (0–519)
evidence: string (length 0–37.7k)
question: string (length 4–11.7k)
answer: string (length 1–26k)
null
false
146
What would be possible if a person had an oracle that could immediately provide the answer to any question about the visual world? Sight-impaired users could quickly and reliably figure out the denomination of their currency and so whether they spent the appropriate amount for a product BIBREF0. Hikers could immediately learn about their bug bites and whether to seek out emergency medical care. Pilots could learn how many birds are in their path to decide whether to change course and so avoid costly, life-threatening collisions. These examples illustrate several potential uses of a visual question answering (VQA) system, including tackling problems that involve classification, detection, and counting. More generally, the goal of VQA is to have a single system that can accurately answer any natural language question about an image or video BIBREF1, BIBREF2, BIBREF3. Entangled in the dream of a VQA system is an unavoidable issue: when asking multiple people a visual question, sometimes they all agree on a single answer, while other times they offer different answers (Figure FIGREF1). In fact, as we show in the paper, these two outcomes arise in approximately equal proportions in today's largest publicly-shared VQA benchmark, which contains over 450,000 visual questions. Figure FIGREF1 illustrates that human disagreements arise for a variety of reasons, including different descriptions of the same concept (e.g., “minor" and “underage"), different concepts (e.g., “ghost" and “photoshop"), and irrelevant responses (e.g., “no"). Our goal is to account for whether different people would agree on a single answer to a visual question, in order to improve upon today's VQA systems. We propose multiple prediction systems to automatically decide whether a visual question will lead to human agreement, and demonstrate the value of these predictions for a new task of capturing the diversity of all plausible answers with less human effort.
Our work is partially inspired by the goal of improving how to employ crowds as the computing power at run-time. Towards satisfying existing users, gaining new users, and supporting a wide range of applications, a crowd-powered VQA system should have low cost, fast response times, and high quality answers. Today's status quo is to assume a fixed number of human responses per visual question, and so a fixed cost, delay, and potential diversity of answers for every visual question BIBREF2, BIBREF0, BIBREF4. We instead propose to dynamically solicit the number of human responses based on each visual question. In particular, we aim to accrue additional costs and delays from collecting extra answers only when extra responses are needed to discover all plausible answers. We show in our experiments that our system saves 19 40-hour work weeks and $1800 to answer 121,512 visual questions, compared to today's status quo approach BIBREF0. Our work is also inspired by the goal of improving how to employ crowds to produce the information needed to train and evaluate automated methods. Specifically, researchers in fields as diverse as computer vision BIBREF2, computational linguistics BIBREF1, and machine learning BIBREF3 rely on large datasets to improve their VQA algorithms. These datasets include visual questions and human-supplied answers. Such data is critical for teaching machine learning algorithms how to answer questions by example, and for evaluating how well VQA algorithms perform. In general, “bigger" data is better. Current methods to create these datasets assume a fixed number of human answers per visual question BIBREF2, BIBREF4, thereby compromising either on quality, by not collecting all plausible answers, or on cost, by collecting additional answers when they are redundant. We offer an economical way to spend a human budget to collect answers from crowd workers.
In particular, we aim to actively allocate additional answers only to visual questions likely to have multiple answers.
How many visual questions have been answered in the experiments?
121,512 visual questions
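The dynamic-solicitation idea described in the evidence can be sketched as a simple stopping rule. This is a minimal illustration under assumptions, not the paper's actual prediction system: the agreement threshold, response counts, and `ask_crowd` callback are all hypothetical.

```python
from collections import Counter

def collect_answers(ask_crowd, min_responses=3, max_responses=10, agree_frac=0.8):
    """Solicit crowd answers one at a time, stopping early once a single
    answer dominates, so only ambiguous visual questions accrue extra cost."""
    answers = []
    for _ in range(max_responses):
        answers.append(ask_crowd())
        if len(answers) >= min_responses:
            top, count = Counter(answers).most_common(1)[0]
            if count / len(answers) >= agree_frac:
                break  # workers agree; no need to pay for more responses
    return answers

# Toy crowd that always answers "2": collection stops at min_responses.
responses = collect_answers(lambda: "2")
```

Under this rule, questions with unanimous answers cost only the minimum number of responses, while contentious ones spend up to the full budget.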
null
false
null
Goa used to be a colony of which nation
Portugal
null
false
null
Himmerland is a peninsula in northeastern Jutland, Denmark. It is delimited to the north and the west by the Limfjord, to the east by the Kattegat, and to the south by the Mariager Fjord. The largest city is Aalborg; smaller towns include Hobro, Aars, Løgstør, Støvring and Nibe. In northeastern Himmerland is the Lille Vildmose, Denmark's largest raised bog, which sustains a rich bird life of international importance.
Given this paragraph about Himmerland, which is the largest city in Himmerland?
Aalborg is the largest city in Himmerland.
1911.03385
false
null
Given that non-content words are distinctive enough for a classifier to determine style, we propose a suite of low-level linguistic feature counts (henceforth, controls) as our formal, content-blind definition of style. The style of a sentence is represented as a vector of counts of closed word classes (like personal pronouns) as well as counts of syntactic features like the number of SBAR non-terminals in its constituency parse, since clause structure has been shown to be indicative of style BIBREF20. Controls are extracted heuristically, and almost all rely on counts of pre-defined word lists. For constituency parses we use the Stanford Parser BIBREF21. table:controlexamples lists all the controls along with examples.
How they model style as a suite of low-level linguistic controls, such as frequency of pronouns, prepositions, and subordinate clause constructions?
The answers are shown as follows: * style of a sentence is represented as a vector of counts of closed word classes (like personal pronouns) as well as counts of syntactic features like the number of SBAR non-terminals in its constituency parse, since clause structure has been shown to be indicative of style
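The control vector described above can be illustrated with a toy extractor. The word lists here are tiny stand-ins for the paper's pre-defined closed-class lists, and in practice the SBAR count would come from a constituency parser rather than string matching:

```python
# Hypothetical, heavily abbreviated word lists; the paper relies on full
# closed-class lists plus SBAR counts from the Stanford Parser.
PRONOUNS = {"i", "you", "he", "she", "it", "we", "they"}
PREPOSITIONS = {"in", "on", "at", "by", "with", "of", "to"}

def control_vector(sentence):
    """Represent a sentence's style as counts of closed word classes."""
    tokens = sentence.lower().split()
    return {
        "pronouns": sum(t in PRONOUNS for t in tokens),
        "prepositions": sum(t in PREPOSITIONS for t in tokens),
        "length": len(tokens),
    }

vec = control_vector("She walked to the store with me")
```

The resulting count vector is content-blind by construction: it never looks at open-class words, only at function-word and structural frequencies.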
2001.06785
false
null
We collected a total of 657 ratings by 14 volunteers, 5 Italian and 9 non-Italian listeners, spread over the 24 clips and three testing conditions. We conducted a statistical analysis of the data with linear mixed-effects models using the lme4 package for R BIBREF31. We analyzed the naturalness score (response variable) against the following two-level fixed effects: dubbing system A vs. B, system A vs. C, and system B vs. C. We ran separate analyses for Italian and non-Italian listeners. In our mixed models, listeners and video clips are random effects, as they represent a tiny sample of the respective true populations BIBREF31. We keep models maximal, i.e. with intercepts and slopes for each random effect, and remove terms as required to avoid singularities BIBREF32. Each model is fitted by maximum likelihood, and the significance of intercepts and slopes is computed via t-test.
How many people are employed for the subjective evaluation?
The answers are shown as follows: * 14 volunteers
null
false
null
What was Britain called - before it was Britain
Albion
null
false
null
Does a Tesla work well in Michigan?
Yes, a Tesla works great in Michigan. The supercharger network is rarely busy and the range impact from the cold weather isn't terrible. You can easily drive between superchargers and the dual motor models have incredible traction control on the snow covered roads. I have a Tesla and live in Michigan and have no worries or regrets with the car.
null
false
null
What are the most crowded surf spots in the world
Snapper Rocks, Gold Coast, Australia - Rincon, California - Malibu, California - Trestles, California
null
false
null
Who is the lead singer of Limp Bizkit?
Fred Durst is the lead singer of the nu-metal rap band Limp Bizkit.
null
false
185
Learning natural language generation (NLG) models relies heavily on annotated training data. However, most available datasets are collected in a single language (typically English), which restricts deploying the applications to other languages. In this work, we aim at transferring the supervision of a monolingual NLG dataset to unseen languages, so that we can boost performance in low-resource settings. Various methods have been proposed over the years to learn universal cross-lingual word embeddings BIBREF0, BIBREF1, BIBREF2 or sentence encoders BIBREF3, BIBREF4, BIBREF5, which try to encode multilingual texts into a single shared vector space. Despite achieving promising results on cross-lingual classification problems, cross-lingual pre-trained models purposed for NLG tasks remain relatively understudied. The cross-lingual generation problem is challenging for the following reasons. First, it requires the model to understand multilingual input texts and generate multilingual target sequences, so both the encoder and the decoder should be pre-trained together. Second, the many-to-many nature of cross-lingual NLG means the number of language pairs grows with the square of the number of languages. Third, the prediction space of cross-lingual NLG is much larger than that of classification tasks, which makes knowledge transfer of decoders quite critical. Previous work mainly relies on machine translation (MT) systems to map texts to different languages. The first strand of research directly uses MT in a pipeline manner BIBREF6. For example, input written in another language is first translated to English and fed into an NLG model trained on English data; the generated English text is then translated back to the target language. Another strand of work employs MT to generate pseudo training data for language pairs that lack annotations BIBREF7, BIBREF8.
However, such methods have to use multiple MT systems, which makes them suffer from error propagation. Moreover, because pipeline-based methods do not explicitly share the same parameter space across languages, we cannot directly transfer task-specific supervision to other low-resource languages. In this paper, we propose a cross-lingual pre-trained model (named Xnlg) in order to transfer monolingual NLG supervision to other pre-trained languages by fine-tuning. Specifically, Xnlg shares the same sequence-to-sequence model across languages, and is pre-trained with both monolingual and cross-lingual objectives. The model not only learns to understand multilingual input, but is also able to generate specific languages by conditioning on the encoded semantics. Figure FIGREF2 demonstrates how to use Xnlg to perform cross-lingual transfer for downstream tasks. The proposed model enables us to fine-tune the pre-trained model on monolingual NLG training data, and then evaluate it beyond a single language, including zero-shot cross-lingual generation. Besides, we explore several fine-tuning strategies to trade off cross-lingual ability against task ability. In addition, we introduce two cross-lingual NLG datasets (i.e., question generation and abstractive summarization) for evaluation, covering three languages, namely English, Chinese, and French. Experimental results on the NLG tasks show that Xnlg achieves competitive performance compared with the machine-translation-based pipeline model in zero-shot cross-lingual settings.
Why do they propose a cross-lingual pre-trained model?
To transfer monolingual NLG supervision to other pre-trained languages by fine-tuning.
null
false
null
How can I lower my utility bills?
Here are some of the things you can do to lower your utility bills: * check your utility company’s website to see if you qualify for any subsidies * if you’re on a TOU (time of use) plan, try shifting heavy loads, such as electric clothes dryer use, to cheaper time periods * replace your appliances and/or HVAC with more efficient ones * buy solar panels, which will lower the cost of electricity * make sure your home is well insulated
null
false
142
Machine translation has made remarkable progress, and studies claiming it to reach a human parity are starting to appear BIBREF0. However, when evaluating translations of the whole documents rather than isolated sentences, human raters show a stronger preference for human over machine translation BIBREF1. These findings emphasize the need to shift towards context-aware machine translation both from modeling and evaluation perspective. Most previous work on context-aware NMT assumed that either all the bilingual data is available at the document level BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10 or at least its fraction BIBREF11. But in practical scenarios, document-level parallel data is often scarce, which is one of the challenges when building a context-aware system. We introduce an approach to context-aware machine translation using only monolingual document-level data. In our setting, a separate monolingual sequence-to-sequence model (DocRepair) is used to correct sentence-level translations of adjacent sentences. The key idea is to use monolingual data to imitate typical inconsistencies between context-agnostic translations of isolated sentences. The DocRepair model is trained to map inconsistent groups of sentences into consistent ones. The consistent groups come from the original training data; the inconsistent groups are obtained by sampling round-trip translations for each isolated sentence. To validate the performance of our model, we use three kinds of evaluation: the BLEU score, contrastive evaluation of translation of several discourse phenomena BIBREF11, and human evaluation. We show strong improvements for all metrics. We analyze which discourse phenomena are hard to capture using monolingual data only. Using contrastive test sets for targeted evaluation of several contextual phenomena, we compare the performance of the models trained on round-trip translations and genuine document-level parallel data. 
Among the four phenomena in the test sets we use (deixis, lexical cohesion, VP ellipsis, and ellipsis which affects NP inflection), we find VP ellipsis to be the hardest phenomenon to capture using round-trip translations. Our key contributions are as follows: we introduce the first approach to context-aware machine translation using only monolingual document-level data; our approach shows substantial improvements in translation quality as measured by BLEU, targeted contrastive evaluation of several discourse phenomena, and human evaluation; we show which discourse phenomena are hard to capture using monolingual data only.
What are the author's major contributions?
They introduce the first approach to context-aware machine translation using only monolingual document-level data; their approach shows substantial improvements in translation quality as measured by BLEU, targeted contrastive evaluation of several discourse phenomena and human evaluation; they show which discourse phenomena are hard to capture using monolingual data only.
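The round-trip construction of DocRepair training pairs can be sketched as follows. The two translation callbacks are hypothetical stand-ins for the sentence-level MT systems the paper uses; here a toy round trip imitates a context-agnostic system losing cross-sentence consistency:

```python
def make_training_pair(consistent_group, translate_fwd, translate_back):
    """Build one DocRepair-style training example: the noisy input is
    produced by round-trip translating each sentence in isolation (so
    cross-sentence agreement is lost); the target is the original group."""
    inconsistent = [translate_back(translate_fwd(s)) for s in consistent_group]
    return inconsistent, consistent_group

# Toy round trip that drops a discourse marker from each sentence,
# mimicking inconsistencies introduced by context-agnostic translation.
fwd = lambda s: s.replace(" indeed", "")
back = lambda s: s
src = ["It is cold indeed.", "It is windy indeed."]
noisy, target = make_training_pair(src, fwd, back)
```

The model is then trained to map `noisy` back to `target`, i.e. to repair inconsistent groups into consistent ones.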
null
false
148
In the first experiment, we evaluated the DL and ML methods on SNLI, multi-NLI, Quora, and Clinical-QE. For the datasets that did not have development and test sets, we randomly selected two sets, each amounting to 10% of the data, for test and development, and used the remaining 80% for training. For MultiNLI, we used the dev1-matched set for validation and the dev2-mismatched set for testing. Table TABREF28 presents the results of the first experiment. The DL model with GloVe word embeddings achieved better results on three datasets, with 82.80% Accuracy on SNLI, 78.52% Accuracy on MultiNLI, and 83.62% Accuracy on Quora. Logistic Regression achieved the best Accuracy of 98.60% on Clinical-RQE. We also performed a 10-fold cross-validation on the full Clinical-QE data of 8,588 question pairs, which gave 98.61% Accuracy. In the second experiment, we used these datasets for training only and compared their performance on our test set of 850 consumer health questions. Table TABREF29 presents the results of this experiment. Logistic Regression trained on the clinical-RQE data outperformed DL models trained on all datasets, with 73.18% Accuracy. To further validate the performance of the LR method, we evaluated it on question similarity detection. A typical approach to this task is to use an IR method to find similar question candidates, then a more sophisticated method to select and re-rank the similar questions. We followed a similar approach for this evaluation by combining the LR method with the IR baseline provided in the context of SemEval-cQA. The hybrid method combines the score provided by the Logistic Regression model and the reciprocal rank from the IR baseline using a weight-based combination: INLINEFORM0. The weight INLINEFORM0 was set empirically through several tests on the cQA-2016 development set ( INLINEFORM1 ). Table TABREF30 presents the results on the cQA-2016 and cQA-2017 test datasets.
The hybrid method (LR+IR) provided the best results on both datasets. On the 2016 test data, the LR+IR method outperformed the best system in all measures, with 80.57% Accuracy and 77.47% MAP (official system ranking measure in SemEval-cQA). On the cQA-2017 test data, the LR+IR method obtained 44.66% MAP and outperformed the cQA-2017 best system in Accuracy with 67.27%.
How do the authors evaluate the DL and ML methods in the first experiment?
In the first experiment, the authors evaluated the DL and ML methods on SNLI, multi-NLI, Quora, and Clinical-QE. For the datasets that did not have a development and test sets, the authors randomly selected two sets, each amounting to 10% of the data, for test and development, and used the remaining 80% for training. For MultiNLI, the authors used the dev1-matched set for validation and the dev2-mismatched set for testing.
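The weight-based combination of the LR score and the IR reciprocal rank can be sketched as below. The weight value is an assumption for illustration only: the paper's tuned value appears only as an INLINEFORM placeholder in the extracted text.

```python
def hybrid_score(lr_prob, ir_rank, weight=0.8):
    """Combine the Logistic Regression similarity probability with the
    reciprocal rank from the IR baseline. The weight here is a
    placeholder; the paper tuned it on the cQA-2016 dev set."""
    return weight * lr_prob + (1 - weight) * (1.0 / ir_rank)

# A candidate ranked 2nd by IR with an LR probability of 0.9.
s = hybrid_score(0.9, 2)
```

Candidates are then re-ranked by this combined score, letting the LR model promote semantically similar questions that the IR baseline ranked too low.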
null
false
null
How do you play chess?
Chess is a two-person board game, where the objective of the game is to capture your opponents King. The King is just one of sixteen pieces that each player can use during the game. The other fifteen pieces include: Eight pawns, two rooks, two knights, two bishops, and a queen. The pieces are placed on an eight by eight, square board in a fixed starting position. The game begins by the player using the white pieces moving one of their pieces to another square. Each player will then alternate playing one move turns. Each piece has a set of legal moves that it can perform. For example, a bishop can only move diagonally on squares of its same color, whereas rooks can only move vertically or horizontally. The goal of the game is to use your pieces in a strategic manner that ultimately leads to your opponent's King being captured. The most common way to achieve this is by using your pieces to capture your opponent's pieces that are defending the King, and then exploiting this weakness to attack the King. While it's important to play on the offensive, it's equally as important to understand your opponent's strategy, so that you can properly defend against their attempts to capture your own King.
null
false
null
The Popelogan River is a tributary of the eastern shore of the Upsalquitch River, flowing through Addington Parish, in Restigouche County, in the northwest of New Brunswick, Canada. In its course to the south, the Popelogan River flows through an increasingly deep valley in mountainous terrain. Its main tributaries are Gordon Brook and the "Lake Popelogan Branch", which supplies Popelogan Lake. A forest road serves the upper part of the river, from Popelogan Depot.
With the given passage as a reference, write some details about Popelogan River
1. The Popelogan River is a tributary of the Upsalquitch River on the eastern shore of Canada's New Brunswick, mainly passing through the Addington Parish in Restigouche County. 2. The Popelogan river travels through a valley in the steep terrain that gets deeper as it moves south. The Gordon Brook and the "Lake Popelogan Branch," which supplies Popelogan Lake, are its principal tributaries. 3. Popelogan Depot provides access to the upper half of the river via a woodland road.
null
false
399
Machine translation (MT) has achieved huge advances in the past few years BIBREF1, BIBREF2, BIBREF3, BIBREF4. However, the need for a large amount of manual parallel data obstructs its performance under low-resource conditions. Building an effective model on low resource data or even in an unsupervised way is always an interesting and challenging research topic BIBREF5, BIBREF6, BIBREF7. Recently, unsupervised MT BIBREF8, BIBREF9, BIBREF0, BIBREF10, BIBREF11, which can immensely reduce the reliance on parallel corpora, has been gaining more and more interest. Training cross-lingual word embeddings BIBREF0, BIBREF12 is always the first step of the unsupervised MT models which produce a word-level shared embedding space for both the source and target, but the lexical coverage can be an intractable problem. To tackle this issue, BIBREF13 provided a subword-level solution to overcome the out-of-vocabulary (OOV) problem. In this work, the systems we implement for the German-Czech language pair are built based on the previously proposed unsupervised MT systems, with some adaptations made to accommodate the morphologically rich characteristics of German and Czech BIBREF14. Both word-level and subword-level neural machine translation (NMT) models are applied in this task and further tuned by pseudo-parallel data generated from a phrase-based statistical machine translation (PBSMT) model, which is trained following the steps proposed in BIBREF10 without using any parallel data. We propose to train BPE embeddings for German and Czech separately and align those trained embeddings into a shared space with MUSE BIBREF0 to reduce the combinatorial explosion of word forms for both languages. To ensure the fluency and consistency of translations, an additional Czech language model is trained to select the translation candidates generated through beam search by rescoring them. 
Besides the above, a series of post-processing steps are applied to improve the quality of the final translations. Our contribution is two-fold: We propose a method to combine word and subword (BPE) pre-trained input representations aligned using MUSE BIBREF0 as an NMT training initialization on a morphologically-rich language pair such as German and Czech. We study the effectiveness of language model rescoring to choose the best sentences and of an unknown word replacement (UWR) procedure to reduce the drawback of OOV words. This paper is organized as follows: in Section SECREF2, we describe our approach to unsupervised translation from German to Czech. Section SECREF3 reports the training details and the results for each step of our approach. More related work is provided in Section SECREF4. Finally, we conclude our work in Section SECREF5.
What method does the paper propose?
A method to combine word and subword (BPE) pre-trained input representations aligned using MUSE as an NMT training initialization on a morphologically-rich language pair such as German and Czech.
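Language-model rescoring of beam-search candidates can be sketched as follows. The scoring callbacks and the interpolation weight are illustrative assumptions, not the system's actual TM/LM scores:

```python
def rescore(candidates, tm_score, lm_score, alpha=0.5):
    """Pick the beam-search candidate maximizing an interpolation of the
    translation-model score and a target-side (Czech) LM score, so the
    selected output is both adequate and fluent."""
    return max(candidates,
               key=lambda c: alpha * tm_score(c) + (1 - alpha) * lm_score(c))

# Toy scores: the translation model is indifferent, while the LM
# prefers the longer, more fluent candidate.
cands = ["je zima", "je velka zima"]
best = rescore(cands, tm_score=lambda c: 1.0,
               lm_score=lambda c: len(c.split()))
```

In the paper's setting, the LM term comes from a separately trained Czech language model, which is what enforces fluency and consistency of the final translations.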
null
false
null
New York, often called New York City or NYC, is the most populous city in the United States. With a 2020 population of 8,804,190 distributed over 300.46 square miles (778.2 km2), New York City is the most densely populated major city in the United States and more than twice as populous as Los Angeles, the nation's second-largest city.
Which is the most populous city in the United States
New York
null
false
null
Tell me whether these are flowers or trees: Spruce, Calla Lily, Carnation, pine, beech, Orchid, oak, Tulip, Gerbera Daisy.
Spruce is a tree. Calla Lily is a flower. Carnation is a flower. pine is a tree. beech is a tree. Orchid is a flower. oak is a tree. Tulip is a flower. Gerbera Daisy is a flower.
null
false
null
Kilwa is an administrative ward in Kilindi District of Tanga Region in Tanzania. The ward covers an area of 153 km2 (59 sq mi), and has an average elevation of 858 m (2,815 ft). According to the 2012 census, the ward has a total population of 6,835.
Where is the Kilwa Tanga
Kilwa is an administrative ward in Kilindi District of Tanga Region in Tanzania.
null
false
null
A botanical garden or botanic garden[nb 1] is a garden with a documented collection of living plants for the purpose of scientific research, conservation, display, and education. Typically plants are labelled with their botanical names. It may contain specialist plant collections such as cacti and other succulent plants, herb gardens, plants from particular parts of the world, and so on; there may be greenhouses, shadehouses, again with special collections such as tropical plants, alpine plants, or other exotic plants. Most are at least partly open to the public, and may offer guided tours, educational displays, art exhibitions, book rooms, open-air theatrical and musical performances, and other entertainment.
Tell me about the botanical garden?
A botanical garden has a documented collection of living plants for the purpose of scientific research, conservation, display, and education. Plants are labelled with their botanical names. It may contain specialist plant collections such as cacti and other succulent plants, herb gardens, plants from particular parts of the world, tropical plants, alpine plants, or other exotic plants. The origin of modern botanical gardens is generally traced to the appointment of professors of botany to the medical faculties of universities in 16th century Renaissance Italy, which also entailed the curation of a medicinal garden. Botanical gardens are often run by universities or other scientific research organizations. The staff will normally include botanists as well as gardeners. Worldwide, there are now about 1800 botanical gardens and arboreta in about 150 countries (mostly in temperate regions), of which about 550 are in Europe (150 of which are in Russia), 200 in North America, and an increasing number in East Asia. These gardens attract about 300 million visitors a year.
null
false
null
How do you know if a chicken breast is fully cooked?
When the internal temperature reaches a minimum of 150 F
null
false
null
Lure of the Swamp is a 1957 American drama film directed by Hubert Cornfield and written by William George, based on the 1953 novel of the same name by Gil Brewer. The film stars Marshall Thompson, Willard Parker, Joan Vohs, Jack Elam, Leo Gordon and Joan Lora. The film was released in May 1957 by 20th Century-Fox.
When was Lure of the Swamp released?
Lure of the Swamp was released by 20th Century-Fox in May 1957 and is an American drama film directed by Hubert Cornfield and written by William George. Lure of the Swamp is based on the 1953 novel of the same name by Gil Brewer.
null
false
null
Yield curves are usually upward sloping asymptotically: the longer the maturity, the higher the yield, with diminishing marginal increases (that is, as one moves to the right, the curve flattens out). According to columnist Buttonwood of The Economist newspaper, the slope of the yield curve can be measured by the difference, or "spread", between the yields on two-year and ten-year U.S. Treasury Notes. A wider spread indicates a steeper slope. There are two common explanations for upward sloping yield curves. First, it may be that the market is anticipating a rise in the risk-free rate. If investors hold off investing now, they may receive a better rate in the future. Therefore, under the arbitrage pricing theory, investors who are willing to lock their money in now need to be compensated for the anticipated rise in rates—thus the higher interest rate on long-term investments. Another explanation is that longer maturities entail greater risks for the investor (i.e. the lender). A risk premium is needed by the market, since at longer durations there is more uncertainty and a greater chance of events that impact the investment. This explanation depends on the notion that the economy faces more uncertainties in the distant future than in the near term. This effect is referred to as the liquidity spread. If the market expects more volatility in the future, even if interest rates are anticipated to decline, the increase in the risk premium can influence the spread and cause an increasing yield. The opposite situation can also occur, in which the yield curve is "inverted", with short-term interest rates higher than long-term. For instance, in November 2004, the yield curve for UK Government bonds was partially inverted. The yield for the 10-year bond stood at 4.68%, but was only 4.45% for the 30-year bond. The market's anticipation of falling interest rates causes such incidents. 
Negative liquidity premiums can also exist if long-term investors dominate the market, but the prevailing view is that a positive liquidity premium dominates, so only the anticipation of falling interest rates will cause an inverted yield curve. Strongly inverted yield curves have historically preceded economic recessions. The shape of the yield curve is also influenced by supply and demand: for instance, if there is a large demand for long bonds, say from pension funds matching their fixed liabilities to pensioners, and not enough bonds in existence to meet this demand, then the yields on long bonds can be expected to be low, irrespective of market participants' views about future events.
Extract the factors that affect the shape of a yield curve, and list them using bullets.
- The market's expectation of the future interest rate. If investors expect a higher interest rate in the future, then the curve would be upward sloping. Otherwise if the market is anticipating a falling interest rate, then the curve would be downward sloping. - Risk premium. Longer maturities entail greater risks for the investor, thus investors would expect a higher return for longer term investments. - Supply and demand. If there is a higher demand or lower supply for bonds with a certain maturity, then the yield for those bonds would go down.
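The 2s10s spread that Buttonwood uses to measure the curve's slope can be computed directly. The yields below are illustrative numbers, not market data:

```python
def yield_spread(y2, y10):
    """Spread between ten-year and two-year Treasury yields, in
    percentage points; a wider positive spread means a steeper upward
    slope, and a negative value means the curve is inverted."""
    return y10 - y2

# Toy 2y/10y pair where short-term rates exceed long-term rates,
# i.e. an inverted curve of the kind that has preceded recessions.
spread = yield_spread(4.5, 4.0)
```

With these numbers the spread is negative, flagging an inverted curve; an upward-sloping curve would give a positive spread instead.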
null
false
null
Tell me whether these are books by black authors: I Know Why the Caged Bird Sings, Homegoing, Between the World and Me, Becoming, Beloved, The Color Purple.
Yes, all of these books are written by black authors: I Know Why the Caged Bird Sings, Homegoing, Between the World and Me, Becoming, Beloved, and The Color Purple.
null
false
null
Identify which animal species is alive or extinct: Neanderthal, Bigeye Tuna
The Bigeye Tuna is alive; the Neanderthal is extinct.
null
false
null
Provide me with popular girl names.
Popular girl names include Olivia, Emma, Charlotte, Amelia, Ava, and Sophia.
null
false
null
Apply the traditional western culture label (domesticated pet, farm animal, wild animal) to each of the following animals: dog, sheep, horse, wolf, cat, zebra, cow
dog: Domesticated pet
sheep: Farm animal
horse: Farm animal
wolf: Wild animal
cat: Domesticated pet
zebra: Wild animal
cow: Farm animal
null
false
null
What is the difference between love and affection?
Love and affection are feelings close to each other, but there is a substantial difference. Affection is a feeling of fondness and warm attachment that, if genuine, can last a lifetime. Love, on the other hand, is a more complex and intense feeling that can change with the passage of time.
null
false
null
Who won the masters in golf in 2019?
Tiger Woods won his fifth Green Jacket, and fifteenth major, by one stroke ahead of three runners-up. It left him one shy of Jack Nicklaus's record six Masters wins, and three short of his record eighteen major wins. At age 43, he became the eleventh oldest player to win a major, and the second oldest Masters winner, again only bettered by Nicklaus who won at age 46. It was 11 years after his last major win, the 2008 U.S. Open, and 14 years since his last Masters win beating the previous record of 13 years held by Gary Player. Due to the high profile of Woods, and his storied fall from the top of the game due to personal issues and injuries, the victory generated a large amount of publicity around the world, and is regarded as one of the great sporting comebacks.
null
false
null
What are snowshoes?
Snowshoes provide a way for people to walk on top of snow without sinking in by spreading their weight over a wider surface than typical shoes. The typical snowshoe is made of metal or wood, has a strap for attaching regular shoes or boots, and often also has some sharp grip at the bottom to help prevent slipping on ice.
null
false
null
Arc'teryx Arc'teryx is a Canadian high-end design company specializing in outdoor apparel and equipment headquartered in North Vancouver, British Columbia. It focuses on technical apparel for mountaineering and Alpine sports, including related accessories. The company's name and logo reference the Archaeopteryx, a transitional fossil between dinosaurs and modern birds. Arc'teryx is known for its waterproof Gore-Tex shell jackets, knitwear, and down parkas. Founded in 1989 as Rock Solid, the company re-branded in 1991 as Arc'teryx to produce outerwear and climbing gear for the Coast Mountains in Canada. The company was sold to Salomon Group in 2001 and Amer Sports in 2005. Arc'teryx maintains two divisions: Veilance, their luxury streetwear retailer, and LEAF, their retailer of technical gear for law enforcement and military forces. The company is a major influence in the "gorpcore" and "normcore" fashion movements, the wearing of minimalist, outdoor apparel in urban settings. History Originally named "Rock Solid" by founder Dave Lane, the company's first line of products was climbing gear. Dave Lane sold his 50% interest to Blair Murdoch and Tim Duholke, who became silent partners in 1989. Then-principal Jeremy Guard changed the company name to Arc'teryx in 1991 to reflect the company's vision of creating disruptive "evolutionary" innovation within the outdoor products industry. Guard was president and principal of the company from 1991 to 2001. Using a heat laminate (thermolamination) technology, the partners designed and marketed the Vapor harness, which would become the company's most popular item. In 1993, after a series of relocations and staff additions, Arc'teryx released the Bora backpack using the same Vapor technology. In 1996, the company introduced technical apparel featuring Gore-Tex after obtaining a license from W. L. Gore & Associates.
Arc'teryx re-located its headquarters to Burnaby, British Columbia in 1999 and then to North Vancouver in 2005. In 2001, Arc'teryx was purchased by Salomon Group, a French subsidiary of the German retailer Adidas. In 2005, Arc'teryx was sold to Finnish retailer Amer Sports. In 2019, Chinese retailer Anta Sports bought a controlling stake (56%) in Amer. The Arc'teryx head office is located in North Vancouver, and harnesses, backpacks, and other apparel are made in its own factory in New Westminster, BC. As their apparel line expanded, Arc'teryx began manufacturing in international markets, specifically in China, the Philippines, Vietnam, Bangladesh, El Salvador, Laos, and Greece. Jon Hoerauf joined the company as president in 2012, assuming the additional role of general manager in 2016. During the early 2020s, Arc'teryx co-produced items with high-fashion brands and designers, which expanded their consumer market beyond outdoor enthusiasts. Divisions Veilance Arc'teryx launched their luxury formal urban-wear brand in 2009, branded as Arc'teryx Veilance. LEAF Arc'teryx's Law Enforcement and Armed Forces (LEAF) line is aimed at the military and police market. In the consumer and Internet market, it is often referred to as "military birds". Some of the collections are designed for civilian counterparts, while others, such as Arc'teryx's Tango and Charlie backpacks, feature camouflage designs that are entirely geared toward the military. In contrast, military bird products use dark and military colors. Only a handful of products, such as the Alpha jacket, are currently made in Canada, while the rest, such as the Assault Shirt, are made overseas, in countries such as El Salvador and Vietnam. Subculture Arc'teryx has become widely popular in the street fashion scene and Internet culture. The theft ("racking") of Arc'teryx and other Gore-Tex items is considered to be part of the street subculture.
It is seen as a high-end status symbol among youth, "just shy of Stone Island and Moncler." The Atlantic noted the brand as selling "premium-tier outdoorsiness", while the Financial Times noted one of their largest demographics as "urbanites" in 2022. Labeled a cult brand by Fast Company in 2021, Arc'teryx is worn by "hikers and hype-beasts" according to The New York Times. The company is a major influence in the "gorpcore" and "normcore" fashion movements – the wearing of minimalist, outdoor apparel in urban settings – along with Mammut, REI, Marmot and Patagonia. Throughout 2022, a TikTok trend emerged in which individuals would shower fully clothed in an Arc'teryx jacket as British rapper YT's song "Arc'teryx" played in the background.
Summarize the interesting points about Arc'teryx
- Arc'teryx is a Canadian company that produces high-tech outdoor gear
- Its best-known products include waterproof Gore-Tex shell jackets, knitwear, and down parkas
- Having started as a company selling climbing gear, it went through several re-brandings and, in the early 2020s, co-produced items with high-fashion brands, which expanded its consumer market beyond outdoor enthusiasts
- The company owns two divisions: one for commercial consumers (Veilance) and one for the military and police market (LEAF)
- The company, especially its Gore-Tex product line, has become popular urban street fashion and is a major influence in the "gorpcore" and "normcore" fashion movements, along with similar brands such as Mammut, REI, Marmot and Patagonia
null
false
null
How can I become a better runner?
There are a few things you can do to become a better runner:
1. Train consistently - Endurance running creates physiological adaptations which take a long time for you to notice. Try to run at least 3 or 4 times a week for at least 30 minutes to begin building those adaptations.
2. Get enough sleep - Studies show that muscles recover better if you get at least 8 hours of sleep each night.
3. Fuel your body - You can't gain muscle if you don't provide your body with enough calories to keep yourself fueled. Ensure that you are eating a good mix of protein, carbohydrates, and vegetables to set yourself up for success.
4. Don't overtrain - One common mistake new runners make is to run more than they are trained to manage and end up injuring themselves. You can't build fitness if you are off of training due to injury!
null
false
null
Where did Anne Frank live after her family fled Germany?
Anne Frank and her family moved to Amsterdam, Netherlands, where she went to school. Later on, they went into hiding for more than two years before they were discovered and deported to concentration camps.
null
false
null
Identify which instrument is string or percussion: Chikara, Bamboula
Bamboula is percussion, Chikara is string.
null
false
null
The Yorkshire Terrier is a British breed of toy dog of terrier type. It is among the smallest of the terriers and indeed of all dog breeds, with a weight of no more than 3.2 kg. It originated in the nineteenth century in the English county of Yorkshire, for which it is named. Their coats are typically tan on the head with a dark steel-grey body, but they can come in a variety of colors. It is a playful and energetic dog, usually kept as a companion dog. It has contributed to the development of other breeds including the Silky Terrier, and also to cross-breeds such as the Yorkipoo.
Given this reference on Yorkshire Terriers, tell me where they originate from, and what are their typical size and color.
The Yorkshire Terrier originated from the English county of Yorkshire. It is a small dog, weighing no more than 3.2 kg. They come in a variety of colors, but typically their coats are tan on the head and dark steel-grey on the body.
null
false
null
One Day as a Lion was an alternative rock supergroup that was started in 2008 by Zack de la Rocha, the vocalist of Rage Against the Machine, and Jon Theodore, former drummer of The Mars Volta and current drummer of Queens of the Stone Age. The duo blended elements of rock and hip hop. One Day as a Lion released their eponymous debut EP in July 2008 on the Anti- label. The band's name comes from a quote by Benito Mussolini.
Who formed the rock group One Day as a Lion?
Zack de la Rocha and Jon Theodore
1704.05572
false
null
We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. For each test set, we use the corresponding training questions $Q_\mathit {tr}$ to retrieve domain-relevant sentences from S. Specifically, for each multiple-choice question $(q,A) \in Q_\mathit {tr}$ and each choice $a \in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \in A$ and over all questions in $Q_\mathit {tr}$ to create the tuple KB (T).
What OpenIE method was used to generate the extractions?
The answers are shown as follows:
* for each multiple-choice question $(q,A) \in Q_\mathit {tr}$ and each choice $a \in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S
* take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \in A$ and over all questions in $Q_\mathit {tr}$
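The query-building step the passage describes (all non-stopword tokens from the question and the answer choice) can be sketched as below. The stopword list is an illustrative subset, and the actual ElasticSearch retrieval and Open IE v4 extraction are external steps not reproduced here:

```python
# Illustrative stopword subset; the paper's actual stopword list is not given.
STOPWORDS = {"the", "a", "an", "is", "of", "to", "in", "and", "what", "which"}

def build_query(question, choice):
    """Build the retrieval query as the passage describes: all non-stopword
    tokens of the question q and the answer choice a. The real pipeline
    sends these tokens to ElasticSearch, takes the top 200 hits, and runs
    Open IE v4 over them."""
    tokens = (question + " " + choice).lower().split()
    return [t for t in tokens if t not in STOPWORDS]

print(build_query("What is the boiling point of water", "100 degrees"))
# ['boiling', 'point', 'water', '100', 'degrees']
```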
null
false
null
Why is the Pacific ocean typically cold off of the West Coast of the United States?
The primary ocean current along the West Coast of the United States is often referred to as the "California Current" and follows a clockwise pattern, bringing cold water from the Gulf of Alaska southeasterly, where it eventually meets the beaches of Washington, Oregon and California. The further south you go, the warmer it typically gets, so while surfers may be out in a thick wetsuit in June in San Francisco, you may well see surfers and swimmers from Santa Barbara southward in shorts or a spring suit.
null
false
146
Suppose we have a budget INLINEFORM0 which we can allocate to collect extra answers for a subset of visual questions. Our system automatically decides to which visual questions to allocate the “extra" answers in order to maximize captured answer diversity for all visual questions. The aim of our system is to accrue additional costs and delays from collecting extra answers only when extra responses will provide more information. Towards this aim, our system involves three steps to collect answers for all INLINEFORM0 visual questions (Figure FIGREF11 a). First, the system applies our top-performing random forest classifier to every visual question in the batch. Then, the system ranks the INLINEFORM1 visual questions based on predicted scores from the classifier, from visual questions most confidently predicted to lead to answer “agreement" from a crowd to those most confidently predicted to lead to answer “disagreement" from a crowd. Finally, the system solicits more ( INLINEFORM2 ) human answers for the INLINEFORM3 visual questions predicted to reflect the greatest likelihood for crowd disagreement and fewer ( INLINEFORM4 ) human answers for the remaining visual questions. More details below.
What is the aim of their Answer Collection System?
To accrue additional costs and delays from collecting extra answers only when extra responses will provide more information.
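The three-step allocation the passage describes (score every visual question with the classifier, rank by predicted disagreement, solicit extra answers for the top-budget questions) can be sketched as follows. The score direction and the `base`/`extra` counts are placeholders, since the actual values appear only as elided INLINEFORM tokens in the source:

```python
def allocate_answers(scores, budget, base=1, extra=4):
    """Allocate answer collection across visual questions, following the
    three steps in the passage: score each visual question, rank by
    predicted disagreement, and solicit more answers for the `budget`
    questions most likely to yield crowd disagreement.

    Assumptions: higher score = more predicted disagreement; `base` and
    `extra` stand in for the answer counts elided in the source text.
    """
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    allocation = [base] * len(scores)
    for i in order[:budget]:
        allocation[i] = extra
    return allocation

# Four visual questions, budget for two "extra" allocations.
print(allocate_answers([0.9, 0.1, 0.5, 0.8], budget=2))  # [4, 1, 1, 4]
```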
null
false
null
Which episode of "The X-Files" was broadcast in black-and-white?
The Post-Modern Prometheus
null
false
null
Dataminr was founded in 2009 by Yale University graduates Ted Bailey, Sam Hendel and Jeff Kinsey. Dataminr came to wider notice when it issued an alert that Osama bin Laden had been killed 23 minutes faster than major news organizations. In 2014, Dataminr entered into a partnership with CNN and Twitter, resulting in Dataminr for News, a tool to "alert journalists to information that’s emerging on Twitter in real time." On December 30, 2019, Dataminr claimed to have detected the first signals of the COVID-19 outbreak within public social media posts. The company went on to detect clusters indicating future spikes in 14 different US states. Seven days later, all 14 states were hit hard by the coronavirus. Dataminr partnered with the UN in May 2019 to equip thousands of UN personnel with Dataminr’s First Alert product for the public sector. Dataminr's social media intelligence contract for the FBI was taken over by Zerofox at the end of 2020. On the morning of January 5, 2021, Dataminr warned Capitol security officials of troubling online public chatter that would soon become the January 6 riot. In July 2021, Dataminr conducted its first M&A transaction when it acquired WatchKeeper, a UK-based geovisualization platform. In the acquisition, Dataminr combined WatchKeeper’s geovisualized data layers with its Pulse platform to provide context around events. A few months later, in October 2021, Dataminr acquired Krizo, a real-time crisis response platform based in Copenhagen, Denmark. The company ranked #5 on the Forbes AI 50 List in 2019 and was the winner of a 2019 AI Breakthrough Award for Best Overall AI Solution. In 2020, Dataminr was named to the Forbes Cloud 100 and the Deloitte Technology Fast 500, and won ‘Most Innovative Use of AI’ at the 2020 AI & Machine Learning Awards.
In 2020, Dataminr’s AI team had published several papers including, “Unsupervised Detection of Sub-Events in Large Scale Disasters,” for the Association for the Advancement of Artificial Intelligence quarterly journal; “Multimodal Categorization of Crisis Events in Social Media,” for Computer Vision and Pattern Recognition 2020; “Clustering of Social Media Messages for Humanitarian Aid Response during Crisis,” for AI for Social Good, and “The ApposCorpus: A new multilingual, multi-domain dataset for factual appositive generation,” for COLING 2020. In late 2021, the company announced the appointment of Cristina Bita to its board of directors and as chairwoman of the audit committee.
How long has Dataminr been operating and who are their customers?
Dataminr was founded in 2009, so it has been operating for about 14 years. Dataminr sells to a variety of organisations in the public and private sectors, such as the UN and journalists from news media organisations.
1709.10217
false
null
From Figure FIGREF6 , we can see that it is quite different between the open domain chit-chat system and the task-oriented dialogue system. For the open domain chit-chat system, as it has no exact goal in a conversation, given an input message, the responses can be various. For example, for the input message “How is it going today?”, the responses can be “I'm fine!”, “Not bad.”, “I feel so depressed!”, “What a bad day!”, etc. There may be an infinite number of responses for an open domain message. Hence, it is difficult to construct a gold standard (usually a reference set) to evaluate a response which is generated by an open domain chit-chat system. For the task-oriented system, although there are some objective evaluation metrics, such as the number of turns in a dialogue, the ratio of task completion, etc., there is no gold standard for automatically evaluating two (or more) dialogue systems when considering the satisfaction of the human and the fluency of the generated dialogue.
What problems are found with the evaluation scheme?
The answers are shown as follows: * no gold standard for automatically evaluating two (or more) dialogue systems when considering the satisfaction of the human and the fluency of the generated dialogue
null
false
null
What is the length of an Olympic size swimming pool?
An Olympic-size swimming pool is 50 meters in length.
null
false
501
Figure 6: Left: Mixture of one dimensional Gaussians we took samples from. Color denotes class label. Middle: Epistemic uncertainty our model assigns to data points. Note that the uncertainty is quite high in the region of 3-5. For the sake of visualization, we clipped the maximum value to be 1. Right: Our approximation of aleatoric uncertainty is built along with the true conditional probability.
How accurate is the proposed method in terms of estimating the TRUE epistemic uncertainty? In cases where the generative distribution is known, this should be easy to compute.
The proposed approach is suspicious on theoretical grounds, in that it relies essentially on the ratio of some estimation of training uncertainty and density in representation-space in order to estimate epistemic uncertainty.
1910.05154
false
null
We use the bilingual neural-based Unsupervised Word Segmentation (UWS) approach from BIBREF6 to discover words in Mboshi. In this approach, Neural Machine Translation (NMT) models are trained between language pairs, using as source language the translation (word-level) and as target, the language to document (unsegmented phonemic sequence). Due to the attention mechanism present in these networks BIBREF7, posterior to training, it is possible to retrieve soft-alignment probability matrices between source and target sequences. These matrices give us sentence-level source-to-target alignment information, and by using it for clustering neighbor phonemes aligned to the same translation word, we are able to create segmentation in the target side. The product of this approach is a set of (discovered-units, translation words) pairs. In this work we apply two simple methods for including multilingual information into the bilingual models from BIBREF6. The first one, Multilingual Voting, consists of merging the information learned by models trained with different language pairs by performing a voting over the final discovered boundaries. The voting is performed by applying an agreement threshold $T$ over the output boundaries. This threshold balances between accepting all boundaries from all the bilingual models (zero agreement) and accepting only input boundaries discovered by all these models (total agreement). The second method is ANE Selection. For every language pair and aligned sentence in the dataset, a soft-alignment probability matrix is generated. We use Average Normalized Entropy (ANE) BIBREF8 computed over these matrices for selecting the most confident one for segmenting each phoneme sequence. This exploits the idea that models trained on different language pairs will have language-related behavior, thus differing on the resulting alignment and segmentation over the same phoneme sequence. 
Lastly, following the methodology from BIBREF8, we extract the most confident alignments (in terms of ANE) discovered by the bilingual models. Table presents the top 10 most confident (discovered type, translation) pairs. Looking at the pairs the bilingual models are most confident about, we observe there are some types discovered by all the bilingual models (e.g. Mboshi word itua, and the concatenation oboá+ngá). However, the models still differ for most of their alignments in the table. This hints that while a portion of the lexicon might be captured independently of the language used, other structures might be more dependent of the chosen language. On this note, BIBREF11 suggests the notion of word cannot always be meaningfully defined cross-linguistically. FLOAT SELECTED: Table 3: Top 10 confident (discovered type, translation) pairs for the five bilingual models. The “+” mark means the discovered type is a concatenation of two existing true types.
What are the different bilingual models employed?
The answers are shown as follows:
* Neural Machine Translation (NMT) models are trained between language pairs, using as source language the translation (word-level) and as target, the language to document (unsegmented phonemic sequence)
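The Multilingual Voting step from the passage (merge boundaries discovered by models trained on different language pairs, keeping those that meet an agreement threshold T) can be sketched as below. Interpreting T as a fraction of models is an assumption:

```python
from collections import Counter

def multilingual_voting(boundary_sets, threshold):
    """Merge word boundaries proposed by several bilingual UWS models by
    voting, as the passage describes: threshold 0 accepts all boundaries
    from all models (union), threshold 1 accepts only boundaries
    discovered by every model (total agreement).

    boundary_sets: one set of boundary positions per bilingual model.
    """
    votes = Counter()
    for boundaries in boundary_sets:
        votes.update(boundaries)
    n_models = len(boundary_sets)
    return {b for b, v in votes.items() if v / n_models >= threshold}

# Three hypothetical bilingual models proposing boundary positions.
models = [{2, 5}, {2, 7}, {2, 5, 9}]
print(sorted(multilingual_voting(models, threshold=1.0)))  # [2]
print(sorted(multilingual_voting(models, threshold=0.6)))  # [2, 5]
```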
null
false
null
What makes for a great pizza?
There are many different variations of pizza. Many would say that the crust and sauce are more important than the toppings when considering what makes a pizza great. Toppings (pepperoni, mushroom, sausage, pineapple, etc.) are generally pretty standard, with little variation by source or region. Although premium ingredients may be had by many pizza chefs, these are not considered here because premium toppings may not be available to every pizza chef. The crust and sauce, however, are nearly limitless in variation. From thin crusts to thick crusts, to bright or spicy sauces, a pizza chef may make their mark with excellent pizza on the foundation of a well-made crust with a well-flavored sauce.
null
false
null
Write a paragraph about the word "woman".
The word "woman" is a gender construct to be contrasted with the word "man". This word has come under recent scrutiny due to the increasing significance of the LGBTQIA+ community and their aims to be seen, expressed, or acknowledged within their communities. Traditionally, the word referred to an "adult human female". This definition has come under scrutiny for being unreasonably restrictive. Another common definition is "a woman is a person who feels like a woman". This definition has come under scrutiny for being circular.
null
false
207
Word Sense Disambiguation (WSD) is an important problem in Natural Language Processing (NLP), both in its own right and as a stepping stone to other advanced tasks in the NLP pipeline, applications such as machine translation BIBREF0 and question answering BIBREF1 . WSD specifically deals with identifying the correct sense of a word, among a set of given candidate senses for that word, when it is presented in a brief narrative (surrounding text) which is generally referred to as context. Consider the ambiguous word `cold'. In the sentence “He started to give me a cold shoulder after that experiment”, the possible senses for cold can be cold temperature (S1), a cold sensation (S2), common cold (S3), or a negative emotional reaction (S4). Therefore, the ambiguous word cold is specified along with the sense set {S1, S2, S3, S4} and our goal is to identify the correct sense S4 (as the closest meaning) for this specific occurrence of cold after considering - the semantic and the syntactic information of - its context. In this effort, we develop our supervised WSD model that leverages a Bidirectional Long Short-Term Memory (BLSTM) network. This network works with neural sense vectors (i.e. sense embeddings), which are learned during model training, and employs neural word vectors (i.e. word embeddings), which are learned through an unsupervised deep learning approach called GloVe (Global Vectors for word representation) BIBREF2 for the context words. By evaluating our one-model-fits-all WSD network over the public gold standard dataset of SensEval-3 BIBREF3 , we demonstrate that the accuracy of our model in terms of F-measure is comparable with the state-of-the-art WSD algorithms'. We outline the organization of the rest of the paper as follows. In Section 2, we briefly explore earlier efforts in WSD and discuss recent approaches that incorporate deep neural networks and word embeddings. 
Our main model that employs BLSTM with the sense and word embeddings is detailed in Section 3. We then present our experiments and results in Section 4, supported by a discussion on how to avoid some drawbacks of the current model in order to achieve higher accuracy and require less training data, which is desirable. Finally, in Section 5, we conclude with some future research directions for the construction of sense embeddings as well as applications of such a model in other domains such as biomedicine.
In what measure that the accuracy of their model is comparable with the state-of-the-art WSD algorithms'?
F-measure.
null
false
null
Find an item from each category bellow that somehow relates to a banana. place animal vehicle feeling food tech song movie
place: Bahamas
animal: monkey
vehicle: banana boat
feeling: warm
food: banana pudding
tech: Mailchimp
song: Copacabana
movie: Beverly Hills Cop
null
false
null
Rudy Stevens became famous under which name
Barbara Stanwyck
null
false
null
The only known monotremes in the animal kingdom are the echidna and which other creature?
The platypus.
null
false
null
Identify which teams in the following list are hockey teams: Buffalo Sabres, Detroit Red Wings, Florida Panthers, Montreal Canadiens, New England Patriots, Pittsburgh Pirates, new York Yankees, Texas Rangers, Manchester United, Ottawa Senators, Tampa Bay Lightning, Toronto Maple Leafs, Chelsea, Liverpool FC, Los Angeles Lakers, Toronto Raptors, Arizona Coyotes, Chicago Blackhawks, Colorado Avalanche, Dallas Stars, Minnesota Wild, Pittsburgh Steelers
The NHL teams in this list are as follows: Buffalo Sabres, Detroit Red Wings, Florida Panthers, Montreal Canadiens, Ottawa Senators, Tampa Bay Lightning, Toronto Maple Leafs, Arizona Coyotes, Chicago Blackhawks, Colorado Avalanche, Dallas Stars, and the Minnesota Wild.
null
false
5
In contrast to the language embeddings, the phoneme embeddings appear to show many regularities (see Table TABREF33 ). This is a sign that our multilingual model learns similar embeddings for phonemes that are written with the same grapheme in different languages. These phonemes tend to be phonetically similar to each other. Perhaps the structure of the phoneme embedding space is what leads to our models' very good performance on WER 100. Even when the model's first predicted pronunciation is not correct, it tends to assign more probability mass to guesses that are more similar to the correct one. Applying some sort of filtering or reranking of the system output might therefore lead to better performance.
Is there any improvement that can be done to optimize phoneme embedding results?
Applying some sort of filtering or reranking of the system output might therefore lead to better performance.
null
false
null
What was Seward's Folly?
Seward's Folly was the name given by opponents to the 1867 purchase of Alaska from Russia by the United States. Secretary of State William Seward negotiated the treaty in secrecy, and critics mocked both him and President Andrew Johnson for the purchase of a "polar bear garden." However, it turned out to be a profitable investment when gold was discovered in Alaska in 1896.
null
false
null
Why invest in the stock market?
Long term investing in the stock market is a good idea. Short term growth is risky and fluctuates a lot, but long term growth can show returns of as much as 10% - 15% per year on average. Tracking the S&P 500 is a great way to get started because it's a well-diversified option that tracks 500 of the largest companies listed on stock exchanges. It has a historical return of nearly 10%, and the S&P 500 was introduced in 1957.
null
false
null
Classify each of these car manufacturers as either French, German, American, Japanese, or other: Mercedes-Benz, Toyota, DS, Ford, Tesla, Fiat, KIA
French: DS
German: Mercedes-Benz
American: Ford, Tesla
Japanese: Toyota
Other: Fiat, KIA
null
false
null
Thomas Edward Patrick Brady Jr. (born August 3, 1977) is a former American football quarterback who played in the National Football League (NFL) for 23 seasons. He spent his first 20 seasons with the New England Patriots and was a central contributor to the franchise's dynasty from 2001 to 2019. In his final three seasons, he was a member of the Tampa Bay Buccaneers. Brady is widely regarded as the greatest quarterback of all time.
Given a reference text about Tom Brady, tell me when they were born and what they are famous for.
Tom Brady was born on August 3rd, 1977. He is famous for being widely regarded as the greatest quarterback of all time.
null
false
null
Cunningham was drafted with the first overall pick of the 2021 NBA draft by the Detroit Pistons. On August 8, 2021, he made his debut in the 2021 NBA Summer League in a 76–72 win against the Oklahoma City Thunder in which he posted 12 points, six rebounds, and two assists in 26 minutes. Cunningham lost part of the training camp, all of the preseason, and five of Detroit's first six games due to an ankle injury. On October 30, he made his NBA debut, putting up two points, two assists, and seven rebounds in a 110–103 win over the Orlando Magic. After five games, Cunningham posted his first 40-plus field-goal percentage, and he had an 18-point, 10-rebound double-double in his third NBA game. On November 15, Cunningham became the youngest player in NBA history to tally at least 25 points, eight rebounds, and eight assists with five three-pointers in a game at 20 years, 51 days. He topped LeBron James (20 years, 100 days) and Trae Young (20 years, 163 days). On November 21, he notched a triple-double with 13 points, 12 rebounds, and 10 assists in a 121—116 loss to the Los Angeles Lakers to become the eighth-youngest player in NBA history to record a triple-double and the youngest in Pistons history.
What achievements did Cade Cunningham receive during his first year in the NBA based on the provided paragraph?
On November 15, 2021, Cade Cunningham became the youngest player in NBA history to tally at least 25 points, eight rebounds, and eight assists with five three-pointers in a game at 20 years, 51 days. He topped LeBron James (20 years, 100 days) and Trae Young (20 years, 163 days). On November 21, 2021, he notched a triple-double with 13 points, 12 rebounds, and 10 assists in a 121—116 loss to the Los Angeles Lakers to become the eighth-youngest player in NBA history to record a triple-double and the youngest in Pistons history.
null
false
515
We adopt two evaluation metrics: a segment-based and an event-based metric. The segment-based metric is a widely adopted evaluation metric that couples semantic label and spatial location together: a semantically correctly detected sound source needs to be spatially close enough to its ground-truth location in order to be regarded as a true positive detection. The event-based metric is newly proposed to comprehensively evaluate under different confidence scores. Like object detection from images, it computes mean average precision (mAP) and mean average recall (mAR) scores. We call our framework SoundSynp and compare it with four recent methods: SELDNet, EIN, SoundDet and Utsc-Iflytek. SELDNet is the baseline model; it jointly trains a sound source's semantic label and spatial location with a convolutional recurrent neural network (CRNN). EIN is a very recent work that adopts multi-head self-attention to model temporal dependency and trackwise permutation-invariant training to train the model. SoundDet directly learns from the raw waveform with MaxCorr kernels, followed by an encoder-decoder neural network. Utsc-Iflytek is ranked first on the DCASE2020 challenge leaderboard; it combines MIC and FOA features and ensembles different models like ResNet and Xception (Chollet, 2017) to give the final prediction. The network architecture is shown in a table in the Appendix material. To train the neural network, we evenly divide each one-minute-long four-channel raw waveform into non-overlapping 4s short snippets. The raw waveform is first normalized to [-1, 1] before being fed to the neural network. We adopt the Adam optimizer with an initial learning rate of 0.0002 in the first 100 epochs and 0.00007 in the following 50 epochs. The batch size is 16. The loss combination weight between the classification head and the regression head is 1:2. During training, the data augmentation method SpecAugment is applied. For the DoA task, we regress the direction-of-arrival angle in Cartesian coordinates [x, y, z]. 
In the synperiodic filter bank groups, the filter length is 1025, each group's filter number is 256 and the step size is 600. In particular, we have observed that the initialized learnable synperiodic filter banks update their parameters intensively during the first few epochs and then gradually become stable. The final learned parameters are close to their initialization; one group's learned parameters (such as frequency response and filter window length) differ from those learned by other groups, although they are initialized identically at the start. We train each model five times independently and report the average score. The standard deviation is within 0.04 for recall, 0.2° for angle, and 0.003 for mAP and mAR. The result is given in Table 1, from which we see that SoundSynp achieves the best performance over all comparing methods by a large margin, under both segment-based and event-based metrics. SELDNet, EIN and Utsc-Iflytek all use pre-extracted hand-engineered sound features, such as Log-Mel, GCC-PHAT and Intensity Vector. SoundDet and SoundSynp are the only two methods that directly learn from raw waveforms. At the same time, SoundSynp obtains better performance on the FOA than the MIC format; the same phenomenon has been observed by all other methods. It thus shows that FOA fits sound source detection better than MIC. It is worth noting that Utsc-Iflytek ensembles different powerful image-based 2D models to detect sound sources. However, our proposed SoundSynp still outperforms Utsc-Iflytek by a large margin. We don't report mAP/mAR values for Utsc-Iflytek because it is a complex system and no detail about their system is available. Ablation Study. To disentangle the individual contribution of each part of our SoundSynp framework to the overall performance improvement, we conduct three ablation studies. First, the individual contribution of the synperiodic filter bank. 
We replace the hand-engineered sound feature heads of SELDNet and EIN, and the learnable MaxCorr filter bank of SoundDet, with our proposed synperiodic filter bank to test the corresponding performance. This removes the influence of each model's backbone neural network and thus gives a direct comparison of the synperiodic filter bank with other front-ends. Second, we replace SoundSynp's synperiodic filter bank group with the widely used MFCC and Log-Mel features adopted by existing methods such as EIN. This disentangles the synperiodic filter bank group from the Transformer-like backbone and thus helps figure out whether the performance gain is simply brought by the backbone network. Third, internally, we test five synperiodic variants: (1) a synperiodic filter bank with multi-scale perception only in the frequency domain (SoundSynp MSFreq), (2) only multi-scale perception in the time domain (SoundSynp MSTime), (3) a synperiodic filter bank with frequency responses linearly initialized over the Nyquist frequency range (SoundSynp Linear; compare with our mel-scale initialization), (4) just one synperiodic filter bank without multi-scale perception in either the time or the frequency domain (SoundSynp SingleScale), and (5) a synperiodic filter bank with rectangular band-pass frequency response initialization (SoundSynp Sinc), as SincNet does. This internal comparison helps us figure out the necessity of each part of the synperiodic filter bank design. The ablation study results are shown in Table 2. We observe the following. First, using the synperiodic filter bank as a replacement for an existing filter bank helps improve the corresponding performance. Second, replacing SoundSynp's synperiodic filter bank with classic hand-engineered features inevitably reduces performance under all evaluation metrics; this shows that learning from a pre-extracted, fixed and single-scale sound representation leads to inferior performance compared with our proposed multi-scale synperiodic filter bank. 
Third, the absence of multi-scale perception in either the frequency domain or the time domain inevitably reduces performance. We find that sound source semantic label detection suffers more from single-scale perception in the time domain than in the frequency domain (see the better performance on the ER_20° and F_20° scores), which shows that frequency-domain multi-scale perception is vital for semantic label estimation. Similarly, we observe that time-domain multi-scale perception is vital for sound source spatial location estimation (see the better performance on the LE and LR scores). A linearly initialized filter bank frequency response reduces performance, which shows that assigning more filters to the lower frequency range is important. This conclusion might be data-dependent, however, because the DCASE dataset contains many low-frequency sounds such as burning fire and footsteps. Moreover, reducing the synperiodic filter bank group to one group with just single-scale perception leads to comparably the worst performance; this shows that multi-scale perception in both the time and frequency domains is essential for DoA-based sound source detection. Lastly, SoundSynp Sinc leads to slightly inferior performance compared with our mel-scale initialization strategy, which shows that our proposed synperiodic filter bank is a general filter bank that can be adapted to other frequency-sensitive filter bank designs. One qualitative comparison is shown in the figure. We can clearly see that SELDNet generates mixed predictions at different time steps and DoA locations. SoundDet and EIN report non-existing sound sources (orange color). When multiple sound sources happen at the same time (polyphony), SoundDet and EIN easily fail to predict the right spatial location (discretized blue and red color). Our method predicts more spatially and temporally consistent sound sources by maximally keeping each sound source's continuity and consistency. 
We use the Pyroomacoustics simulator to simulate a shoebox-like room of size [7m, 5m, 3m]. 
Four microphones (24kHz sampling rate) are configured in a plane with 5cm mutual distance and put in the room center ([3.5m, 2.5m, 1.5m]). We choose three commonly heard sounds from the BBC sound effects site: cat meowing, baby talking and dog barking. All these seed sounds last less than 10s. During simulation, we put all seed sounds in the plane z = 1.5m (we constrain to a plane because we find that relaxing z easily leads the EIN model to get stuck in a local minimum such as [-1, -1, -1]), with x randomly in [2m, 5m] and y randomly in [1m, 4m]; up to two sounds can happen at the same time. In sum, we have simulated 500 one-minute recordings (400 for training, 100 for testing), with an average of four sounds per recording. For physical location prediction, we regress the angle and range. The result is in the table. We do not report the SELDNet result because we find it does not converge during training. We can see from the table that SoundSynp produces better performance than the EIN and SoundDet networks. Replacing their front-ends with the synperiodic filter bank also improves performance. It thus shows that our proposed SoundSynp is capable of learning representations for physical-location-based sound source detection. Table 2: Ablation study report. Internally, we replace the SELDNet, EIN and SoundDet heads with the synperiodic filter bank (denoted as Synp). Externally, we test different modified SoundSynp versions. The computation time of all comparing methods is given in Table 5, from which we can see that our proposed SoundSynp framework requires 2x-3x more computation time than SELDNet, EIN and SoundDet due to its large Transformer-like backbone. However, SoundSynp's computation time is still within a controllable range.
Whether the improvement is mainly due to Synperiodic filter bank alone or Synperiodic filter bank + new structure?
In our first version submission, we conducted two ablation studies (shown in Table 2):
1. We replace the SELDNet, EIN and SoundDet front-end heads with our proposed synperiodic filter bank. The experimental results show that introducing the synperiodic filter bank to existing approaches helps improve their corresponding performance. It thus shows that our proposed synperiodic filter bank can be used as a plug-and-play front-end by existing approaches to improve their performance on the sound source detection task, without introducing explicit extra parameters or computational burden.
2. Moreover, we test five variants of the synperiodic filter bank. For example, we find that reducing the multi-scale perception strategy of the synperiodic filter bank to single-scale perception in either the time domain or the frequency domain reduces performance. This internal ablation study over synperiodic filter bank variants shows the necessity of each part of the design.
To disentangle the synperiodic filter bank front-end and the Transformer-like backbone, in our revised paper we have added another ablation study (also given in Table 2), in which we replace SoundSynp's synperiodic filter bank with traditional hand-engineered features including MFCC and Log-Mel. The corresponding results show that using traditional sound features leads to inferior performance. In sum, we can conclude from the three ablation studies that: * The performance improvement is due to the joint contribution of the synperiodic filter bank and the Transformer-like backbone network structure. * Removing either the Transformer-like backbone network (ablation study 1) or the synperiodic filter bank (ablation study 2) inevitably reduces the performance. * The Transformer-like backbone network and the synperiodic filter bank can be disentangled to solve other sound-related tasks separately.
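The preprocessing described in the experiments (splitting each one-minute four-channel recording into non-overlapping 4s snippets and normalizing to [-1, 1]) can be sketched as below. Peak normalization is an assumption here, since the excerpt does not specify how the [-1, 1] normalization is performed:

```python
import numpy as np

def prepare_snippets(waveform, sr=24000, snippet_sec=4):
    """Split a multi-channel recording into non-overlapping snippets and
    normalize each snippet to [-1, 1].

    waveform: float array of shape (channels, samples).
    Returns an array of shape (num_snippets, channels, snippet_samples).
    """
    snip = sr * snippet_sec
    n = waveform.shape[1] // snip                 # drop any ragged tail
    chunks = waveform[:, :n * snip].reshape(waveform.shape[0], n, snip)
    chunks = np.transpose(chunks, (1, 0, 2))      # snippet-major layout
    peak = np.abs(chunks).max(axis=(1, 2), keepdims=True)
    return chunks / np.maximum(peak, 1e-9)        # peak-normalize per snippet

# one-minute, four-channel recording at 24 kHz -> 15 snippets of 4 s
rec = np.random.randn(4, 60 * 24000).astype(np.float32)
snippets = prepare_snippets(rec)
print(snippets.shape)  # (15, 4, 96000)
```

The 24 kHz sampling rate matches the microphone configuration reported in the simulation setup.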
null
false
24
We observe language-agnostic representations in the last subsection. If tokens are represented in a language-agnostic way, the model may be able to handle code-switching data. Because there is no code-switching data for RC, we create artificial code-switching datasets by replacing some of the words in contexts or questions with their synonyms in another language. The synonyms are found by word-by-word translation with given dictionaries. We use the bilingual dictionaries collected and released in the facebookresearch/MUSE GitHub repository. We substitute the words if and only if the words are in the bilingual dictionaries. Table TABREF14 shows that on all the code-switching datasets, the EM/F1 score drops, indicating that the semantics of representations are not totally disentangled from language. However, the examples of the model's answers (Table TABREF21) show that multi-BERT could find the correct answer spans although some keywords in the spans have been translated into another language.
How did they find the synonyms when replacing some of the words in contexts or questions with their synonyms in another language?
By word-by-word translation with given dictionaries.
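The substitution rule (replace a word if and only if it appears in the bilingual dictionary) can be sketched as follows. The toy English-to-French dictionary is invented for illustration; the paper uses the MUSE bilingual dictionaries:

```python
def make_code_switched(tokens, bilingual_dict):
    """Word-by-word substitution used to build artificial code-switching
    data: a token is replaced by its translation if and only if it
    appears in the bilingual dictionary; otherwise it is kept as-is."""
    return [bilingual_dict.get(t, t) for t in tokens]

# toy English -> French dictionary for illustration
en_fr = {"cat": "chat", "house": "maison"}
switched = make_code_switched(["the", "cat", "in", "the", "house"], en_fr)
print(switched)  # -> ['the', 'chat', 'in', 'the', 'maison']
```

Applying this to contexts or questions yields sentences that mix languages at the word level, which is what makes the resulting RC datasets artificially code-switched.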
1911.12569
false
null
We propose a novel two-layered multi-task attention based neural network for sentiment analysis where emotion analysis is utilized to improve its efficiency. Figure FIGREF1 illustrates the overall architecture of the proposed multi-task system. The proposed system consists of a Bi-directional Long Short-Term Memory (BiLSTM) BIBREF16, a two-level attention mechanism BIBREF29, BIBREF30 and a shared representation for emotion and sentiment analysis tasks. The BiLSTM encodes the word representation of each word. This representation is shared between the subsystems of sentiment and emotion analysis. Each of the shared representations is then fed to the primary attention mechanism of both the subsystems. The primary attention mechanism finds the best representation for each word for each task. The secondary attention mechanism acts on top of the primary attention to extract the best sentence representation by focusing on the suitable context for each task. Finally, the representations of both the tasks are fed to two different feed-forward neural networks to produce two outputs - one for sentiment analysis and one for emotion analysis. Each component is explained in the subsequent subsections.
How is multi-tasking performed?
The answers are shown as follows: * The proposed system consists of a Bi-directional Long Short-Term Memory (BiLSTM) BIBREF16, a two-level attention mechanism BIBREF29, BIBREF30 and a shared representation for emotion and sentiment analysis tasks. * Each of the shared representations is then fed to the primary attention mechanism
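A loose NumPy sketch of the two-level attention flow for a single task is shown below. The paper's exact attention parameterization is not given in this excerpt, so plain dot-product scoring vectors stand in for the learned primary and secondary attention parameters; in the full model, the BiLSTM outputs `H` are shared and each task (sentiment, emotion) has its own pair of attentions and its own feed-forward output network:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def two_level_attention(H, w_primary, w_secondary):
    """Sketch of two-level attention for ONE task.

    H: (T, d) shared BiLSTM outputs for a T-word sentence.
    w_primary, w_secondary: (d,) task-specific scoring vectors
    (stand-ins for the learned attention parameters)."""
    # primary attention: re-weight each word representation for the task
    a1 = softmax(H @ w_primary)            # (T,)
    H_task = H * a1[:, None]               # per-word task representation
    # secondary attention: pool the words into a sentence representation
    a2 = softmax(H_task @ w_secondary)     # (T,)
    return a2 @ H_task                     # (d,) sentence vector

rng = np.random.default_rng(0)
H = rng.normal(size=(7, 16))               # 7 words, 16-dim hidden states
sent_vec = two_level_attention(H, rng.normal(size=16), rng.normal(size=16))
print(sent_vec.shape)  # (16,)
```

Running this once per task on the same `H` yields the two task-specific sentence vectors that would then feed the sentiment and emotion output heads.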
null
false
null
The dog has been selectively bred over millennia for various behaviors, sensory capabilities, and physical attributes. Dog breeds vary widely in shape, size, and color. They perform many roles for humans, such as hunting, herding, pulling loads, protection, assisting police and the military, companionship, therapy, and aiding disabled people. Over the millennia, dogs became uniquely adapted to human behavior, and the human–canine bond has been a topic of frequent study. This influence on human society has given them the sobriquet of "man's best friend".
What are dogs?
The dog (Canis familiaris or Canis lupus familiaris) is a domesticated descendant of the wolf. Also called the domestic dog, it is derived from the extinct Pleistocene wolf, and the modern wolf is the dog's nearest living relative. Dogs were the first species to be domesticated by hunter-gatherers over 15,000 years ago before the development of agriculture. Due to their long association with humans, dogs have expanded to a large number of domestic individuals and gained the ability to thrive on a starch-rich diet that would be inadequate for other canids.
null
false
159
We assessed the proposed models on four different NLG domains: finding a restaurant, finding a hotel, buying a laptop, and buying a television. The Restaurant and Hotel datasets were collected in BIBREF4 (Wen et al., 2015b), while the Laptop and TV datasets were released by BIBREF22 (Wen et al., 2016a) with a much larger input space but only one training example for each DA, so that the system must learn partial realization of concepts and be able to recombine and apply them to unseen DAs. This makes the NLG tasks for the Laptop and TV domains much harder. The dataset statistics are shown in Table 1.
Why are NLG tasks for the Laptop and TV domains much harder than others?
It has a much larger input space but only one training example for each DA so that the system must learn partial realization of concepts and be able to recombine and apply them to unseen DAs.
null
false
128
Tatoeba (TT) is a community built language learning corpus having sentences aligned across multiple languages with the corresponding speech partially available. Its sentences are on average shorter than those in CoVoST (see also Table TABREF2) given the original purpose of language learning. Sentences in TT are licensed under CC BY 2.0 FR and part of the speeches are available under various CC licenses. We construct an evaluation set from TT (for French, German, Dutch, Russian and Spanish) as a complement to CoVoST development and test sets. We collect (speech, transcript, English translation) triplets for the 5 languages and do not include those whose speech has a broken URL or is not CC licensed. We further filter these samples by sentence lengths (minimum 4 words including punctuations) to reduce the portion of short sentences. This makes the resulting evaluation set closer to real-world scenarios and more challenging. We run the same quality checks for TT as for CoVoST but we do not find poor quality translations according to our criteria. Finally, we report the overlap between CoVo transcripts and TT sentences in Table TABREF5. We found a minimal overlap, which makes the TT evaluation set a suitable additional test set when training on CoVoST.
Why is Tatoeba (TT) chosen as an additional test set?
TT is also a corpus having multiple languages, and it has corresponding speech partially available. After certain treatment of TT, it becomes quite convenient for evaluation as the overlap between CoVoST and TT is quite small.
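The length filtering and overlap check described above can be sketched as follows (the triplets and sentences are invented for illustration; the real pipeline also drops broken URLs and non-CC-licensed audio):

```python
def build_tt_eval(triplets, min_words=4):
    """Keep (speech_url, transcript, translation) triplets whose
    transcript has at least `min_words` tokens (punctuation counts as a
    word, matching the filtering step described above)."""
    return [t for t in triplets if len(t[1].split()) >= min_words]

def overlap_ratio(covost_transcripts, tt_sentences):
    """Fraction of TT sentences that also appear as CoVoST transcripts."""
    covo = set(covost_transcripts)
    return sum(s in covo for s in tt_sentences) / len(tt_sentences)

triplets = [
    ("url1", "Bonjour .", "Hello ."),               # 2 tokens: dropped
    ("url2", "Il fait beau aujourd'hui .", "The weather is nice today ."),
]
kept = build_tt_eval(triplets)
print(len(kept))  # 1
print(overlap_ratio(["a b c d", "e f g h"], ["a b c d", "x y z w"]))  # 0.5
```

A low overlap ratio is what justifies using TT as an additional held-out test set alongside the CoVoST splits.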
null
false
null
What are some good slogan ideas for my new boat rental company?
Here are some potential slogans for your boat rental company: - Have a boat but only when you want a boat - There's nothing worse than being a boat owner. There's nothing better than being a boat renter. - Just leave it where you found it - We'll do the hard stuff. You just have fun out there. - You're the king of the lake today
1909.00578
false
null
Methods ::: Baselines ::: BiGRUs with attention: This is very similar to Sum-QE, but now $\mathcal {E}$ is a stack of BiGRUs with self-attention BIBREF21, instead of a BERT instance. The final summary representation ($h$) is the sum of the resulting context-aware token embeddings ($h = \sum _i a_i h_i$) weighted by their self-attention scores ($a_i$). We again have three flavors: one single-task (BiGRU-ATT-S-1) and two multi-task (BiGRU-ATT-M-1 and BiGRU-ATT-M-5). Methods ::: Baselines ::: ROUGE: This baseline is the ROUGE version that performs best on each dataset, among the versions considered by BIBREF13. Although ROUGE focuses on surface similarities between peer and reference summaries, we would expect properties like grammaticality, referential clarity and coherence to be captured to some extent by ROUGE versions based on long $n$-grams or longest common subsequences. Methods ::: Baselines ::: Language model (LM): For a peer summary, a reasonable estimate of $\mathcal {Q}1$ (Grammaticality) is the perplexity returned by a pre-trained language model. We experiment with the pre-trained GPT-2 model BIBREF22, and with the probability estimates that BERT can produce for each token when the token is treated as masked (BERT-FR-LM). Given that the grammaticality of a summary can be corrupted by just a few bad tokens, we compute the perplexity by considering only the $k$ worst (lowest LM probability) tokens of the peer summary, where $k$ is a tuned hyper-parameter. Methods ::: Baselines ::: Next sentence prediction: BERT training relies on two tasks: predicting masked tokens and next sentence prediction. The latter seems to be aligned with the definitions of $\mathcal {Q}3$ (Referential Clarity), $\mathcal {Q}4$ (Focus) and $\mathcal {Q}5$ (Structure & Coherence). Intuitively, when a sentence follows another with high probability, it should involve clear referential expressions and preserve the focus and local coherence of the text. We therefore use a pre-trained BERT model (BERT-FR-NS) to calculate the sentence-level perplexity of each summary as $PPL = \exp \left( -\frac{1}{n} \sum _{i=2}^{n} \log p(s_i \mid s_{i-1}) \right)$, where $p(s_i \mid s_{i-1})$ is the probability that BERT assigns to the sequence of sentences $\left< s_{i-1}, s_i \right>$, and $n$ is the number of sentences in the peer summary.
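The k-worst-token perplexity baseline described above can be sketched as follows; the function name and inputs are illustrative assumptions, not the authors' code, and the token log-probabilities are assumed to come from a model such as GPT-2 or BERT in masked-LM mode:

```python
import math

def k_worst_perplexity(token_logprobs, k):
    """Perplexity computed over only the k lowest-probability tokens.

    token_logprobs: one log-probability per token of the peer summary.
    k: tuned hyper-parameter from the text (how many "bad" tokens to keep).
    """
    worst = sorted(token_logprobs)[:k]          # the k least likely tokens
    return math.exp(-sum(worst) / len(worst))   # exp of mean negative log-prob
```

With k equal to the summary length, this reduces to ordinary per-token perplexity; smaller k focuses the estimate on the few tokens most likely to signal a grammaticality problem.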
What simpler models do they look at?
The answers are shown as follows: * BiGRUs with attention * ROUGE * Language model (LM) * Next sentence prediction
null
false
null
Should I charge my phone if the battery is low?
Yes, typically it would be a good idea to charge your battery in that case. If your battery runs out of charge, you may miss an important text message or phone call!
null
false
null
Alexander III of Macedon (Ancient Greek: Ἀλέξανδρος, romanized: Alexandros; 20/21 July 356 BC – 10/11 June 323 BC), commonly known as Alexander the Great, was a king of the ancient Greek kingdom of Macedon. He succeeded his father Philip II to the throne in 336 BC at the age of 20, and spent most of his ruling years conducting a lengthy military campaign throughout Western Asia and Egypt. By the age of 30, he had created one of the largest empires in history, stretching from Greece to northwestern India. He was undefeated in battle and is widely considered to be one of history's greatest and most successful military commanders.
Where was Alexander The Great from?
Alexander the Great was king of the ancient Greek kingdom of Macedon.
null
false
null
Kaka means parrot in which language
Maori
null
false
46
The architecture of the QA model is briefly summarized below. Here we choose QANet BIBREF2 as the base model due to the following reasons: 1) it achieves the second best performance on SQuAD, and 2) since there are no recurrent networks in QANet, its training speed is 5x faster than BiDAF BIBREF17 when reaching the same performance on SQuAD. The network architecture is illustrated in Figure FIGREF2 . The left blocks and the right blocks form two QANets, each of which takes a document and a question as the input and outputs an answer. In QANet, firstly, an embedding encoder obtains word and character embeddings for each word in INLINEFORM0 or INLINEFORM1 and then models the temporal interactions between words and refines word vectors to contextualized word representations. All encoder blocks used in QANet are composed exclusively of depth-wise separable convolutions and self-attention. The intuition here is that convolution components can model local interactions and self-attention components focus on modeling global interactions. The context-query attention layer generates the question-document similarity matrix and computes the question-aware vector representations of the context words. After that, a model encoder layer containing seven encoder blocks captures the interactions among the context words conditioned on the question. Finally, the output layer predicts a start position and an end position in the document to extract the answer span from the document.
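The role of the context-query attention layer can be sketched as follows. This is a simplified illustration, not the paper's code: it uses a plain dot-product similarity in place of QANet's trilinear function, and all names are assumptions.

```python
import numpy as np

def context_query_attention(C, Q):
    """C: (n, d) context word vectors; Q: (m, d) question word vectors.

    Returns the (n, m) question-document similarity matrix S and
    question-aware representations of the context words.
    """
    S = C @ Q.T                                   # similarity matrix
    A = np.exp(S - S.max(axis=1, keepdims=True))  # row-wise softmax over
    A /= A.sum(axis=1, keepdims=True)             # the question words
    return S, A @ Q                               # attended question vector per context word
```

Each context word thus receives a summary of the question weighted by how similar the two are, which is what "question-aware vector representations of the context words" refers to.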
What does the context-query attention layer generate?
The question-document similarity matrix.
null
false
67
Alzheimer’s disease (AD) is a neurodegenerative disease affecting over 40 million people worldwide with high costs of acute and long-term care BIBREF0 . Recruitment of participants with cognitive impairment has historically been a bottleneck in clinical trials BIBREF1 , making AD datasets relatively small. Additionally, though cognitive assessments test domains of cognition through multiple tasks, most available datasets of pathological speech are restricted to participants performing a single task. Picture description using an image to elicit narrative discourse samples is one such task that has proved to be successful in detecting AD BIBREF2 . However, it is important to develop ML models of high performance that would produce results generalizable across different tasks. Several studies have used natural language processing and machine learning to distinguish between healthy and cognitively impaired speech of participants describing a picture. Fraser et al. BIBREF3 used linguistic and acoustic features to classify healthy and pathological speech transcripts with an accuracy of INLINEFORM0 . Similarly, Karlekar et al. BIBREF4 classified utterances of speakers as AD or healthy (HC) with an accuracy of INLINEFORM1 using an enlarged, utterance-level view of transcripts of picture descriptions. In line with previous research, we use linguistic and acoustic features of speech as input to our ML model. Furthermore, we extend the model to using data from several different tasks. Noorian et al. BIBREF5 demonstrated that using within-task data of healthy participants describing a picture improved AD detection performance by up to 13%. In this paper, we evaluate if model performance improves with the addition of data from healthy participants, with varying ages, performing either the same or different tasks. 
We find that models trained on datasets of picture description tasks augmented with conversational speech of healthy speakers learn decision boundaries that are more generalizable across activities with lower out-of-task errors. We observe a 9% increase in AD detection performance when normative data from different tasks are utilized. We also analyze if each task provides domain-specific inductive bias for other tasks to obtain a model setting capable of detecting AD from any sample of speech using high-precision model-agnostic explanations proposed by Ribeiro et al. BIBREF6 and computation of various error metrics related to classification.
What kind of datasets does the author use to train the models?
The datasets of picture description tasks.
null
false
null
Classify these tasks by how easy they would be for a 10 year old to do (either easy or hard): drive a car, tie their shoes, throw a baseball, go to college, ride a bike, spell the word "water"
drive a car: hard, tie their shoes: easy, throw a baseball: easy, go to college: hard, ride a bike: easy, spell the word "water": easy
1701.08118
false
null
The surveys consisted of 20 tweets presented in a random order. For each tweet, each participant was asked three questions. Depending on the survey, participants were asked (1) to answer (yes/no) if they considered the tweet hate speech, either based on the definition or based on their personal opinion. Afterwards they were asked (2) to answer (yes/no) if the tweet should be banned from Twitter. Participants were finally asked (3) to answer how offensive they thought the tweet was on a 6-point Likert scale from 1 (Not offensive at all) to 6 (Very offensive). If they answered 4 or higher, the participants had the option to state which particular words they found offensive.
Was the degree of offensiveness taken as how generally offensive the text was, or how personally offensive it was to the annotator?
Personal thought of the annotator.
null
false
null
List the prior presidents of Boston College
Monan, Leahy, Fulton, Gasson, Lyons, Devlin
null
false
null
Write a story about a toddler who woke up in the middle of the night after a nightmare about a monster under their bed.
The toddler woke up after hearing a shuffling noise and feeling their bed rattle. The bedroom was eerily dark and quiet all of a sudden. He turned on the light and saw a quick glimpse of what looked like hands retreating under the bed. "DAD!", he yelled out. As his dad ran into the room, he started crying - muttering, in between sniffles, about a monster under the bed. Dad bravely poked under the bed, squealed with excitement, and pulled out a toy fork from under the bed. Smiling, he showed the toddler his new toy - that he'd forgotten about. After a few cuddles and another bedtime story, he was fast asleep again. And the monster under the bed was back to planning its next move.
null
false
82
We evaluate the performance of text simplification using neural machine translation on available parallel sentences and additional simplified sentences. Dataset. We use two simplification datasets (WikiSmall and WikiLarge). WikiSmall consists of ordinary and simplified sentences from the ordinary and simple English Wikipedias, which has been used as a benchmark for evaluating text simplification BIBREF17 , BIBREF18 , BIBREF8 . The training set has 89,042 sentence pairs, and the test set has 100 pairs. WikiLarge is also from the Wikipedia corpus; its training set contains 296,402 sentence pairs BIBREF19 , BIBREF20 . WikiLarge includes 8 (reference) simplifications for 2,359 sentences split into 2,000 for development and 359 for testing. Metrics. Three text simplification metrics are chosen in this paper. BLEU BIBREF5 is a traditional machine translation metric used to assess the degree to which translated simplifications differ from reference simplifications. FKGL measures the readability of the output BIBREF23 . A small FKGL represents simpler output. SARI is a recent text-simplification metric that compares the output against the source and reference simplifications BIBREF20 . We evaluate the output of all systems using human evaluation. The metric is denoted as Simplicity BIBREF8 . Three non-native fluent English speakers are shown reference sentences and output sentences. They are asked whether the output sentence is much simpler (+2), somewhat simpler (+1), equally (0), somewhat more difficult (-1), and much more difficult (-2) than the reference sentence. Methods. We use OpenNMT BIBREF24 as the implementation of the NMT system for all experiments BIBREF5 . We generally follow the default settings and training procedure described by Klein et al. (2017). We replace out-of-vocabulary words with a special UNK symbol. At prediction time, we replace UNK words with the source word given the highest score by the attention layer.
The OpenNMT system trained on parallel data is the baseline system. To obtain a synthetic parallel training set, we back-translate a random sample of 100K sentences from the collected simplified corpora. OpenNMT trained on both parallel data and synthetic data is our model. The benchmarks are run on an Intel(R) Core(TM) i7-5930K CPU@3.50GHz, 32GB Mem, trained on 1 GPU GeForce GTX 1080 (Pascal) with CUDA v. 8.0. We choose three statistical text simplification systems. PBMT-R is a phrase-based method with a reranking post-processing step BIBREF18 . Hybrid performs sentence splitting and deletion operations based on discourse representation structures, and then simplifies sentences with PBMT-R BIBREF25 . SBMT-SARI BIBREF19 is a syntax-based translation model using the PPDB paraphrase database BIBREF26 and a modified tuning function (using SARI). We choose two neural text simplification systems. NMT is a basic attention-based encoder-decoder model which uses the OpenNMT framework to train with two LSTM layers, hidden states of size 500 and 500 hidden units, SGD optimizer, and a dropout rate of 0.3 BIBREF8 . Dress is an encoder-decoder model coupled with a deep reinforcement learning framework, and the parameters are chosen according to the original paper BIBREF20 . For the experiments with synthetic parallel data, we back-translate a random sample of 60,000 sentences from the collected simplified sentences into ordinary sentences. Our model is trained on synthetic data and the available parallel data, denoted as NMT+synthetic. Results. Table 1 shows the results of all models on the WikiLarge dataset. We can see that our method (NMT+synthetic) obtains higher BLEU, lower FKGL and high SARI compared with other models, except Dress on FKGL and SBMT-SARI on SARI. This verifies that including synthetic data during training is very effective, and yields an improvement over our baseline NMT of 2.11 BLEU, 1.7 FKGL and 1.07 SARI. We also substantially outperform Dress, which previously reported the SOTA result.
The results of our human evaluation using Simplicity are also presented in Table 1. NMT on synthetic data is significantly better than PBMT-R, Dress, and SBMT-SARI on Simplicity. It indicates that our method with simplified data is effective at creating simpler output. Results on the WikiSmall dataset are shown in Table 2. We see substantial improvements (6.37 BLEU) over NMT from adding simplified training data with synthetic ordinary sentences. Compared with statistical machine translation models (PBMT-R, Hybrid, SBMT-SARI), our method (NMT+synthetic) still has better results, but slightly worse FKGL and SARI. Similar to the results on WikiLarge, the results of our human evaluation using Simplicity outperform the other models. In conclusion, our method produces better results compared with the baselines, which demonstrates the effectiveness of adding simplified training data.
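For reference, the FKGL readability metric used in these experiments can be computed from simple counts. This is the standard Flesch-Kincaid Grade Level formula, independent of the authors' implementation:

```python
def fkgl(total_words, total_sentences, total_syllables):
    """Flesch-Kincaid Grade Level; lower scores indicate simpler output."""
    return (0.39 * (total_words / total_sentences)
            + 11.8 * (total_syllables / total_words)
            - 15.59)
```

A text with short sentences and few syllables per word (e.g. 100 words, 10 sentences, 130 syllables) scores around grade 3.65, while longer, denser text scores higher.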
What are the statistical text simplification systems used in the research?
PBMT-R, Hybrid and SBMT-SARI.
null
false
null
What is a bear in finance?
A bear is someone who expects share prices to fall in the future and sells shares now so that they can buy them back later at a lower price.
null
false
null
What is the standard loft of a 9-iron in golf?
The standard loft in today's 9-irons ranges from 45 to 48 degrees.
null
false
null
What does Tianwen means?
Tianwen means quest for heavenly truth
null
false
null
Name the most famous detective books in the history
- Murder on the Orient Express by Agatha Christie (1934) - The Hound of the Baskervilles by Sir Arthur Conan Doyle (1902) - The Maltese Falcon by Dashiell Hammett (1930) - The Big Sleep by Raymond Chandler (1939) - The Name of the Rose by Umberto Eco (1980) - In the Woods by Tana French (2007)
null
false
null
What caused the millenium tower to sink in san francisco?
Critics blame the city for allowing Millennium Partners to anchor the building into packed sand rather than bedrock. The problem with Millennium Tower arose because the 60 to 90-foot-long friction piles supporting it were driven into sandy soil rather than the bedrock, 200 feet down.
null
false
null
The main ways for employers to find workers and for people to find employers are via jobs listings in newspapers (via classified advertising) and online, also called job boards. Employers and job seekers also often find each other via professional recruitment consultants which receive a commission from the employer to find, screen and select suitable candidates. However, a study has shown that such consultants may not be reliable when they fail to use established principles in selecting employees. A more traditional approach is with a "Help Wanted" sign in the establishment (usually hung on a window or door or placed on a store counter). Evaluating different employees can be quite laborious but setting up different techniques to analyze their skills to measure their talents within the field can be best through assessments. Employer and potential employee commonly take the additional step of getting to know each other through the process of a job interview.
Summarise the ways an employer can find workers using the given text as a reference
1. Job listings in newspapers through classified advertising. 2. Posting job listings online (job boards). 3. Through recruitment consultants who take a commission from the employer to find, screen and select suitable candidates. 4. By displaying a 'Help Wanted' sign in the establishment, usually hung on a window or door or placed on a store counter.
null
false
null
Where should you visit in Belize if you are looking for a nice beach?
You should visit Secret Beach on Ambergris Caye if you want to experience calm waters without sargassum (sea grass) on the beach. In Secret Beach you will find the typical postcard beaches with nice sand, calm waters and additionally you get to watch the sunset since this side of the island faces west. On the east side of Ambergris Caye, you will experience more wind, more waves and sargassum. However, this side of the island has better access to the reef for snorkeling and fishing.
null
false
null
What can I do to help me sleep better?
Make sure that you turn off your phone at least an hour before bedtime. Avoid eating close to bedtime. Make sure that you allow for enough time to spend in bed: at least 7-9 hours. Avoid drinking alcohol right before bed. Don't rely on substances to help you sleep. Get significant exercise during the day. For most people, though, you don't want to exercise right before sleep. Avoid benzodiazepines as much as possible because they have a significant tendency to lead to dependency and will interfere with sleep.
null
false
null
Pultenaea empetrifolia is a species of flowering plant in the family Fabaceae and is endemic to the south-west of Western Australia. It is a spindly, prostrate or spreading shrub with down-curved, cylindrical, grooved leaves and yellow to orange and red flowers.
Description
Pultenaea empetrifolia is a spindly, prostrate or spreading shrub that typically grows to a height of up to 40 cm (16 in) and has glabrous stems. The leaves are cylindrical and curved strongly downwards with one or two grooves along the lower surface, 3–4.5 mm (0.12–0.18 in) long and 0.6–1.0 mm (0.024–0.039 in) wide with stipules at the base. The flowers are yellow to orange and red, and sessile or on a pedicel up to about 0.5 mm (0.020 in) long. The sepals are hairy and 4.0–4.6 mm (0.16–0.18 in) long with hairy bracteoles 1.5–3 mm (0.059–0.118 in) long at the base. The standard petal is 8.0–8.5 mm (0.31–0.33 in) long, the wings 7–8 mm (0.28–0.31 in) long and the keel 6.2–7.0 mm (0.24–0.28 in) long. Flowering occurs from September to October and the fruit is a pod.
Given a reference text about Pultenaea empetrifolia, tell me the colors of its flowers.
The Pultenaea empetrifolia has yellow to orange and red flowers.
null
false
null
The boat measures 99 feet (30 m) long overall, with a hull length of 80 feet (24 m), and an 18-foot (5.5 m) beam. She draws 32 inches (0.8 m). She is powered by a 160-horsepower (120 kW) 1940 Fairbanks-Morse 35E10 direct reversing diesel engine. She measures 51 gross registry tons and displaces approximately 108 long tons (110 t). She is fashioned out of steel plates riveted to a steel frame. She has a scow-form bow fitted with special "knees" used in pushing barges, and a flat bottom with no keel
Given the reference text about a towboat, how does the boat work?
The boat is powered by a 160-horsepower 1940 Fairbanks-Morse 35E10 direct reversing diesel engine.
null
false
null
Identify which car manufacturer is Chinese or American: Jeep, Karry
Karry is Chinese, Jeep is American
null
false
null
Serbia's electoral system was reformed in 2011, such that parliamentary mandates were awarded in numerical order to candidates on successful lists. Milisavljević received the forty-ninth position on the Democratic Party's Choice for a Better Life list and was re-elected when the list won sixty-seven mandates. The Serbian Progressive Party and the Socialist Party of Serbia formed a new coalition government after the election, and the Democratic Party moved into opposition. In 2013, Milisavljević proposed that the government of Serbia sell its high-powered Mercedes, Audi, and BMW cars in favour of the Fiat 500L model produced in Kragujevac. This proposal received some attention in the Italian media.
Extract the names of Serbian political parties mentioned in the text and put them into a bulleted list.
* Democratic Party * Serbian Progressive Party * Socialist Party of Serbia
1603.07044
true
null
Moreover, our approach is also language independent. We have also performed preliminary experiments on the Arabic portion of the SemEval-2016 cQA task. The results are competitive with a hand-tuned strong baseline from SemEval-2015.
Did they experimnet in other languages?
Yes.
1910.07134
false
null
The Transformer network BIBREF3 is a neural sequence-to-sequence model that has achieved state-of-the-art results in machine translation. However, Transformer models tend to be very large, typically consisting of hundreds of millions of parameters. As the number of parameters directly corresponds to secondary storage requirements and memory consumption during inference, using Transformer networks may be prohibitively expensive in scenarios with constrained resources. For the 2019 Workshop on Neural Generation of Text (WNGT) Efficiency shared task BIBREF0, the Notre Dame Natural Language Processing (NDNLP) group looked at a method of inducing sparsity in parameters called auto-sizing in order to reduce the number of parameters in the Transformer at the cost of a relatively minimal drop in performance.
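Auto-sizing, as described above, induces sparsity over whole groups of parameters so that entire units can be removed. A crude magnitude-based sketch of that pruning idea (the function name and threshold are assumptions, not the shared-task code, which uses a group regularizer during training):

```python
import numpy as np

def prune_rows(W, threshold):
    """Zero out parameter rows whose L2 norm falls below threshold.

    A simplified stand-in for the effect of group-sparsity regularization:
    entire rows (groups) are removed rather than individual weights, which
    is what allows the model dimensions to shrink after training.
    """
    W = W.copy()
    norms = np.linalg.norm(W, axis=1)  # one norm per parameter group
    W[norms < threshold] = 0.0
    return W
```

Rows driven exactly to zero can then be deleted from the weight matrix, reducing storage and inference cost.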
What is WNGT 2019 shared task?
An efficiency task aimed at reducing the number of parameters while minimizing the drop in performance.
null
false
null
What was Nicola Yoon's first published novel?
Everything, Everything, which was released in September 2015.
null
false
null
Which of the following are men scientists: Marie Curie, Elizabeth Blackwell, Jane Goodall, Rosalind Franklin, Ada Lovelace, Maria Mitchell, Archimedes, Albert Einstein, Ernest Rutherford, Alfred Nobel, Alexander Fleming, Michael Faraday, Niels Bohr, Stephen Hawking, Alan Turing.
Archimedes, Albert Einstein, Ernest Rutherford, Alfred Nobel, Alexander Fleming, Michael Faraday, Niels Bohr, Stephen Hawking, Alan Turing are scientists who are men.
null
false
null
Horsfieldia nervosa is a species of plant in the family Myristicaceae. It is a tree endemic to Borneo where it is confined to Sarawak.
Is Horsfieldia nervosa a plant or an animal?
plant
null
false
371
The recent increase in digitally available language corpora made it possible to extend the traditional linguistic tools to a vast amount of often user-generated texts. Understanding how these corpora differ from traditional texts is crucial in developing computational methods for web search, information retrieval or machine translation BIBREF0 . The amount of these texts enables the analysis of language on a previously unprecedented scale BIBREF1 , BIBREF2 , BIBREF3 , including the dynamics, geography and time scale of language change BIBREF4 , BIBREF5 , social media cursing habits BIBREF6 , BIBREF7 , BIBREF8 or dialectal variations BIBREF9 . From online user activity and content, it is often possible to infer different socio-economic variables on various aggregation scales. Ranging from showing correlation between the main language features on Twitter and several demographic variables BIBREF10 , through predicting heart-disease rates of an area based on its language use BIBREF11 or relating unemployment to social media content and activity BIBREF12 , BIBREF13 , BIBREF14 to forecasting stock market moves from search semantics BIBREF15 , many studies have attempted to connect online media language and metadata to real-world outcomes. Various studies have analyzed spatial variation in the text of OSN messages and its applicability to several different questions, including user localization based on the content of their posts BIBREF16 , BIBREF17 , empirical analysis of the geographic diffusion of novel words, phrases, trends and topics of interest BIBREF18 , BIBREF19 , measuring public mood BIBREF20 . While many of the above cited studies exploit the fact that language use or social media activity varies in space, it is hard to capture the impact of the geographic environment on the used words or concepts. 
There is a growing literature on how the sheer size of a settlement influences the number of patents, GDP or the total road length, driven by universal laws BIBREF21 . These observations led to the establishment of the theory of urban scaling BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 , BIBREF30 , where scaling laws with city size have been observed in various measures such as economic productivity BIBREF31 , human interactions BIBREF32 , urban economic diversification BIBREF33 , election data BIBREF34 , building heights BIBREF35 , crime concentration BIBREF36 , BIBREF37 or touristic attractiveness BIBREF38 . In our paper, we aim to capture the effect of city size on language use via individual urban scaling laws of words. By examining the so-called scaling exponents, we are able to connect geographical size effects to systematic variations in word use frequencies. We show that the sensitivity of words to population size is also reflected in their meaning. We also investigate how social media language and city size affect the parameters of Zipf's law BIBREF39 , and how the exponent of Zipf's law differs from the literature value BIBREF39 , BIBREF40 . We also show that the number of new words needed in longer texts, described by Heaps' law BIBREF1 , exhibits a power-law form on Twitter, indicating a decelerating growth of distinct tokens with city size. In this study, we analyze 456 million geolocated tweets collected between February 2012 and August 2014 from the area of the United States.
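The "scaling exponents" mentioned above are conventionally estimated by a linear fit in log-log space, since a power law $y = c\,x^{\beta}$ becomes linear after taking logarithms. A minimal sketch (variable names are illustrative, not the authors' code):

```python
import numpy as np

def scaling_exponent(city_population, word_count):
    """Fit y ~ c * x^beta by least squares on log-transformed data.

    Returns the exponent beta: beta > 1 indicates superlinear scaling
    with city size, beta < 1 sublinear, beta = 1 proportional growth.
    """
    beta, log_c = np.polyfit(np.log(city_population),
                             np.log(word_count), 1)
    return beta
```

Applied per word, this yields one exponent per word, so words can be ranked by how sensitively their usage frequency responds to population size.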
How many geolocated tweets do the authors analyze?
456 million.
null
false
null
The Davie School (also known as the Old Davie School) is a historic school in Davie, Florida, USA. It is located at 6650 Griffin Road. It was designed by August Geiger. It opened its doors in 1918 to 90 students, and was in continuous use as a school until 1978. On March 29, 1988, it was added to the U.S. National Register of Historic Places. It is the oldest extant school building in Broward County.
School building
The architect Geiger was hired by the school board in 1917 and presented them with his plans for the school that September. It is a T-shaped two-story building constructed of concrete with a textured stucco finish. The building has a hip roof with a surrounding parapet. Architectural elements include horseshoe and bell-shaped openings, raised bands of stucco and multiple panel windows.
Given a reference text about the Davie School, tell me when it was added to the register of historic places and when it was opened and closed.
The Davie School opened in 1918 and closed in 1978. It was added to the U.S. National Register of Historic places on March 29, 1988.
null
false
null
Write a short story about a little girl who befriends a dragon, the dragons magical powers can bring her to a different universe.
Once upon a time there was a little girl who dreamed every day of finding a dragon. This particular Tuesday morning, when she was waiting for the bus, she spotted a shiny object on the sidewalk. She leaned down to pick it up and it glimmered in the sunshine. She had never seen such a brilliant treasure; it was the color of the rainbow, and when she looked closer it looked eerily similar to a dragon scale, but she couldn't believe that it was truly a dragon scale because they only existed in the stories she read. She looked around and spotted another shiny scale, like a trail of breadcrumbs leading her to the forest behind her house. She knew that her parents would want her to go to school, but her curiosity got the better of her and she started to follow the trail of shiny scales to the forest. She fought her way through the forest until she came upon what looked like a cave that she was sure was not there before. She cautiously entered the cave, and instead of the cave getting darker, it was getting brighter, as if a brilliant light was inside. When she turned the corner in the cave, she saw the most beautiful sparkling dragon and could not believe her eyes! She slowly crept toward the dragon, who was watching her closely as she approached. She held out her hand as if to pet the dragon, and the dragon sniffed her hand, then rubbed its head on the bottom of her hand to signal that it wanted to befriend her. She giggled as the dragon purred and rubbed its shiny body circling around her. The dragon then spoke and asked her if she had ever been to the beyond worlds. The girl shook her head; the dragon blinked its eyes and projected an image onto the cave wall to show one of the beautiful beyond worlds that was filled with dragons. The girl asked if the dragon would take her there. The dragon laughed and nodded, signaling for the girl to jump on its back. A rush of excitement came over the girl, and she hopped on the dragon's back.
She held on tight as the dragon started running out of the cave; as the dragon exited the cave, a portal opened and it leaped into the portal. The girl closed her eyes, not knowing what would happen next. When she finally opened her eyes, she gawked with amazement as they arrived in the beyond world; she could not believe what she was seeing: DRAGONS! DRAGONS EVERYWHERE! They were riding over the rainbow, playing in the waterfalls, and the universe was full of beautiful flowers towering as tall as redwood trees. The sky was purple and the world had 10 moons as she glanced up in the sky... this was going to be a great adventure!
null
false
null
Larissa Ilinichna Volpert (Russian: Лариса Ильинична Вольперт; 30 March 1926 – 1 October 2017) was a Soviet chess Woman Grandmaster and Russian and Estonian philologist. She was a three-time Soviet women's chess champion (1954, 1958, and 1959). Life Born in Leningrad, she learned chess from her older brother and received chess instruction at the Leningrad Pioneers Palace. In 1947, she tied for first at the Leningrad Women's Championship. She played her first USSR Women's Championship in 1949 and finished equal fifth. In 1954, she won her first USSR Women's Championship. She scored 2–0 against Nina Hrušková-Bělská in the 1954 USSR v. Czechoslovakia match. In 1958 she shared the USSR Women's Championship title, and in 1959 she won for the third time, her second outright victory. Volpert earned the Woman International Master title in 1954 and the Woman Grandmaster title in 1978. She held a degree in philology from Leningrad University, and from 1977 she taught Russian philology at the University of Tartu, Estonia. Her major works are about Pushkin's and Lermontov's poetry, especially in their connections to French literature. She died at the age of 91 on 1 October 2017.
Given a reference text about Larissa Ilinichna Volpert, tell me what game she played, when she was champion, and what titles she earned.
Larissa Ilinichna Volpert played chess and was the Soviet women's chess champion in 1954, 1958 and 1959. She earned the titles of Woman International Master and Woman Grandmaster for her accomplishments in the game of chess.
null
false
134
We evaluate our method on natural language inference (NLI), paraphrase identification (PI), and sentiment classification. We also conduct analyses of gate values and experiments on model variants. For detailed experimental settings, we refer readers to the supplemental material. For the NLI and PI tasks, there is recent work specializing in sentence pair classification. In this work, however, we confine our model to an architecture that encodes each sentence using a shared encoder without any inter-sentence interaction, in order to focus on the effectiveness of the models in extracting semantics. Note that the applicability of CAS-LSTM is not limited to sentence-encoding-based approaches. The objective on both datasets is to predict the relationship between a premise and a hypothesis sentence: entailment, contradiction, or neutral.
What's the objective of SNLI and MultiNLI datasets?
To predict the relationship between a premise and a hypothesis sentence: entailment, contradiction, and neutral.
null
false
null
What are the benefits of working with a coach?
Working with a coach can help you move more quickly and effectively toward achieving your goals than if you worked alone. A coach can help you set the right goals, help you create a plan of action for working toward them, and hold you accountable for taking the actions that will lead you to achieve them.
null
false
null
Is karaoke fun?
It depends. Some people like karaoke, and some don't. You might enjoy karaoke if, for example, you have an outgoing personality, like to sing or perform publicly, or enjoy bonding with others over music. Others might find karaoke too loud, or may not like listening to amateur performers, who can also be intoxicated when performing at a karaoke bar. In addition, at a public venue that offers karaoke, the songs are selected by the performers, so you cannot predict whether you will enjoy the music selection (unless you rent a private room where you can select all of the songs yourself, which certain karaoke establishments offer). Interestingly, whether or not you consider yourself to have a good voice does not always determine how much you will enjoy karaoke.