Columns: paper_id (string, length 10), yes_no (bool, 2 classes), paper_index (int64, 0–519), evidence (string, 0–37.7k chars), question (string, 4–11.7k chars), answer (string, 1–26k chars)
null
false
null
What is Ethiopia known for from agriculture perspective?
Ethiopia is well known for its exports of coffee, sesame seeds, gold, cut flowers, and zinc (World Bank data). In terms of geography and climate, Ethiopia is slightly less than twice the size of Texas and has a tropical monsoon climate with wide topographic-induced variation (World Factbook). The country is located on a high plateau with a central mountain range that is divided by the Great Rift Valley. The typical family size in Ethiopia is five members, and the type of dwelling they live in depends on whether they are in a rural or urban area. In Ethiopia, many families live in traditional thatched-roof houses made of mud or wattle and daub, while urban dwellings are more likely to be made of brick or concrete. However, they may still be relatively small and basic, with limited amenities, and many families in urban areas live in overcrowded and informal settlements that lack basic infrastructure and services. (International Labour Organization)
null
false
null
Identify which instrument is string or woodwind: Gehu, Venu
Venu is woodwind, Gehu is string.
null
false
null
What is quantum mechanics?
In a more general sense, the word “quantum” can refer to the smallest possible amount of something. Quantum mechanics is a fundamental theory in physics that describes the behavior of particles such as atoms, electrons, photons and almost everything in the molecular and sub-molecular realm. Quantum mechanics is the best tool we have to understand how the universe works on its smallest scales. Heisenberg's Uncertainty Principle is one of the most celebrated results of quantum mechanics and states that one (often, but not always) cannot know all things about a particle (as it is defined by its wave function) at the same time.
null
false
142
Machine translation has made remarkable progress, and studies claiming that it reaches human parity are starting to appear BIBREF0. However, when evaluating translations of whole documents rather than isolated sentences, human raters show a stronger preference for human over machine translation BIBREF1. These findings emphasize the need to shift towards context-aware machine translation from both the modeling and the evaluation perspective. Most previous work on context-aware NMT assumed that either all the bilingual data is available at the document level BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, or at least a fraction of it BIBREF11. But in practical scenarios, document-level parallel data is often scarce, which is one of the challenges when building a context-aware system. We introduce an approach to context-aware machine translation using only monolingual document-level data. In our setting, a separate monolingual sequence-to-sequence model (DocRepair) is used to correct sentence-level translations of adjacent sentences. The key idea is to use monolingual data to imitate typical inconsistencies between context-agnostic translations of isolated sentences. The DocRepair model is trained to map inconsistent groups of sentences into consistent ones. The consistent groups come from the original training data; the inconsistent groups are obtained by sampling round-trip translations for each isolated sentence. To validate the performance of our model, we use three kinds of evaluation: the BLEU score, contrastive evaluation of translation of several discourse phenomena BIBREF11, and human evaluation. We show strong improvements for all metrics. We analyze which discourse phenomena are hard to capture using monolingual data only. Using contrastive test sets for targeted evaluation of several contextual phenomena, we compare the performance of the models trained on round-trip translations and genuine document-level parallel data. Among the four phenomena in the test sets we use (deixis, lexical cohesion, VP ellipsis and ellipsis which affects NP inflection), we find VP ellipsis to be the hardest phenomenon to capture using round-trip translations. Our key contributions are as follows: we introduce the first approach to context-aware machine translation using only monolingual document-level data; our approach shows substantial improvements in translation quality as measured by BLEU, targeted contrastive evaluation of several discourse phenomena and human evaluation; we show which discourse phenomena are hard to capture using monolingual data only. In our setting, a separate monolingual sequence-to-sequence model (DocRepair) is used to correct sentence-level translations of adjacent sentences.
What's the function of the monolingual sequence-to-sequence model (DocRepair)?
To correct sentence-level translations of adjacent sentences.
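The round-trip construction lends itself to a compact sketch. Below is a minimal, hypothetical illustration of how (inconsistent, consistent) training pairs could be built for DocRepair; `translate_to_source` and `translate_to_target` stand in for sentence-level NMT systems and are assumptions, not the paper's actual interface:

```python
def make_docrepair_pair(consistent_group, translate_to_source, translate_to_target):
    """Map a consistent group of target-language sentences to a (noisy, clean)
    training pair. Each sentence is round-trip translated *in isolation*, so the
    noisy group exhibits the inconsistencies typical of context-agnostic NMT."""
    noisy_group = [
        translate_to_target(translate_to_source(sentence))  # round trip, no context
        for sentence in consistent_group
    ]
    return noisy_group, consistent_group  # DocRepair learns noisy -> consistent
```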
null
false
null
What is the difference between open-toed and close-toed shoes?
Open-toed shoes refer to footwear that exposes the toes. They can reveal just the toes (think of a shoe with the toe portion cut out) or the entire foot (e.g., sandals). Closed-toed shoes are any shoes that cover the toes.
null
false
null
The 2023 NCAA Division I men's basketball tournament involved 68 teams playing in a single-elimination tournament that determined the National Collegiate Athletic Association (NCAA) Division I men's basketball national champion for the 2022–23 season. The 84th annual edition of the tournament began on March 14, 2023, and concluded with the UConn Huskies defeating the San Diego State Aztecs, 76–59 in the championship game on April 3 at NRG Stadium in Houston, Texas.
Which team won the 2023 NCAA Men's National Championship?
The University of Connecticut Huskies
null
false
null
Through the 134th Westminster Show (February 2010), Best in Show has been won by the Terrier group 45 out of the 103 times that the prize has been awarded since 1907, more than twice as many wins as any other group. The single breed that has won the most is the Wire Fox Terrier, which has won 15 times. Two of the most popular dog breeds in the United States—the Labrador Retriever and Golden Retriever—have never won Best in Show. The dogs are marked for Best in Breed and then proceed to compete in Best in Group, and finally in the grand prize competition of Best in Show.
What breed is most likely to win a dog show?
The Wire Fox Terrier is, to date, the winningest breed at the Westminster Show with 15 total wins. Terriers in general have won over 40% of the Best in Show competitions across the long history of the show.
null
false
396
In this section, we outline the technical details of ParityBot. The system consists of: 1) a Twitter listener that collects and classifies tweets directed at a known list of women candidates, and 2) a responder that sends out positivitweets when hateful tweets are detected. We collect tweets from Twitter's real-time streaming API. The stream listener uses the open-source Python library Tweepy BIBREF8. The listener analyses tweets in real time by firing an asynchronous tweet analysis and storage function for each English tweet mentioning one or more candidate usernames of interest. We limit the streaming to English as our text analysis models are trained on English language corpora. We do not track or store retweets to avoid biasing the analysis by counting the same content multiple times. Twitter data is collected and used in accordance with the acceptable terms of use BIBREF9. The tweet analysis and storage function acts as follows: 1) parsing the tweet information to clean and extract the tweet text, 2) scoring the tweet using multiple text analysis models, and 3) storing the data in a database table. We clean tweet text with a variety of rules to ensure that the tweets are cleaned consistently with the expectations of the analysis models (see Appdx SECREF9). The text analysis models classify a tweet by using, as features, the outputs from Perspective API from Jigsaw BIBREF10, HateSonar BIBREF11, and VADER sentiment models BIBREF12. Perspective API uses machine learning models to score the perceived impact a tweet might have BIBREF10. The outputs from these models (i.e. 17 from Perspective, 3 from HateSonar, and 4 from VADER) are combined into a single feature vector for each tweet (see Appdx SECREF10). No user features are included in the tweet analysis models. While these features may improve classification accuracy, they can also lead to potential bias BIBREF13. We measure the relative correlation of each feature with the hateful or not hateful labels. We found that Perspective API's TOXICITY probability was the most consistently predictive feature for classifying hateful tweets. Fig. FIGREF5 shows the relative frequencies of hateful and non-hateful tweets over TOXICITY scores. During both elections, we opted to use a single Perspective API feature to trigger sending positivitweets. Using the single TOXICITY feature is almost as predictive as using all features and a more complex model (SECREF14). It was also simpler to implement and process tweets at scale. The TOXICITY feature is the only output from the Perspective API with transparent evaluation details summarized in a Model Card BIBREF14, BIBREF15. We collect tweets from Twitter’s real-time streaming API. The stream listener uses the open-source Python library Tweepy [22].
What Python library is used by listeners?
Tweepy.
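As a rough illustration of the listener described above, here is a minimal sketch assuming the pre-4.0 Tweepy `StreamListener` API; `auth`, `candidate_usernames`, and `analyze_and_store_async` are placeholders, not names from the paper:

```python
import tweepy  # assumes Tweepy 3.x, where StreamListener still exists

class CandidateListener(tweepy.StreamListener):
    """Fires an asynchronous analyse-and-store job for each English, non-retweet
    tweet mentioning a tracked candidate (a sketch, not ParityBot's actual code)."""

    def on_status(self, status):
        if hasattr(status, "retweeted_status"):
            return  # skip retweets to avoid counting the same content twice
        analyze_and_store_async(status)  # hypothetical downstream function

# auth is a configured tweepy.OAuthHandler; candidate_usernames is the watch list
stream = tweepy.Stream(auth=auth, listener=CandidateListener())
stream.filter(track=candidate_usernames, languages=["en"], is_async=True)
```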
1808.03986
false
null
Our method is based on a sequence-to-sequence network BIBREF38, BIBREF12, BIBREF39. The sequence-to-sequence network has a text sequence as both input and output. In our method, we take an image as input and generate a natural question as output. The architecture for our model is shown in Figure FIGREF4. Our model contains three main modules: (a) a Representation Module that extracts multimodal features, (b) a Mixture Module that fuses the multimodal representation, and (c) a Decoder that generates a question using an LSTM-based language model. In our method, we take an image as input and generate a natural question as output. The architecture for our model is shown in Figure FIGREF4. Our model contains three main modules: (a) a Representation Module that extracts multimodal features, (b) a Mixture Module that fuses the multimodal representation, and (c) a Decoder that generates a question using an LSTM-based language model.
How/where are the natural question generated?
The answers are shown as follows: * Decoder that generates question using an LSTM-based language model
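For concreteness, a toy PyTorch sketch of the three-module layout (representation, mixture, decoder); the layer sizes, concatenation-based fusion, and fixed decoding length are illustrative assumptions rather than the paper's exact design:

```python
import torch
import torch.nn as nn

class VQGSketch(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=512, hidden=512, vocab=10000):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden)   # representation module
        self.txt_proj = nn.Linear(txt_dim, hidden)
        self.mix = nn.Linear(2 * hidden, hidden)     # mixture module (fusion)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)          # LSTM-based language-model head

    def forward(self, img_feat, txt_feat, steps=12):
        fused = torch.tanh(self.mix(torch.cat(
            [self.img_proj(img_feat), self.txt_proj(txt_feat)], dim=-1)))
        h, _ = self.decoder(fused.unsqueeze(1).repeat(1, steps, 1))
        return self.out(h)                           # per-step question-token logits
```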
null
false
null
What is one of your favorite hobbies?
I'm an avid reader, especially of audiobooks. I read or listen to more than a book a week, and will dabble in a lot of different genres. I've recently been reading a lot of sci-fi and fantasy, but also enjoy a good biography, military history, sociological, or general work of fiction as long as they are well written.
null
false
null
What colours are on the flag of the Netherlands?
The flag of the Netherlands is a horizontal tricolour of red, white, and blue.
null
false
null
Can I learn to ski as an adult?
Yes. Alpine downhill skiing is a sport that can be learned at any age. While many learn to ski when they're young, there are lots of people who only started to ski at an older age. With the right motivation and a willingness to put in time and effort, almost anyone can learn to ski. Also, advances in ski equipment and slope grooming technology make skiing a lot safer than it used to be years ago. Learning to ski as an adult may take longer than learning as a child. It is recommended that you start with some lessons first to gain the right technique. This could be group lessons or private lessons. You could also learn from friends who know how to ski, or learn by watching online tutorials and applying them yourself on the mountain. Once you have the basic skills, you should be able to ski easy slopes and start enjoying skiing as a fun activity. In addition to lessons, you will also need the right gear to ski. This includes appropriate clothing that can keep you warm on the mountain while you ski, as well as ski equipment that you will use. It is recommended that you get a pair of skis and poles, well-fitting boots, a helmet for safety, and goggles to protect your eyes from the elements. Renting your equipment is a good option when you're starting. That way you are not making an expensive commitment upfront. You could also buy your equipment either new or from any of the second-hand ski shops and ski swaps. With the right gear and knowledge of the basics, you can take your learning in any direction you want. Many people choose to stay at a beginner or intermediate level and just enjoy the occasional ski trip with friends, while others take their passion further with more practice, hours on the mountain, and likely more lessons to get to advanced levels. Whichever path you pick, skiing can be a fun sport to learn and enjoy for the rest of your life. Have fun skiing. You got this!
null
false
16
For the projection we can use an arbitrary invertible function, and given the representational power of neural networks they seem a natural choice. However, calculating the inverse and Jacobian of an arbitrary neural network can be difficult, as it requires that all component functions be invertible and also requires storage of large Jacobian matrices, which is memory intensive. To address this issue, several recent papers propose specially designed invertible networks that are easily trainable yet still powerful BIBREF16, BIBREF17, BIBREF19. Inspired by these works, we use the invertible transformation proposed by BIBREF16, which consists of a series of “coupling layers”. This architecture is specially designed to guarantee a unit Jacobian determinant (and thus the invertibility property). From Eq. (EQREF22) we know that only INLINEFORM0 is required for accomplishing learning and inference; we never need to explicitly construct INLINEFORM1. Thus, we directly define the architecture of INLINEFORM2. As shown in Figure FIGREF24, the nonlinear transformation from the observed embedding INLINEFORM3 to INLINEFORM4 represents the first coupling layer. The input in this layer is partitioned into left and right halves of dimensions INLINEFORM5 and INLINEFORM6, respectively. A single coupling layer is defined as: DISPLAYFORM0 where INLINEFORM0 is the coupling function and can be any nonlinear form. This transformation satisfies INLINEFORM1, and BIBREF16 show that its Jacobian matrix is triangular with all ones on the main diagonal. Thus the Jacobian determinant is always equal to one (i.e. volume-preserving) and the invertibility condition is naturally satisfied. To be sufficiently expressive, we compose multiple coupling layers as suggested in BIBREF16. Specifically, we exchange the role of left and right half vectors at each layer as shown in Figure FIGREF24. For instance, from INLINEFORM0 to INLINEFORM1 the left subset INLINEFORM2 is unchanged, while from INLINEFORM3 to INLINEFORM4 the right subset INLINEFORM5 remains the same. Also note that composing multiple coupling layers does not change the volume-preserving and invertibility properties. Such a sequence of invertible transformations from the data space INLINEFORM6 to INLINEFORM7 is also called a normalizing flow BIBREF20. For the projection we can use an arbitrary invertible function, and given the representational power of neural networks they seem a natural choice.
What function do they use for the projection?
An arbitrary invertible function.
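A minimal additive coupling layer in the style of BIBREF16 (NICE) can make the volume-preserving property concrete; this PyTorch sketch assumes equal-size halves and is an illustration, not the paper's exact network:

```python
import torch
import torch.nn as nn

class AdditiveCouplingLayer(nn.Module):
    """One coupling layer: y1 = x1, y2 = x2 + g(x1). The Jacobian is triangular
    with ones on the diagonal, so det = 1 (volume-preserving) and the inverse is
    closed-form. A sketch assuming equal-size halves, not the paper's exact net."""

    def __init__(self, half_dim, hidden=128, swap=False):
        super().__init__()
        self.swap = swap  # alternate which half passes through across layers
        self.g = nn.Sequential(  # coupling function; any nonlinear form is valid
            nn.Linear(half_dim, hidden), nn.ReLU(), nn.Linear(hidden, half_dim)
        )

    def forward(self, x1, x2):
        if self.swap:
            x1, x2 = x2, x1
        return x1, x2 + self.g(x1)

    def inverse(self, y1, y2):
        x1, x2 = y1, y2 - self.g(y1)
        if self.swap:
            x1, x2 = x2, x1
        return x1, x2
```

Stacking several such layers while alternating `swap` reproduces the left/right role exchange described above.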
null
false
null
Is Spain a good place to live?
Spain could be a great place to live depending on what you are looking for. Spain has amazing weather, very friendly people, amazing food and a good amount of history to learn about. As a tourist or short term visitor, you could have a blast. On the other hand, for a long term stay where you need to earn some income… unemployment is a big problem in Spain for the youth.
null
false
null
What is an enzyme inhibitor?
An enzyme inhibitor is a molecule that binds to an enzyme's active site, or another site on the enzyme, and blocks the enzyme's catalysis of the reaction. Enzyme inhibitors are generally specific to one enzyme and control that enzyme's activity. They also control essential enzymes such as proteases or nucleases that, if left unchecked, may damage a cell. Many poisons produced by animals or plants are enzyme inhibitors, and many drug molecules are enzyme inhibitors that inhibit an aberrant human enzyme or an enzyme critical for the survival of a pathogen. Since anti-pathogen inhibitors generally target only one enzyme, such drugs are highly specific and generally produce few side effects in humans. Medicinal enzyme inhibitors often have low dissociation constants, meaning a minute amount of the inhibitor will inhibit the enzyme. The discovery of enzyme inhibitors and their refinement are researched in biochemistry and pharmacology.
null
false
null
A Star Is Born (1937 film), starring Janet Gaynor and Fredric March, directed by William A. Wellman A Star Is Born (1954 film), starring Judy Garland and James Mason, directed by George Cukor A Star Is Born (1976 film), starring Barbra Streisand and Kris Kristofferson, directed by Frank Pierson A Star Is Born (2018 film), starring Lady Gaga and Bradley Cooper, directed by Bradley Cooper
Based on the passage provided in which years the movie named 'A star is Born' was released?
1937, 1954, 1976 and 2018.
null
false
null
Identify which instrument is string or percussion: Tamborim, Veena
Veena is string, Tamborim is percussion.
null
false
246
In the past few years, there has been much interest in applying neural network based deep learning techniques to solve all kinds of natural language processing (NLP) tasks, from low-level tasks such as language modeling, POS tagging, named entity recognition, and semantic role labeling BIBREF0, BIBREF1, to high-level tasks such as machine translation, information retrieval, semantic analysis BIBREF2, BIBREF3, BIBREF4, and sentence relation modeling tasks such as paraphrase identification and question answering BIBREF5, BIBREF6, BIBREF7. Deep representation learning has demonstrated its importance for these tasks. All the tasks get performance improvement via learning either word level representations or sentence level representations. In this work, we bring deep representation learning technologies to the clinical domain. Specifically, we focus on clinical information extraction, using clinical notes and pathology reports from the Mayo Clinic. Our system identifies event expressions consisting of the following components: The input of our system consists of raw clinical notes or pathology reports like below: and it outputs annotations over the text that capture the key information such as event mentions and attributes. Table TABREF7 illustrates the output of clinical information extraction in detail. To solve this task, the major challenge is how to precisely identify the spans (character offsets) of the event expressions from raw clinical notes. Traditional machine learning approaches usually build a supervised classifier with features generated by the Apache clinical Text Analysis and Knowledge Extraction System (cTAKES). For example, the BluLab system BIBREF8 extracted morphological (lemma), lexical (token), and syntactic (part-of-speech) features encoded from cTAKES. Although using domain-specific information extraction tools can improve the performance, learning how to use them well for clinical-domain feature engineering is still very time-consuming. In short, a simple and effective method that only leverages basic NLP modules and achieves high extraction performance is desired to save costs. To address this challenge, we propose a deep neural network based method, in particular a convolutional neural network BIBREF0, to learn hidden feature representations directly from raw clinical notes. More specifically, our method first extracts a window of surrounding words for the candidate word. Then, we attach to each word its part-of-speech tag and shape information as extra features. Then our system deploys a temporal convolution neural network to learn hidden feature representations. Finally, our system uses a Multilayer Perceptron (MLP) to predict event spans. Note that we use the same model to predict event attributes. More specifically, our method first extracts a window of surrounding words for the candidate word. Then, we attach to each word its part-of-speech tag and shape information as extra features. Then our system deploys a temporal convolution neural network to learn hidden feature representations. Finally, our system uses a Multilayer Perceptron (MLP) to predict event spans. Note that we use the same model to predict event attributes.
How does the method proposed by the authors work?
The authors first extract a window of surrounding words for the candidate word. Then, the authors attach each word with their part-of-speech tag and shape information as extra features. Then, the authors deploy a temporal convolution neural network to learn hidden feature representations. Finally, the authors use Multilayer Perceptron (MLP) to predict event spans.
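A rough PyTorch sketch of the described pipeline (windowed features with POS/shape information assumed to be concatenated upstream, a temporal convolution, then an MLP); all dimensions are illustrative assumptions:

```python
import torch
import torch.nn as nn

class EventSpanClassifier(nn.Module):
    """Temporal CNN + MLP over a window of word features, as described above.
    Feature and filter sizes are assumptions, not the paper's reported settings."""

    def __init__(self, feat_dim=120, n_filters=64, kernel=3, n_classes=2):
        super().__init__()
        self.conv = nn.Conv1d(feat_dim, n_filters, kernel_size=kernel, padding=1)
        self.mlp = nn.Sequential(
            nn.Linear(n_filters, 64), nn.ReLU(), nn.Linear(64, n_classes)
        )

    def forward(self, window):                    # window: (batch, window_len, feat_dim)
        h = self.conv(window.transpose(1, 2))     # temporal convolution over the window
        h = torch.relu(h).max(dim=2).values       # max-pool over time
        return self.mlp(h)                        # span label for the candidate word
```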
null
false
67
Alzheimer’s disease (AD) is a neurodegenerative disease affecting over 40 million people worldwide with high costs of acute and long-term care BIBREF0. Recruitment of participants with cognitive impairment has historically been a bottleneck in clinical trials BIBREF1, making AD datasets relatively small. Additionally, though cognitive assessments test domains of cognition through multiple tasks, most available datasets of pathological speech are restricted to participants performing a single task. Picture description using an image to elicit narrative discourse samples is one such task that has proved to be successful in detecting AD BIBREF2. However, it is important to develop ML models of high performance that would produce results generalizable across different tasks. Several studies have used natural language processing and machine learning to distinguish between healthy and cognitively impaired speech of participants describing a picture. Fraser et al. BIBREF3 used linguistic and acoustic features to classify healthy and pathological speech transcripts with an accuracy of INLINEFORM0. Similarly, Karlekar et al. BIBREF4 classified utterances of speakers as AD or healthy (HC) with an accuracy of INLINEFORM1 using an enlarged, utterance-level view of transcripts of picture descriptions. In line with previous research, we use linguistic and acoustic features of speech as input to our ML model. Furthermore, we extend the model to using data from several different tasks. Noorian et al. BIBREF5 demonstrated that using within-task data of healthy participants describing a picture improved AD detection performance by up to 13%. In this paper, we evaluate if model performance improves with the addition of data from healthy participants, with varying ages, performing either the same or different tasks. We find that models trained on datasets of picture description tasks augmented with conversational speech of healthy speakers learn decision boundaries that are more generalizable across activities with lower out-of-task errors. We observe a 9% increase in AD detection performance when normative data from different tasks are utilized. We also analyze if each task provides domain-specific inductive bias for other tasks to obtain a model setting capable of detecting AD from any sample of speech using high-precision model-agnostic explanations proposed by Ribeiro et al. BIBREF6 and computation of various error metrics related to classification. We find that models trained on datasets of picture description tasks augmented with conversational speech of healthy speakers learn decision boundaries that are more generalizable across activities with lower out-of-task errors.
Are there any benefits when models are trained on datasets of picture description tasks augmented with conversational speech of healthy speakers?
They are more generalizable across activities with lower out-of-task errors.
null
false
null
Classify the countries based on the continent they belong to - North America or South America with the former as 'north' and the latter as 'south'. United States, Mexico, Canada, Brazil, Colombia, Argentina, Cuba, Jamaica, Chile, Guyana, Peru.
'north', 'north', 'north', 'south', 'south', 'south', 'north', 'north', 'south', 'south', 'south'.
null
false
26
One of the most fundamental topics in natural language processing is how best to derive high-level representations from constituent parts, as natural language meanings are a function of their constituent parts. How best to construct a sentence representation from distributed word embeddings is an example domain of this larger issue. Even though sequential neural models such as recurrent neural networks (RNN) BIBREF0 and their variants including Long Short-Term Memory (LSTM) BIBREF1 and Gated Recurrent Unit (GRU) BIBREF2 have become the de facto standard for condensing sentence-level information from a sequence of words into a fixed vector, there have been many lines of research towards better sentence representation using other neural architectures, e.g. convolutional neural networks (CNN) BIBREF3 or self-attention based models BIBREF4. From a linguistic point of view, the underlying tree structure—as expressed by its constituency and dependency trees—of a sentence is an integral part of its meaning. Inspired by this fact, some recursive neural network (RvNN) models are designed to reflect the syntactic tree structure, achieving impressive results on several sentence-level tasks such as sentiment analysis BIBREF5, BIBREF6, machine translation BIBREF7, natural language inference BIBREF8, and discourse relation classification BIBREF9. However, some recent works BIBREF10, BIBREF11 have proposed latent tree models, which learn to construct task-specific tree structures without explicit supervision, bringing into question the value of linguistically-motivated recursive neural models. Witnessing the surprising performance of the latent tree models on some sentence-level tasks, there arises a natural question: Are linguistic tree structures the optimal way of composing sentence representations for NLP tasks? In this paper, we demonstrate that linguistic priors are in fact useful for devising effective neural models for sentence representations, showing that our novel architecture based on constituency trees and their tag information obtains superior performance on several sentence-level tasks, including sentiment analysis and natural language inference. A chief novelty of our approach is that we introduce a small separate tag-level tree-LSTM to control the composition function of the existing word-level tree-LSTM, which is in charge of extracting helpful syntactic signals for meaningful semantic composition of constituents by considering both the structures and linguistic tags of constituency trees simultaneously. In addition, we demonstrate that applying a typical LSTM to preprocess the leaf nodes of a tree-LSTM greatly improves the performance of the tree models. Moreover, we propose a clustered tag set to replace the existing tags on the assumption that the original syntactic tags are too fine-grained to be useful in neural models. In short, our contributions in this work are as follows: A chief novelty of our approach is that we introduce a small separate tag-level tree-LSTM to control the composition function of the existing word-level tree-LSTM, which is in charge of extracting helpful syntactic signals for meaningful semantic composition of constituents by considering both the structures and linguistic tags of constituency trees simultaneously. In addition, we demonstrate that applying a typical LSTM to preprocess the leaf nodes of a tree-LSTM greatly improves the performance of the tree models. Moreover, we propose a clustered tag set to replace the existing tags on the assumption that the original syntactic tags are too fine-grained to be useful in neural models.
What is new about the method proposed by the author?
They introduce a small separate tag-level tree-LSTM to control the composition function of the existing word-level tree-LSTM, which is in charge of extracting helpful syntactic signals for meaningful semantic composition of constituents by considering both the structures and linguistic tags of constituency trees simultaneously. In addition, they demonstrate that applying a typical LSTM to preprocess the leaf nodes of a tree-LSTM greatly improves the performance of the tree models. Moreover, they propose a clustered tag set to replace the existing tags on the assumption that the original syntactic tags are too fine-grained to be useful in neural models.
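As a loose illustration only: one plausible reading of "control" is a learned gate derived from the tag-level states, sketched below; the sizes and the exact interaction are assumptions, not the authors' equations:

```python
import torch
import torch.nn as nn

class TagGatedComposition(nn.Module):
    """Toy composition step: a small tag-level representation modulates the
    word-level tree-LSTM composition via a sigmoid gate. Hidden sizes and the
    gating mechanism are illustrative assumptions."""

    def __init__(self, hidden=300, tag_hidden=50):
        super().__init__()
        self.word_cell = nn.Linear(2 * hidden, hidden)         # word-level composition
        self.tag_cell = nn.Linear(2 * tag_hidden, tag_hidden)  # tag-level composition
        self.gate = nn.Linear(tag_hidden, hidden)              # tag signal -> gate

    def forward(self, h_left, h_right, t_left, t_right):
        t = torch.tanh(self.tag_cell(torch.cat([t_left, t_right], dim=-1)))
        g = torch.sigmoid(self.gate(t))                        # syntactic control gate
        h = torch.tanh(self.word_cell(torch.cat([h_left, h_right], dim=-1)))
        return g * h, t   # gated word representation, plus the new tag state
```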
null
false
null
What are the best books of Dostoyevsky?
1. 'Crime and Punishment' 2. 'The Brothers Karamazov' 3. 'The Idiot' 4. 'Demons' 5. 'Notes from the Underground'
null
false
114
We propose a novel adversarial training framework to deal with the instability problem of current GANs for text generation. To address the instability issue caused by policy gradient, we incorporate RAML into the adversarial training paradigm to make our generator acquire stable rewards. Experiments show that our model performs better than several state-of-the-art GAN baselines, with lower training variance yet better performance on three text generation tasks. We propose a novel adversarial training framework to deal with the instability problem of current GANs for text generation. To address the instability issue caused by policy gradient, we incorporate
Does the model proposed by the authors perform better than several state-of-the-art GAN baselines?
Yes, it does.
null
false
null
Give me a list of the best rappers of all time
Eminem, Andre 3000, Tupac, Jay-Z, Kanye West, Kendrick Lamar, Lauryn Hill, Nas
null
false
null
What is the Spengler Cup
The Spengler Cup is an annual invitational ice hockey tournament held in Davos, Switzerland. First held in 1923, the Spengler Cup is often cited as the oldest invitational ice hockey tournament in the world. The event is hosted by the Swiss team HC Davos and played each year in Davos from 26 to 31 December inclusive. Currently, all games are held at the Eisstadion Davos. It was originally devised by Dr. Carl Spengler as a means to promote teams from German-speaking Europe, who might have suffered ostracism in the aftermath of World War I. Eventually, the tournament grew well beyond expectations. Many of Europe's most prestigious clubs and national programs have appeared, including Soviet, Czechoslovak, Swedish, German, and Finnish powerhouses. Through its history, club or national teams from 13 countries have won the tournament, with Team Canada winning the most cups as a club (16) while various teams from Switzerland have won the most cups for one country (20). Among non-European organizations, Team Canada, Team USA, nationally ranked U.S. collegiate teams, reigning AHL Calder Cup and Ontario Hockey Association champions, and even Team Japan (in 1971, building international experience before playing as hosts of the 1972 Sapporo Winter Olympics) have competed for the Spengler Cup. Since at least 1990, Team Canada has been the only participant from North America, with the exception of the AHL's Rochester Americans in 1996 and 2013 (see AHL participation at the Spengler Cup). In the 2018 tournament, Finnish club team KalPa from the city of Kuopio defeated Team Canada 2–1 in the final. The game was decided in the 8th round of a shootout, the first series of game-winning shots in tournament history that determined the winner of the Spengler Cup. The Spengler Cup tournament was not played in 2020 or 2021 due to the COVID-19 pandemic. In the 2022 tournament, Swiss club team HC Ambrì-Piotta defeated Sparta Prague 3–2 in the final; with the game decided in the 5th round of a shootout.
null
false
null
What are the most expensive cities in the world ?
Here is a list of ten of the most expensive cities in the world: Singapore, Tel Aviv, New York, Hong Kong, Los Angeles, Zurich, Geneva, San Francisco, Paris and Sydney
null
false
null
Robin Williams dressed in drag for which 1993 film
Mrs. Doubtfire
1904.04055
false
null
Experiments were carried out using the method proposed in BIBREF27. The first part is described as Task A, the purpose of which is to identify the boundaries of timexes and assign them to one of the following classes: date, time, duration, set. We chose the best 3 results from each word embeddings group (EE, EP, EC) from Table TABREF19, which presents F1-scores for all models. Then we evaluated these results using more detailed measures for timexes, presented in BIBREF27. The following measures were used to evaluate the quality of boundary and class recognition, the so-called strict match: strict precision (Str.P), strict recall (Str.R) and strict F1-score (Str.F1). A relaxed match (Rel.P, Rel.R, Rel.F1) evaluation has also been carried out to determine whether there is an overlap between the system entity and the gold entity, e.g. [Sunday] and [Sunday morning] BIBREF27. If there was an overlap, a relaxed type F1-score (Type.F1) was calculated BIBREF27. The results are presented in Table TABREF20. The first part is described as Task A, the purpose of which is to identify the boundaries of timexes and assign them to one of the following classes: date, time, duration, set. We chose the best 3 results from each word embeddings group (EE, EP, EC) from Table TABREF19, which presents F1-scores for all models. Then we evaluated these results using more detailed measures for timexes, presented in BIBREF27.
What experiments are presented?
The answers are shown as follows: * identify the boundaries of timexes and assign them to one of the following classes: date, time, duration, set * Then we evaluated these results using more detailed measures for timexes
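The strict and relaxed criteria reduce to simple span predicates. A small sketch, assuming spans are (start, end) character offsets with exclusive ends:

```python
def strict_match(system_span, gold_span):
    """Strict match: boundaries agree exactly."""
    return system_span == gold_span

def relaxed_match(system_span, gold_span):
    """Relaxed match: any overlap counts, e.g. [Sunday] vs [Sunday morning]."""
    (s_start, s_end), (g_start, g_end) = system_span, gold_span
    return s_start < g_end and g_start < s_end

def f1(precision, recall):
    """Harmonic mean used for Str.F1 and Rel.F1."""
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

Str.P and Str.R then count strict matches over system and gold spans respectively; Type.F1 additionally requires the class (date, time, duration, set) to agree on overlapping spans.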
null
false
null
Discuss the pros and cons of a day trip to Vancouver versus a day trip to Portland from Seattle.
Both Vancouver, Canada and Portland, Oregon can be a fun day trip from Seattle. The pros for both are that the cities are fairly close to Seattle, so a day trip is very reasonable. In terms of time to get there, Portland is a bit closer and also has the advantage of still being in the United States. So, you may save time compared to going to Vancouver since there is no border crossing. This difference can be fairly trivial though, since the border between the US and Canada is often fairly fast. Besides travel time, something to consider is what kind of activity you want to do. Vancouver has a lot of great Chinese cuisine and also has Stanley Park, which is a great place to go walking and cycling. Portland is known for Voodoo Doughnut, a popular and unique donut store. Overall, both are great options and it boils down to what things you enjoy doing most.
null
false
null
When going skydiving, categorize each of the following as 'useful' or 'not useful': a parachute, an Acme anvil, a helmet, a go-pro camera, a small herd of miniature ponies, wraparound sunglasses, a backpack full of uranium
Useful: a parachute, a helmet, a go-pro camera, wraparound sunglasses Not useful: an Acme anvil, a small herd of miniature ponies, a backpack full of uranium
null
false
null
Friedrich August von Hayek CH FBA (/ˈhaɪək/ HY-ək, German: [ˈfʁiːdʁɪç ˈʔaʊɡʊst fɔn ˈhaɪɛk]; 8 May 1899 – 23 March 1992), often referred to by his initials F. A. Hayek, was an Austrian-British intellectual who made contributions to economics, political science, psychology, intellectual history, philosophy and other fields. Hayek shared the 1974 Nobel Memorial Prize in Economic Sciences with Gunnar Myrdal for work on money and economic fluctuations, and the interdependence of economic, social and institutional phenomena. His account of how prices communicate information is widely regarded as an important contribution to economics that led to him receiving the prize.
Who won the 1974 Nobel Memorial Prize in Economic Sciences?
Friedrich August von Hayek and Gunnar Myrdal shared the 1974 Nobel Memorial Prize in Economic Sciences.
1910.03891
false
null
The process of KANE is illustrated in Figure FIGREF2. We introduce the architecture of KANE from left to right. As shown in Figure FIGREF2, the whole set of triples of the knowledge graph is taken as input. The task of the attribute embedding layer is to embed every value in attribute triples into a continuous vector space while preserving the semantic information. To capture the high-order structural information of KGs, we used an attention-based embedding propagation method. This method can recursively propagate the embeddings of entities from an entity's neighbors, and aggregate the neighbors with different weights. The final embeddings of entities, relations and values are fed into two different deep neural networks for two different tasks: link prediction and entity classification. The task of the attribute embedding layer is to embed every value in attribute triples into a continuous vector space while preserving the semantic information. To capture the high-order structural information of KGs, we used an attention-based embedding propagation method. The final embeddings of entities, relations and values are fed into two different deep neural networks for two different tasks: link prediction and entity classification.
How does KANE capture both high-order structural and attribute information of KGs in an efficient, explicit and unified manner?
The answers are shown as follows: * To capture the high-order structural information of KGs, we used an attention-based embedding propagation method.
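A generic sketch of attention-based embedding propagation over a one-hop neighborhood (GAT-style weighting; the scoring function and dimensions are assumptions, not KANE's exact formulation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveNeighborAggregation(nn.Module):
    """Weight an entity's neighbors with learned attention and propagate their
    embeddings into the entity representation; applying this recursively over
    several hops captures higher-order structure."""

    def __init__(self, dim=100):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)  # attention score from (entity, neighbor)

    def forward(self, entity, neighbors):    # entity: (dim,), neighbors: (n, dim)
        pairs = torch.cat([entity.expand_as(neighbors), neighbors], dim=-1)
        alpha = F.softmax(self.score(pairs).squeeze(-1), dim=0)  # neighbor weights
        return entity + (alpha.unsqueeze(-1) * neighbors).sum(dim=0)
```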
null
false
3
Deep Learning approaches have achieved impressive results on various NLP tasks BIBREF0, BIBREF1, BIBREF2 and have become the de facto approach for any NLP task. However, these deep learning techniques have been found to be less effective for low-resource languages when very little training data is available BIBREF3. Recently, several approaches like Multi-task learning BIBREF4, multilingual learning BIBREF5, semi-supervised learning BIBREF2, BIBREF6 and transfer learning BIBREF7, BIBREF3 have been explored by the deep learning community to overcome data sparsity in low-resource languages. Transfer learning trains a model for a parent task and fine-tunes the learned parent model weights (features) for a related child task BIBREF7, BIBREF8. This effectively reduces the requirement on training data for the child task as the model would have learned relevant features from the parent task data, thereby improving the performance on the child task. Transfer learning has also been explored in multilingual Neural Machine Translation BIBREF3, BIBREF9, BIBREF10. The goal is to improve the NMT performance on the source to target language pair (child task) using an assisting source language (assisting to target translation is the parent task). Here, the parent model is trained on the assisting and target language parallel corpus and the trained weights are used to initialize the child model. The child model can now be fine-tuned on the source-target language pairs, if a parallel corpus is available. The divergence between the source and the assisting language can adversely impact the benefits obtained from transfer learning. Multiple studies have shown that transfer learning works best when the languages are related BIBREF3, BIBREF10, BIBREF9. Several studies have tried to address lexical divergence between the source and the target languages BIBREF10, BIBREF11, BIBREF12. However, the effect of word order divergence and its mitigation has not been explored. In a practical setting, it is not uncommon to have source and assisting languages with different word order. For instance, it is possible to find parallel corpora between English and some Indian languages, but very few parallel corpora between Indian languages. Hence, it is natural to use English as an assisting language for inter-Indian language translation. To see how word order divergence can be detrimental, let us consider the case of the standard RNN (Bi-LSTM) encoder-attention-decoder architecture BIBREF13. The encoder generates contextual representations (annotation vectors) for each source word, which are used by the attention network to match the source words to the current decoder state. The contextual representation is word-order dependent. Hence, if the assisting and the source languages do not have similar word order, the generated contextual representations will not be consistent. The attention network (and hence the decoder) sees different contextual representations for similar words in parallel sentences across different languages. This makes it difficult to transfer knowledge learned from the assisting language to the source language. We illustrate this by visualizing the contextual representations generated by the encoder of an English to Hindi NMT system for two versions of the English input: (a) original word order (SVO) (b) word order of the source language (SOV, for Bengali). Figure FIGREF1 shows that the encoder representations obtained are very different. The attention network and the decoder now have to work with very different representations. Note that the plot below does not take into account further lexical and other divergences between source and assisting languages, since we demonstrated word order divergence with the same language on the source side. To address this word order divergence, we propose to pre-order the assisting language sentences to match the word order of the source language. We consider an extremely resource constrained scenario, where we do not have any parallel corpus for the child task. We are limited to a bilingual dictionary for transferring information from the assisting to the source language. From our experiments, we show that there is a significant increase in the translation accuracy for the unseen source-target language pair. To address the word order divergence, we propose to pre-order the assisting language sentences (SVO) to match the word order of the source language (SOV).
How is the word order divergence addressed in this paper?
To address the word order divergence, the authors propose to pre-order the assisting language sentences (SVO) to match the word order of the source language (SOV).
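To make the idea concrete, a toy single-rule pre-orderer over a dependency parse (all names are assumptions; a real system would use a full set of reordering rules):

```python
def preorder_svo_to_sov(tokens, deps):
    """Toy pre-ordering rule: move each direct object before its head verb,
    turning SVO order into SOV. `deps` maps token index -> (head index, relation);
    this simplistic single-rule version is an illustration, not the paper's system."""
    order = list(range(len(tokens)))
    for i, (head, rel) in deps.items():
        if rel == "obj":                        # direct object of a verb
            order.remove(i)
            order.insert(order.index(head), i)  # place object just before the verb
    return [tokens[i] for i in order]

# e.g. preorder_svo_to_sov(["She", "reads", "books"],
#                          {0: (1, "nsubj"), 2: (1, "obj")})
# -> ["She", "books", "reads"]
```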
null
false
null
What is an ionic bond?
An ionic bond is a bond where an electron from one atom is transferred to another atom. An example of an ionic bond is NaCl, or salt.
null
false
null
Born and raised in Alabama, Williams learned guitar from African-American blues musician Rufus Payne in exchange for meals or money. Payne, along with Roy Acuff and Ernest Tubb, had a significant influence on Williams' musical style. Williams began his professional career in Montgomery in 1937 when local radio station WSFA hired him to perform on a 15-minute program. He formed the Drifting Cowboys backup band, which was managed by his mother, and dropped out of school to devote his time to his career. When several of his band members were drafted during World War II, he had trouble with their replacements, and WSFA terminated his contract because of his alcoholism.
Who were Hank Williams early musical influences?
As a child Hank Williams was influenced by popular country artists such as Roy Acuff and Ernest Tubb, but he was also influenced by African-American blues musician Rufus Payne. Payne taught Williams how to play guitar, and was a significant influence on his musical style.
1910.02339
false
null
FLOAT SELECTED: Table 2: Results of AlgoLisp dataset Generating Lisp programs requires sensitivity to structural information because Lisp code can be regarded as tree-structured. Given a natural-language query, we need to generate code containing function calls with parameters. Each function call is a relational tuple, which has a function as the relation and parameters as arguments. We evaluate our model on the AlgoLisp dataset for this task and achieve state-of-the-art performance. The AlgoLisp dataset BIBREF17 is a program synthesis dataset. Each sample contains a problem description, a corresponding Lisp program tree, and 10 input-output testing pairs. We parse the program tree into a straight-line sequence of tuples (same style as in MathQA). AlgoLisp provides an execution script to run the generated program and has three evaluation metrics: the accuracy of passing all test cases (Acc), the accuracy of passing 50% of test cases (50p-Acc), and the accuracy of generating an exactly matching program (M-Acc). AlgoLisp has about 10% noisy data (details in the Appendix), so we report results both on the full test set and the cleaned test set (in which all noisy testing samples are removed). TP-N2F is compared with an LSTM seq2seq with attention model, the Seq2Tree model in BIBREF17, and a seq2seq model with a pre-trained tree decoder from the Tree2Tree autoencoder (SAPS) reported in BIBREF18. As shown in Table TABREF18, TP-N2F outperforms all existing models on both the full test set and the cleaned test set. Ablation experiments with TP2LSTM and LSTM2TP show that, for this task, the TP-N2F Decoder is more helpful than the TP-N2F Encoder. This may be because Lisp code relies more heavily on structured representations. FLOAT SELECTED: Table 2: Results of AlgoLisp dataset As shown in Table TABREF18, TP-N2F outperforms all existing models on both the full test set and the cleaned test set.
How does TP-N2F compare to LSTM-based Seq2Seq in terms of training and inference speed?
Full Testing Set accuracy: 84.02 Cleaned Testing Set accuracy: 93.48
null
false
null
Historical population
Year | Pop. | ±%
1911 | 752 | —
1962 | 437 | −41.9%
1968 | 486 | +11.2%
1975 | 398 | −18.1%
1982 | 393 | −1.3%
1990 | 382 | −2.8%
1999 | 395 | +3.4%
2008 | 441 | +11.6%
This table lists the change over time in population of a small town in France. Between which two consecutive entries from the table did the rate of population growth change the most? Explain your math.
The rate of population growth changed the most between the entries for 1962 and 1968. Between 1911 and 1962, there was a −41.9% change in population, and between 1962 and 1968 there was a +11.2% change in population. The rate of change of population growth was |−41.9% − (+11.2%)| = 53.1 percentage points, which is larger than the absolute difference between any other two consecutive entries in the table.
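A quick script to verify the arithmetic, with the values copied from the table above:

```python
# ±% change of each census entry relative to the previous one (1911 has none)
rates = {1962: -41.9, 1968: 11.2, 1975: -18.1, 1982: -1.3,
         1990: -2.8, 1999: 3.4, 2008: 11.6}
years = sorted(rates)
jumps = {(a, b): abs(rates[b] - rates[a]) for a, b in zip(years, years[1:])}
best = max(jumps, key=jumps.get)
print(best, round(jumps[best], 1))  # (1962, 1968) 53.1
```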
null
false
null
Name some TV shows that American actor David Duchovny has been in.
The X-Files, Californication, Twin Peaks, The Red Shoe Diaries, Aquarius
null
false
null
Give me a list of the core things to focus on when training for a run or race
The three things you should focus on are: 1) Endurance: e.g. how far you can run 2) Speed: e.g. how fast you can run your given distance 3) Strength: how well you can power through changes in elevation
null
false
null
Can Solar panels help you maintain power during a storm ?
It would really depend on how much cloud coverage you are getting, but your panels should be able to generate some power.
null
false
null
Demond "Bob" Sanders (born February 24, 1981) is an American former professional football player who was a safety in the National Football League (NFL) for eight seasons. He played college football for the Iowa Hawkeyes and was drafted by the Indianapolis Colts in the second round of the 2004 NFL Draft. He was named NFL Defensive Player of the Year in 2007 and also played for the San Diego Chargers in his final season in 2011. Sanders was nicknamed "The Sandman" because of his hard hits and tackles, and was also called "The Eraser" by former Colts' coach Tony Dungy because of his tendency to erase the mistakes of his teammates. He spent the majority of his career battling injuries, having played more than six games in a single season only twice, in 2005 and 2007.
Extract the awards that Bob Sanders gained throughout his career, and put them in a comma-separated list.
NFL Defensive Player of the Year in 2007
null
false
null
Tell me which of these States' shapes have exactly four sides: Utah, Colorado, Florida, Washington, California.
Colorado has exactly four sides in its shape.
null
false
null
Tell me which of the following items are trees and which ones are weeds: aspen, ash, chickweed, crabgrass, oak, almond, dandelion
tree: aspen, ash, oak, almond weed: chickweed, crabgrass, dandelion
null
false
null
Which of these four novels has the least in common with the others? 1. Red Harvest by Dashiell Hammett 2. One Hundred Years of Solitude by Gabriel Garcia Marquez 3. The Big Sleep by Raymond Chandler 4. The Hound of the Baskervilles by Sir Arthur Conan Doyle
One Hundred Years of Solitude has the least in common with the other books. It is part of a genre called Magical Realism, which depicts a world that strongly resembles real life but contains magical aspects that would normally be found in the Fantasy genre. The other three novels are all Detective fiction, and belong to the Hardboiled sub-genre. Hardboiled detective stories typically feature a deeply cynical (and often unlikable) protagonist, organized crime elements, and a prohibition-era setting.
null
false
null
Should a human be happy all the time?
Answer 1: Happiness is not only subjective but also temporal - what brought you joy 2 years ago might be boring/unpleasant now. So, instead of optimizing for happiness at a given time, it might actually be better to realize what makes us feel down or not content and avoid those things. By avoiding such circumstances/conditions, we can be content, and maybe that's what we need in life: not to strive to be happy all the time but to avoid what we truly don't like, so that happiness takes care of itself. Answer 2: Yes! Life is too short to get caught in dull moments, so a human needs to optimize for maximum happiness. The definition of happiness differs and that's ok; we all live once and we should spend our time looking for new things that make us happier. Answer 3: No. Most meaningful lessons in life are learned in situations when we're not particularly happy: loss of a close person, financial breakdowns, unexpected health conditions. Getting through these hard times actually makes a person stronger and more prepared for other calamities in life. We should not strive to be happy "all the time" - instead we should take things as they come. In life, there is a need for moments of discontent as well as a need for moments of joy.
null
false
null
What is the Trans Tahoe Relay?
Each year there is a race held to swim across Lake Tahoe. The race was created and is still sponsored by The Olympic Club of San Francisco. Teams are created and entered from across the globe, but primarily North America. The format of the race is a six-person relay: all male, all female, or half male/female. Starting on the Nevada side of the lake, each member of the team swims for 30 minutes following the team's boat. After each of the swimmers has completed the 30-minute rotation, the swimmers then rotate again every 10 minutes. This is done in sequence until the team reaches the other side of the lake in California. The swimmers need to touch one another when rotating between swimmers, and there are no wetsuits or fins allowed.
null
false
486
We focus on the ALFRED challenge, where an agent is asked to follow human instructions to complete long-horizon household tasks in indoor scenes (simulated in AI2Thor). Each task in ALFRED consists of several subgoals for either navigation (moving in the environment) or object interactions (interacting with at least one object). Language inputs contain a high-level task description and a sequence of low-level step-by-step instructions (each corresponding to a subgoal). The agent is a simulated robot with access to the states of the environment only through a front-view RGB camera with a relatively small field of view. The agent's own state is a 5-tuple (x, y, r, h, o), where x, y are its 2D position, r the horizontal rotation angle, h the vertical camera angle (also called "horizon") and o the type of object held in its hand. The state space of the agent is discrete, with navigation actions: MoveAhead (moving forward by 0.25 m), RotateLeft & RotateRight (rotating in the horizontal plane by 90°) and LookUp & LookDown (adjusting the horizon by 15°, where positive h indicates facing downward). With these discrete actions, the agent has full knowledge of the relative changes ∆x, ∆y, ∆r and ∆h. Each of the 7 object interaction actions (PickUp, Open, Slice, etc.) is parametrized by a binary mask for the target object, which is usually predicted with a pre-trained instance segmentation module. Featuring long-horizon tasks with a range of interactions, the ALFRED challenge evaluates an agent's ability to perform tasks over unseen test scenes, while only allowing ≤1000 steps and ≤10 action failures for each task at inference time. Language inputs contain a high-level task description and a sequence of low-level step-by-step instructions (each corresponding to a subgoal).
In Sec. 4.1, how does the agent predict the task described in the language instructions?
As introduced in Sec. 3, the input format of tasks in ALFRED consists of a sequence of low-level instructions, each corresponding to a subgoal.
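The discrete state and navigation dynamics follow directly from the description; in this sketch the rotation convention (r in degrees, 0 facing +y) and the field names are assumptions:

```python
from dataclasses import dataclass, replace
from typing import Optional

@dataclass(frozen=True)
class AgentState:
    x: float           # 2D position
    y: float
    r: int             # horizontal rotation, degrees (assumed clockwise from +y)
    h: int             # camera horizon; positive faces downward
    o: Optional[str]   # type of object held, if any

STEP, TURN, TILT = 0.25, 90, 15                       # grid step (m), turn, tilt
MOVES = {0: (0.0, STEP), 90: (STEP, 0.0), 180: (0.0, -STEP), 270: (-STEP, 0.0)}

def navigate(s: AgentState, action: str) -> AgentState:
    """Apply one discrete navigation action; the relative changes are fully known."""
    if action == "MoveAhead":
        dx, dy = MOVES[s.r % 360]
        return replace(s, x=s.x + dx, y=s.y + dy)
    if action in ("RotateLeft", "RotateRight"):
        return replace(s, r=(s.r + (TURN if action == "RotateRight" else -TURN)) % 360)
    if action in ("LookUp", "LookDown"):
        return replace(s, h=s.h + (TILT if action == "LookDown" else -TILT))
    raise ValueError(f"unknown navigation action: {action}")
```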
null
false
null
What is the difference between basketball and netball?
Basketball and netball are similar, but the team numbers, rules, and equipment are different. Basketball is played with five players per team (ten on court in total), players can dribble and run with the ball, and the ball and basket are larger. Netball is played with seven players per team, players are not allowed to dribble or run with the ball, and the ball and basket are slightly smaller. Netball was originally developed as a women's sport although modern-day men now play netball as well.
null
false
null
What are some tricks to baking a really good cake?
There are several things to consider when baking a cake. Paying particular attention to each will produce a consistent and delicious result. First, confirm that your oven temperature is correct. This is easily done with a meat thermometer. Place the thermometer in the middle of the center rack of the oven, where most food cooks. Then preheat the oven to a baking temperature (e.g. 350 degrees). After the oven has preheated, check the temperature on the thermometer and confirm that it also reads 350 degrees. Second, bring your refrigerated ingredients to room temperature before using. If your recipe calls for butter, do not microwave it as a shortcut. Doing so will change the consistency of your batter. Plan ahead and leave butter and eggs out on the counter for at least an hour before you begin baking. Third, pay close attention to recipe instructions related to sifting and measuring flour. "Two cups sifted flour" does not mean the same thing as "two cups flour, sifted." The first instruction means to sift the flour, then measure; whereas the second means measure first, then sift what was measured. In either scenario, do not scoop the measuring cup into the flour bin directly. This causes the flour to be tightly packed into the cup and will result in a cake that has the texture of a brick. Instead, use a spoon to lightly scoop flour from the bin into the measuring cup, leveling off the top with a butter knife. This method is messier, but will yield better results. Fourth, take the time to cream the butter and sugar well. The result should be smooth and lump free. This is the most important step to making an appealing cake, and again can't be expedited by using melted butter! Fifth, if your recipe calls for both wet ingredients (buttermilk, juice etc) and dry (flour mixture), alternate between the two in small increments rather than dumping them in sequentially. This small step will protect the texture of your cake. Sixth, once the cake batter has been poured into the pan, lift the pan and drop it onto the counter from a height of two to three inches. This will remove any air pockets that may have developed in the batter. Be careful not to bang the pan too hard, or you'll spill the batter and/or scare your pets out of the kitchen. And lastly, don't bake the cake for as long as the recipe suggests. Set the timer a few minutes early so that you don't over-bake due to any idiosyncrasy associated with your oven (referenced in step one). Please note this does not mean you should be opening and closing the oven door multiple times during the baking process. Good bakers know "if you're looking, it's not cooking," so try not to open the oven until your timer goes off. The only exception to this rule is if you smell something burning. Odds are this means the batter has spilled out over the pan and onto the bottom of the oven. if this happens, and the oven is now smoking, you have to open the door and try to remove the spill. If nothing is smoking, leave the mess until the cake is out and the oven has cooled. Back to the timer - When it goes off, quickly but carefully test the cake with several toothpicks in different areas while the cake remains in the oven. (Taking the cake in and out of the oven to test will change the temperature of both the oven and the cake). If the toothpicks are wet and sticky, the cake needs more time. If the toothpicks are clean (or a tad moist), the cake is ready to come out of the oven. Allow cake to cool on a rack, and then enjoy the fruits of your labor!
null
false
null
Human rights groups have criticized LIV Golf as sportswashing, a political strategy by Saudi Arabia to cleanse its repressive global image through sport. Human Rights Watch, for example, called the Saudi endeavor "an effort to distract from its serious human rights abuses by taking over events that celebrate human achievement". Greg Norman has been accused of aiding the repressive Saudi government for his own financial gain. In 2021, Norman denied that he was being used for sportswashing and said he works for LIV because of his passion for the sport. Later, in May 2022, Norman defended Saudi Crown Prince Mohammed bin Salman over his involvement in Jamal Khashoggi's murder, saying, "Look, we've all made mistakes, and you just want to learn from those mistakes and how you can correct them going forward." His statement drew extensive criticism. Khashoggi's fiancée, Hatice Cengiz, said it was hurtful that "Jamal's brutal killing is brushed off as a 'mistake' and that we should just move on". On 22 June 2022, a group of nearly 2,500 survivors and family members of those killed or injured in the September 11 attacks wrote an open letter to golfers who have remained loyal to the PGA Tour thanking them for not defecting to LIV Golf. The letter read in part, "Thank you for standing up for decency. Thank you for standing up for the 9/11 Families. Thank you for resisting the Kingdom of Saudi Arabia's efforts to cleanse its reputation by buying off professional athletes...To those of you who have chosen what is right over blood money from a corrupt, destructive sports entity and its Saudi backers, please continue to stand strong." On 17 July 2022, a group of September 11 victims' family members condemned former U.S. President Donald Trump for hosting the LIV Golf tour at his Trump National Golf Club in Bedminster, New Jersey; the group's letter to Trump noted that he himself had blamed Saudi Arabia for the 9/11 attack during a 2016 interview on Fox News. Tiger Woods, who turned down an offer of $700 million to $800 million to join LIV Golf, was asked in July 2022 about the golfers who had joined. "I disagree with it. I think that what they've done is they've turned their back on what has allowed them to get to this position," Woods said at a press conference before the 2022 Open Championship. "I know what the PGA Tour stands for and what we have done and what the tour has given us, the ability to chase after our careers and to earn what we get and the trophies we have been able to play for and the history that has been a part of this game."
Provide a brief summary of the reaction to the creation of LIV Golf
Overall, the reaction to the creation of LIV Golf has been negative. For example, human rights groups have criticized LIV Golf as sportswashing; a group of nearly 2,500 survivors and family members of those killed in the September 11 attacks wrote a letter to golfers who stayed with the PGA Tour thanking them for staying; and Tiger Woods said he thinks the players who went to LIV Golf turned their backs on what allowed them to get to their position.
null
false
22
We design LiLi as a combination of two interconnected models: (1) an RL model that learns to formulate a query-specific inference strategy for performing the OKBC task, and (2) a lifelong prediction model to predict whether a triple should be in the KB, which is invoked by an action while executing the inference strategy and is learned for each relation as in C-PR. The framework improves its performance over time through user interaction and knowledge retention. Compared to the existing KB inference methods, LiLi overcomes the following three challenges for OKBC: 1. Mapping open-world to closed-world. Being a closed-world method, C-PR cannot extract path features and learn a prediction model when any part of the query triple (the source entity, the relation, or the target entity) is unknown. LiLi solves this problem through interactive knowledge acquisition. If the relation is unknown, LiLi asks the user to provide a clue (an example of that relation). And if the source or target entity is unknown, LiLi asks the user to provide a link (relation) to connect the unknown entity with an existing entity (automatically selected) in the KB. We refer to such a query as a connecting link query (CLQ). The acquired knowledge reduces OKBC to KBC and makes the inference task feasible. 2. Sparseness of the KB. A main issue of all PR methods like C-PR is the connectivity of the KB graph. If there is no path connecting the source and target entities in the graph, path enumeration of C-PR gets stuck and inference becomes infeasible. In such cases, LiLi uses a template relation ("@-?-@") as the missing link marker to connect entity-pairs and continues feature extraction. A path containing "@-?-@" is called an incomplete path. Thus, the extracted feature set contains both complete (no missing link) and incomplete paths. Next, LiLi selects an incomplete path from the feature set and asks the user to provide a link for path completion. We refer to such a query as a missing link query (MLQ). 3. Limitation in user knowledge. If the user is unable to respond to MLQs or CLQs, LiLi uses a guessing mechanism (discussed later) to fill the gap. This enables LiLi to continue its inference even if the user cannot answer a system question. We design LiLi as a combination of two interconnected models: (1) an RL model that learns to formulate a query-specific inference strategy for performing the OKBC task, and (2) a lifelong prediction model to predict whether a triple should be in the KB, which is invoked by an action while executing the inference strategy and is learned for each relation as in C-PR. The framework improves its performance over time through user interaction and knowledge retention.
What does LiLi consist of?
An RL model and a lifelong prediction model.
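The interaction flow described in the evidence above (clue questions, CLQs, MLQs, the "@-?-@" marker, and guessing) can be made concrete with a small sketch. This is a reconstruction for illustration only, not the authors' code; the toy KB and the `ask_user`/`guess` helpers are hypothetical stand-ins for real user interaction.

```python
# Illustrative-only sketch of LiLi's interactive query handling,
# reconstructed from the description above (not the authors' code).
MISSING = "@-?-@"  # template relation used as the missing-link marker

kb_entities = {"Obama", "USA"}
kb_relations = {"CitizenOf"}
kb_edges = {("Obama", "CitizenOf", "USA")}

def ask_user(prompt):
    """Hypothetical stand-in for user interaction; None means 'don't know'."""
    print("LiLi asks:", prompt)
    return None

def guess(_context):
    """Fallback guessing mechanism used when the user cannot answer."""
    return "RelatedTo"  # a made-up default link

def handle_query(source, relation, target):
    # Challenge 1: map open-world to closed-world via knowledge acquisition.
    if relation not in kb_relations:
        ask_user(f"Give an example pair for relation '{relation}' (clue)")
        kb_relations.add(relation)
    for entity in (source, target):
        if entity not in kb_entities:  # connecting link query (CLQ)
            link = ask_user(f"Give a relation linking '{entity}' to the KB")
            kb_entities.add(entity)
            # "USA" stands for an automatically selected existing entity.
            kb_edges.add((entity, link or guess(entity), "USA"))
    # Challenge 2: sparse KB -- continue with an incomplete path if needed.
    if not any(s == source and t == target for s, _, t in kb_edges):
        incomplete = (source, MISSING, target)
        fill = ask_user(f"Provide the missing link in {incomplete}")  # MLQ
        kb_edges.add((source, fill or guess(incomplete), target))  # Challenge 3

handle_query("Obama", "BornIn", "Hawaii")
```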
null
false
null
What does a drilling engineer do?
A drilling engineer is a specialist at drilling wells. They will create a plan for drilling different types of wells - vertical and horizontal wells. They have to have a very good understanding of the subsurface and how temperatures, pressures and depth affect the drilling of a well in real time. They will also oversee all drilling operations and are involved in completion design, well design and well abandonment.
null
false
507
An important barrier to the safe deployment of machine learning systems is the risk of task ambiguity, where multiple behaviors are consistent with the provided examples. We investigate whether pretrained models are better active learners, capable of asking for example labels that disambiguate between the possible tasks a user may be trying to specify. Across a range of image and text datasets with spurious correlations, latent minority groups, or domain shifts, finetuning pretrained models with data acquired through simple uncertainty sampling achieves the same accuracy with up to 6× fewer labels compared to random sampling. Moreover, the examples chosen by these models are preferentially minority classes or informative examples where the spurious feature and class label are decorrelated. Notably, gains from active learning are not seen in unpretrained models, which do not select such examples, suggesting that the ability to actively learn is an emergent property of the pretraining process. Figure 1: Active learning can resolve task ambiguity, which is especially salient in few-shot settings. Here, the provided training data leaves the model unsure of the task: is it to predict the shape or the color of the object? Pretraining enables models to disentangle and weigh various competing features, making them good active learners that can choose disambiguating examples (e.g. the blue square), resolving this task ambiguity. Vision: For computer vision datasets, we finetune BiT, a recently proposed family of vision models which have achieved state-of-the-art performance on several vision tasks. We primarily consider the BiT-M-R50x1 model, pretrained on ImageNet-21k. To explore the effectiveness of larger architectures and pretraining sources, in Section 5.2 we also consider performance achieved by the same-size BiT-S-R50x1, trained on ImageNet-1k, and the deeper BiT-M-R101x1 model, also trained on ImageNet-21k. These models have been shown to have emergent few-shot learning abilities, where strong classifiers for new tasks can be obtained by simply finetuning on tens or hundreds of examples with typical gradient descent techniques (rather than meta-learning techniques, for example). Text: For the text dataset (Amazon), we use RoBERTa-Large, another pretrained model with similar properties as BiT, and a representative of the BERT family of models which together have obtained state-of-the-art scores on modern NLP benchmarks. Other details, including hyperparameters and seed set/acquisition sizes are deferred to Appendix B. As a running baseline, we compare to the same model finetuned with a random acquisition function (equivalent to not doing active learning). That is, a(x; M) = rand(0, 1), so we simply sample a random batch of data from the pool at each acquisition step. Comparison with unpretrained models: To examine whether effective active learning is an emergent property of the pretraining process, we also compare to the performance observed when applying active learning to a randomly initialized, instead of pretrained, BiT-M-R50x1. Vision: For computer vision datasets, we finetune BiT, a recently proposed family of vision models which have achieved state-of-the-art performance on several vision tasks. We primarily consider the BiT-M-R50x1 model, pretrained on ImageNet-21k.
To explore the effectiveness of larger architectures and pretraining sources, in Section 5.2 we also consider performance achieved by the same-size BiT-S-R50x1, trained on ImageNet-1k, and the deeper BiT-M-R101x1 model, also trained on ImageNet-21k. These models have been shown to have emergent few-shot learning abilities, where strong classifiers for new tasks can be obtained by simply finetuning on tens or hundreds of examples with typical gradient descent techniques (rather than meta-learning techniques, for example). Text: For the text dataset (Amazon), we use RoBERTa-Large, another pretrained model with similar properties as BiT, and a representative of the BERT family of models which together have obtained state-of-the-art scores on modern NLP benchmarks. Other details, including hyperparameters and seed set/acquisition sizes are deferred to Appendix B. As a running baseline, we compare to the same model finetuned with a random acquisition function (equivalent to not doing active learning). That is, a(x; M) = rand(0, 1), so we simply sample a random batch of data from the pool at each acquisition step. Comparison with unpretrained models: To examine whether effective active learning is an emergent property of the pretraining process, we also compare to the performance observed when applying active learning to a randomly initialized, instead of pretrained, BiT-M-R50x1. We broaden our coverage of computer vision models to include vision transformers, the other major architecture family currently in use. We train ViT-16/B on Treeperson, observing smaller gains for AL than the BiT model (Figure 8). This perhaps reflects the fact that the vision transformer was pretrained on ImageNet-21k for far fewer epochs than BiT (9 vs 70), and is corroborated by much lower oversampling of minority classes than BiT (Figure). Across a range of image and text datasets with spurious correlations, latent minority groups, or domain shifts, finetuning pretrained models with data acquired through simple uncertainty sampling achieves the same accuracy with up to 6× fewer labels compared to random sampling.****For computer vision datasets, we finetune BiT (Kolesnikov et al., 2020), a recently proposed family of vision models which have achieved state-of-the-art performance on several vision tasks.****For the text dataset (Amazon), we use RoBERTa-Large (Liu et al., 2019), another pretrained model with similar properties as BiT, and a representative of the BERT (Devlin et al., 2019) family of models which together have obtained state-of-the-art scores on modern NLP benchmarks (Wang et al., 2018).****We train ViT-16/B on Treeperson, observing smaller gains for AL than the BiT model (Figure 8).
Can you verify if you are fine-tuning the embeddings or the overall model?
Confirming that we use the standard finetuning approaches used for RoBERTa, BiT, and ViT—namely finetuning all the parameters (including embeddings, in the case of transformer models).
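As a concrete illustration of the acquisition functions discussed above, here is a minimal sketch contrasting the random baseline a(x; M) = rand(0, 1) with least-confidence uncertainty sampling (one common variant; the paper may use a different formulation). The probabilities below stand in for the finetuned model's softmax outputs.

```python
import numpy as np

def random_acquire(pool_size, k, seed=0):
    # Random baseline: every pool example is equally likely to be chosen.
    rng = np.random.default_rng(seed)
    return rng.choice(pool_size, size=k, replace=False)

def uncertainty_acquire(probs, k):
    # Least-confidence uncertainty sampling: pick the k pool examples
    # whose highest class probability is lowest.
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:k]

# Toy pool of 4 examples with binary class probabilities.
probs = np.array([[0.95, 0.05], [0.55, 0.45], [0.70, 0.30], [0.51, 0.49]])
print(uncertainty_acquire(probs, k=2))  # -> [3 1], the most ambiguous examples
```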
1909.09268
false
null
In our methodology to design new evaluation metrics for comparing reference summaries/translations to hypothesis ones, we established first-principles criteria on what a good evaluator should do. The first one is that it should be highly correlated with human judgement of similarity. The second one is that it should be able to distinguish sentences which are in logical contradiction, logically unrelated or in logical agreement. The third one is that a robust evaluator should also be able to identify unintelligible sentences. The last criterion is that a good evaluation metric should not give high scores to semantically distant sentences and low scores to semantically related sentences. In our methodology to design new evaluation metrics for comparing reference summaries/translations to hypothesis ones, we established first-principles criteria on what a good evaluator should do. The first one is that it should be highly correlated with human judgement of similarity. The second one is that it should be able to distinguish sentences which are in logical contradiction, logically unrelated or in logical agreement. The third one is that a robust evaluator should also be able to identify unintelligible sentences. The last criterion is that a good evaluation metric should not give high scores to semantically distant sentences and low scores to semantically related sentences.
What is the criteria for a good metric?
The answers are shown as follows: * The first one is that it should be highly correlated with human judgement of similarity. The second one is that it should be able to distinguish sentences which are in logical contradiction, logically unrelated or in logical agreement. The third one is that a robust evaluator should also be able to identify unintelligible sentences. The last criterion is that a good evaluation metric should not give high scores to semantically distant sentences and low scores to semantically related sentences.
null
false
null
Why is basketball the fastest growing sport in the world?
Basketball is the fastest growing sport in the world because of the growth the NBA has had globally over the past few years. With popularity rising in other countries and regions like China, India, Africa and many more, the sport has seen a boom of talented players who have come from different parts of the world. It will continue to grow as current NBA stars such as LeBron James and Stephen Curry have become global ambassadors of the sport.
1909.03526
false
null
Another problem we face is that the BERT model released by Google is trained only on Arabic Wikipedia, which is almost exclusively Modern Standard Arabic (MSA). This introduces a language variety mismatch due to the irony data involving a number of dialects that come from the Twitter domain. To mitigate this issue, we further pre-train BERT on an in-house dialectal Twitter dataset, showing the utility of this measure. To summarize, we make the following contributions: The shared task dataset contains 5,030 tweets related to different political issues and events in the Middle East taking place between 2011 and 2018. Tweets are collected using pre-defined keywords (i.e. targeted political figures or events) and the positive class involves ironic hashtags such as #sokhria, #tahakoum, and #maskhara (Arabic variants for “irony”). Duplicates, retweets, and non-intelligible tweets are removed by organizers. Tweets involve both MSA as well as dialects at various degrees of granularity such as Egyptian, Gulf, and Levantine. Another problem we face is that the BERT model released by Google is trained only on Arabic Wikipedia, which is almost exclusively Modern Standard Arabic (MSA). The shared task dataset contains 5,030 tweets related to different political issues and events in the Middle East taking place between 2011 and 2018. Tweets involve both MSA as well as dialects at various degrees of granularity such as Egyptian, Gulf, and Levantine.
What dialect is used in the Google BERT model and what is used in the task data?
The answers are shown as follows: * Modern Standard Arabic (MSA) * MSA as well as dialects at various degrees of granularity such as Egyptian, Gulf, and Levantine
null
false
null
What makes Alabama Football coach Nick Saban so good at his job?
Nick Saban is process oriented. This is exemplified by his team's focus on processes rather than results. The philosophy is centered around the idea that building a "championship process" is the key to success. Focusing on improving your processes (e.g. tackling, blocking, footwork, catching, reading the defense) as opposed to being results-oriented will carry you further. Chasing results can cause someone to re-invent the wheel when they don't achieve immediate success, whereas being process-oriented allows you to hone the skill set needed to consistently reach your goals.
null
false
null
Tell me whether these are mammals or reptiles: cats, dogs, monkeys, whales, turtles, lizards, crocodiles, snakes
Cats are mammals. Dogs are mammals. Monkeys are mammals. Whales are mammals. Turtles are reptiles. Lizards are reptiles. Crocodiles are reptiles. Snakes are reptiles.
null
false
null
What is the oldest civilization in the world?
The oldest civilization in the world is the Mesopotamian civilization, which emerged around 4000–3500 BC.
null
false
null
What dishes can we eat as a Vegan?
As a vegan, you can eat anything that does not harm an animal and is not an animal by-product, which still leaves you a wide variety of options. If you want to replace dairy, there are plant-based substitutes for nearly everything: vegan curd made from peanuts; vegan ghee or butter made from nuts like cashew, almond, or peanut; and nut milks in place of dairy milk, which come with their own health benefits. In short, you can eat most of the dishes you already enjoy simply by swapping dairy, honey, and other animal products for plant-based alternatives.
null
false
null
Who was the American president for the majority of WWII?
Franklin D. Roosevelt (FDR) was the American president for most of the Second World War; after his death in April 1945, Harry Truman served for the war's final months.
null
false
null
What is the First Amendment of the US constitution?
The First Amendment protects the freedoms of religion, speech, and the press, as well as the rights to assemble peaceably and to petition the government.
null
false
33
Lesk86 observed that concepts that are related should share more words in their respective definitions than concepts that are less connected. He was able to perform word sense disambiguation by identifying the senses of words in a sentence with the largest number of overlaps between their definitions. An overlap is the longest sequence of one or more consecutive words that occur in both definitions. BanerjeeP03 extended this idea to WordNet, but observed that WordNet glosses are often very short, and did not contain enough information to distinguish between multiple concepts. Therefore, they created a super-gloss for each concept by adding the glosses of related concepts to the gloss of the concept itself (and then finding overlaps). PatwardhanP06 adapted this measure to second-order co-occurrence vectors. In this approach, a vector is created for each word in a concept's definition that shows which words co-occur with it in a corpus. These word vectors are averaged to create a single co-occurrence vector for the concept. The similarity between the concepts is calculated by taking the cosine between the concepts' second-order vectors. LiuMPMP12 modified and extended this measure to be used to quantify the relatedness between biomedical and clinical terms in the UMLS. The work in this paper can be seen as a further extension of PatwardhanP06 and LiuMPMP12. The similarity between the concepts is calculated by taking the cosine between the concepts' second-order vectors.
What is the similarity between the concepts calculated by Patwardhan and Pedersen?
By taking the cosine between the concepts' second-order vectors.
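A small sketch of the measure described above may help: build a concept vector by averaging the co-occurrence vectors of the words in its definition, then compare two concepts with the cosine. The vectors below are toy values, not real corpus statistics.

```python
import numpy as np

# Toy word -> co-occurrence vectors (stand-ins for corpus statistics).
cooc = {
    "heart": np.array([1.0, 0.2, 0.0]),
    "attack": np.array([0.8, 0.1, 0.3]),
    "muscle": np.array([0.9, 0.4, 0.1]),
}

def concept_vector(definition_words):
    # Second-order representation: average the first-order vectors of
    # the words appearing in the concept's definition.
    return np.mean([cooc[w] for w in definition_words], axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

v1 = concept_vector(["heart", "attack"])
v2 = concept_vector(["heart", "muscle"])
print(round(cosine(v1, v2), 3))  # similarity between the two concepts
```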
null
false
null
How many plastic beads does it take to fill a ceramic duck?
This is dependent on factors such as the size of the plastic beads, the size and shape of the duck and its hollowness, and what is considered "full".
null
false
77
We evaluate the corpus using Named Entity Recognition (NER), i.e., the task of finding mentions of concepts of interest in unstructured text. We focus on detecting cases, conditions, factors, findings and modifiers in case reports (cf. Section 2.2). We approach this as a sequence labeling problem. Four systems were developed to offer comparable robust baselines. The original documents are pre-processed (sentence splitting and tokenization with ScispaCy). We do not perform stop word removal or lower-casing of the tokens. The BIO labeling scheme is used to capture the order of tokens belonging to the same entity type and enable span-level detection of entities. Detection of nested and/or discontinuous entities is not supported. The annotated corpus is randomized and split into five folds using scikit-learn BIBREF9. Each fold has a train, test and dev split, with the test split defined as 15% of the train split. This ensures comparability between the presented systems. We evaluate the corpus using Named Entity Recognition (NER), i.e., the task of finding mentions of concepts of interest in unstructured text. We focus on detecting cases, conditions, factors, findings and modifiers in case reports (cf. Section 2.2.).
What task is used to evaluate the corpus?
They evaluate the corpus using Named Entity Recognition focusing on detecting cases, conditions, factors, findings and modifiers in case reports.
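To make that setup concrete, here is a small sketch of BIO labelling and a five-fold split using scikit-learn's KFold. The tokens, labels, and document IDs are illustrative placeholders, not data from the corpus.

```python
from sklearn.model_selection import KFold

# BIO scheme: B- opens an entity span, I- continues it, O is outside.
tokens = ["The", "patient", "had", "chronic", "heart", "failure"]
labels = ["O", "O", "O", "B-Condition", "I-Condition", "I-Condition"]

documents = [f"doc{i:02d}" for i in range(20)]  # placeholder corpus
kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in kfold.split(documents):
    train = [documents[i] for i in train_idx]
    test = [documents[i] for i in test_idx]
    # The description above defines an additional split as 15% of train.
    cut = int(0.15 * len(train))
    held_out, train = train[:cut], train[cut:]
    print(len(train), len(held_out), len(test))
```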
null
false
null
India’s first Hydrogen Fuel Cell Bus is indigenously developed by
KPIT-CSIR in Pune
null
false
null
What is the successor liability and can you give me some practical examples about it
In law, successor liability is liability (debt or other obligation) that is inherited by a successor entity after a corporate restructuring. Here, ‘successor’ refers to the entity that exists after the restructuring. ‘Restructuring’ refers to any action that alters the identity or character of a business entity (e.g. a merger). Successor liability is a subset of the law governing corporate liability or liability of legal persons. A legal person is any person that can do the things a human person is able to do in law – such as enter into contracts, sue and be sued, own property, and so on. Depending on the jurisdiction, entities such as corporations, other types of firm, partnerships and state-owned enterprises may be treated as legal persons, with both rights and responsibilities under law. Successor liability rules influence how these rights and responsibilities vis-a-vis others are influenced by corporate restructurings. Under corporate law (or possibly other bodies of law, such as employment or environmental law), some countries regulate how liability passes from a ‘predecessor’ entity to a 'successor' entity in the event of such corporate restructurings as a name change, merger or acquisition, division or dissolution. Following such restructurings, successor liability rules determine whether and how the 'successor' or acquiring entity must assume some or all of the predecessor entity's liabilities. Product liability, environmental clean-up, employment law and criminal law are some of the areas in which successor liability may come into play. Thus, it is an important component of the legal system that supports the structural change needed in dynamic business sectors while also protecting individual and societal interests. While a restructuring does not automatically give rise to successor liability, the following situations are often considered to be relevant when determining whether or not such liability exists: there is an explicit agreement between the parties on the assumption of some liabilities; the restructuring is considered to be a de facto merger; the restructuring results in the mere continuation of the predecessor’s business; and the transaction underpinning the restructuring was fraudulent and used to escape liability. Without successor liability, a business entity may avoid liability by reorganising itself or otherwise altering its corporate identity. A comparative law study of successor liability in the context of enforcement of laws criminalising foreign bribery shows that countries have adopted widely different approaches to successor liability in a criminal law context. In some jurisdictions, it is not clear whether successor liability exists at all, opening up the possibility that even minor business reorganisations could 'wipe the slate clean' from the standpoint of criminal law enforcement. Other countries have comprehensive statutory frameworks that address successor liability. Still others rely on established jurisprudence or other legal principles as the legal foundation for successor liability.
null
false
null
what is REST API
REST API is an architectural style for building network APIs. In practice, most REST APIs use JSON over HTTP. REST APIs have several properties that make them very popular. HTTP is the foundational and most popular network protocol across the internet. By using HTTP, REST APIs can fully utilize the internet infrastructure, which greatly reduces complexity and operational costs; for example, using HTTPS for transport security and URLs for request routing. JSON is a simple, stable, and intuitive data format that is not expected to change, which gives REST APIs great ease of use and stability. For high-performance applications, binary formats such as protobuf or Thrift can be used instead. REST APIs are often designed as CRUD APIs, where API operations are commonly expressed as Create, Retrieve, Update, and Delete operations on resources. CRUD operations map naturally to HTTP methods (POST, GET, PUT/PATCH, and DELETE), and API resources can be naturally identified using URLs, as the sketch below illustrates.
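Here is a minimal sketch of that CRUD-to-HTTP mapping, assuming the Flask library is installed; the "books" resource, its fields, and the in-memory storage are made up for illustration.

```python
# A minimal sketch of a CRUD-style REST API using Flask.
from flask import Flask, jsonify, request

app = Flask(__name__)
books = {}     # in-memory store: id -> book dict
next_id = 1

@app.route("/books", methods=["POST"])                  # Create
def create_book():
    global next_id
    book = request.get_json()
    book["id"] = next_id
    books[next_id] = book
    next_id += 1
    return jsonify(book), 201

@app.route("/books/<int:book_id>", methods=["GET"])     # Retrieve
def get_book(book_id):
    book = books.get(book_id)
    return (jsonify(book), 200) if book else ("", 404)

@app.route("/books/<int:book_id>", methods=["PUT"])     # Update
def update_book(book_id):
    if book_id not in books:
        return "", 404
    books[book_id] = {**request.get_json(), "id": book_id}
    return jsonify(books[book_id]), 200

@app.route("/books/<int:book_id>", methods=["DELETE"])  # Delete
def delete_book(book_id):
    return ("", 204) if books.pop(book_id, None) else ("", 404)

if __name__ == "__main__":
    app.run()  # e.g. POST /books with {"title": "..."} then GET /books/1
```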
null
false
null
What is a water hammer?
Water hammer is the often counterintuitive result of stopping water that is in motion in a fluid system; the collective momentum of the water acts as a "hammer" and can create a high-pressure shockwave that damages or destroys piping, valves, or equipment. For example, it may be easy to close a water valve, yet moments later it might explode from the sudden pressure of all the "upstream" water being forced to stop.
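The size of that shockwave can be estimated with the Joukowsky relation, dp = rho * c * dv, for an idealized instantaneous valve closure; the numbers below are illustrative assumptions, not properties of any particular system.

```python
# Rough water-hammer surge estimate via the Joukowsky relation
# dp = rho * c * dv (idealized instantaneous closure; values assumed).
rho = 1000.0  # water density, kg/m^3
c = 1200.0    # pressure-wave speed in the pipe, m/s (typical order)
dv = 2.0      # flow velocity brought to a stop, m/s

dp = rho * c * dv              # surge pressure in pascals
print(f"{dp / 1e5:.0f} bar")   # ~24 bar on top of the static pressure
```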
null
false
null
The Islamization of Iran occurred as a result of the Muslim conquest of Persia in 633–654. It was a long process by which Islam, though initially rejected, eventually spread among the population on the Iranian Plateau. Iranian peoples have maintained certain pre-Islamic traditions, including their language and culture, and adapted them with Islamic codes. These two customs and traditions merged as the "Iranian Islamic" identity.
When did Iranians convert to Islam?
Contrary to the popular belief that Arabs converted Iranians to Islam immediately after defeating the Sasanian Empire, Iranians continued to practice their native religions for many centuries. The conversion to Islam was a slow process, aided in part by differentiated tax systems for Muslims and non-Muslims.
null
false
null
Hawaii (/həˈwaɪi/ (listen) hə-WY-ee; Hawaiian: Hawaiʻi [həˈvɐjʔi] or [həˈwɐjʔi]) is a state in the Western United States, about 2,000 miles (3,200 km) from the U.S. mainland in the Pacific Ocean. It is the only U.S. state outside North America, the only state that is an archipelago, and the only state in the tropics. Hawaii comprises nearly the entire Hawaiian archipelago, 137 volcanic islands spanning 1,500 miles (2,400 km) that are physiographically and ethnologically part of the Polynesian subregion of Oceania. The state's ocean coastline is consequently the fourth-longest in the U.S., at about 750 miles (1,210 km). The eight main islands, from northwest to southeast, are Niʻihau, Kauaʻi, Oʻahu, Molokaʻi, Lānaʻi, Kahoʻolawe, Maui, and Hawaiʻi—the last of these, after which the state is named, is often called the "Big Island" or "Hawaii Island" to avoid confusion with the state or archipelago. The uninhabited Northwestern Hawaiian Islands make up most of the Papahānaumokuākea Marine National Monument, the United States' largest protected area and the fourth-largest in the world.
According to this paragraph, what is the only state in North America that is an archipelago?
Hawaii is the only state in North America that is an archipelago, according to this text.
null
false
null
Which of these are movies that Will Ferrell starred in and which ones did Jim Carrey star in? Elf, Anchorman: The Legend of Ron Burgundy, Step Brothers, Megamind, Zoolander, The Mask, Liar Liar, Sonic the Hedgehog, Dumb and Dumber, The Cable Guy, Ace Ventura: Pet Detective
Will Ferrell: Elf, Anchorman: The Legend of Ron Burgundy, Step Brothers, Megamind, Zoolander
Jim Carrey: The Mask, Liar Liar, Sonic the Hedgehog, Dumb and Dumber, The Cable Guy, Ace Ventura: Pet Detective
null
false
null
How long is the Mars 2020 mission?
At least one Mars year (about 687 Earth days).
null
false
null
A cryptocurrency, crypto-currency, or crypto is a digital currency designed to work as a medium of exchange through a computer network that is not reliant on any central authority, such as a government or bank, to uphold or maintain it. It is a decentralized system for verifying that the parties to a transaction have the money they claim to have, eliminating the need for traditional intermediaries, such as banks, when funds are being transferred between two entities.
What is Cryptocurrency?
Cryptocurrency is a digital currency built on a blockchain or other decentralized network that allows people to exchange it and trade for goods. There are over 1,000 different kinds of cryptocurrency around the world, and each of them is designed differently. Currently, popular cryptocurrencies include Bitcoin, Ethereum, and Dogecoin.
null
false
null
Classify the following fish as either inshore/nearshore or offshore fish species: Tarpon, Bonefish, redfish, snook, trout, bass, swordfish, dolphinfish, gag, red snapper, yellowfin tuna, blackfin tuna, grouper
Tarpon inshore, Bonefish inshore, redfish inshore, snook inshore, trout inshore, bass inshore, swordfish offshore, dolphinfish offshore, gag offshore, red snapper offshore, yellowfin tuna offshore, blackfin tuna offshore, grouper offshore
2004.04696
false
null
One way to expose Bleurt to a wide variety of sentence differences is to use existing sentence pairs datasets BIBREF22, BIBREF23, BIBREF24. These sets are a rich source of related sentences, but they may fail to capture the errors and alterations that NLG systems produce (e.g., omissions, repetitions, nonsensical substitutions). We opted for an automatic approach instead, that can be scaled arbitrarily and at little cost: we generate synthetic sentence pairs $(z, \tilde{z})$ by randomly perturbing 1.8 million segments $z$ from Wikipedia. We use three techniques: mask-filling with BERT, backtranslation, and randomly dropping out words. We obtain about 6.5 million perturbations $\tilde{z}$. Let us describe those techniques. We opted for an automatic approach instead, that can be scaled arbitrarily and at little cost: we generate synthetic sentence pairs $(z, \tilde{z})$ by randomly perturbing 1.8 million segments $z$ from Wikipedia. We use three techniques: mask-filling with BERT, backtranslation, and randomly dropping out words.
How are the synthetic examples generated?
Random perturbation of Wikipedia sentences using mask-filling with BERT, backtranslation, and random word dropout.
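Of the three techniques, random word dropout is simple enough to sketch directly (mask-filling and backtranslation require external models); the function below is an illustrative guess at such a perturbation, not BLEURT's exact code.

```python
import random

def drop_words(sentence, p=0.2, seed=0):
    # Randomly drop each word with probability p to create a perturbed
    # pair (z, z~); keep at least one word so the output is non-empty.
    rng = random.Random(seed)
    kept = [w for w in sentence.split() if rng.random() > p]
    return " ".join(kept) if kept else sentence.split()[0]

z = "The quick brown fox jumps over the lazy dog"
print(drop_words(z))  # one synthetic perturbation of z
```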
null
false
175
Electroencephalography (EEG) is a non-invasive way of measuring the electrical activity of the human brain. In BIBREF0 we demonstrated deep learning based automatic speech recognition (ASR) using EEG signals for a limited English vocabulary of four words and five vowels. In this paper we extend our work to a much larger English vocabulary and we use state-of-the-art end-to-end continuous speech recognition models to perform recognition. In our prior work we predicted isolated words and vowels. ASR systems form the front end or back end in many cutting-edge voice-activated technologies like Amazon Alexa, Apple Siri, Windows Cortana, Samsung Bixby, etc. Unfortunately these systems are trained to recognize text only from acoustic features. This limits technology accessibility for people with speaking disabilities and disorders. The research work presented in this paper tries to address this issue by investigating speech recognition using only EEG signals with no acoustic input, and also by combining EEG features with traditional acoustic features to perform recognition. We believe the former will help with speech restoration for people who cannot speak at all, and the latter will help people who have speaking disabilities like broken or discontinued speech to use voice-activated technologies with a better user experience, thereby helping to improve technology accessibility. ASR performance is degraded in the presence of noisy speech, and in real-life situations most speech is noisy. Inspired by the unique robustness to environmental artifacts exhibited by the human auditory cortex BIBREF1, BIBREF2, we used very noisy speech data for this work and demonstrated a lower word error rate (WER) for a smaller corpus using EEG features and a concatenation of EEG and acoustic features. In BIBREF3 the authors decode imagined speech from EEG using synthetic EEG data and a connectionist temporal classification (CTC) network, but in our work we use real EEG data, recorded along with acoustics. In BIBREF4 the authors perform envisioned speech recognition using a random forest classifier, but in our case we use end-to-end state-of-the-art models and perform recognition for noisy speech. In BIBREF5 the authors demonstrate speech recognition using electrocorticography (ECoG) signals, which are invasive in nature, but in our work we use non-invasive EEG signals. This work is mainly motivated by the results explained in BIBREF0, BIBREF6, BIBREF7, BIBREF3. In BIBREF6 the authors used a classification approach for identifying phonological categories in imagined and silent speech, but in our work we used continuous speech recognition state-of-the-art models and our models predict words and characters at each time step. Similarly, in BIBREF7 a neural network based classification approach was used for predicting phonemes. The major contribution of this paper is the demonstration of end-to-end continuous noisy speech recognition using only EEG features, and this paper further validates the concepts introduced in BIBREF0 for a much larger English corpus. In BIBREF0 we demonstrated deep learning based automatic speech recognition (ASR) using EEG signals for a limited English vocabulary of four words and five vowels.
What limited English vocabulary do the authors perform envisioned speech recognition for?
The authors demonstrated deep learning based automatic speech recognition (ASR) for a limited English vocabulary of four words and five vowels.
1703.04617
false
null
Table 2 shows the ablation performance of various Q-codes on the development set. Note that since the test set is hidden from us, we can only perform such an analysis on the development set. Our baseline model using no Q-code achieved EM and F1 scores of 68.00% and 77.36%, respectively. When we added the explicit question type T-code into the baseline model, the performance improved slightly to 68.16% (EM) and 77.58% (F1). We then used TreeLSTM to introduce syntactic parses for question representation and understanding (replacing the simple question type as the question-understanding Q-code), which consistently shows further improvement. We further incorporated the soft adaptation. When letting the number of hidden question types ($K$) be 20, the performance improves to 68.73%/77.74% on EM and F1, respectively, which corresponds to the results of our model reported in Table 1. Furthermore, after submitting our result, we experimented with a larger value of $K$ and found that when $K=100$, we can achieve a better performance of 69.10%/78.38% on the development set. 69.10%/78.38%
how much of improvement the adaptation model can get?
The answers are shown as follows: * 69.10%/78.38%
null
false
null
Classify each of the following as food, company or both: apple, unilever, avon, mercedez, nestle, meat, rice, penny, pineapple, beans, dm, banana
both, company, company, company, company, food, food, company, food, food, company, food
1909.11189
true
null
The Style baseline achieves an Accuracy of 83%, LDA features 89%, and a combination of the two gets 90%. However, training on full poems reduces this to 42–52%. The Style baseline achieves an Accuracy of 83%, LDA features 89%, and a combination of the two gets 90%. However, training on full poems reduces this to 42–52%.
Is the outcome of the LDA analysis evaluated in any way?
Yes.
null
false
null
Write a bullet point list of at least 7 crops that are grown in California
- Artichokes - Almonds - Plums - Cherries - Garlic - Apples - Pears
null
false
300
The success of word representations (embeddings) learned from text has motivated analogous methods to learn representations of longer sequences of text such as sentences, a fundamental step in any task requiring some level of text understanding BIBREF0. Sentence representation is a challenging task that has to consider aspects such as compositionality, phrase similarity, negation, etc. In order to evaluate sentence representations, intermediate tasks such as Semantic Textual Similarity (STS) BIBREF1 or Natural Language Inference (NLI) BIBREF2 have been proposed, with STS being popular among unsupervised approaches. Through a set of campaigns, STS has produced several manually annotated datasets, where annotators measure the similarity among sentences, with higher scores for more similar sentences, ranging from 0 (no similarity) to 5 (semantic equivalence). Human annotators exhibit high inter-tagger correlation on this task. In another strand of related work, tasks that combine representations of multiple modalities have gained increasing attention, including image-caption retrieval, video and text alignment, caption generation, and visual question answering. A common approach is to learn image and text embeddings that share the same space so that sentence vectors are close to the representation of the images they describe BIBREF3, BIBREF4. BIBREF5 provides an approach that learns to align images with descriptions. Joint spaces are typically learned by combining various types of deep learning networks such as recurrent networks or convolutional networks, with some attention mechanism BIBREF6, BIBREF7, BIBREF8. The complementarity of visual and text representations for improved language understanding has also been shown for word representations, where embeddings have been combined with visual or perceptual input to produce grounded representations of words BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15. These improved representation models have outperformed traditional text-only distributional models on a series of word similarity tasks, showing that visual information coming from images is complementary to textual information. In this paper we present Visual Semantic Textual Similarity (vSTS), a dataset which allows us to study whether better sentence representations can be built when having access to corresponding images, e.g. a caption and its image, in contrast with having access to the text alone. This dataset is based on a subset of the STS benchmark BIBREF1, more specifically, the so-called STS-images subset, which contains pairs of captions. Note that the annotations are based on the textual information alone. vSTS extends the existing subset with images, and aims at being a standard dataset to test the contribution of visual information when evaluating sentence representations. In addition we show that the dataset allows us to explore two hypotheses: H1) whether the image representations alone are able to predict caption similarity; H2) whether a combination of image and text representations allows us to improve the text-only results on this similarity task. We introduced the vSTS dataset, which contains caption pairs with human similarity annotations, where the systems can also access the actual images. The dataset aims at being a standard dataset to test the contribution of visual information when evaluating the similarity of sentences.
Experiments confirmed our hypotheses: image representations are useful for caption similarity and they are complementary to textual representations, as results improve significantly when the two modalities are combined. In the future we plan to re-annotate the dataset with scores that are based on both the text and the image, in order to shed light on the interplay of images and text when understanding text. In addition we show that the dataset allows us to explore two hypotheses: H1) whether the image representations alone are able to predict caption similarity; H2) whether a combination of image and text representations allows us to improve the text-only results on this similarity task. The vSTS dataset: The dataset is derived from a subset of the caption pairs already annotated in the Semantic Textual Similarity Task (see below). We selected some caption pairs with their similarity annotations, and added the images corresponding to each caption.****Experiments confirmed our hypotheses: image representations are useful for caption similarity and they are complementary to textual representations, as results improve significantly when the two modalities are combined.
Are the image representations alone able to predict caption similarity?
Yes.
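A tiny sketch of hypothesis H2 above: concatenate text and image embeddings into a joint vector and score caption-pair similarity with the cosine. The random vectors stand in for real encoder outputs, and concatenation is only one simple way to combine the modalities.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder encoder outputs for the two captions and their images.
text_a, text_b = rng.normal(size=300), rng.normal(size=300)
img_a, img_b = rng.normal(size=512), rng.normal(size=512)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

joint_a = np.concatenate([text_a, img_a])  # combined multimodal vector
joint_b = np.concatenate([text_b, img_b])
print(cosine(joint_a, joint_b))  # higher -> more similar caption pair
```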
1909.00338
true
null
The stance towards vaccination was categorized into ‘Negative’, ‘Neutral’, ‘Positive’ and ‘Not clear’. The latter category was essential, as some posts do not convey enough information about the stance of the writer. In addition to the four-valued stance classes we included separate classes grouped under relevance, subject and sentiment as annotation categories. With these additional categorizations we aimed to obtain a precise grasp of all possibly relevant tweet characteristics in relation to vaccination, which could help in a machine learning setting. The stance towards vaccination was categorized into ‘Negative’, ‘Neutral’, ‘Positive’ and ‘Not clear’.
Do they allow for messages with vaccination-related key terms to be of neutral stance?
Yes.
1707.05853
false
null
Figure 1: The proposed model with GRU-based cnet encoder for a dialog with three turns. $d_t$ are one-hot word vectors of the system dialog acts; $w_{ti}$ correspond to the word hypotheses in the timesteps of the cnets of the user utterances; $s_j$, $u_j$ are the cnet GRU outputs at the end of each system or user utterance. Figure 1: The proposed model with GRU-based cnet encoder for a dialog with three turns. $d_t$ are one-hot word vectors of the system dialog acts; $w_{ti}$ correspond to the word hypotheses in the timesteps of the cnets of the user utterances; $s_j$, $u_j$ are the cnet GRU outputs at the end of each system or user utterance.
What type of recurrent layers does the model use?
GRU
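A minimal sketch of a GRU encoder over cnet timesteps, loosely following the caption above and assuming PyTorch. Pooling the word hypotheses by a plain average is a simplification; the actual model may weight hypotheses differently.

```python
import torch
import torch.nn as nn

vocab, emb_dim, hid = 100, 32, 64
embed = nn.Embedding(vocab, emb_dim)
gru = nn.GRU(emb_dim, hid, batch_first=True)

# 1 utterance, 5 cnet timesteps, 3 word hypotheses per timestep.
hyp_ids = torch.randint(0, vocab, (1, 5, 3))
steps = embed(hyp_ids).mean(dim=2)  # pool the hypotheses per timestep
outputs, h_n = gru(steps)           # h_n plays the role of u_j above
print(h_n.shape)                    # torch.Size([1, 1, 64])
```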
1802.00923
false
null
In this section we outline our pipeline for human communication comprehension: the Multi-attention Recurrent Network (MARN). MARN has two key components: Long-short Term Hybrid Memory and Multi-attention Block. Long-short Term Hybrid Memory (LSTHM) is an extension of the Long-short Term Memory (LSTM) by reformulating the memory component to carry hybrid information. LSTHM is intrinsically designed for multimodal setups and each modality is assigned a unique LSTHM. LSTHM has a hybrid memory that stores view-specific dynamics of its assigned modality and cross-view dynamics related to its assigned modality. The component that discovers cross-view dynamics across different modalities is called the Multi-attention Block (MAB). The MAB first uses information from hidden states of all LSTHMs at a timestep to regress coefficients to outline the multiple existing cross-view dynamics among them. It then weights the output dimensions based on these coefficients and learns a neural cross-view dynamics code for LSTHMs to update their hybrid memories. Figure 1 shows the overview of the MARN. MARN is differentiable end-to-end which allows the model to be learned efficiently using gradient descent approaches. In the next subsection, we first outline the Long-short Term Hybrid Memory. We then proceed to outline the Multi-attention Block and describe how the two components are integrated in the MARN. Long-short Term Memory (LSTM) networks have been among the most successful models in learning from sequential data BIBREF21. The most important component of the LSTM is a memory which stores a representation of its input through time. In the LSTHM model, we seek to build a memory mechanism for each modality which in addition to storing view-specific dynamics, is also able to store the cross-view dynamics that are important for that modality. This allows the memory to function in a hybrid manner. Long-short Term Hybrid Memory and Multi-attention Block. Long-short Term Hybrid Memory (LSTHM) is an extension of the Long-short Term Memory (LSTM) by reformulating the memory component to carry hybrid information. In the LSTHM model, we seek to build a memory mechanism for each modality which in addition to storing view-specific dynamics, is also able to store the cross-view dynamics that are important for that modality. This allows the memory to function in a hybrid manner.
What is the difference between Long-short Term Hybrid Memory and LSTMs?
The answers are shown as follows: * Long-short Term Hybrid Memory (LSTHM) is an extension of the Long-short Term Memory (LSTM)
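The hybrid-memory idea can be illustrated with a single recurrent step in which every gate also conditions on a cross-view code z from the Multi-attention Block. The equations below are a loose reconstruction for illustration and may differ in detail from the paper's LSTHM.

```python
import torch

d_x, d_h, d_z = 16, 32, 8
# One fused projection for the four gates; the input is [x; h; z].
W = torch.nn.Linear(d_x + d_h + d_z, 4 * d_h)

def lsthm_step(x, h, c, z):
    i, f, o, g = W(torch.cat([x, h, z], dim=-1)).chunk(4, dim=-1)
    c_new = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
    h_new = torch.sigmoid(o) * torch.tanh(c_new)  # hybrid memory update
    return h_new, c_new

x, h, c, z = (torch.zeros(1, d) for d in (d_x, d_h, d_h, d_z))
h, c = lsthm_step(x, h, c, z)
print(h.shape)  # torch.Size([1, 32])
```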
null
false
null
The Cambridge University Boat Club (CUBC) is the rowing club of the University of Cambridge, England. The club was founded in 1828 and has been located at the Goldie Boathouse on the River Cam, Cambridge since 1882. Nowadays, training primarily takes place on the River Great Ouse at Ely.
Given the reference paragraph, when was the CUBC formed?
The Cambridge University Boat Club (CUBC) was founded in 1828
null
false
null
The village school was closed during much of the war but re-opened in autumn 1944. Gorbachev did not want to return but when he did he excelled academically. He read voraciously, moving from the Western novels of Thomas Mayne Reid to the works of Vissarion Belinsky, Alexander Pushkin, Nikolai Gogol, and Mikhail Lermontov. In 1946, he joined the Komsomol, the Soviet political youth organization, becoming leader of his local group and then being elected to the Komsomol committee for the district. From primary school he moved to the high school in Molotovskoye; he stayed there during the week while walking the 19 km (12 mi) home during weekends. As well as being a member of the school's drama society, he organized sporting and social activities and led the school's morning exercise class. Over the course of five consecutive summers from 1946 onward he returned home to assist his father in operating a combine harvester, during which they sometimes worked 20-hour days. In 1948, they harvested over 8,000 centners of grain, a feat for which Sergey was awarded the Order of Lenin and his son the Order of the Red Banner of Labour.
Based on this passage, how many miles did Gorbachev walk each week for his round trip commute to school?
According to this passage, Gorbachev's round trip walking commute to school was 24 miles per week. He walked 12 miles each way and travelled to and from school only once per week.
null
false
null
Which is a bird or fish: Black-necked grebe, Black tetra
Black tetra is a fish, Black-necked grebe is a bird
null
false
null
The Sign of the Kiwi, originally called Toll House, is a small café and shop at Dyers Pass on the road between Christchurch and Governors Bay. It was built in 1916–17 by Harry Ell as a staging post and opened as a tearoom and rest house. It has a Category I heritage classification by Heritage New Zealand and is a popular destination for tourists and locals alike. The building was closed some time after the 22 February 2011 Christchurch earthquake and did not open again until 23 January 2017, six years later. Although located within the burned area, the building was not damaged by the 2017 Port Hills fires a month later. Historical context: Harry Ell was a Member of Parliament (1899–1919) and a Christchurch City Councillor (1903 and 1917–1919). He devoted much of his life to the conservation of bush remnants on the Port Hills on Banks Peninsula, and a key element in his scheme was to make the area accessible to the public. To that end, he proposed a Summit Road that would go along the hilltops, connecting Godley Head with the tops above Akaroa. Whilst most of the public believed that the scenic reserves to be formed were too remote, Ell had a way with landowners and organised some government funding. His usual style was to pay a deposit for land, with the remainder to be raised later. In 1909, he decided to form a Summit Road Scenic Reserve Board to help with the task of establishing his dream, but there was tension over their scope. The board wanted to have influence on policy, but Ell regarded them as a mere fund-raising body. By 1915, Ell had secured 23 reserves in the Port Hills by buying private land, and none of these projects had the board's prior approval. By the 1930s, the road had been surveyed as far south as the saddle above Pigeon Bay. Rest houses were envisaged at regular intervals. The first to be built was the Sign of the Bellbird at Kennedy's Bush in 1914. This was the location of the first scenic reserve that Ell managed to get established in 1906. The Sign of the Packhorse, a smaller house on top of the Kaituna Saddle, was the second one to be established in 1916. The Sign of the Kiwi was the third rest house built by Ell. The Sign of the Takahe was the remaining house to be built.
What is the sign of the kiwi
Originally called Toll House, The Sign of the Kiwi is a small café and shop at Dyers Pass between Christchurch and Governors Bay. The building was closed some time after the 22 February 2011 Christchurch earthquake and reopened on 23 January 2017, six years later. Harry Ell was a Member of Parliament and a Christchurch City Councillor. He devoted much of his life to preserving the bush remnants of the Port Hills on Banks Peninsula, and a key element of his plan was to make the area accessible to the public. While most people believed that the scenic reserves to be created were too remote, Ell had a way with the landowners and organized some government funding. In 1909, he decided to form a Summit Road Scenic Reserve Board to aid in fulfilling his dream, but there was tension over the board's scope.
null
false
null
The concept of the electric field was introduced by Michael Faraday. An electric field is created by a charged body in the space that surrounds it, and results in a force exerted on any other charges placed within the field. The electric field acts between two charges in a similar manner to the way that the gravitational field acts between two masses, and like it, extends towards infinity and shows an inverse square relationship with distance. However, there is an important difference. Gravity always acts in attraction, drawing two masses together, while the electric field can result in either attraction or repulsion. Since large bodies such as planets generally carry no net charge, the electric field at a distance is usually zero. Thus gravity is the dominant force at distance in the universe, despite being much weaker. An electric field generally varies in space and its strength at any one point is defined as the force (per unit charge) that would be felt by a stationary, negligible charge if placed at that point. The conceptual charge, termed a 'test charge', must be vanishingly small to prevent its own electric field disturbing the main field and must also be stationary to prevent the effect of magnetic fields. As the electric field is defined in terms of force, and force is a vector, having both magnitude and direction, so it follows that an electric field is a vector field. The study of electric fields created by stationary charges is called electrostatics. The field may be visualised by a set of imaginary lines whose direction at any point is the same as that of the field. This concept was introduced by Faraday, whose term 'lines of force' still sometimes sees use. The field lines are the paths that a point positive charge would seek to make as it was forced to move within the field; they are however an imaginary concept with no physical existence, and the field permeates all the intervening space between the lines. Field lines emanating from stationary charges have several key properties: first, that they originate at positive charges and terminate at negative charges; second, that they must enter any good conductor at right angles, and third, that they may never cross nor close in on themselves.
What is the difference between an electric field and gravitational field?
The electric field acts between two charges in a similar manner to the way that the gravitational field acts between two masses, and like it, extends towards infinity and shows an inverse square relationship with distance. However, there is an important difference. Gravity always acts in attraction, drawing two masses together, while the electric field can result in either attraction or repulsion.
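The inverse-square relationship mentioned above can be written explicitly. For a point charge, a standard textbook form (given here for illustration) is:

```latex
% Field magnitude of a point charge q at distance r (Coulomb's law),
% and the force it exerts on a test charge:
E(r) = \frac{1}{4\pi\varepsilon_0}\,\frac{q}{r^2},
\qquad
\vec{F} = q_{\text{test}}\,\vec{E}
```

The gravitational field of a point mass has the same 1/r² form, but its force is always attractive, whereas the sign of q makes the electric force either attractive or repulsive.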
null
false
null
What were some historical reasons for the invention of pad thai?
The dish was invented in part to help Thailand develop a national identity, and also to promote the consumption of noodles to alleviate rice shortages during World War II.
null
false
null
What is AWS ECS?
Amazon Elastic Container Service (ECS) is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon Elastic Compute Cloud (Amazon EC2) instances.
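As a hedged sketch of how ECS is typically driven programmatically, here is a minimal example using boto3 (the AWS SDK for Python). All names (region, task family, image, cluster) are made-up placeholders, and valid AWS credentials plus an existing cluster would be required for it to actually run.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")  # region is an assumption

# Register a task definition describing one Docker container to run.
ecs.register_task_definition(
    family="web-app",                      # hypothetical task family
    containerDefinitions=[{
        "name": "web",
        "image": "nginx:latest",           # Docker image to run
        "memory": 256,                     # MiB reserved for the container
        "portMappings": [{"containerPort": 80}],
    }],
)

# Launch one copy of the task on a (pre-existing) cluster of EC2 instances.
ecs.run_task(cluster="demo-cluster", taskDefinition="web-app", count=1)
```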
null
false
null
Beavers (genus Castor) are large, semiaquatic rodents of the Northern Hemisphere. There are two existing species: the North American beaver (Castor canadensis) and the Eurasian beaver (C. fiber). Beavers are the second-largest living rodents, after capybaras, weighing up to 30 kilograms (66 lb). Beavers have stout bodies with large heads, long chisel-like incisors, brown or gray fur, hand-like front feet, webbed back feet, and tails that are flat and scaly. The two species differ in skull and tail shape and fur color. Beavers can be found in a number of freshwater habitats, such as rivers, streams, lakes and ponds. They are herbivorous, consuming tree bark, aquatic plants, grasses and sedges.
Given this paragraph, how much do beavers weigh?
Up to 30 kilograms or 66 pounds
null
false
null
What are ten places I should eat in Paris?
Ten places you should eat at in Paris are: - Les Enfants du Marche - Le Bistrot Paul Bert - Table - Parcelles - Sur Mer - Restaurant A.T. - La Tour d'Argent - Chez Nous - Le Baratin - L'avant Comptoir de la Terre
null
false
null
Identify which instrument is string or percussion: Carimbo, Gusle
Carimbo is percussion, Gusle is string.
null
false
null
In visual perception, an optical illusion (also called a visual illusion ) is an illusion caused by the visual system and characterized by a visual percept that arguably appears to differ from reality. Illusions come in a wide variety; their categorization is difficult because the underlying cause is often not clear but a classification proposed by Richard Gregory is useful as an orientation. According to that, there are three main classes: physical, physiological, and cognitive illusions, and in each class there are four kinds: Ambiguities, distortions, paradoxes, and fictions. A classical example for a physical distortion would be the apparent bending of a stick half immerged in water; an example for a physiological paradox is the motion aftereffect (where, despite movement, position remains unchanged). An example for a physiological fiction is an afterimage. Three typical cognitive distortions are the  Ponzo , Poggendorff, and Müller-Lyer illusion. Physical illusions are caused by the physical environment, e.g. by the optical properties of water. Physiological illusions arise in the eye or the visual pathway, e.g. from the effects of excessive stimulation of a specific receptor type.  Cognitive visual illusions are the result of unconscious inferences and are perhaps those most widely known.
Jot down some important points about optical illusions from the given passage.
1. A visual illusion is caused by the visual system and is characterized by a visual percept that appears to differ from reality.
2. A classification proposed by Richard Gregory is useful as an orientation: there are three main classes (physical, physiological, and cognitive illusions), and in each class there are four kinds (ambiguities, distortions, paradoxes, and fictions).
3. An example of a physical distortion is the apparent bending of a stick half immersed in water; an example of a physiological paradox is the motion aftereffect (where, despite movement, position remains unchanged).
4. An example of a physiological fiction is an afterimage.
5. Three typical cognitive distortions are the Ponzo, Poggendorff, and Müller-Lyer illusions.
6. Physical illusions are caused by the physical environment, e.g. by the optical properties of water; physiological illusions arise in the eye or the visual pathway.
7. Cognitive visual illusions are the result of unconscious inferences and are perhaps the most widely known.
1602.07618
true
null
In order to understand what INLINEFORM0 is, we need to understand the mathematics of grammar. The study of the mathematical structure of grammar has indicated that the fundamental things making up sentences are not the words, but some atomic grammatical types, such as the noun-type and the sentence-type BIBREF23, BIBREF24, BIBREF25. The transitive verb-type is not an atomic grammatical type, but a composite made up of two noun-types and one sentence-type. Hence, particularly interesting here is that atomic doesn't really mean smallest...
Do they argue that all words can be derived from other (elementary) words?
No.
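The evidence's claim that a transitive verb-type is a composite of two noun-types and one sentence-type can be illustrated with a toy sketch (our illustration, not the paper's code) in the style of a pregroup grammar, where a transitive verb has the composite type n^r · s · n^l and adjacent adjoint pairs cancel:

```python
# Toy pregroup-style types: atomic types plus left/right adjoints.
from dataclasses import dataclass

@dataclass(frozen=True)
class Atomic:
    name: str          # e.g. "n" (noun-type) or "s" (sentence-type)
    adjoint: int = 0   # 0 = plain, -1 = left adjoint, +1 = right adjoint

N = Atomic("n")
S = Atomic("s")

# Composite type of a transitive verb: n^r · s · n^l -- built from two
# noun-types and one sentence-type, exactly as the evidence describes.
TRANSITIVE_VERB = (Atomic("n", +1), S, Atomic("n", -1))

def reduces_to_sentence(types):
    # Stack-based cancellation of x · x^r and x^l · x, pregroup-style.
    stack = []
    for t in types:
        if stack and stack[-1].name == t.name and stack[-1].adjoint + 1 == t.adjoint:
            stack.pop()
        else:
            stack.append(t)
    return stack == [S]

# "Alice likes Bob": noun, transitive verb, noun reduces to just s.
print(reduces_to_sentence([N, *TRANSITIVE_VERB, N]))  # True
```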
null
false
null
Classify each of the following Hogwarts students in the houses Gryffindor, Hufflepuff, Ravenclaw, and Slytherin: Harry Potter, Hermione Granger, Neville Longbottom, Ronald Weasley, Fred Weasley, George Weasley, Percy Weasley, Charles Weasley, Ginny Weasley, Seamus Finnigan, Oliver Wood, Angelina Johnson, Alicia Spinnet, Katie Bell, Andrew Kirke, Jack Sloper, Demelza Robins, Dean Thomas, Cedric Diggory, Zacharias Smith, Ernie Macmillan, Hannah Abbott, Justin Finch-Fletchley, Susan Bones, Zacharias Smith, Luna Lovegood, Marcus Belby, Cho Chang, Myrtle Warren, Padma Patil, Terry Boot, Michael Corner, Roger Davies, Anthony Goldstein, Marietta Edgecombe, Robert Hilliard, Penelope Clearwater, Lisa Turpin, Morag MacDougal, Mandy Brocklehurst, Nanette Desford, Maria Glossop, Millicent Bagnold, Sue Li, Ignatia Wildsmith, Chester Davies, Tulip Karasu, Badeea Ali, Talbott Winger, Andre Egwu, Adelaide Murton, Adrian Pucey, Aesop Sharp, Barnaby Lee, Blaise Zabini, Felix Rosier, Gemma Farley, Ismelda Murk, Liz Tuttle, Marcus Flint, Merula Snyde, Millicent Bulstrode, Pansy Parkinson, Terence Higgs, Tracey Davis, Tom Riddle, Viola Richmond, Yatin Bhagat, Yurika Haneda, Zoe Accrington, Zubeida Khan
The Gryffindor students are: Harry Potter, Hermione Granger, Neville Longbottom, Ronald Weasley, Fred Weasley, George Weasley, Percy Weasley, Charles Weasley, Ginny Weasley, Seamus Finnigan, Oliver Wood, Angelina Johnson, Alicia Spinnet, Katie Bell, Andrew Kirke, Jack Sloper, Demelza Robins, and Dean Thomas.

The Hufflepuff students are: Cedric Diggory, Zacharias Smith, Ernie Macmillan, Hannah Abbott, Justin Finch-Fletchley, and Susan Bones.

The Ravenclaw students are: Luna Lovegood, Marcus Belby, Cho Chang, Myrtle Warren, Padma Patil, Terry Boot, Michael Corner, Roger Davies, Anthony Goldstein, Marietta Edgecombe, Robert Hilliard, Penelope Clearwater, Lisa Turpin, Morag MacDougal, Mandy Brocklehurst, Nanette Desford, Maria Glossop, Millicent Bagnold, Sue Li, Ignatia Wildsmith, Chester Davies, Tulip Karasu, Badeea Ali, Talbott Winger, and Andre Egwu.

The Slytherin students are: Adelaide Murton, Adrian Pucey, Aesop Sharp, Barnaby Lee, Blaise Zabini, Felix Rosier, Gemma Farley, Ismelda Murk, Liz Tuttle, Marcus Flint, Merula Snyde, Millicent Bulstrode, Pansy Parkinson, Terence Higgs, Tracey Davis, Tom Riddle, Viola Richmond, Yatin Bhagat, Yurika Haneda, Zoe Accrington, and Zubeida Khan.
null
false
null
What is an ATV?
An ATV typically refers to an all-terrain vehicle, which can take many forms. The most popular ATV is a four-wheeled motorized vehicle with large low-pressure tires that can carry one or two riders, who straddle a seat as on a motorcycle. They are highly maneuverable and often a joy to ride. Other types of vehicles can be described as ATVs as well. These include off-road vehicles such as SUVs, Jeeps, buggies, and motorcycles with large wheels that can navigate a variety of difficult terrain much more easily than other forms of transportation.
null
false
null
How can I learn to code?
There are thousands and thousands of resources online that you can use to learn how to code, so it can seem overwhelming to take the first step. A good way to learn is to start with a small, easy project (e.g., adding two numbers and showing the result, as in the sketch below) and build on it. It's important to keep in mind that coding languages are just languages, so there is a structure and a set of rules to them. Once you start getting a feel for this, it gets easier to learn more quickly. In fact, coding languages are much simpler than the languages we humans use to talk to each other :) Depending on how you learn best, you can start with your favorite search engine and/or video app to find a good starting point. Have fun!
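Here is what that suggested first project might look like, shown in Python as one common beginner-friendly choice (any language would do):

```python
# A small first project: read two numbers, add them, and show the result.
a = float(input("First number: "))
b = float(input("Second number: "))
print(f"{a} + {b} = {a + b}")
```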