| paper_id (string) | yes_no (bool) | paper_index (int64) | evidence (string) | question (string) | answer (string) |
|---|---|---|---|---|---|
null | false
| null |
Write a point of view on personal privacy in the age of software and technology.
|
The definition of “privacy” is elusive, but some may associate a reduction in privacy with the burgeoning of technology. The advent of digital databases has made it possible to easily track and retrieve user records and user activities. Records which are reported to the government are deemed “public” and may be used to vet teachers and bus drivers, to track crime rates, and more. Similar information can be used to determine the layouts and prices of homes and to track spending habits to perfect search engine recommendations.
Despite technological infringements, some may argue that there is more privacy today than in centuries past. We can consume entertainment without leaving the confines of our home; when we do step out, we can travel in private vehicles. Consequently, we have become more estranged from society and now require credentials or vetting before trusting others. More than ever, we seek recommendations from our close peers, we frequently consult user reviews, and we are wary of trusting unvetted strangers.
|
|
null | false
| null |
Born in Scranton, Pennsylvania, Biden moved with his family to Delaware in 1953. He studied at the University of Delaware before earning his law degree from Syracuse University. He was elected to the New Castle County Council in 1970 and became the sixth-youngest senator in U.S. history after he was elected in 1972, at age 29. Biden was the chair or ranking member of the Senate Foreign Relations Committee for 12 years. He chaired the Senate Judiciary Committee from 1987 to 1995; drafted and led the effort to pass the Violent Crime Control and Law Enforcement Act and the Violence Against Women Act; and oversaw six U.S. Supreme Court confirmation hearings, including the contentious hearings for Robert Bork and Clarence Thomas. Biden ran unsuccessfully for the Democratic presidential nomination in 1988 and 2008. Barack Obama chose Biden as his running mate in the 2008 and 2012 presidential elections. Biden was a close counselor to Obama during his two terms as Obama's vice president. Biden and his running mate, Kamala Harris, defeated incumbents Donald Trump and Mike Pence in the 2020 presidential election. On January 20, 2021, he became the oldest president in U.S. history, the first to have a female vice president, and the first from Delaware.
https://en.wikipedia.org/wiki/Joe_Biden
|
Based on this paragraph, how many times has Joe Biden run for president and what characteristics make him unique among all past presidents?
|
Based on this paragraph, Joe Biden has run for president three times, losing in 1988 and 2008 and eventually winning in 2020. As president, he is unique in three ways: he is the oldest president in U.S. history, the first to have a female vice president, and the first from the state of Delaware.
|
null | false
| null |
Choose the word which is different from the rest: Sketch, Diagram, Poster, Chart, Paper
|
Sketch is different from the rest. The others typically carry written text, whereas a sketch is purely drawn.
|
|
null | false
| 17
|
Following developing news stories is imperative to making real-time decisions on important political and public safety matters. Given the abundance of media providers and languages, this endeavor is an extremely difficult task. As such, there is a strong demand for automatic clustering of news streams, so that they can be organized into stories or themes for further processing. Performing this task in an online and efficient manner is a challenging problem, not only for newswire, but also for scientific articles, online reviews, forum posts, blogs, and microblogs.
A key challenge in handling document streams is that the story clusters must be generated on the fly in an online fashion: this requires handling documents one by one as they appear in the document stream. In this paper, we provide a treatment of the problem of online document clustering, i.e., the task of clustering a stream of documents into themes. For example, for news articles, we would want to cluster them into related news stories.
To this end, we introduce a system which aggregates news articles into fine-grained story clusters across different languages in a completely online and scalable fashion from a continuous stream. Our clustering approach is part of a larger media monitoring project to solve the problem of monitoring massive text and TV/Radio streams (speech-to-text). In particular, media monitors write intelligence reports about the most relevant events, and being able to search, visualize and explore news clusters assists in gathering more insight about a particular story. Since relevant events may be spawned from any part of the world (and from many multilingual sources), it becomes imperative to cluster news across different languages.
In terms of granularity, the story clusters we are interested in are groups of articles which, for example: (i) narrate recent air-strikes in Eastern Ghouta (Syria); or (ii) describe the recent launch of SpaceX's Falcon Heavy rocket.
To this end, we introduce a system which aggregates news articles into fine-grained story clusters across different languages in a completely online and scalable fashion from a continuous stream.
|
What kind of system do they introduce?
|
It aggregates news articles into fine-grained story clusters across different languages in a completely online and scalable fashion from a continuous stream.
|
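Below is a minimal sketch of the kind of online clustering loop such a system needs: each incoming document is compared to existing story centroids and either joins the best match or opens a new cluster. The vectorization, threshold, and class names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

class OnlineClusterer:
    """Assign each incoming document vector to the nearest story cluster,
    or open a new cluster if no centroid is similar enough."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.centroids = []   # running mean vector per cluster
        self.sizes = []       # number of documents per cluster

    def add(self, vec):
        vec = vec.astype(float)
        if self.centroids:
            sims = [cosine(vec, c) for c in self.centroids]
            best = int(np.argmax(sims))
            if sims[best] >= self.threshold:
                n = self.sizes[best]
                # update the running mean of the matched cluster
                self.centroids[best] = (self.centroids[best] * n + vec) / (n + 1)
                self.sizes[best] += 1
                return best
        self.centroids.append(vec)
        self.sizes.append(1)
        return len(self.centroids) - 1
```

Each document is handled exactly once as it arrives, which is what makes the approach online and scalable to continuous streams.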
null | false
| null |
The Smiths were an English rock band formed in Manchester in 1982. They comprised the singer Morrissey, the guitarist Johnny Marr, the bassist Andy Rourke and the drummer Mike Joyce. Morrissey and Marr formed the band's songwriting partnership. They are regarded as one of the most important acts to emerge from the 1980s British independent music scene.
|
Who are the Smiths?
|
The Smiths were an English rock band from the 1980s. The lead singer was Morrissey. The other band members were: Johnny Marr on guitar, Andy Rourke on bass, and Mike Joyce on drums.
|
null | false
| null |
Which are the main reasons for doing a PhD?
|
A PhD degree provides advanced critical-thinking skills, not only in the field in which the PhD was earned, but also more generally. Additionally, PhD programs give students the opportunity to perform dedicated research on topics that are of specific interest to them.
|
|
null | false
| 2
|
There has been significant progress on Named Entity Recognition (NER) in recent years using models based on machine learning algorithms BIBREF0 , BIBREF1 , BIBREF2 . As with other Natural Language Processing (NLP) tasks, building NER systems typically requires a massive amount of labeled training data which are annotated by experts. In real applications, we often need to consider new types of entities in new domains where we do not have existing annotated data. For such new types of entities, however, it is very hard to find experts to annotate the data within short time limits and hiring experts is costly and non-scalable, both in terms of time and money.
In order to quickly obtain new training data, we can use crowdsourcing as one alternative at lower cost and in a short time. In exchange, however, crowd annotations from non-experts may be of lower quality than those from experts. Building a powerful NER system on such low-quality annotated data is one of the biggest challenges. Although we can obtain high-quality annotations for each input sentence by majority voting, this can be a waste of human labor, especially for ambiguous sentences which may require a number of annotations to reach agreement. Thus most work builds models directly on crowd annotations, trying to model the differences among annotators; for example, some annotators may be more trustworthy BIBREF3, BIBREF4.
Here we focus mainly on Chinese NER, which is more difficult than NER for other languages such as English because of the lack of morphological cues such as capitalization and, in particular, the uncertainty of word segmentation. Chinese NE taggers trained on the news domain often perform poorly in other domains. Although we can alleviate the problem by using character-level tagging to sidestep poor word-segmentation performance BIBREF5, a large gap remains when the target domain changes, especially for social-media text. Thus, in order to get a good tagger for new domains, and also for new entity types, we require large amounts of labeled data. Crowdsourcing is therefore a reasonable solution for these situations.
In this paper, we propose an approach to training a Chinese NER system on crowd-annotated data. Our goal is to extract additional annotator-independent features by adversarial training, alleviating the annotation noise of non-experts. The idea of adversarial training in neural networks has been used successfully in several NLP tasks, such as cross-lingual POS tagging BIBREF6 and cross-domain POS tagging BIBREF7. Those works use it to reduce the negative influence of input divergences among different domains or languages, while we use adversarial training to reduce the negative influence of different crowd annotators. To the best of our knowledge, we are the first to apply adversarial training to crowd annotation learning.
In the learning framework, we perform adversarial training between the basic NER model and an additional worker discriminator. We have a common Bi-LSTM for representing annotator-generic information and a private Bi-LSTM for representing annotator-specific information. We build another label Bi-LSTM from the crowd-annotated NE label sequence, which reflects the mind of the crowd annotators who learn entity definitions by reading the annotation guidebook. The common and private Bi-LSTMs are used for NER, while the common and label Bi-LSTMs are used as inputs for the worker discriminator. The parameters of the common Bi-LSTM are learned by adversarial training, maximizing the worker discriminator loss while minimizing the NER loss. Thus the resulting features of the common Bi-LSTM are worker-invariant and NER-sensitive.
For evaluation, we create two Chinese NER datasets in two domains: dialog and e-commerce. We require the crowd annotators to label the types of entities, including person, song, brand, product, and so on. Identifying these entities is useful for chatbot and e-commerce platforms BIBREF8 . Then we conduct experiments on the newly created datasets to verify the effectiveness of the proposed adversarial neural network model. The results show that our system outperforms very strong baseline systems. In summary, we make the following contributions:
In the learning framework, we perform adversarial training between the basic NER and an additional worker discriminator.
|
How do the authors perform adversarial training in the learning framework?
|
In the learning framework, the authors perform adversarial training between the basic NER and an additional worker discriminator.
|
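The passage describes minimizing a task loss while maximizing a discriminator loss through a shared encoder. A standard way to implement that in one backward pass is a gradient-reversal layer; the PyTorch sketch below shows the wiring under assumed dimensions and a hypothetical annotator count, and is not the authors' exact model.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) gradients in the
    backward pass, so the shared encoder learns to fool the discriminator."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

common = nn.LSTM(100, 64, bidirectional=True, batch_first=True)  # annotator-generic encoder
ner_head = nn.Linear(128, 9)       # e.g. BIO tags for 4 entity types (illustrative)
worker_head = nn.Linear(128, 47)   # one class per crowd annotator (hypothetical count)

x = torch.randn(2, 10, 100)        # (batch, seq_len, emb_dim) dummy word embeddings
h, _ = common(x)
ner_logits = ner_head(h)           # trained to minimize the NER loss
# Reversed gradients flow into `common`, maximizing the worker-discriminator loss:
worker_logits = worker_head(GradReverse.apply(h.mean(dim=1), 1.0))
```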
null | false
| null |
What is the stock market?
|
In capitalist economies, business activity often happens through companies, which are entities that pool resources from investors to pursue a joint activity. Individuals and entities that invest in a company receive stock in exchange for the investment. The stock represents partial ownership in the company and makes them shareholders of the company.
If other investors see value in the company, they can invest in it by purchasing stock from shareholders. The market which enables the exchange of shares of various companies constitutes the stock market.
|
|
null | false
| 65
|
Word embeddings are representations of words in numerical form, as vectors of typically several hundred dimensions. The vectors are used as input to machine learning models; for complex language processing tasks these are typically deep neural networks. The embedding vectors are obtained from specialized learning tasks based on neural networks, e.g., word2vec BIBREF0, GloVe BIBREF1, FastText BIBREF2, ELMo BIBREF3, and BERT BIBREF4. For training, the embedding algorithms use large monolingual corpora, and the resulting vectors encode important information about word meaning as distances between vectors. In order to enable downstream machine learning on text understanding tasks, the embeddings must preserve semantic relations between words, even across languages.
Probably the best known word embeddings are produced by the word2vec method BIBREF5. The problem with word2vec embeddings is their failure to express polysemous words. During training of an embedding, all senses of a given word (e.g., paper as a material, as a newspaper, as a scientific work, and as an exam) contribute relevant information in proportion to their frequency in the training corpus. This causes the final vector to be placed somewhere in the weighted middle of all the word's meanings. Consequently, rare meanings of words are poorly expressed with word2vec and the resulting vectors do not offer good semantic representations. For example, none of the 50 closest vectors of the word paper is related to science.
The idea of contextual embeddings is to generate a different vector for each context a word appears in, where the context is typically defined sentence-wise. To a large extent, this solves the problem of word polysemy: the context of a sentence is typically enough to disambiguate the different meanings of a word for humans, and so it is for the learning algorithms. In this work, we describe high-quality models for contextual embeddings, called ELMo BIBREF3, precomputed for seven morphologically rich, less-resourced languages: Slovenian, Croatian, Finnish, Estonian, Latvian, Lithuanian, and Swedish. ELMo is one of the most successful approaches to contextual word embeddings. At the time of its creation, ELMo was shown to outperform previous word embeddings BIBREF3 such as word2vec and GloVe on many NLP tasks, e.g., question answering, named entity extraction, sentiment analysis, textual entailment, semantic role labeling, and coreference resolution.
This report is split into five further sections. In Section SECREF2, we describe the contextual ELMo embeddings. In Section SECREF3, we describe the datasets used, and in Section SECREF4 we describe preprocessing and training of the embeddings. We describe the methodology for evaluating the created vectors, and the results, in Section SECREF5. We present conclusions in Section SECREF6, where we also outline plans for further work.
In this work, we describe high-quality models for contextual embeddings, called ELMo (Peters et al., 2018), precomputed for seven morphologically rich, less-resourced languages: Slovenian, Croatian, Finnish, Estonian, Latvian, Lithuanian, and Swedish.
|
What languages are explored?
|
ELMo is precomputed for seven morphologically rich, less-resourced languages: Slovenian, Croatian, Finnish, Estonian, Latvian, Lithuanian, and Swedish.
|
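The single-vector limitation described above is easy to reproduce. The toy sketch below trains word2vec with gensim on a corpus where “paper” has two senses and then inspects its nearest neighbors, which blend both senses into one static vector; the corpus and hyper-parameters are illustrative, not from the report.

```python
from gensim.models import Word2Vec

# Toy corpus: "paper" appears in a scientific sense and a material sense.
sentences = [
    "we submitted the paper to the conference".split(),
    "the reviewers rejected the paper".split(),
    "wrap the gift in colorful paper".split(),
    "recycle the paper and the cardboard".split(),
] * 50  # repeat so the tiny corpus gives the model something to learn

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=20, seed=0)

# One static vector per word: neighbors of "paper" mix both senses,
# which is exactly what contextual embeddings like ELMo avoid.
print(model.wv.most_similar("paper", topn=5))
```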
null | false
| null |
Tell me the history of Singapore
|
Singapore was founded on 6 February 1819 by Sir Stamford Raffles. Singapore started out as a British colony, where a free trading port was established for ships passing through the Malacca Straits. Singapore continued to thrive and became an important global port, attracting many Chinese, Malay, Indian and Arab traders to operate in Singapore.
Singapore continued to prosper for many years until World War II. On 15 February 1942, the British surrendered to Japan and Singapore was renamed Syonan-to. During the Japanese occupation, over 2,500 people were executed and many suffered hardship.
Post World War II, Singapore operated as a self-governing state led by Lee Kuan Yew, reducing its reliance on the British. From 1963 to 1965, Singapore was merged with neighbouring Malaysia. However, the merger brought about many disagreements, tensions and unrest. Eventually, Singapore separated from Malaysia and became a fully independent country on 9 August 1965.
|
|
null | false
| null |
What is the difference between speed and velocity?
|
Speed denotes only how fast an object is moving, whereas velocity describes both how fast and in which direction the object is moving. If a car is said to travel at 60 miles per hour, its speed has been specified. However, if the car is said to move at 60 miles per hour to the north, its velocity has now been specified.
The big difference can be discerned when considering movement around a circle. When something moves in a circular path and returns to its starting point, its average velocity is zero, but its average speed is found by dividing the circumference of the circle by the time taken to move around the circle. This is because the average velocity is calculated by considering only the displacement between the starting and end points, whereas the average speed considers only the total distance travelled.
|
|
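A short numeric check of the circular-path point above, assuming one lap around a 100 m-radius track; the numbers are illustrative.

```python
import math

# One full lap around a circular track of radius 100 m in 120 s.
r, t = 100.0, 120.0
distance = 2 * math.pi * r       # path length actually travelled
displacement = 0.0               # start point == end point

avg_speed = distance / t         # total distance / time
avg_velocity = displacement / t  # net displacement / time

print(f"average speed    = {avg_speed:.2f} m/s")    # ~5.24 m/s
print(f"average velocity = {avg_velocity:.2f} m/s")  # 0.00 m/s
```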
null | false
| null |
What are the typical subjects that are studied in grade school in America?
|
English - reading and writing, math, science, and social studies. Some schools also have dedicated class time for physical education, art, and music.
|
|
null | false
| null |
Classify the fruits based on their color.
raspberries, lemon, strawberries, cranberries, star fruit, banana, cherries, apples
|
Red - raspberries, strawberries, cranberries, cherries, apples
Yellow - lemon, banana, star fruit
|
|
null | false
| null |
Which of the following words are conjunctions, and which are prepositions: and, between, but, for, in, of, on, or, under, yet
|
The following are conjunctions: and, but, or, yet
The following are prepositions: between, for, in, of, on, under
|
|
null | false
| null |
Will robots ever feel and think like humans? Will these robots solve all of our problems, or will they spell the end of the human race?
|
Imagine a world where hundreds of robots are working with humans. Some of them deliver food, and some might be working at a cafe or a restaurant. They are all around us, helping people do many things. Sometimes they can do what humans can’t do, and they can also sometimes perform better than humans. Nowadays, many scientists and engineers are debating whether artificial intelligence will dramatically make our lives better or spell the end of the human race.
Robots and Artificial Intelligence (A.I. for short) are developing steadily but dramatically, so someday they might feel and think like humans. For example, some robots can “feel” pain by detecting how strong a force is and technically “feeling” it. Furthermore, robots can think like humans when they are programmed to think the way humans do. However, robots have less flexible thinking, since they only do what others programmed them to do, especially less complex ones like industrial robots.
However, because A.I. is developing at great speed, it might eventually become a threat to the human race. Many people, such as Stephen Hawking, have warned that as A.I. develops further it might become unstoppable and one of our most dangerous enemies. Furthermore, according to the UN, A.I. might form stereotypical views about certain races because of people’s discriminatory behavior, which can further distort its judgments about many kinds of people. In short, robots and A.I. can be a threat to the human race, so we need to take care of how they are programmed and govern them well, so that they can be a good tool that improves our lives.
On the other hand, I think that although A.I. still has some difficulties in doing certain things, it can solve some of our problems and make our lives better and easier. A.I. can solve many problems, and it can even do some work better than humans. For example, robots can go to dangerous places like space, the deep sea, and poisonous areas. Furthermore, robots can do dangerous jobs like welding, which is hazardous work that needs to be done precisely. Robots are used in places like this because industrial robots can work precisely, and they can even save workers’ lives. As another example, Canadarm2, a remote-controlled robotic arm on the International Space Station, does work in space that helps the astronauts and makes their lives easier.
In conclusion, A.I. is starting to become part of our lives, since it is beginning to think and feel like humans at great speed, and although it might become a threat to the human race, I still think A.I. can solve many problems by doing what we have trouble doing. Nick Bostrom once claimed that “Machine intelligence is the last invention that humanity will ever need to invent,” meaning that A.I. is the last thing humans need to build, because it can do what other tools can and can even create and build itself. Since robots and A.I. can do many things that help people, A.I. is becoming our reliable assistant, as long as we take care of it.
|
|
null | false
| null |
Classify each of the following as a ridesharing or food delivery company: Doordash, Lyft, Wolt, Deliveroo, Bolt, Didi, Rappi
|
Doordash: Food Delivery
Lyft: Ridesharing
Wolt: Food Delivery
Deliveroo: Food Delivery
Bolt: Ridesharing
Didi: Ridesharing
Rappi: Food Delivery
|
|
null | false
| null |
The Kawasaki KX 500 is a 499 cc (30.5 cu in) two-stroke single motocross motorcycle made by Kawasaki from 1983 until 2004.
The Kawasaki KX500 was developed as an air-cooled 500cc motocross bike for competition in the 500cc and Open Class of motocross. At the time of its release, several top manufacturers sported entries in this class, including Yamaha, Suzuki, Honda and the class-leading Maico.
Kawasaki developed the bike on an annual basis through the 1980s. The addition of liquid cooling in 1985, a new frame with improved suspension in 1987, engine updates in 1988 and reversed forks in 1990 highlight these revisions.
Kawasaki would take several years to produce a competitive bike, and when it finally did, it would find the KX500 winning in events for which it was not initially designed. Its first victories came in 1986, when Donnie Griewe won two National Hare and Hound events. These only foreshadowed Kawasaki's dominance in distance and desert racing, which exploded in the 1990s.
In 1989 Kawasaki captured its first National MX championship on a KX500, repeating in '90 and '92 before the series was discontinued at the close of the '93 season. But that would not be the end of the KX500's racing legacy.
Beginning in 1992, the KX500 took the victory in every Desert Nationals race through 1995, and continued to win the Desert Nationals Championship every year through 2001. It further cemented its legendary dominance in desert racing as the victor's bike at the Baja 1000 from 1988 through 1996.
|
Extract from the text the events won by the Kawasaki KX 500 motorcycle, and separate them by a comma.
|
National Hare and Hound, National MX championship, Desert Nationals, Baja 1000
|
null | false
| null |
Give me a list of the ten best rock guitarists of all time
|
Jimi Hendrix, Stevie Ray Vaughan, Jimmy Page, Eddie Van Halen, Eric Clapton, Jeff Beck, David Gilmour, Brian May, George Harrison, Randy Rhoads
|
|
null | false
| 161
|
In the present work, we propose FriendsBERT and ChatBERT for the multi-utterance emotion recognition task on the EmotionLines dataset. The proposed models are adapted from BERT BIBREF5 with three main improvements to the model training procedure: a causal utterance modeling mechanism, specific model pre-training, and an adapted weighted loss. Causal utterance modeling takes advantage of sentence-level context information during model inference. Specific model pre-training helps counteract bias across different text domains. The weighted loss prevents our model from predicting only the large classes. The effectiveness and generalizability of the proposed methods are demonstrated in the experiments.
In future work, we plan to include the conditional probabilistic constraint $P ({\rm Emo}_{B} | \hat{\rm Emo}_{A})$: the model should predict the emotion based on some understanding of the context emotions, which may be a more reasonable guide than predicting the emotion of ${\rm Sentence}_B$ directly. In addition, due to the limitations of the BERT input format, supporting an arbitrary number of input sentences is now an important design requirement for our future work. Personality embeddings are another future direction for emotion recognition: a personality embedding, injected into the word embeddings like a sentence embedding, could potentially contribute some additional improvement.
The proposed models are adapted from BERT [Devlin et al., 2018] with three main improvements to the model training procedure: a causal utterance modeling mechanism, specific model pre-training, and an adapted weighted loss. Causal utterance modeling takes advantage of sentence-level context information during model inference. Specific model pre-training helps counteract bias across different text domains. The weighted loss prevents our model from predicting only the large classes. The effectiveness and generalizability of the proposed methods are demonstrated in the experiments.
|
What conclusion did the author draw from the experiment?
|
The effectiveness and generalizability of the proposed methods are demonstrated in the experiments.
|
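The "weighted loss" idea above is commonly implemented as class-weighted cross-entropy. A minimal PyTorch sketch, assuming hypothetical per-class counts and an inverse-frequency weighting (the paper does not publish its exact scheme):

```python
import torch
import torch.nn as nn

# Hypothetical label counts for four emotion classes (illustrative numbers):
counts = torch.tensor([5000., 1200., 800., 300.])  # e.g. neutral, joy, sadness, anger
weights = counts.sum() / (len(counts) * counts)    # inverse-frequency class weights

loss_fn = nn.CrossEntropyLoss(weight=weights)      # rare classes cost more to miss
logits = torch.randn(8, 4)                         # (batch, num_classes) dummy scores
labels = torch.randint(0, 4, (8,))
print(loss_fn(logits, labels))
```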
null | false
| 184
|
Given a piece of text and the span of an entity mention in this text, fine-grained entity typing (FET) is the task of assigning fine-grained type labels to the mention BIBREF0. The assigned labels should be context dependent BIBREF1. For example, in the sentence “Trump threatens to pull US out of World Trade Organization,” the mention “Trump” should be labeled as /person and /person/politician, although Donald Trump also had other occupations such as businessman, TV personality, etc.
This task is challenging because it usually uses a relatively large tag set, and some mentions may require the understanding of the context to be correctly labeled. Moreover, since manual annotation is very labor-intensive, existing approaches have to rely on distant supervision to train models BIBREF0, BIBREF2.
Thus, the use of extra information to help with the classification process becomes very important. In this paper, we improve FET with entity linking (EL). EL is helpful for a model to make typing decisions because if a mention is correctly linked to its target entity, we can directly obtain the type information about this entity in the knowledge base (KB). For example, in the sentence “There were some great discussions on a variety of issues facing Federal Way,” the mention “Federal Way” may be incorrectly labeled as a company by some FET models. Such a mistake can be avoided after linking it to the city Federal Way, Washington. For cases that require an understanding of the context, using entity linking results is also beneficial. In the aforementioned example where “Trump” is the mention, obtaining all the types of Donald Trump in the knowledge base (e.g., politician, businessman, TV personality, etc.) is still informative for inferring the correct type (i.e., politician) that fits the context, since they narrow the possible labels down.
However, the information obtained through EL should not be fully trusted since it is not always accurate. Even when a mention is correctly linked to an entity, the type information of this entity in the KB may be incomplete or outdated. Thus, in this paper, we propose a deep neural fine-grained entity typing model that flexibly predicts labels based on the context, the mention string, and the type information from KB obtained with EL.
Using EL also introduces a new problem for the training process. Currently, a widely used approach to creating FET training samples is to use the anchor links in Wikipedia BIBREF0, BIBREF3. Each anchor link is regarded as a mention and is weakly labeled with all the types that its referred entity (the Wikipedia page the anchor link points to) has in the KB. Our approach, when it links the mention correctly, also uses all the types of the referred entity in the KB as extra information. This may cause the trained model to overfit the weakly labeled data. We design a variant of the hinge loss and introduce noise during training to address this problem.
We conduct experiments on two commonly used FET datasets. Experimental results show that introducing information obtained through entity linking and having a deep neural model both helps to improve FET performance. Our model achieves more than 5% absolute strict accuracy improvement over the state of the art on both datasets.
Our contributions are summarized as follows:
We propose a deep neural fine-grained entity typing model that utilizes type information from KB obtained through entity linking.
We address the problem that our model may overfit the weakly labeled data by using a variant of the hinge-loss and introducing noise during training.
We demonstrate the effectiveness of our approach with experimental results on commonly used FET datasets.
Our code is available at https://github.com/HKUST-KnowComp/IFETEL.
We conduct experiments on two commonly used FET datasets.
|
What dataset does the paper use for experiments?
|
Two commonly used FET datasets.
|
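A generic sketch of the kind of margin-based loss the paper's "variant of the hinge loss" builds on, for multi-label typing: gold types should score above a margin and all other types below its negative. This is a standard formulation written in PyTorch, not the authors' exact variant.

```python
import torch

def typing_hinge_loss(scores, targets, margin=1.0):
    """scores: (mentions, type_vocab) raw scores; targets: same shape, 0/1.
    Penalize gold types scoring below +margin and non-gold types above -margin."""
    pos = torch.clamp(margin - scores, min=0) * targets        # gold types too low
    neg = torch.clamp(margin + scores, min=0) * (1 - targets)  # other types too high
    return (pos + neg).sum(dim=1).mean()

scores = torch.randn(4, 10)   # 4 mentions, 10 fine-grained types
targets = torch.zeros(4, 10)
targets[:, 0] = 1             # e.g. /person as the gold type for every mention
print(typing_hinge_loss(scores, targets))
```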
null | false
| null |
What causes an airplane wing to fly?
|
As Bernoulli's principle explains, because of the wing's shape, air moving across the curved upper surface of the wing travels faster and thus produces less pressure than the air moving more slowly along the flatter bottom surface. This imbalance of high and low pressure results in a force called lift, which is applied in the direction of the lower pressure.
|
|
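The pressure-difference explanation above is usually summarized quantitatively by the standard lift equation from textbook aerodynamics; this is a general formula, not something derived in the passage itself.

```latex
% L   — lift force,           \rho — air density,
% v   — airspeed,             S    — wing area,
% C_L — lift coefficient (depends on wing shape and angle of attack)
L = \tfrac{1}{2}\,\rho\,v^{2}\,S\,C_{L}
```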
null | false
| null |
What are the words of House Marbrand?
|
"Burning Bright"
|
|
null | false
| null |
Give me a simpler way to tell someone that Layli and Majnun is Persian literature that borrows storytelling techniques from Varqeh and Golshah, Orwa wa Afra, Vameq and Ozra, Metiochus and Parthenope, and Vis and Ramin.
|
Layli and Majnun is based on several prior romantic tales, such as Varqeh and Golshah, Orwa wa Afra, Vameq and Ozra, Metiochus and Parthenope, and Vis and Ramin.
|
|
null | false
| null |
Cabernet Sauvignon (French: [kabɛʁnɛ soviɲɔ̃]) is one of the world's most widely recognized red wine grape varieties. It is grown in nearly every major wine producing country among a diverse spectrum of climates from Australia and British Columbia, Canada to Lebanon's Beqaa Valley. Cabernet Sauvignon became internationally recognized through its prominence in Bordeaux wines, where it is often blended with Merlot and Cabernet Franc. From France and Spain, the grape spread across Europe and to the New World where it found new homes in places like California's Santa Cruz Mountains, Paso Robles, Napa Valley, New Zealand's Hawke's Bay, South Africa's Stellenbosch region, Australia's Margaret River, McLaren Vale and Coonawarra regions, and Chile's Maipo Valley and Colchagua. For most of the 20th century, it was the world's most widely planted premium red wine grape until it was surpassed by Merlot in the 1990s. However, by 2015, Cabernet Sauvignon had once again become the most widely planted wine grape, with a total of 341,000 hectares (3,410 km2) under vine worldwide.
|
Where is Cabernet Sauvignon grown?
|
France, Spain, California, New Zealand, South Africa, Australia, Chile, Canada
|
2002.00652
| false
| null |
In this paper, we try to fulfill the above insufficiency via an exploratory study on real-world semantic parsing in context. Concretely, we present a grammar-based decoding semantic parser and adapt typical context modeling methods on top of it. Through experiments on two large complex cross-domain datasets, SParC BIBREF2 and CoSQL BIBREF6, we carefully compare and analyze the performance of different context modeling methods. Our best model achieves state-of-the-art (SOTA) performances on both datasets with significant improvements. Furthermore, we summarize and generalize the most frequent contextual phenomena, with a fine-grained analysis on representative models. Through the analysis, we obtain some interesting findings, which may benefit the community on the potential research directions. We will open-source our code and materials to facilitate future work upon acceptance.
Through experiments on two large complex cross-domain datasets, SParC BIBREF2 and CoSQL BIBREF6, we carefully compare and analyze the performance of different context modeling methods.
|
What are two datasets models are tested on?
|
The answers are shown as follows:
* SParC BIBREF2 and CoSQL BIBREF6
|
null | false
| null |
Who is Robert Kiyosaki?
|
Robert Kiyosaki is an American entrepreneur and author who became famous primarily due to his book Rich Dad Poor Dad, a book which many refer to as "the bible of financial literacy". He has acquired much of his wealth through investing in real estate successfully and teaching others financial skills that are not taught in typical school systems.
|
|
null | false
| null |
Identify which car manufacturer is Japanese or American: Lila, Lincoln
|
Lila is Japanese, Lincoln is American
|
|
null | false
| 36
|
We evaluate our model on two publicly available datasets. The statistics for both are shown in Table TABREF3 . The details of these datasets are as follows:
OntoNotes: OntoNotes 5.0 BIBREF16 includes texts from five different text genres: broadcast conversation (200k), broadcast news (200k), magazine (120k), newswire (625k), and web data (300k). This dataset is annotated with 18 categories.
Wiki(gold): The training data consists of Wikipedia sentences and was automatically generated using a distant supervision method, mapping hyperlinks in Wikipedia articles to Freebase, which we do not use in this study. The test data, mainly consisting of sentences from news reports, was manually annotated as described in BIBREF8 . The class hierarchy is shown in Figure FIGREF2 . This dataset is annotated with 7 main categories (bold text in Figure FIGREF2 ), which maps directly to OntoNotes. The miscellaneous category in Figure FIGREF2 does not have direct mappings, so future work may include redefining these categories so the mappings are more meaningful.
We evaluate our model on two publicly available datasets.
|
What kind of datasets are used to evaluate the authors' model?
|
Two publicly available datasets.
|
null | false
| 200
|
Recent studies have shown the vulnerability of ML models to adversarial attacks, small perturbations which lead to misclassification of inputs. Adversarial example generation in NLP BIBREF0 is more challenging than in common computer vision tasks BIBREF1, BIBREF2, BIBREF3 due to two main reasons: the discrete nature of input space and ensuring semantic coherence with the original sentence. A major bottleneck in applying gradient based BIBREF4 or generator model BIBREF5 based approaches to generate adversarial examples in NLP is the backward propagation of the perturbations from the continuous embedding space to the discrete token space.
Recent works for attacking text models rely on introducing errors at the character level in words BIBREF6, BIBREF7 or adding and deleting words BIBREF8, BIBREF9, BIBREF10, etc. for creating adversarial examples. These techniques often result in adversarial examples which are unnatural looking and lack grammatical correctness, and thus can be easily identified by humans.
TextFooler BIBREF11 is a black-box attack, that uses rule based synonym replacement from a fixed word embedding space to generate adversarial examples. These adversarial examples do not account for the overall semantics of the sentence, and consider only the token level similarity using word embeddings. This can lead to out-of-context and unnaturally complex replacements (see Table ), which can be easily identifiable by humans.
The recent advent of powerful language models BIBREF12, BIBREF13 in NLP has paved the way for using them in various downstream applications. In this paper, we present a simple yet novel technique: BAE (BERT-based Adversarial Examples), which uses a language model (LM) for token replacement to best fit the overall context. We perturb an input sentence by either replacing a token or inserting a new token in the sentence, by means of masking a part of the input and using a LM to fill in the mask (see Figure FIGREF1). BAE relies on the powerful BERT masked LM to ensure grammatical correctness of the adversarial examples. Our attack beats the previous baselines by a large margin and confirms the inherent vulnerabilities of modern text classification models to adversarial attacks. Moreover, BAE produces richer and more natural-looking adversarial examples, as it uses the semantics learned by a LM.
To the best of our knowledge, we are the first to use a LM for adversarial example generation. We summarize our major contributions as follows:
We propose BAE, a novel strategy for generating natural looking adversarial examples using a masked language model.
We introduce 4 BAE attack modes, all of which are almost always stronger than previous baselines on 7 text classification datasets.
We show that, surprisingly, just a few replace/insert operations can reduce the accuracy of even a powerful BERT-based classifier by over $80\%$ on some datasets.
We introduce 4 BAE attack modes, all of which are almost always stronger than previous baselines on 7 text classification datasets.
|
How many BAE attack modes do the authors introduce?
|
4.
|
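The core BAE operation, masking a token and asking a BERT masked LM for context-aware replacements, can be sketched with the Hugging Face fill-mask pipeline. The full attack additionally filters candidates and queries the target classifier; this shows only the LM replacement step, on an illustrative sentence.

```python
from transformers import pipeline

# A BERT masked LM proposes replacements that fit the sentence context.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

sentence = "the service at this restaurant was [MASK]."
for candidate in unmasker(sentence)[:3]:
    print(candidate["token_str"], round(candidate["score"], 3))
```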
null | false
| 133
|
We treat document quality assessment as a classification problem, i.e., given a document, we predict its quality class (e.g., whether an academic paper should be accepted or rejected). The proposed model is a joint model that integrates visual features learned through Inception V3 with textual features learned through a biLSTM. In this section, we present the details of the visual and textual embeddings, and finally describe how we combine the two. We return to discuss hyper-parameter settings and the experimental configuration in the Experiments section.
We optimize our model based on cross-entropy loss.
|
How to optimize the joint model by the authors?
|
They optimize the model based on cross-entropy loss.
|
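A minimal sketch of the joint architecture described above: concatenate a visual embedding (e.g. Inception V3 pooled features) with the final biLSTM states over the document text, then classify with cross-entropy loss. All dimensions and names are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class JointQualityClassifier(nn.Module):
    def __init__(self, vocab=20000, emb=128, hidden=256, visual_dim=2048, classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.bilstm = nn.LSTM(emb, hidden, bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * hidden + visual_dim, classes)

    def forward(self, tokens, visual_feats):
        _, (h, _) = self.bilstm(self.embed(tokens))
        text_feats = torch.cat([h[-2], h[-1]], dim=1)  # forward + backward final states
        return self.classifier(torch.cat([text_feats, visual_feats], dim=1))

model = JointQualityClassifier()
logits = model(torch.randint(0, 20000, (2, 50)),  # two documents of 50 tokens
               torch.randn(2, 2048))              # stand-in Inception V3 features
loss = nn.CrossEntropyLoss()(logits, torch.tensor([1, 0]))  # accept / reject labels
```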
1702.03274
| true
| null |
Recently, end-to-end approaches have trained recurrent neural networks (RNNs) directly on text transcripts of dialogs. A key benefit is that the RNN infers a latent representation of state, obviating the need for state labels. However, end-to-end methods lack a general mechanism for injecting domain knowledge and constraints. For example, simple operations like sorting a list of database results or updating a dictionary of entities can be expressed in a few lines of software, yet may take thousands of dialogs to learn. Moreover, in some practical settings, programmed constraints are essential; for example, a banking dialog system would require that a user is logged in before they can retrieve account information.
A key benefit is that the RNN infers a latent representation of state, obviating the need for state labels.
|
Does the latent dialogue state heklp their model?
|
Yes.
|
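The "few lines of software" contrast above is easy to make concrete: the operations below are trivial to program yet slow for an end-to-end RNN to learn from dialogs alone. Names and values are hypothetical.

```python
# Sort database results by price (one line):
results = [{"name": "Blue Fin", "price": 48}, {"name": "Rose Garden", "price": 22}]
results.sort(key=lambda r: r["price"])

# Update a dictionary of tracked entities (one line each):
entities = {}
entities["cuisine"] = "sushi"
entities["logged_in"] = False

# A programmed constraint, as in the banking example:
def can_show_account(state):
    return state.get("logged_in", False)
```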
null | false
| null |
The Kushan Empire was a syncretic empire, formed by the Yuezhi in the Bactrian territories in the early 1st century. It spread to encompass much of the modern-day territory of Uzbekistan, Afghanistan, Pakistan and Northern India. The Sasanian or Sassanid Empire, officially known as Eranshahr, was the last Iranian empire before the early Muslim conquests of the 7th–8th centuries AD. Named after the House of Sasan, it endured for over four centuries, from 224 to 651 AD, making it the longest-lived Persian imperial dynasty.
|
What's the relation between the Kushan Empire and the Sassanid Empire in Iran?
|
Kushano-Sasanian Kingdom (also called Kushanshahs, or Koshano Shao in Bactrian, or Indo-Sasanians) is a historiographic term used by modern scholars to refer to a branch of the Sasanian Persians who established their rule in Bactria during the 3rd and 4th centuries CE at the expense of the declining Kushans. They captured the provinces of Sogdiana, Bactria and Gandhara from the Kushans in 225 CE. The Sasanians established governors for the Sasanian Empire, who minted their own coinage and took the title of Kushanshas, i.e. "Kings of the Kushans". They are sometimes considered as forming a "sub-kingdom" inside the Sassanid Empire.
|
null | false
| null |
Write a story about getting stuck on a train
|
The subway suddenly stops. I groan in frustration. I have been waiting for months to go to this particular art exhibit that happens once a year and now I’m going to be late. I tap my foot out of impatience. Minutes pass by and the train becomes loud with angry passengers.
I slip away to the emptier part of the train. I need to call my friends stating I will be late. I search for my phone in my bag but I could not find it. I think I forgot it at home. I look around to see if I could borrow anyone's phone. I walk up to the person in the seat across from me.
“Excuse me, is there any chance I could borrow your phone?” I ask.
“There's no use calling anyone down here. There’s no signal,” she responds.
“Thank you anyway,” I say and walk back to my seat in defeat. I look out the window to see that the conductor's part of the subway is gone. I start to panic, my mind looking at all the worst possible scenarios. Did anyone else notice? I sit in silence for a few minutes not knowing what will happen next.
“Where’s the conductor's car?” Someone yells out. The entire subway goes crazy. I continue to look out the window to block everyone out. I am lost in my daydreaming until I see a crack appear on the wall. I jump up. The crack begins to spread and spread.
Water starts to pour out, suddenly I’m not the only one who noticed it anymore. Everyone is stunned. Clearly thinking of the same thing I’m thinking of. We are going to drown.
I watch in fear as the water spreads more quickly, and it soon covers the subway floor. I manage to hop on a chair in time, but others aren't as lucky as me.
The water begins to reach up to the chairs; it seems to never stop pouring in. I then see another light: it's firemen in a special train car. I am close to the exit door, so I am one of the first ones evacuated.
|
|
1710.01507
| false
| null |
One common point in all the approaches yet has been the use of only textual features available in the dataset. Our model not only incorporates textual features, modeled using BiLSTM and augmented with an attention mechanism, but also considers related images for the task.
One common point in all the approaches yet has been the use of only textual features available in the dataset. Our model not only incorporates textual features, modeled using BiLSTM and augmented with an attention mechanism, but also considers related images for the task.
|
What are the differences with previous applications of neural networks for this task?
|
This approach considers related images
|
null | false
| 356
|
In this work, we propose a new evaluation framework for highly multilingual document classification which significantly extends the current state of the art. We continue to use Reuters Corpus Volume 2, but based on the above-mentioned limitations of the current subset of RCV2, we propose new tasks for cross-lingual document classification. The design choices are as follows:
Uniform class coverage: we sample from RCV2 the same number of examples for each class and language;
Split the data into train, development and test corpora: for each language, we provide training data of different sizes (1k, 2k, 5k and 10k stories), a development corpus (1k) and a test corpus (4k);
Support more languages: German (DE), English (EN), Spanish (ES), French (FR), Italian (IT), Japanese (JA), Russian (RU) and Chinese (ZH). Reference baseline results are available for all languages.
Most works in the literature use only 1 000 examples to train the document classifier. To investigate the impact of more training data, we also provide training corpora of 2 000, 5 000 and 10 000 documents. The development corpus for each language is composed of 1 000 documents and the test set of 4 000 documents, respectively. All have uniform class distributions. An important aspect of this work is to provide a framework to study and evaluate cross-lingual document classification for many language pairs. In that spirit, we name this corpus the “Multilingual Document Classification Corpus”, abbreviated as MLDoc. The full Reuters Corpus Volume 2 has a special license and we cannot distribute it ourselves. Instead, we provide tools to extract all the subsets of MLDoc at https://github.com/facebookresearch/MLDoc.
Most works in the literature use only 1 000 examples to train the document classifier. To investigate the impact of more training data, we also provide training corpora of 2 000, 5 000 and 10 000 documents.
|
How many documents do the authors provide to invest the impact of more training data?
|
The authors also provide training corpora of 2 000, 5 000 and 10 000 documents.
|
null | false
| null |
Blockchains are typically managed by a peer-to-peer (P2P) computer network for use as a public distributed ledger, where nodes collectively adhere to a consensus algorithm protocol to add and validate new transaction blocks. Although blockchain records are not unalterable, since blockchain forks are possible, blockchains may be considered secure by design and exemplify a distributed computing system with high Byzantine fault tolerance. A blockchain was created by a person (or group of people) using the name (or pseudonym) Satoshi Nakamoto in 2008 to serve as the public distributed ledger for bitcoin cryptocurrency transactions, based on previous work by Stuart Haber, W. Scott Stornetta, and Dave Bayer. The implementation of the blockchain within bitcoin made it the first digital currency to solve the double-spending problem without the need of a trusted authority or central server. The bitcoin design has inspired other applications and blockchains that are readable by the public and are widely used by cryptocurrencies. The blockchain may be considered a type of payment rail.
|
Given this paragraph about blockchain, how did blockchain influence bitcoin cryptocurrency?
|
Blockchain is the technology that enables the existence of cryptocurrency, and bitcoin is the name of the best-known cryptocurrency. The implementation of bitcoin using blockchain made it the first digital currency to solve the double-spending problem without the need for a trusted authority or central server. Bitcoin has inspired other applications and blockchains that are highly available, public, and widely used by cryptocurrencies.
|
null | false
| null |
What kind of species is the Epophthalmia
|
Epophthalmia is a genus of dragonflies in the family Macromiidae.
|
|
null | false
| null |
What is the best skill in judo?
|
Judo is a martial art; consistency and dedication should be your doctrine if you want to succeed. In Japan, you could stay a white belt for ten years before reaching the first dan grade, which stands for black belt level 1. This could be a bit disappointing by common standards, so in the rest of the world, a system of belt colors has been created to encourage practitioners and demonstrate progress.
Anyone can practice, at any age. Strength is important in competition, but the real spirit is to rely on the strength of the opponent, breaking balance with swift movements and using your body to block, push and rotate, in endless combinations of postures, concluding the movement with a clean and controlled projection.
|
|
1709.02271
| false
| null |
To further study the information encoded in the discourse embeddings, we perform t-SNE clustering BIBREF20 on them, using the best performing model CNN2-DE (global). We examine the closest neighbors of each embedding, and observe that similar discourse relations tend to go together (e.g., explanation and interpretation; consequence and result). Some examples are given in Table TABREF29 . However, it is unclear how this pattern helps improve classification performance. We intend to investigate this question in future work.
To further study the information encoded in the discourse embeddings, we perform t-SNE clustering BIBREF20 on them, using the best performing model CNN2-DE (global).
|
How are discourse embeddings analyzed?
|
They perform t-SNE clustering to analyze discourse embeddings
|
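The t-SNE step described above is a one-liner with scikit-learn; the sketch below uses random vectors as stand-ins for the discourse embeddings, so shapes and parameters are illustrative.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 128))  # (instances, embedding dim) stand-in

# Project to 2-D, then inspect each point's nearest neighbors or plot with labels.
coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(embeddings)
print(coords.shape)                       # (200, 2)
```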
null | false
| null |
There are five phases in a project lifecycle, known as process groups. Each process group represents a series of inter-related processes that manage the work through a series of distinct steps to be completed. This type of project approach is often referred to as "traditional" or "waterfall". The five process groups are:
Typical development phases of an engineering project
Initiating
Planning
Executing
Monitoring and Controlling
Closing
Some industries may use variations of these project stages and rename them to better suit the organization. For example, when working on a brick-and-mortar design and construction, projects will typically progress through stages like pre-planning, conceptual design, schematic design, design development, construction drawings (or contract documents), and construction administration.
While the phased approach works well for small, well-defined projects, it often results in challenges or failure on larger projects, or on those that are more complex or have more ambiguities, issues, and risks.
|
What is the project management approach that has a series of distinct steps?
|
The waterfall (sometimes called traditional) approach is the project management approach that manages work through a series of distinct, inter-related steps.
|
null | false
| null |
American Fantasy Press is a science fiction/fantasy/horror specialty press owned and operated by Robert T. Garcia and Nancy Garcia. Located in Woodstock, Illinois, the press has published: The first U.S. hardcover edition of Dennis Etchison's Darkside (A 10th anniversary edition); the chapbook edition of The Man on the Ceiling by Steve Rasnic and Melanie Tem, the chapbook of A Walking Tour of the Shambles by Neil Gaiman and Gene Wolfe, the first U.S. hardcover edition of Michael Moorcock's The Dreamthief's Daughter (a signed, limited edition), the chapbook The Broecker Sampler, featuring the artwork of Randy Broecker, a broadsheet of Jo Fletcher's poem Midnight Monster illustrated by Gahan Wilson, and the first hardcover edition of the Mary Frances Zambreno's short story collection Invisible Pleasures. It published the fantasy short story collection by Mike Resnick, Stalking the Zombie which features his hard-boiled detective John J. Mallory in an alternate Manhattan. In 2015, the press published The Collectors' Book of Virgil Finlay, by Robert Weinberg, Douglas Ellis and AF publisher Robert Garcia.
|
What kind of publisher is the American Fantasy Press
|
American Fantasy Press is a science fiction/fantasy/horror specialty press owned and operated by Robert T. Garcia and Nancy Garcia.
Located in Woodstock, Illinois, the press has published: The first U.S. hardcover edition of Dennis Etchison's Darkside (A 10th anniversary edition); the chapbook edition of The Man on the Ceiling by Steve Rasnic and Melanie Tem, the chapbook of A Walking Tour of the Shambles by Neil Gaiman and Gene Wolfe, the first U.S. hardcover edition of Michael Moorcock's The Dreamthief's Daughter (a signed, limited edition), the chapbook The Broecker Sampler, featuring the artwork of Randy Broecker, a broadsheet of Jo Fletcher's poem Midnight Monster illustrated by Gahan Wilson, and the first hardcover edition of the Mary Frances Zambreno's short story collection Invisible Pleasures.
It published the fantasy short story collection by Mike Resnick, Stalking the Zombie which features his hard-boiled detective John J. Mallory in an alternate Manhattan. In 2015, the press published The Collectors' Book of Virgil Finlay, by Robert Weinberg, Douglas Ellis and AF publisher Robert Garcia.
|
null | false
| 5
|
We train the LangID and NoLangID versions of our model each on three subsets of the Wiktionary data:
LangID-High and NoLangID-High: Trained on data from the 85 languages for which BIBREF13 used non-adapted wFST models.
LangID-Adapted and NoLangID-Adapted: Trained on data from any of the 229 languages for which they built adapted models. Because many of these languages had no training data at all, the model is actually only trained on data in 157 languages. As is noted above, the Adapted set omits 23 languages which are in the High test set.
LangID-All and NoLangID-All: Trained on data in all 311 languages in the Wiktionary training corpus.
In order to ease comparison to Deri and Knight's system, we limited our use of the training corpus to 10,000 words per language. We set aside 10 percent of the data in each language for validation, so the maximum number of training words for any language is 9000 for our systems.
LangID-Adapted and NoLangID-Adapted: Trained on data from any of the 229 languages for which they built adapted models. Because many of these languages had no training data at all, the model is actually only trained on data in 157 languages. As is noted above, the Adapted set omits 23 languages which are in the High test set.
|
How many languages are involved in the training of the LangID-Adapted and NoLangID-Adapted models?
|
157 languages.
|
null | false
| null |
What happens when you bring flammable objects closer to a fire source or extreme heat?
|
Flammable objects can ignite when placed on or near a fire source or extreme heat. It's recommended to keep flammable objects far away from heat sources.
|
|
null | false
| null |
Give me a list of 3 hobbies that are good for your health and inexpensive
|
1. Long-distance running - Long-distance running is good for heart health and also mental health.
2. Hiking - Going for a hike can be a great way to clear your head and improve your fitness. Hiking is almost always free, but some parks can have an entry fee.
3. Walking - Not only is walking good for your heart, it can also help you manage stress.
|
|
null | false
| 349
|
This section demonstrates how we constructed our new Vietnamese dataset. The dataset consists of 3,850 images relating to sports played with balls from the 2017 edition of Microsoft COCO. As in most image captioning datasets, we provide five Vietnamese captions for each image, summing to 19,250 captions in total.
The dataset consists of 3,850 images relating to sports played with balls from the 2017 edition of Microsoft COCO.
|
Where are the images relating to sports from?
|
They are from the 2017 edition of Microsoft COCO.
|
null | false
| null |
Forest Park in Springfield, Massachusetts, is one of the largest urban, municipal parks in the United States, covering 735 acres (297 ha) of land overlooking the Connecticut River. Forest Park features a zoo, aquatic gardens, and outdoor amphitheater, in addition to design elements like winding wooded trails, and surprising, expansive views. The site of America's first public, municipal swimming pool, currently, during the holiday months Forest Park hosts a popular high-tech lighting display, known as Bright Nights. Contrary to popular belief, the park was not designed by Frederick Law Olmsted.
|
What is Forest Park (Springfield, MA)?
|
Forest Park is an urban park in Springfield, MA. It covers 735 acres and overlooks the Connecticut River. The park features a zoo, aquatic gardens, an outdoor amphitheater, as well as winding wooded trails and expansive views. Forest Park is the site of the first public municipal swimming pool in the US. During holiday months it hosts a high-tech lighting display known as Bright Nights.
|
null | false
| null |
What is the difference between a refractor telescope and a reflector telescope?
|
A refractor telescope uses lenses to focus the light into an image where a reflector telescope uses mirrors. Reflector telescopes are generally better for visual viewing of brighter celestial objects like the moon or planets and can collect more light due to their larger apertures. Refractor telescopes provide better contrast and image quality but collect less light which makes them more suitable for astrophotography where exposure time can be adjusted to compensate.
|
|
null | false
| null |
Is caffeine good for mental health?
|
Moderate caffeine intake, up to about 400 mg per day, is generally considered reasonable. Overconsumption may lead to insomnia, shakiness and palpitations. Many neuroscientists promote and encourage limiting consumption to around 150 mg of caffeine a day.
|
|
1809.05752
| false
| null |
Our current feature set for training a machine learning classifier is relatively small, consisting of paragraph domain scores, bag-of-words, length of stay, and number of previous admissions, but we intend to factor in many additional features that extend beyond the scope of the present study. These include a deeper analysis of clinical narratives in EHRs: our next task will be to extend our EHR data pipeline by distinguishing between clinically positive and negative phenomena within each risk factor domain. This will involve a series of annotation tasks that will allow us to generate lexicon-based and corpus-based sentiment analysis tools. We can then use these clinical sentiment scores to generate a gradient of patient improvement or deterioration over time.
|
What additional features are proposed for future work?
|
distinguishing between clinically positive and negative phenomena within each risk factor domain and accounting for structured data collected on the target cohort
|
null | false
| null |
What's the difference between a wolf and a dog?
|
A dog is a domesticated pet, while a wolf is a wild animal. They are both part of the same species and share many physical similarities, but at the same time they display very different behaviour.
|
|
null | false
| 156
|
We exploit the nonparametric Bayesian AUD framework in BIBREF10 based on variational inference, rather than the maximum likelihood training in BIBREF4, which may oversimplify the parameter estimations, or the Gibbs sampling training in BIBREF17, which is not amenable to large-scale applications. Specifically, a phone-loop model is formulated where each phoneme-like unit is modeled as an HMM with a Gaussian mixture model of output densities (GMM-HMM). Under the Dirichlet process framework, we consider the phone loop as an infinite mixture of GMM-HMMs, and the mixture weights are based on the stick-breaking construction of the Dirichlet process. The infinite number of units in the mixture is truncated in practice, giving zero mixture weight to any unit beyond some large count. We treat such a mixture of GMM-HMMs as a single unified HMM, and thus the segmentation of the data is performed using the standard forward-backward algorithm. Training is fully unsupervised and parallelized; after a fixed number of training iterations, we use the Viterbi decoding algorithm to obtain acoustic unit tokenizations of the data.
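For concreteness, below is a minimal sketch of the truncated stick-breaking construction that the phone-loop mixture relies on; the concentration parameter `alpha` and the truncation level are illustrative choices, not the paper's settings.

```python
import numpy as np

def stick_breaking_weights(alpha: float, truncation: int, seed: int = 0):
    """Truncated stick-breaking construction of Dirichlet-process mixture weights.

    Draw v_k ~ Beta(1, alpha) and set pi_k = v_k * prod_{j<k} (1 - v_j);
    the last break absorbs the remaining stick so the weights sum to 1,
    and units beyond the truncation level get zero weight.
    """
    rng = np.random.default_rng(seed)
    v = rng.beta(1.0, alpha, size=truncation)
    v[-1] = 1.0  # absorb the remaining stick mass at the truncation level
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])
    return v * remaining

weights = stick_breaking_weights(alpha=1.0, truncation=100)
print(weights[:5], weights.sum())  # sums to 1.0 by construction
```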
We exploit the nonparametric Bayesian AUD framework in [11] based on variational inference, rather than the maximum likelihood training in [5], which may oversimplify the parameter estimations, or the Gibbs sampling training in [19], which is not amenable to large-scale applications.
|
What framework do they exploit?
|
The nonparametric Bayesian AUD framework.
|
null | false
| null |
What are the top 7 cities to ride a bicycle?
|
Based on the Global Bicycle Index, the top 7 cities for riding a bike are:
1. Utrecht, Netherlands
2. Münster, Germany
3. Antwerp, Belgium
4. Copenhagen, Denmark
5. Amsterdam, Netherlands
6. Malmö, Sweden
7. Hangzhou, China
|
|
null | false
| null |
Give me a list of the best TV shows that I must watch.
|
Everyone has a different taste when it comes to TV shows, but here are some that you should watch:
- Planet Earth (2006)
- Breaking Bad (2008)
- Band of Brothers (2001)
- Chernobyl (2019)
- The Sopranos (1999)
- Game of Thrones (2011)
- Rick and Morty (2013)
- The Office (2005)
- The Last of Us (2023)
- Curb Your Enthusiasm (2000)
- Narcos (2015)
- Dragon Ball Z (1989)
- Rome (2005)
- The Mandalorian (2019)
- Mr. Bean (1990)
- The Queen's Gambit (2020)
|
|
null | false
| null |
Suyash Prabhudessai (born 6 December 1997), is an Indian cricketer from Goa who plays for Goa in domestic matches and for the Royal Challengers Bangalore in the Indian Premier League (IPL). He made his List A debut for Goa in the 2016–17 Vijay Hazare Trophy on 28 February 2017. He made his first-class debut for Goa in the 2018–19 Ranji Trophy on 20 November 2018. He made his Twenty20 debut for Goa in the 2018–19 Syed Mushtaq Ali Trophy on 22 February 2019.
In February 2021, Prabhudessai was bought by the Royal Challengers Bangalore in the IPL auction ahead of the 2021 Indian Premier League. In February 2022, he was bought by the Royal Challengers Bangalore in the auction for the 2022 Indian Premier League tournament.
|
Extract all unique clubs Suyash played for from the below text as a bulleted list.
|
* Goa
* Royal Challengers Bangalore
|
null | false
| null |
Meteor Crater is a popular tourist destination with roughly 270,000 visitors per year. The crater is owned by a family company, the Barringer Crater Company. Meteor Crater is an important educational and research site. It was used to train Apollo astronauts and continues to be an active training site for astronauts. The Meteor Crater Visitor Center sits on the north rim of the crater. It features interactive exhibits and displays about meteorites and asteroids, space, the Solar System, and comets including the American Astronaut Wall of Fame and such artifacts on display as an Apollo boilerplate command module (BP-29), a 1,406 lb (638 kg) meteorite found in the area, and meteorite specimens from Meteor Crater that can be touched. Formerly known as the Museum of Astrogeology, the Visitor Center includes a Discovery Center & Space Museum, a movie theater, a gift shop, and observation areas with views inside the rim of the crater. Guided tours of the rim are offered daily, weather permitting.
|
Give an example of places where astronauts are trained?
|
Meteor Crater, a popular tourist destination, is one of the places where astronauts are trained.
|
null | false
| null |
Think of a bunch of ways that you could clean a house.
|
You can use a vacuum or bring in a rug cleaner. You could dust extensively. You could power-wash the outside of the house. You could clean the bathrooms with bleach. You could scrub down the whole kitchen with cleaning supplies. You could paint the walls.
|
|
null | false
| 405
|
We use the PyHessian library (Yao et al., 2020) to calculate the Hessian eigenvalues and the Hessian Eigen Spectral Density. All the calculations are performed using 50% of the source data at the last checkpoint. Only the source-class loss is used in the calculation, to clearly illustrate our point. The partition was selected randomly, and the same partition was used across all the runs. We also made sure to use the same environment to run all the Hessian experiments. A subset of the data was used for the Hessian calculation mainly because it is computationally expensive (Yao et al., 2020); this is commonly done in Hessian experiments. For example, Chen et al. (2021) (see their Appendix D) use 10% of the training data for Hessian eigenvalue calculation. The PyHessian library uses the Lanczos algorithm (Ghorbani et al., 2019) to calculate the Eigen Spectral Density of the Hessian, and the Hutchinson method to calculate the trace of the Hessian efficiently.
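For reference, a minimal sketch of how these quantities can be computed with PyHessian is shown below. The toy model, batch, and loss are stand-ins for the checkpointed model and the source-class loss; the calls follow the library's public examples.

```python
import torch
import torch.nn as nn
from pyhessian import hessian  # pip install pyhessian

# Toy model and a single batch standing in for the trained checkpoint and the
# 50% source-data partition described above; shapes are illustrative.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
inputs = torch.randn(64, 10)
targets = torch.randint(0, 2, (64,))
criterion = nn.CrossEntropyLoss()  # stand-in for the source-class loss

hess = hessian(model, criterion, data=(inputs, targets), cuda=False)
top_eigenvalues, _ = hess.eigenvalues(top_n=1)   # power-iteration estimate
trace_estimates = hess.trace()                   # Hutchinson trace estimator
density_eigen, density_weight = hess.density()   # Lanczos spectral density
print(top_eigenvalues, sum(trace_estimates) / len(trace_estimates))
```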
|
Is it better to discuss the influence of the number of layers on the performance of the model?
|
We conduct a sensitivity analysis for the number of layers on the political blogs dataset in Appendix D; the performance of our method keeps increasing until the number of layers exceeds four.
|
null | false
| null |
Tell me whether these London tourist attractions are free or paid: The National Gallery, The London Eye, Royal Academy of Arts', Tower of London, St. Dunstan in the East
|
The National Gallery - free
The London Eye - paid
Royal Academy of Arts' - free
Tower of London - paid
St. Dunstan in the East - free
Victoria and Albert Museum - free
Science Gallery London - free
Barbican Conservatory - free
Madame Tussauds London - paid
Harry Potter Studios - paid
London Zoo - paid
|
|
null | false
| null |
Which golf tournaments are considered majors in the PGA?
|
The Masters, The U.S. Open, The Open Championship and the PGA Championship
|
|
null | false
| null |
List of American films of the 1890s
|
List of American films of the 1890s:
1890
- Monkeyshines No. 1
- Monkeyshines No. 2
- Monkeyshines No. 3
1891
- Dickson Greeting
- Newark Athlete
1893
- Blacksmith Scene
1894
- The Dickson Experimental Sound Film
- The Barbershop
1896
- Rip's Twenty Years' Sleep
- Dancing Darkies
- McKinley at Home, Canton, Ohio
- The Kiss
1897
- The Corbett-Fitzsimmons Fight
- Peeping Tom
1899
- How Would You Like to Be the Ice Man?
|
|
null | false
| 4
|
Previous research BIBREF4, BIBREF20 formulates KB relation detection as a sequence matching problem. However, while the questions are natural word sequences, how to represent relations as sequences remains a challenging problem. Here we give an overview of two types of relation sequence representations commonly used in previous work.
(1) Relation Name as a Single Token (relation-level). In this case, each relation name is treated as a unique token. The problem with this approach is that it suffers from low relation coverage due to the limited amount of training data, and thus cannot generalize well to a large number of open-domain relations. For example, in Figure 1, when treating relation names as single tokens, it will be difficult to match the questions to the relation names “episodes_written” and “starring_roles” if these names do not appear in the training data – their relation embeddings $\mathbf {h}^r$ will be random vectors and thus not comparable to the question embeddings $\mathbf {h}^q$.
(2) Relation as Word Sequence (word-level). In this case, the relation is treated as a sequence of words from the tokenized relation name. It has better generalization, but suffers from a lack of global information from the original relation names. For example, in Figure 1 (b), when doing only word-level matching, it is difficult to rank the target relation “starring_roles” higher than the incorrect relation “plays_produced”. This is because the incorrect relation contains the word “plays”, which is more similar to the question (containing the word “play”) in the embedding space. On the other hand, if the target relation co-occurs with questions related to “tv appearance” in training, by treating the whole relation as a token (i.e., a relation id), we could better learn the correspondence between this token and phrases like “tv show” and “play on”.
The two types of relation representation contain different levels of abstraction. As shown in Table 1, the word-level focuses more on local information (words and short phrases), and the relation-level focuses more on global information (long phrases and skip-grams) but suffers from data sparsity. Since both levels of granularity have their own pros and cons, we propose a hierarchical matching approach for KB relation detection: for a candidate relation, our approach matches the input question to both word-level and relation-level representations to get the final ranking score. Section "Improved KB Relation Detection" gives the details of our proposed approach.
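As a toy illustration of the two granularities (not the paper's hierarchical matching architecture), the sketch below scores a question against each candidate relation at both the word level and the relation level and combines the two additively; all embeddings here are random stand-ins for learned ones.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
# Random stand-ins for learned word and relation embeddings.
word_emb = {w: rng.normal(size=dim) for w in
            ["starring", "roles", "plays", "produced", "episodes", "written"]}
rel_emb = {r: rng.normal(size=dim) for r in
           ["starring_roles", "plays_produced", "episodes_written"]}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def score(question_vec, relation):
    # word-level view: average embedding of the tokenized relation name
    word_vec = np.mean([word_emb[w] for w in relation.split("_")], axis=0)
    # relation-level view: one embedding for the whole relation name,
    # combined additively with the word-level similarity
    return cos(question_vec, word_vec) + cos(question_vec, rel_emb[relation])

q = rng.normal(size=dim)  # stand-in for the encoded question
for r in rel_emb:
    print(r, round(score(q, r), 3))
```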
Here we give an overview of two types of relation sequence representations commonly used in previous work. (1) Relation Name as a Single Token (relation-level). ****(2) Relation as Word Sequence (word-level).
|
What type of relation sequence representation is commonly used in previous work?
|
Usually there are two: Relation Name as a Single Token (relation-level) and Relation as Word Sequence (word-level).
|
null | false
| 16
|
For the task of unsupervised dependency parse induction, we employ the Dependency Model with Valence (DMV) BIBREF2 as the syntax model in our approach. DMV is a generative model that defines a probability distribution over dependency parse trees and syntactic categories, generating tokens and dependencies in a head-outward fashion. While, traditionally, DMV is trained using gold POS tags as observed syntactic categories, in our approach, we treat each tag as a latent variable, as described in sec:general-neural.
Most existing approaches to this task are not fully unsupervised since they rely on gold POS tags following the original experimental setup for DMV. This is partially because automatically parsing from words is difficult even when using unsupervised syntactic categories BIBREF29 . However, inducing dependencies from words alone represents a more realistic experimental condition since gold POS tags are often unavailable in practice. Previous work that has trained from words alone often requires additional linguistic constraints (like sentence internal boundaries) BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , acoustic cues BIBREF33 , additional training data BIBREF4 , or annotated data from related languages BIBREF34 . Our approach is naturally designed to train on word embeddings directly, thus we attempt to induce dependencies without using gold POS tags or other extra linguistic information.
Like previous work, we use sections 02–21 of the WSJ corpus as training data and evaluate on section 23. We remove punctuation and train the models on sentences of length INLINEFORM0; “head-percolation” rules BIBREF39 are applied to obtain gold dependencies for evaluation. We train basic DMV, extended DMV (E-DMV) BIBREF35 and Gaussian DMV (which treats the POS tag as an unknown latent variable and generates the observed word embeddings directly conditioned on it, following a Gaussian distribution) as baselines. Basic DMV and E-DMV are trained with Viterbi EM BIBREF40 on unsupervised POS tags induced from our Markov-structured model described in sec:pos. Multinomial parameters of the syntax model in both Gaussian DMV and our model are initialized with the pre-trained DMV baseline. Other parameters are initialized in the same way as in the POS tagging experiment. Directed dependency accuracy (DDA) is used for evaluation, and we report accuracy on sentences of length INLINEFORM1 and on all lengths. We train the parser until the training data likelihood converges, and report the mean and standard deviation over 20 random restarts.
Our model directly observes word embeddings and does not require gold POS tags during training. Thus, results from related work trained on gold tags are not directly comparable. However, to measure how these systems might perform without gold tags, we run three recent state-of-the-art systems in our experimental setting: UR-A E-DMV BIBREF36 , Neural E-DMV BIBREF11 , and CRF Autoencoder (CRFAE) BIBREF37 . We use unsupervised POS tags (induced from our Markov-structured model) in place of gold tags. We also train basic DMV on gold tags and include several state-of-the-art results on gold tags as reference points.
As shown in Table TABREF39 , our approach is able to improve over the Gaussian DMV by 4.8 points on length INLINEFORM0 and 4.8 points on all lengths, which suggests the additional latent embedding layer and neural projector are helpful. The proposed approach yields, to the best of our knowledge, state-of-the-art performance without gold POS annotation and without sentence-internal boundary information. DMV, UR-A E-DMV, Neural E-DMV, and CRFAE suffer a large decrease in performance when trained on unsupervised tags – an effect also seen in previous work BIBREF29 , BIBREF34 . Since our approach induces latent POS tags jointly with dependency trees, it may be able to learn POS clusters that are more amenable to grammar induction than the unsupervised tags. We observe that CRFAE underperforms its gold-tag counterpart substantially. This may largely be a result of the model's reliance on prior linguistic rules that become unavailable when gold POS tag types are unknown. Many extensions to DMV can be considered orthogonal to our approach – they essentially focus on improving the syntax model. It is possible that incorporating these more sophisticated syntax models into our approach may lead to further improvements.
Our model directly observes word embeddings and does not require gold POS tags during training. Thus, results from related work trained on gold tags are not directly comparable. However, to measure how these systems might perform without gold tags, we run three recent state-of-the-art systems in our experimental setting: UR-A E-DMV (Tu and Honavar, 2012), Neural E-DMV (Jiang et al., 2016), and CRF Autoencoder (CRFAE) (Cai et al., 2017). We use unsupervised POS tags (induced from our Markov-structured model) in place of gold tags. We also train basic DMV on gold tags and include several state-of-the-art results on gold tags as reference points.
|
What system is run in their experimental setting to measure how these systems might perform without gold tags?
|
UR-A E-DMV (Tu and Honavar, 2012), Neural E-DMV (Jiang et al., 2016), and CRF Autoencoder (CRFAE) (Cai et al., 2017).
|
null | false
| null |
What is magic the gathering?
|
Magic: The Gathering is a tabletop and digital collectible card game released in 1993 by Wizards of the Coast.
|
|
null | false
| null |
Bull sharks are large and stout, with females being larger than males. The bull shark can be up to 81 cm (2 ft 8 in) in length at birth. Adult female bull sharks average 2.4 m (8 ft) long and typically weigh 130 kg (290 lb), whereas the slightly smaller adult male averages 2.25 m (7 ft) and 95 kg (209 lb). While a maximum size of 3.5 m (11 ft) is commonly reported, a single record exists of a female specimen of exactly 4.0 m (13 ft). A 3.25 m (10.7 ft) long pregnant individual reached 450 kg (990 lb). Bull sharks are wider and heavier than other requiem sharks of comparable length, and are grey on top and white below. The second dorsal fin is smaller than the first. The bull shark's caudal fin is longer and lower than that of the larger sharks, and it has a small snout, and lacks an interdorsal ridge.
|
Given this paragraph about bull sharks, how long are adult female adult sharks in feet?
|
Adult female bull sharks average 8 feet long.
|
null | false
| null |
Identify which instrument is string or percussion: Daf, Bandura
|
Bandura is string, Daf is percussion.
|
|
null | false
| null |
Why are gaming conferences fun for everyone?
|
Gaming conferences can be fun for everyone because they are interactive and immersive experiences that go on for days. Gaming conferences allow you to meet new people from all walks of life and ages. People enjoy gaming with family, friends, and colleagues. There are creative people selling new gaming ideas, competitive people trying to beat their opponents with strategy and luck, and outgoing people who just like to talk and interact with others.
|
|
null | false
| 484
|
The original PPO implementation employs GAE for estimating the advantage, which is important for learning performance. Therefore, we employed two variants of GAE in the option-learning experiments, according to the experimental setup. In the following paragraphs, we let $N$ denote the rollout length used for advantage estimation and let $t+k$ denote the time step at which the current option $o_t$ terminates. Thus, we need to consider the effect of option switching in advantage estimation when $k < N$. Furthermore, we use two variants of the option-specific TD errors $\delta^o$.
Independent GAE for Reward-Free RL For no-reward experiments with VIC, we used the following variant of GAE:
Here, we ignore the future rewards produced by other options after the current option $o_t$ terminates. This formulation enhances learning diverse intra-option policies per option.
Upgoing GAE for task adaptation For single-task learning, increasing the rollout length N often speeds up learning. However, future rewards after option termination depend heavily on the selected option and have high variance, especially when learning diverse options. This high variance of future rewards slows advantage learning and causes underestimation of $\hat{A}^o$. Thus, to prevent underestimation, we introduce an upgoing GAE (UGAE) for estimating the advantage with options:
Like the upgoing policy update, the idea is to be optimistic regarding future rewards after option termination by taking the maximum over 0. We use $\hat{A}^o_{\mathrm{upg}}$ for the task adaptation experiments in Section 5, and confirmed its effectiveness in the ablation study in Appendix C.6.
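For concreteness, one plausible form of these two estimators, consistent with the descriptions above, is sketched below; this is a reading of the text under the stated notation (with $\gamma$ and $\lambda$ the usual discount and GAE parameters), not necessarily the paper's exact equations.

$$\hat{A}^{o}_{t} = \sum_{l=0}^{\min(k,N)-1} (\gamma\lambda)^{l}\, \delta^{o}_{t+l} \quad \text{(independent GAE)}$$

$$\hat{A}^{o}_{\mathrm{upg},t} = \sum_{l=0}^{\min(k,N)-1} (\gamma\lambda)^{l}\, \delta^{o}_{t+l} + \max\Bigl(0,\ \sum_{l=k}^{N-1} (\gamma\lambda)^{l}\, \delta_{t+l}\Bigr) \quad \text{(upgoing GAE)}$$

where the second sum in the upgoing variant is present only when the option terminates within the rollout ($k < N$).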
Thus, to prevent underestimation, we introduce an upgoing GAE (UGAE) for estimating advantage with options****Like the upgoing policy update (Vinyals et al., 2019), the idea is optimistic regarding future rewards after option termination by taking the maximum over 0.
|
On eq. 9, what is the assumption on the reward function to take the maximum over 0 as optimistic?
|
There, we take the maximum of 0 and an estimated advantage. Since advantages should converge to 0 in theory, we call taking the maximum over 0 optimistic.
|
null | false
| null |
Ebikewenimo Welson is a Nigerian freestyle wrestler. He is a four-time medalist at the Commonwealth Games and a two-time medalist at the African Games.
He won four medals at the Commonwealth Games: in 2010, he won the silver medal in the men's 55 kg event and he repeated this in 2014 with the silver medal in the men's 57 kg event. In 2018, he won one of the bronze medals in the men's 57 kg event. He won the silver medal in the men's 57 kg event at the 2022 Commonwealth Games.
He represented Nigeria at the 2019 African Games held in Rabat, Morocco and he won the silver medal in the men's 57 kg event.
He won the gold medal in the men's 57 kg event at the 2014 African Wrestling Championships and the silver medal in that event at the 2016 African Wrestling Championships. At the 2018 African Wrestling Championships held in Port Harcourt, Nigeria, he also won the silver medal in the men's 57 kg event. He was unable to compete at the 2019 African Wrestling Championships due to injury.
In 2021, he won a gold medal at the Baraza Champion of Champions wrestling tournament held in Yenagoa, Bayelsa State, Nigeria. A few months later, he competed at the 2021 African & Oceania Wrestling Olympic Qualification Tournament hoping to qualify for the 2020 Summer Olympics in Tokyo, Japan.
He won the silver medal in his event at the 2022 African Wrestling Championships held in El Jadida, Morocco. He won the silver medal in the men's 57 kg event at the 2022 Commonwealth Games held in Birmingham, England.
|
Extract the gold medal championships and the years they were won by Ebikewenimo Welson from the text. List them in the format {Championship} - {Year} in a comma-separated list.
|
African Wrestling Championships - 2014, Baraza Champion of Champions - 2021
|
1909.03242
| false
| null |
FLOAT SELECTED: Table 1: An example of a claim instance. Entities are obtained via entity linking. Article and outlink texts, evidence search snippets and pages are not shown.
|
What metadata is included?
|
Besides the claim, label and claim URL, it also includes a claim ID, reason, category, speaker, checker, tags, claim entities, article title, publish date and claim date.
|
null | false
| null |
What should you do when your rock climbing partner calls out "On Belay?"
|
1. Check that their harness is properly buckled at the waist and doubled back.
2. Ensure the leg straps are properly buckled and doubled back. Check that the leg loops are not twisted.
3. Ensure a figure-eight knot is tied at the end of the climbing rope, that it passes through both tie-in loops of the harness, and that a fisherman's knot is tied as a backup.
4. Check that your own harness is properly buckled around the waist, legs and that leg straps are not twisted.
5. Ensure the climbing rope is in your belay device, that a locking carabiner goes through both the belay device (and rope if an ATC device) and your belay loop on your harness. Ensure the carabiner is locked.
6. Ensure that you and your partner are on the same rope. Your partner should be on the side of the rope closest to the climbing wall.
7. After all safety checks are completed, respond with "Belay on!"
|
|
1712.02121
| true
| null |
Recently, convolutional neural networks (CNNs), originally designed for computer vision BIBREF27, have received significant research attention in natural language processing BIBREF28, BIBREF29. A CNN learns non-linear features to capture complex relationships with remarkably fewer parameters than fully connected neural networks. Inspired by this success in computer vision, BIBREF30 proposed ConvE, the first model applying a CNN to the KB completion task. In ConvE, only $v_h$ and $v_r$ are reshaped and then concatenated into an input matrix which is fed to the convolution layer. Different filters of the same $3\times 3$ shape are applied over the input matrix to output feature map tensors. These feature map tensors are then vectorized and mapped into a vector via a linear transformation. This vector is then combined with $v_t$ via a dot product to return a score for (h, r, t). See a formal definition of the ConvE score function in Table 1. It is worth noting that ConvE focuses on the local relationships among different dimensional entries in each of $v_h$ or $v_r$; i.e., ConvE does not observe the global relationships among same-dimensional entries of an embedding triple ($v_h$, $v_r$, $v_t$), so ConvE ignores the transitional characteristic of transition-based models, which is one of the most useful intuitions for the task.
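To make the data flow concrete, below is a minimal PyTorch-style sketch of this reshape-convolve-project-dot pipeline. The embedding dimension, reshape grid, and filter count are illustrative, and regularization details of the actual ConvE (dropout, batch normalization) are omitted.

```python
import torch
import torch.nn as nn

class ConvEScore(nn.Module):
    """Illustrative sketch of the ConvE scoring pipeline described above."""
    def __init__(self, dim=200, h=10, w=20, n_filters=32):
        super().__init__()
        assert h * w == dim, "embedding must reshape into an h x w grid"
        self.h, self.w = h, w
        self.conv = nn.Conv2d(1, n_filters, kernel_size=3)        # 3x3 filters
        self.fc = nn.Linear(n_filters * (2 * h - 2) * (w - 2), dim)

    def forward(self, v_h, v_r, v_t):
        # reshape v_h and v_r into 2D grids and stack them into one input matrix
        x = torch.cat([v_h.view(-1, 1, self.h, self.w),
                       v_r.view(-1, 1, self.h, self.w)], dim=2)
        x = torch.relu(self.conv(x))        # feature map tensors
        x = self.fc(x.flatten(1))           # vectorize + linear transformation
        return (x * v_t).sum(dim=1)         # dot product with the tail embedding

score = ConvEScore()
v = lambda: torch.randn(4, 200)
print(score(v(), v(), v()).shape)           # torch.Size([4])
```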
In ConvE, only $v_h$ and $v_r$ are reshaped and then concatenated into an input matrix which is fed to the convolution layer
|
Did the authors try stacking multiple convolutional layers?
|
No.
|
null | false
| 38
|
We consider the dataset consisting of the entire collection of articles of the Wikipedia Medicine Portal, updated at the end of 2014. Wikipedia articles are written in the MediaWiki markup language, an HTML-like language. Among the structural elements of a page that differ from standard HTML pages are: i) internal links, i.e., links to other Wikipedia pages, as opposed to links to external resources; ii) categories, which represent the MediaWiki categories a page belongs to (they are encoded in the part of the text within the MediaWiki “categories” tag in the page source); and iii) informative boxes, so-called “infoboxes”, which summarize in a structured manner some peculiar pieces of information related to the topic of the article. The category values for the articles in the medical portal span the ones listed at https://en.wikipedia.org/wiki/Portal:Medicine. Examples of categories, which appear at the bottom of each Wikipedia page, are in Fig. 1.
Infoboxes of the medical portal feature medical content and standard coding. As an example, Fig. 2 shows the infobox in the Alzheimer's disease page of the portal. The infobox contains explanatory figures and text denoting peculiar characteristics of the disease, as well as the value of its standard code (ICD-9, from the International Classification of Diseases).
Thanks to WikiProject Medicine, the dataset of articles we collected from the Wikipedia Medicine Portal has been manually labeled into seven quality classes. They are ordered as Stub, Start, C, B, A, Good Article (GA), Featured Article (FA). The Featured and Good article classes are the highest ones: to have those labels, an article requires a community consensus and an official review by selected editors, while the other labels can be achieved with reviews from a larger, even controlled, set of editors. Actually, none of the articles in the dataset is labeled as A, thus, in the following, we do not consider that class, restricting the investigation to six classes.
At the date of our study, we were able to gather 24,362 rated documents. Remarkably, only a small percentage of them (1%) is labeled as GA and FA. Indeed, the distribution of the articles among the classes is highly skewed. There are very few (201) articles for the highest quality classes (FA and GA), while the vast majority (19,108) belongs to the lowest quality ones (Stub and Start). This holds not only for the medical portal. Indeed, it is common in all Wikipedia, where, on average, only one article in every thousand is a Featured one.
In Section "Experiments and results", we will adopt a set of machine-learning classifiers to automatically label the articles into the quality classes. Dealing with imbalanced classes is a common situation in many real applications of classification learning: healthy patients over the population, fraudulent actions over daily genuine transactions, and so on. Without any countermeasure, common classifiers tend to correctly identify only articles belonging to the majority classes, clearly leading to severe misclassification of the minority classes, since typical learning algorithms strive to maximize overall prediction accuracy. To reduce the imbalance among the class sizes, we have first randomly sampled the articles belonging to the most populated classes. Then, we have performed some further elaboration, as shown in the following.
Many studies have been conducted to improve learning algorithms' accuracy in the presence of imbalanced data BIBREF8. For the current work, we have considered one of the most popular approaches, namely Synthetic Sampling with Data Generation, detailed in BIBREF9. It consists of generating synthetic instances of the minority classes to balance the overall dataset. The approach has been broadly applied to problems relying on NLP features; see, e.g., BIBREF10. In our case, we resampled the input data set by applying the Synthetic Minority Oversampling TEchnique (SMOTE), with a percentage of 40% for GA and 180% for FA. In particular, oversampling follows the standard SMOTE procedure: for each minority-class instance, find its k nearest minority-class neighbors, select one at random, and generate a synthetic instance at a random point along the line segment joining the pair.
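A minimal sketch of this resampling step using the imbalanced-learn implementation of SMOTE is shown below; the feature matrix, labels, and class counts are synthetic stand-ins, and the target counts assume the reported percentages denote the share of synthetic instances added to each class.

```python
import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE  # pip install imbalanced-learn

rng = np.random.default_rng(0)
# Synthetic stand-ins for the per-article feature matrix and quality labels.
X = rng.normal(size=(1000, 20))
y = np.array(["Stub"] * 500 + ["Start"] * 300 + ["C"] * 150
             + ["GA"] * 35 + ["FA"] * 15)

counts = Counter(y)
# SMOTE percentages of 40% (GA) and 180% (FA) mean adding 0.4x and 1.8x
# synthetic instances, so the targets are 1.4x and 2.8x the original sizes.
target = {"GA": int(counts["GA"] * 1.4), "FA": int(counts["FA"] * 2.8)}
sm = SMOTE(sampling_strategy=target, k_neighbors=5, random_state=0)
X_res, y_res = sm.fit_resample(X, y)
print(Counter(y_res))  # GA and FA oversampled toward the target counts
```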
Table 1 shows the number of articles in the dataset, divided per class, as well as the random samples we have considered for our study. The experiments presented in Section "Experiments and results" are based on the articles of the right-hand column in the table.
Many studies have been conducted to improve learning algorithms accuracy in presence of imbalanced data [15]. For the current work, we have considered one of the most popular approaches, namely the Synthetic Sampling with Data Generation, detailed in [9].
|
How does this paper improve learning algorithms' accuracy in the presence of imbalanced data?
|
The authors used Synthetic Sampling with Data Generation (SMOTE) to improve learning algorithms' accuracy in the presence of imbalanced data.
|
null | false
| null |
What are some guidelines or rules for allowing a teen to use a mobile phone?
|
Advise the teen that the phone use is a privilege
The phone will be monitored
Phone passwords must be disclosed
Permission to install apps must be obtained from parent
Parental controls will be placed on the phone
The phone can only be used by you and not any friends/others
The phone use is restricted to certain times of the day
|
|
null | false
| 356
|
In this work, we propose a new evaluation framework for highly multilingual document classification which significantly extends the current state. We continue to use Reuters Corpus Volume 2, but based on the above mentioned limitations of the current subset of RCV2, we propose new tasks for cross-lingual document classification. The design choices are as follow:
Uniform class coverage: we sample from RCV2 the same number of examples for each class and language;
Split the data into train, development and test corpora: for each language, we provide training data of different sizes (1k, 2k, 5k and 10k stories), a development set (1k) and a test corpus (4k);
Support more languages: German (DE), English (EN), Spanish (ES), French (FR), Italian (IT), Japanese (JA), Russian (RU) and Chinese (ZH). Reference baseline results are available for all languages.
Most works in the literature use only 1 000 examples to train the document classifier. To investigate the impact of more training data, we also provide training corpora of 2 000, 5 000 and 10 000 documents. The development corpus for each language is composed of 1 000 documents and the test set of 4 000 documents respectively. All have uniform class distributions. An important aspect of this work is to provide a framework to study and evaluate cross-lingual document classification for many language pairs. In that spirit, we name this corpus the “Multilingual Document Classification Corpus”, abbreviated as MLDoc. The full Reuters Corpus Volume 2 has a special license and we cannot distribute it ourselves. Instead, we provide tools to extract all the subsets of MLDoc at https://github.com/facebookresearch/MLDoc.
Uniform class coverage: we sample from RCV2 the same number of examples for each class and language; • Split the data into train, development and test corpus: for each language, we provide training data of different sizes (1k, 2k, 5k and 10k stories), a development (1k) and a test corpus (4k); • Support more languages: German (DE), English (EN), Spanish (ES), French (FR), Italian (IT), Japanese (JA), Russian (RU) and Chinese (ZH). Reference baseline results are available for all languages.
|
What are the choices of new tasks for cross-lingual document classification?
|
The new tasks provide uniform class coverage, split the data into train, development and test corpora, and support more languages.
|
null | false
| null |
Couple's Retreat Garden (Chinese: 耦园; pinyin: Ǒu Yuán; Suzhou Wu: Ngeu yoe, Wu Chinese: [ŋøʏ ɦyø]) located in Suzhou city, Jiangsu province, China is a famous classical Chinese garden. It is recognized with other classical Suzhou gardens as a UNESCO World Heritage Site.
|
What is the Couple's Retreat Garden?
|
The original garden was built by Lu Jin, prefect of Baoning district, in 1874. It was purchased by Shen Bingcheng, the magistrate of Susong, who rebuilt it in its current form. He also changed the name to the Couple's Garden Retreat. This name refers to the garden's two parts and alludes to a couple. It is recognized with other classical Suzhou gardens as a UNESCO World Heritage Site.
The 0.33 ha garden is divided into east and west sections by the residential core in the middle, an unusual composition for a classical garden. The eccentric design is continued in the form and details of many of the garden buildings, especially the Taoism Tower. The garden is located at the intersection of canals and is surrounded by water on three sides; it is accessible directly from the canal by boat. The West Garden is composed of several structures grouped around a small grotto and a library annex, and it is structurally joined to the central residence. The East Garden is the main garden of the complex. It consists of a grotto and pond ringed by a covered walkway that connects the structures. A smaller fruit orchard annex is attached.
|
null | false
| 145
|
The goal of a language model is to assign meaningful probabilities to a sequence of words. Given a sequence of tokens $\mathbf{X}=(x_1,\ldots,x_T)$, where $T$ is the length of the sequence, our task is to estimate the joint conditional probability $P(\mathbf{X})$, which is

$$P(\mathbf{X}) = \prod_{i=1}^{T} P(x_i \mid x_1, \ldots, x_{i-1}),$$

where $(x_{1}, \ldots, x_{i-1})$ is the context. An intrinsic evaluation of the performance of language models is perplexity (PPL), defined as the inverse probability of the set of tokens, normalized by taking the $T^{\mathrm{th}}$ root, where $T$ is the number of tokens:

$$\mathrm{PPL}(\mathbf{X}) = P(\mathbf{X})^{-\frac{1}{T}}.$$

In our two approaches we use transformer-based architectures: BERT and Transformer-XL, as mentioned before. Calculating the auto-regressive $P(\mathbf{X})$ for the Transformer-XL is quite straightforward, as the model is unidirectional, but it does not factorize the same way for a bi-directional model like BERT.
BERT's bi-directional context poses a problem for calculating an auto-regressive joint probability. A simple fix could be to mask all the tokens $\mathbf{x}_{>i}$ and calculate the conditional factors as we do for a unidirectional model. By doing so, though, we lose the advantage of the bi-directional context that the BERT model enables. We propose an approximation of the joint probability as

$$P(\mathbf{X}) \approx \prod_{i=1}^{T} P(x_i \mid x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_T).$$

This type of approximation has been previously explored with bi-directional RNN LMs BIBREF9, but not for deep transformer models. We therefore define a pseudo-perplexity score from the above approximated joint probability.
The original BERT has two training objectives: 'masked language modelling', in which input tokens are masked randomly and the masked tokens are then predicted using the left and right context, and 'next sentence prediction', a task that jointly trains text-pair representations. For training the masked language model, the original BERT used Byte Pair Encoding (BPE) BIBREF10 for subword tokenization BIBREF11. For example, the rare word "unaffable" would be split up into more frequent subwords such as ["un", "##aff", "##able"]. To remain consistent with experiments performed with LSTMs, we use Morfessor for subword tokenization in the Finnish language. In addition, we also apply boundary markers as in (Table TABREF7) and train two separate models using this distinction. We train with left-marked markings, as the original BERT was trained with such a scheme, and with left+right-marked markings, as this was the previous SOTA for the Finnish language. For the Transformer-XL experiments, we train only with the left+right-marked scheme.
Next Sentence Prediction (NSP) is a binary classification task which predicts whether two segments follow each other in the original text. This pre-training task was proposed to further improve performance on downstream tasks, like Natural Language Inference (NLI), but in reality removing the NSP loss matches or slightly improves the downstream task performance BIBREF12. In this paper, we have omitted the NSP task from the BERT pre-training procedure and changed the input from a SEGMENT-PAIR input to a SINGLE-SEGMENT input, as seen in (Fig FIGREF8).
Transformer-XL introduced the notion of recurrence in self-attention by caching the hidden state sequence to compute the hidden states of a new segment. It also introduces a novel relative positional embedding scheme, and combined they address the issue of fixed context lengths. Transformer-XL, as mentioned, is a unidirectional deep transformer architecture; therefore the perplexity can be calculated as in (Eq DISPLAY_FORM5). The only change is in the input format, where we use subword units rather than whole-word units, as Finnish is morphologically richer than English.
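To make the pseudo-perplexity computation concrete, below is a minimal sketch using a masked language model from the Hugging Face transformers library; the English checkpoint is an illustrative stand-in for the Finnish models trained in this work.

```python
import math
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Illustrative English checkpoint; the paper trains its own Finnish models.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def pseudo_perplexity(sentence: str) -> float:
    """Mask each token in turn and score it with full bi-directional context."""
    ids = tok(sentence, return_tensors="pt")["input_ids"][0]
    nll, n = 0.0, 0
    for i in range(1, len(ids) - 1):          # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        nll -= torch.log_softmax(logits, dim=-1)[ids[i]].item()
        n += 1
    return math.exp(nll / n)                  # exp(mean NLL) = T-th root of 1/P(X)

print(pseudo_perplexity("The cat sat on the mat."))
```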
Calculating the auto-regressive P(X) for the transformer-XL is quite straight-forward as the model is unidirectional but it doesn’t factorize the same way for a bi-directional model like BERT.
|
Is the calculation of the auto-regressive P(X) for the transformer-XL quite straight-forward?
|
Yes.
|
null | false
| null |
Think up a couple of ways to use old shirts
|
Here are some ways to use old shirts:
- As cleaning / dusting rags
- To make tie-dye shirts
- To make headbands / head accessories
- As garden ties for your plants
- To make quilts
- To make rugs
|
|
null | false
| null |
Give me an example of a screened subnet
|
In network security, a screened subnet refers to the use of one or more logical screening routers as a firewall to define three separate subnets: an external router (sometimes called an access router) that separates the external network from a perimeter network, and an internal router (sometimes called a choke router) that separates the perimeter network from the internal network. The perimeter network, also called a border network or demilitarized zone (DMZ), is intended for hosting servers (sometimes called bastion hosts) that are accessible from, or have access to, both the internal and external networks. The purpose of a screened subnet or DMZ is to establish a network with heightened security that is situated between an external and presumed hostile network, such as the Internet or an extranet, and an internal network.
A screened subnet is an essential concept for e-commerce or any entity that has a presence in the World Wide Web or is using electronic payment systems or other network services because of the prevalence of hackers, advanced persistent threats, computer worms, botnets, and other threats to networked information systems.
|
|
null | false
| null |
What is Oktoberfest
|
The Oktoberfest is the world's largest Volksfest, featuring a beer festival and a travelling carnival. It is held annually in Munich, Bavaria. It is a 16- to 18-day folk festival running from mid- or late-September to around the first Sunday in October, with more than six million international and national visitors attending the event. Locally, it is called d’Wiesn, after the colloquial name for the fairgrounds, Theresienwiese. The Oktoberfest is an important part of Bavarian culture, having been held since the year 1810. Other cities across the world also hold Oktoberfest celebrations that are modeled after the original Munich event.
During the event, large quantities of Oktoberfest Beer are consumed. For example, during the 16-day festival in 2014, 7.7 million litres (2,000,000 US gal) were served, making it the year where the most beer was consumed at the Oktoberfest. Visitors also enjoy numerous attractions, such as amusement rides, sidestalls, and games. There is also a wide variety of traditional foods available.
The Munich Oktoberfest originally took place in the 16-day period leading up to the first Sunday in October. In 1994, this longstanding schedule was modified in response to German reunification. As a result, if the first Sunday in October falls on the 1st or the 2nd, then the festival would run until 3 October (German Unity Day). Thus, the festival now runs for 17 days when the first Sunday is 2 October and 18 days when it is 1 October. In 2010, the festival lasted until the first Monday in October (4 October), to mark the event's bicentennial.
|
|
null | false
| null |
Things to consider when applying for a new job...
|
Consider the following before actually applying for a job: career goals, industries to consider, location, size of companies, qualifications for new roles, and why you are actually making a change. These considerations will help you focus on the type of role, company, location, and industry that will make your next role fulfilling. Once you've mapped this out, look at the network of friends, family, and acquaintances you can engage and learn from. Information from people in the target roles/companies will help you confirm your goals will be met in the new role. Spend time doing research about the sector and the various companies that interest you. Understand where those companies and the industry sector are headed in the future (i.e., is the sector growing, stable, or shrinking?). Once you feel comfortable and knowledgeable, use networking to engage companies and hiring managers (don't be afraid to send that LinkedIn message). As a word of warning, do not directly apply to multiple jobs in the same company, as it tends to come across as desperate and lacking focus on specific career goals.
|
|
null | false
| null |
Sea or Mountain
|
I believe mountains are more attractive, but the ocean has its own beauty, and that tropical weather definitely wins you over! So, 50/50.
|
|
1912.01673
| false
| null |
We asked for two distinct paraphrases of each sentence because we believe that a good sentence embedding should put paraphrases close together in vector space.
Several modification types were specifically selected to constitute a thorough test of embeddings. In different meaning, the annotators should create a sentence with some other meaning using the same words as the original sentence. Other transformations which should be difficult for embeddings include minimal change, in which the sentence meaning should be changed significantly using only a very small modification, or nonsense, in which the words of the source sentence should be shuffled so that the result is grammatically correct but without any sense.
|
What annotations are available in the dataset?
|
For each source sentence, the dataset provides transformed sentences produced according to specific criteria (paraphrase, minimal change, etc.).
|
null | false
| null |
Extract the current cash rate from the following using the format: {x} percent
At its meeting today, the Board decided to increase the cash rate target by 25 basis points to 3.60 per cent. It also increased the interest rate on Exchange Settlement balances by 25 basis points to 3.50 per cent.
Global inflation remains very high. In headline terms it is moderating, although services price inflation remains elevated in many economies. It will be some time before inflation is back to target rates. The outlook for the global economy remains subdued, with below average growth expected this year and next.
The monthly CPI indicator suggests that inflation has peaked in Australia. Goods price inflation is expected to moderate over the months ahead due to both global developments and softer demand in Australia. Services price inflation remains high, with strong demand for some services over the summer. Rents are increasing at the fastest rate in some years, with vacancy rates low in many parts of the country. The central forecast is for inflation to decline this year and next, to be around 3 per cent in mid-2025. Medium-term inflation expectations remain well anchored, and it is important that this remains the case.
Growth in the Australian economy has slowed, with GDP increasing by 0.5 per cent in the December quarter and 2.7 per cent over the year. Growth over the next couple of years is expected to be below trend. Household consumption growth has slowed due to the tighter financial conditions and the outlook for housing construction has softened. In contrast, the outlook for business investment remains positive, with many businesses operating at a very high level of capacity utilisation.
|
3.60 percent
|
|
null | false
| null |
Which of these animals can be pets? Wolf, dog, rabbit, squirrel, cat, lion.
|
While many animals can be tamed, dogs, rabbits and cats are the ones commonly kept as pets.
|
|
null | false
| null |
Classify each activity as athletic or academic: reading, running, studying, jumping, writing, sprinting
|
Reading: Academic
Running: Athletic
Studying: Academic
Jumping: Athletic
Writing: Academic
Sprinting: Athletic
|
|
null | false
| null |
What's so special about the Nissan GT-R?
|
The Nissan GT-R, which initially went into production in 2007, is powered by a hand-built engine assembled by one of only 5 specially-trained technicians (known as Takumi Craftsmen) in a cleanroom in Nissan's Yokohama factory. Early models produced roughly 473 horsepower, with incremental improvements yielding upwards of 560hp in current models.
|
|
null | false
| 306
|
Given a sentence and a list of pre-defined schemas which define the relation P and the classes of its corresponding subject S and object O, for example, (S_TYPE: Person, P: wife, O_TYPE: Person), (S_TYPE: Company, P: founder, O_TYPE: Person), a participating information extraction (IE) system is expected to output all correct triples [(S1, P1, O1), (S2, P2, O2) ...] mentioned in the sentence under the constraints of the given schemas. The largest schema-based Chinese information extraction dataset to date is released in this competition. Precision, Recall and F1 score are used as the basic evaluation metrics to measure the performance of participating systems.
From the example shown in Figure FIGREF1, we can notice that one entity can be involved in multiple triples and that entity spans can overlap, which is the main difficulty of this task.
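A minimal sketch of triple-level precision, recall and F1, of the kind used to score participating systems, is below; the gold and predicted triples are invented examples, and the official scoring may apply additional normalization.

```python
def prf1(pred_triples, gold_triples):
    """Micro precision / recall / F1 over (S, P, O) triples."""
    pred, gold = set(pred_triples), set(gold_triples)
    tp = len(pred & gold)                       # exact-match true positives
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = {("Wang Fei", "wife", "Li Yapeng"), ("Baidu", "founder", "Robin Li")}
pred = {("Wang Fei", "wife", "Li Yapeng"), ("Baidu", "founder", "Jack Ma")}
print(prf1(pred, gold))  # (0.5, 0.5, 0.5)
```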
Precision, Recall and F1 score are used as the basic evaluation metrics to measure the performance of participating systems.
|
What is used as the basic evaluation metrics to measure the performance of participating systems?
|
Precision, Recall and F1 score are used as the basic evaluation metrics to measure the performance of participating systems.
|
null | false
| null |
Which is a species of fish? Slimehead or Slimeball
|
Slimehead
|
|
null | false
| null |
What are a couple of simple tricks to make a simple wood carving look more polished and professional?
|
Okay, the first obvious point to make here is that the thing that will make any piece of art - including a wood carving - look more professional is the quality of the art! That means the skill and the artistic quality that go into it. But there are a few things you can do to dress it up a bit - think back to your school days and sliding that English paper into a fresh new folder with a carefully penned title on the front. So back to wood carving. What can you do? Let's say you've carved yourself a nice salmon, complete with fins and gill lines. Here are a few ways to dress it up. First and most obvious: paint it. You can go for detailed paintwork, or you can use more of a watercolor approach that will give the wood a colored wash. The sky is really the limit here. Second, you can mount it on a stand so it floats above the ground. Drill a small hole in the bottom of the fish, do the same in a wood block you've sanded, and you have a stand. Third, and a bit more out there: consider whether there are simple additions you can make. I made an angler fish and used a dried flower with its stem as the attached antenna. Look around and think about what you can add to your project. Last and most basic, put a coat of varnish on it to seal it up - and, if you choose, give it a bit of shine. All these ideas will give a little bit of extra finish and pop to your project.
|
|
null | false
| null |
Dixie Violet Egerickx (born 31 October 2005) is an English actress. Egerickx has appeared three times on the London stage, as Iphigenia in Robert Icke's adaptation of Oresteia, Rosalind in Alexi Kay Campbell's Sunset at the Villa Thalia at the National Theatre and as Jenny Caroline 'Qui Qui' Marx in Richard Bean and Clive Coleman's Young Marx at The Bridge Theatre, directed by Sir Nicholas Hytner. She appeared in the 2017 National Geographic series Genius and the 2018 television series Patrick Melrose, making her feature film debut in the 2018 film The Little Stranger. Egerickx was named in Screen International's Screen Stars of Tomorrow list in 2019 at the age of 13. She starred as Edie in the 2020 drama film Summerland, and portrayed Mary Lennox in the 2020 film The Secret Garden. In 2019 she filmed HBO's unaired Game of Thrones prequel pilot and most recently has completed filming the role of Jo Ransome in the Apple TV miniseries of The Essex Serpent, directed by Clio Barnard.
|
Extract the names of the movies Dixie Violet Egerickx starred in and the year those movies were released. List them using bullets in the format {Movie} - {Year}
|
The Little Stranger - 2018
The Secret Garden - 2020
Summerland - 2020
|
null | false
| 335
|
While the demand for physical and manual labor is gradually declining, there is a growing need for a workforce with soft skills. Which soft skill do you think would be the most valuable in your daily life? According to an article in Forbes BIBREF0 , 70% of employed Americans agree that public speaking skills are critical to their success at work. Yet, it is one of the most dreaded acts. Many people rate the fear of public speaking even higher than the fear of death BIBREF1 . To alleviate the situation, several automated systems are now available that can quantify behavioral data for participants to reflect on BIBREF2 . Predicting the viewers' ratings from the speech transcripts would enable these systems to generate feedback on the potential audience behavior.
Predicting human behavior, however, is challenging due to its huge variability and the way the variables interact with each other. Running Randomized Control Trials (RCTs) to decouple each variable is not always feasible and is also expensive. It is possible to collect a large amount of observational data due to the advent of content sharing platforms such as YouTube, Massive Open Online Courses (MOOCs), or ted.com. However, the uncontrolled variables in an observational dataset always leave open the possibility of incorporating the effects of "data bias" into the prediction model. Recently, the problems of using biased datasets have become apparent. BIBREF3 showed that the error rates of commercial face-detectors are 43 times higher for dark-skinned females than for light-skinned males due to bias in the training dataset. The unfortunate incident of Google's photo app tagging African-American people as "Gorilla" BIBREF4 also highlights the severity of this issue.
We address the data bias issue as much as possible by carefully analyzing the relationships of different variables in the data generating process. We use a Causal Diagram BIBREF5 , BIBREF6 to analyze and remove the effects of the data bias (e.g., the speakers' reputations, popularity gained by publicity, etc.) in our prediction model. In order to make the prediction model less biased to the speakers' race and gender, we confine our analysis to the transcripts only. Besides, we normalize the ratings to remove the effects of the unwanted variables such as the speakers' reputations, publicity, contemporary hot topics, etc.
For our analysis, we curate an observational dataset of public speech transcripts and other meta-data collected from the ted.com website. This website contains a large collection of high-quality public speeches that are freely available to watch, share, rate, and comment on. Every day, numerous people watch the talks and annotate their perceptions of them. Our dataset contains 2231 public speech transcripts and over 5 million ratings from the spontaneous viewers of the talks. The viewers annotate each talk with 14 different labels: Beautiful, Confusing, Courageous, Fascinating, Funny, Informative, Ingenious, Inspiring, Jaw-Dropping, Long-winded, Obnoxious, OK, Persuasive, and Unconvincing.
We use two neural network architectures in the prediction task. In the first architecture, we use an LSTM BIBREF7 over the sequence of words within the sentences of the transcripts. In the second architecture, we use a TreeLSTM BIBREF8 to represent the input sentences in the form of a dependency tree. Our experiments show that the dependency-tree-based model predicts the TED talk ratings with slightly higher performance (average F-score 0.77) than the word-sequence model (average F-score 0.76). To the best of our knowledge, this is the best performance in the literature on predicting TED talk ratings. We compare the performance of these two models against a baseline of classical machine learning techniques using hand-engineered features, and find that the neural networks largely outperform the classical methods. We believe this gain in performance comes from the networks' ability to better capture the natural relationships among the words (compared to the hand-engineered feature selection in the baseline methods) and the correlations among the rating labels.
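Neither the exact architecture nor the hyperparameters are given here, so the following is only a minimal PyTorch sketch of the word-sequence variant: token embeddings feed an LSTM whose final hidden state drives 14 independent sigmoid outputs, one per rating label. The vocabulary size, dimensions, and binary targets are assumptions, not the authors' settings.

```python
import torch
import torch.nn as nn

class RatingLSTM(nn.Module):
    """Word-sequence model: embed tokens, run an LSTM, and predict
    each of the 14 rating labels with an independent sigmoid head."""

    def __init__(self, vocab_size=20000, embed_dim=128,
                 hidden_dim=128, n_labels=14):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_labels)

    def forward(self, token_ids):                # (batch, seq_len)
        x = self.embed(token_ids)                # (batch, seq_len, embed)
        _, (h_n, _) = self.lstm(x)               # h_n: (1, batch, hidden)
        return self.head(h_n.squeeze(0))         # logits, (batch, n_labels)

model = RatingLSTM()
tokens = torch.randint(0, 20000, (4, 50))        # 4 dummy transcripts
logits = model(tokens)
# BCEWithLogitsLoss treats each label as an independent binary task.
loss = nn.BCEWithLogitsLoss()(logits, torch.rand(4, 14).round())
loss.backward()
print(logits.shape)  # torch.Size([4, 14])
```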
We use two neural network architectures in the prediction task.
|
How many neural network architectures did the authors use in the prediction task?
|
Two.
|
null | false
| null |
In July 1905, Lord Curzon, the Viceroy and Governor-General (1899–1905), ordered the partition of the province of Bengal. The stated aim was to improve administration. However, this was seen as an attempt to quell nationalist sentiment through divide and rule. The Bengali Hindu intelligentsia exerted considerable influence on local and national politics. The partition outraged Bengalis. Widespread agitation ensued in the streets and in the press, and the Congress advocated boycotting British products under the banner of swadeshi, or indigenous industries. A growing movement emerged, focussing on indigenous Indian industries, finance, and education, which saw the founding of the National Council of Education, the birth of Indian financial institutions and banks, as well as an interest in Indian culture and achievements in science and literature. Hindus showed unity by tying Rakhi on each other's wrists and observing Arandhan (not cooking any food). During this time, Bengali Hindu nationalists like Sri Aurobindo, Bhupendranath Datta, and Bipin Chandra Pal began writing virulent newspaper articles challenging the legitimacy of British rule in India in publications such as Jugantar and Sandhya, and were charged with sedition.
The Partition also precipitated increasing activity from the then-nascent militant nationalist revolutionary movement, which had been gaining strength in Bengal and Maharashtra since the last decade of the 1800s. In Bengal, the Anushilan Samiti, led by the brothers Aurobindo and Barin Ghosh, organised a number of attacks on figureheads of the Raj, culminating in the attempt on the life of a British judge in Muzaffarpur. This precipitated the Alipore bomb case, in which a number of revolutionaries were killed, or captured and put on trial. Revolutionaries like Khudiram Bose, Prafulla Chaki, and Kanailal Dutt, who were either killed or hanged, became household names.
|
Extract the names of the freedom fighters of Bengal, India from the text. Separate them with a comma.
|
Sri Aurobindo, Bhupendranath Datta, Bipin Chandra Pal, Khudiram Bose, Prafulla Chaki, Kanailal Dutt
|
null | false
| null |
Tell me whether each of these refers to the book called "Don Quixote": "Don Quixote", "El Ingenioso Hidalgo Don Quixote de la Mancha", "El Ingenioso Hidalgo Don Quijote de la Mancha", "El Quijote", "The Man Who Killed Don Quixote", "Don Quixote, Knight Errant"
|
"Don Quixote" - yes
"El Ingenioso Hidalgo Don Quixote de la Mancha" - yes
"El Ingenioso Hidalgo Don Quijote de la Mancha" - yes
"El Quijote" - yes
"Don Quijote" - yes
"The Man Who Killed Don Quixote" - no
"Don Quixote, Knight Errant" - no
|
|
null | false
| 106
|
The goal of this study was to select documents suitable for a fine-grained analysis of arguments. In a preliminary study on annotating argumentation using a small sample (50 random documents) of forum posts and comments from the raw corpus, we found that many documents convey no argumentation at all, even in discussions about controversies. We observed that such contributions do not intend to persuade; these documents typically contain story-sharing, personal worries, user interaction (asking questions, expressing agreement), off-topic comments, and others. Such characteristics are typical of online discussions in general, but they have not been examined with respect to argumentation or persuasion. Indeed, we observed that there are (1) documents that are completely unrelated and (2) documents that are related to the topic but do not contain any argumentation. This issue has been identified by argumentation theorists, for example as external relevance by Paglieri and Castelfranchi (2014). Similar findings were also confirmed in the related argumentation mining literature, but never tackled empirically BIBREF53, BIBREF8. These documents are thus not suitable for analyzing argumentation.
In order to filter documents that are suitable for argumentation annotation, we defined a binary document-level classification task. The distinction is made between either persuasive documents or non-persuasive (which includes all other sorts of texts, such as off-topic, story sharing, unrelated dialog acts, etc.).
The two annotated categories were on-topic persuasive and non-persuasive. Three annotators with near-native English proficiency annotated a set of 990 documents (a random subset of comments and forum posts), reaching 0.59 Fleiss' $\kappa$; the final label for each document was selected by majority voting. The annotation study took on average 15 hours per annotator, with approximately 55 annotated documents per hour. Out of 990 documents, 524 (53%) were labeled as on-topic persuasive. We will refer to this corpus as gold data persuasive.
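Fleiss' $\kappa$ is a standard chance-corrected agreement measure for a fixed number of annotators. A compact implementation for the three-annotator, two-category setting described above could look like this (the vote matrix is made up; `fleiss_kappa` is not a function from the study):

```python
import numpy as np

def fleiss_kappa(counts):
    """counts: (n_items, n_categories) matrix where counts[i, j] is the
    number of annotators who assigned item i to category j.
    Each row must sum to the (fixed) number of annotators."""
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n_raters = counts[0].sum()
    # Per-item observed agreement.
    P_i = ((counts ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar = P_i.mean()
    # Chance agreement from the marginal category proportions.
    p_j = counts.sum(axis=0) / (n_items * n_raters)
    P_e = (p_j ** 2).sum()
    return (P_bar - P_e) / (1 - P_e)

# Made-up votes of 3 annotators over 5 documents,
# categories: [persuasive, non-persuasive].
votes = [[3, 0], [2, 1], [0, 3], [1, 2], [3, 0]]
print(round(fleiss_kappa(votes), 3))  # 0.444 for this toy matrix
```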
We examined all disagreements between annotators and discovered some typical problems, such as implicitness or topic relevance. First, authors often express their stance towards the topic implicitly, so it must be inferred by the reader. To do so, certain common-ground knowledge is required. However, such knowledge depends heavily on many aspects, such as the reader's familiarity with the topic or her cultural background, as well as the context of the source website or the discussion forum thread. The same applies to sarcasm and irony. Second, the decision whether a particular document is persuasive was always made with respect to the controversial topic under examination. Some authors shift the focus to a particular aspect of the given controversy or a related issue, making the document less relevant.
We achieved moderate agreement between the annotators, although the definition of persuasiveness annotation might seem a bit fuzzy. We found different amounts of persuasion across the specific topics. For instance, prayer in schools or private vs. public schools attract persuasive discourse, while other discussed controversies, represented by redshirting and mainstreaming, often contain non-persuasive discussions. Although these two topics are also highly controversial, the participants of online discussions seem not to attempt persuasion but rather exchange information, support others in their decisions, etc. This was also confirmed by socio-psychological researchers. Ammari et al. (2014) show that parents of children with special needs rely on discussion sites for accessing information and social support and that, in particular, posts containing humor, achievement, or treatment suggestions are perceived to be more socially appropriate than posts containing judgment, violence, or social comparisons. According to Nicholson and Leask (2012), in the online forum, parents of autistic children were seen to understand the issue because they had lived it. Assuming that participants in discussions related to young kids (e.g., redshirting or mainstreaming) are usually females (mothers), gender can also play a role. In a study of online persuasion, Guadagno and Cialdini (2002) conclude that women choose to bond rather than compete (women feel more comfortable cooperating, even in a competitive environment), whereas men are motivated to compete if necessary to achieve independence.
In order to filter documents that are suitable for argumentation annotation, we defined a binary document-level classification task.
|
What did the authors do to filter documents that are suitable for argumentation annotation?
|
The authors defined a binary document-level classification task to filter documents.
|
null | false
| null |
Tell me if these technologies would have existed before the invention of electricity or not: steam engine, candles, iPad, radio, oil lamps, fax machine, printing press
|
steam engine: would have existed, candles: would have existed, iPad: would not have existed, radio: would not have existed, oil lamps: would have existed, fax machine: would not have existed, printing press: would have existed
|
|
null | false
| null |
Give me a list of the top MLB home run leaders of all time.
|
Barry Bonds, Hank Aaron, Babe Ruth, Albert Pujols, Alex Rodriguez, Willie Mays
|
|
null | false
| 325
|
Training a dialogue agent to converse like a human requires extensive supervision. The most common approach is to train models to imitate humans in large corpora of crowdsourced or scraped conversations BIBREF0. These fully-supervised conversations tend to be expensive to collect in sufficient quantity and/or occur in settings with significant differences from the deployment environment BIBREF1. Instead, dialogue agents would ideally learn directly from dialogue, the conversations they participate in after deployment, which are usually abundant, task-specific, dynamic, and cheap. This corresponds to the way humans learn to converse—not merely observing others engaging in “expert-level” conversations, but instead actively adjusting and correcting our speech based on feedback woven throughout our own conversations BIBREF2, BIBREF3. Giving a dialogue agent this ability would enable it to continuously improve and adapt over its lifetime, rather than requiring additional annotation costs for each and every improvement.
However, naively training a dialogue agent on its own conversations yields poor results. For example, training a model on its own output can simply reinforce its existing failure modes, and mistakes by the agent can lead to absurd conversations that no longer resemble the target domain BIBREF4. To combat this, one approach is to allow the agent to request feedback during conversations BIBREF5, BIBREF6, e.g., when it believes it is about to make a mistake. This approach, however, falls victim to the Dunning-Kruger effect BIBREF7, which in this case suggests that a bad model will also be bad at knowing when it is doing a bad job. Regardless of when feedback is requested, existing methods typically require accompanying scalar rewards or adherence to particular templates or structure to ensure that the feedback is usable by the model BIBREF8, BIBREF9, BIBREF10. These requirements may be acceptable for paid annotators, but they impose unnatural workflows on unpaid conversation partners in a standard dialogue environment. Humans are able to request and provide feedback using only natural language; ideally, dialogue agents would be able to do the same.
In this work we propose the self-feeding chatbot, a dialogue agent with the ability to extract new examples from the conversations it participates in after deployment (Figure FIGREF1). Concretely, in addition to being trained on the primary Dialogue task, the agent is trained to predict its speaking partner's satisfaction with its responses. When the conversation seems to be going well, the user's responses (but not the bot's own utterances) become the targets in new training examples for the Dialogue task. When the agent believes it has made a mistake, it instead requests feedback on what it could have said instead. Predicting the feedback that will be provided in a given context becomes an auxiliary task (Feedback) on which the model is also trained. Importantly, these new examples improve the agent's dialogue abilities while using only natural responses from the user that do not require special structure, accompanying numerical feedback, or additional human intervention in order to be used.
With this approach, the conversations the chatbot participates in are sliced into two complementary datasets—one largely protected from the chatbot's mistakes (Dialogue examples), and one which directly addresses them (Feedback examples). We validate our approach on the PersonaChat BIBREF11 dialogue dataset, finding empirically that regardless of the number of available supervised examples, the dialogue ability of the chatbot is always improved by adding the automatically extracted examples of either type, and improves the most by adding both.
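The authors' actual training pipeline is available via ParlAI; purely as an illustration of the slicing logic described above, a sketch might look as follows. The satisfaction threshold, the feedback-request wording, and the `SelfFeedingLogger` interface are all invented here, not the paper's API.

```python
from dataclasses import dataclass, field

@dataclass
class SelfFeedingLogger:
    """Split deployment conversations into the two new training sets:
    Dialogue examples (user replies in satisfying exchanges) and
    Feedback examples (user corrections after a suspected mistake)."""
    sat_threshold: float = 0.7            # hypothetical cutoff
    dialogue_examples: list = field(default_factory=list)
    feedback_examples: list = field(default_factory=list)
    _pending_context: list = None         # context awaiting a correction

    def on_turn(self, context, user_reply, satisfaction):
        """Called after each user reply; `satisfaction` is the model's
        prediction of how happy the user is with the bot's last response."""
        if satisfaction >= self.sat_threshold:
            # Going well: the *user's* reply (never the bot's own output)
            # becomes the target of a new Dialogue example.
            self.dialogue_examples.append((list(context), user_reply))
            return None
        # Suspected mistake: stop and ask for an explicit correction.
        self._pending_context = list(context)
        return "Oops, I think I messed up. What could I have said instead?"

    def on_feedback(self, feedback_text):
        """Called with the user's answer to a feedback request."""
        self.feedback_examples.append((self._pending_context, feedback_text))
        self._pending_context = None

log = SelfFeedingLogger()
ask = log.on_turn(["do you like jazz?", "i love pizza"], "...what?", 0.2)
if ask:
    log.on_feedback("You could have said whether you like jazz.")
log.on_turn(["what's your hobby?", "i paint"], "nice, me too!", 0.9)
print(len(log.dialogue_examples), len(log.feedback_examples))  # 1 1
```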
The main contributions of this work thus include the following:
The datasets and models described in this paper are available via the ParlAI platform BIBREF12, along with training code. Hyperparameter values are included in Appendix SECREF13.
We validate our approach on the PersonaChat dialogue dataset, finding empirically that regardless of the number of available supervised examples, the dialogue ability of the chatbot is always improved by adding the automatically extracted examples of either type, and improves the most by adding both.
|
Which dataset do the authors validate their approach on?
|
PersonaChat.
|
1910.04519
| false
| null |
We use the MedWeb (“Medical Natural Language Processing for Web Document”) dataset BIBREF4 that was provided as part of a subtask at the NTCIR-13 Conference BIBREF5. The data is summarised in Table TABREF1. There are a total of 2,560 pseudo-tweets in three different languages: Japanese (ja), English (en) and Chinese (zh). These were created in Japanese and then manually translated into English and Chinese (see Figure FIGREF2). Each pseudo-tweet is labelled with a subset of the following 8 labels: influenza, diarrhoea/stomach ache, hay fever, cough/sore throat, headache, fever, runny nose, and cold. A positive label is assigned if the author (or someone they live with) has the symptom in question. As such, it is more than a named entity recognition task, as can be seen in pseudo-tweet #3 in Figure FIGREF2, where the term “flu” is mentioned but the label is negative.
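As a small illustration of the multi-label setup this implies, the pseudo-tweets below are invented (they are not from MedWeb), and scikit-learn's `MultiLabelBinarizer` is just one convenient way to build the 8-dimensional binary targets:

```python
from sklearn.preprocessing import MultiLabelBinarizer

SYMPTOMS = ["influenza", "diarrhoea/stomach ache", "hay fever",
            "cough/sore throat", "headache", "fever", "runny nose", "cold"]

# Invented pseudo-tweets with their gold label subsets.
examples = [
    ("Been coughing all night and my head is pounding.",
     {"cough/sore throat", "headache"}),
    ("My son tested positive for the flu, fever of 39C.",
     {"influenza", "fever"}),
    ("Read an article about the flu season today.", set()),  # mention != symptom
]

mlb = MultiLabelBinarizer(classes=SYMPTOMS)
y = mlb.fit_transform([labels for _, labels in examples])
print(y)  # one 8-dimensional binary vector per pseudo-tweet
```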
We use the MedWeb (“Medical Natural Language Processing for Web Document”) dataset BIBREF4 that was provided as part of a subtask at the NTCIR-13 Conference BIBREF5. The data is summarised in Table TABREF1. There are a total of 2,560 pseudo-tweets in three different languages: Japanese (ja), English (en) and Chinese (zh).
|
How big is dataset used for fine-tuning model for detection of red flag medical symptoms in individual statements?
|
The answers are shown as follows:
* a total of 2,560 pseudo-tweets in three different languages: Japanese (ja), English (en) and Chinese (zh)
|
null | false
| null |
Marburg Virus Disease was first detected in which year?
|
1967
|
|
null | false
| null |
How long does it take to get to mars?
|
About 7 months.
|