| paper_id (string, lengths 10–10, nullable) | yes_no (bool, 2 classes) | paper_index (int64, 0–519, nullable) | evidence (string, lengths 0–37.7k) | question (string, lengths 4–11.7k) | answer (string, lengths 1–26k) |
|---|---|---|---|---|---|
null | false
| null |
What football team did Jerome Bettis play for?
|
The Steelers
|
|
null | false
| null |
How did the Royal Navy Protect the British Task Force from Argentine air attacks during the Falklands War?
|
The British managed to minimize their shipping losses through a combination of layered air defenses and signals intelligence. Type 42 destroyers and Sea Dart missiles provided adequate high-altitude air defense, forcing Argentine pilots to fly at very low altitudes to evade Royal Navy ship-based radar. Royal Navy submarines operating off the coast of Argentina could use electronic sensors in their periscopes to detect electronic signals from Argentine aircraft as they took off. These subs provided the fleet with enough advance warning to launch Sea Harriers on interception missions. Consequently, an Argentine pilot needed to be ready to dogfight a Harrier, dodge surface-to-air missiles, and then pull off a low-level bombing run. All three of these tasks required a level of training that most Argentine pilots lacked. Argentina ran out of fighter jets before the British ran out of ships.
|
|
null | false
| null |
What is DeFi?
|
Decentralized finance (often stylized as DeFi) offers financial instruments without relying on intermediaries such as brokerages, exchanges, or banks by using smart contracts on a blockchain. DeFi platforms allow people to lend or borrow funds from others, speculate on price movements of assets using derivatives, trade cryptocurrencies, insure against risks, and earn interest in savings-like accounts. DeFi uses a layered architecture and highly composable building blocks. Some applications promote high interest rates but are subject to high risk. Coding errors and hacks have been common in DeFi.
|
|
null | false
| 146
|
We proposed a new problem of predicting whether different people would answer with the same response to the same visual question. Towards motivating the practical implications for this problem, we analyzed nearly half a million visual questions and demonstrated there is nearly a 50/50 split between visual questions that lead to answer agreement versus disagreement. We observed that crowd disagreement arose for various types of answers (yes/no, counting, other) for many different reasons. We next proposed a system that automatically predicts whether a visual question will lead to a single versus multiple answers from a crowd. Our method outperforms a strong existing VQA system limited to estimating system uncertainty rather than crowd disagreement. Finally, we demonstrated how to employ the prediction system to accelerate the collection of diverse answers from a crowd by typically at least 20% over today's status quo of fixed redundancy allocation.
Finally, we demonstrated how to employ the prediction system to accelerate the collection of diverse answers from a crowd by typically at least 20% over today’s status quo of fixed redundancy allocation.
|
How much does their system increase the speed of the collection of diverse answers from a crowd?
|
At least 20%.
|
1809.10644
| false
| null |
On the SR dataset, we outperform BIBREF8's text-based model by 3 F1 points, while just falling short of the Text + Metadata Interleaved Training model. While we appreciate the potential added value of metadata, we believe a tweet-only classifier has merits because retrieving features from the social graph is not always tractable in production settings. Excluding the embedding weights, our model requires 100k parameters, while BIBREF8 requires 250k parameters.
Excluding the embedding weights, our model requires 100k parameters, while BIBREF8 requires 250k parameters.
|
how many parameters did their model use?
|
The answers are shown as follows:
* Excluding the embedding weights, our model requires 100k parameters
|
1806.00722
| false
| null |
We use three datasets for our experiments: IWSLT14 German-English, Turkish-English, and WMT14 English-German.
We use three datasets for our experiments: IWSLT14 German-English, Turkish-English, and WMT14 English-German.
|
what datasets were used?
|
IWSLT14 German-English, IWSLT14 Turkish-English, WMT14 English-German
|
null | false
| null |
What is the national bird of New Zealand?
|
The kiwi bird is the national bird of New Zealand. This bird is particularly interesting because it cannot fly and it runs very fast. If you see one in person, consider yourself lucky because sightings are rare during the day and in highly populated areas. It also has strong legs and no tail. New Zealanders adore this bird and often refer to it as the 'humble kiwi.'
|
|
null | false
| 200
|
Recent studies have shown the vulnerability of ML models to adversarial attacks, small perturbations which lead to misclassification of inputs. Adversarial example generation in NLP BIBREF0 is more challenging than in common computer vision tasks BIBREF1, BIBREF2, BIBREF3 due to two main reasons: the discrete nature of input space and ensuring semantic coherence with the original sentence. A major bottleneck in applying gradient based BIBREF4 or generator model BIBREF5 based approaches to generate adversarial examples in NLP is the backward propagation of the perturbations from the continuous embedding space to the discrete token space.
Recent works for attacking text models rely on introducing errors at the character level in words BIBREF6, BIBREF7 or adding and deleting words BIBREF8, BIBREF9, BIBREF10, etc. for creating adversarial examples. These techniques often result in adversarial examples which are unnatural looking and lack grammatical correctness, and thus can be easily identified by humans.
TextFooler BIBREF11 is a black-box attack that uses rule-based synonym replacement from a fixed word embedding space to generate adversarial examples. These adversarial examples do not account for the overall semantics of the sentence, and consider only the token-level similarity using word embeddings. This can lead to out-of-context and unnaturally complex replacements (see Table ), which can be easily identified by humans.
The recent advent of powerful language models BIBREF12, BIBREF13 in NLP has paved the way for using them in various downstream applications. In this paper, we present a simple yet novel technique: BAE (BERT-based Adversarial Examples), which uses a language model (LM) for token replacement to best fit the overall context. We perturb an input sentence by either replacing a token or inserting a new token in the sentence, by means of masking a part of the input and using an LM to fill in the mask (See Figure FIGREF1). BAE relies on the powerful BERT masked LM for ensuring grammatical correctness of the adversarial examples. Our attack beats the previous baselines by a large margin and confirms the inherent vulnerabilities of modern text classification models to adversarial attacks. Moreover, BAE produces richer and more natural-looking adversarial examples as it uses the semantics learned by an LM.
To the best of our knowledge, we are the first to use an LM for adversarial example generation. We summarize our major contributions as follows:
We propose BAE, a novel strategy for generating natural looking adversarial examples using a masked language model.
We introduce 4 BAE attack modes, all of which are almost always stronger than previous baselines on 7 text classification datasets.
We show that, surprisingly, just a few replace/insert operations can reduce the accuracy of even a powerful BERT-based classifier by over $80\%$ on some datasets.
In this paper, we present BAE (BERT-based Adversarial Examples), a novel technique using the BERT masked language model (MLM) for word replacements to better fit the overall context of the English language.
|
What is the BAE (BERT-based Adversarial Examples)?
|
It is a novel technique using the BERT masked language model (MLM) for word replacements to better fit the overall context of the English language.
|
null | false
| null |
The Atlantic Ocean is the second-largest of the world's five oceans, with an area of about 106,460,000 km2 (41,100,000 sq mi). It covers approximately 20% of Earth's surface and about 29% of its water surface area. It is known to separate the "Old World" of Africa, Europe, and Asia from the "New World" of the Americas in the European perception of the World.
Through its separation of Africa, Europe, and Asia from the Americas, the Atlantic Ocean has played a central role in the development of human society, globalization, and the histories of many nations. While the Norse were the first humans to cross the Atlantic, it was the 1492 expedition of Christopher Columbus that proved to be the most consequential. Columbus's expedition ushered in an age of exploration and colonization of the Americas by European powers, most notably Spain, France, Portugal, and the United Kingdom. From the 16th to 19th centuries, the Atlantic Ocean was the center of both an eponymous slave trade and the Columbian exchange while occasionally hosting naval battles. Such naval battles, as well as growing trade from regional American powers like the United States and Brazil, both increased in degree during the 20th century, and while no major military conflicts took place in the Atlantic in the present 21st century, the ocean remains a core component of trade across the world.
The Atlantic Ocean occupies an elongated, S-shaped basin extending longitudinally between Europe and Africa to the east, and North and South America to the west. As one component of the interconnected World Ocean, it is connected in the north to the Arctic Ocean, to the Pacific Ocean in the southwest, the Indian Ocean in the southeast, and the Southern Ocean in the south (other definitions describe the Atlantic as extending southward to Antarctica). The Atlantic Ocean is divided in two parts, by the Equatorial Counter Current, with the North(ern) Atlantic Ocean and the South(ern) Atlantic Ocean split at about 8°N.
|
Based on the above passage, mention the parts of the Atlantic Ocean formed by the Equatorial Counter Current. Display the results in a comma separated format.
|
North(ern) Atlantic Ocean, South(ern) Atlantic Ocean
|
null | false
| null |
HDFC Bank Limited (also known as HDB) is an Indian banking and financial services company headquartered in Mumbai. It is India's largest private sector bank by assets and the world's 10th largest bank by market capitalisation as of April 2021. It is the third largest company by market capitalisation of $127.16 billion on the Indian stock exchanges. It is also the fifteenth largest employer in India with nearly 150,000 employees.
HDFC Bank was incorporated in 1994 as a subsidiary of the Housing Development Finance Corporation, with its registered office in Mumbai, Maharashtra, India. Its first corporate office and a full-service branch at Sandoz House, Worli were inaugurated by the then Union Finance Minister, Manmohan Singh.
As of 30 June 2022, the bank's distribution network was at 6,378 branches across 3,203 cities. It has installed 430,000 POS terminals and issued 23,570,000 debit cards and 12 million credit cards in FY 2017. It has a base of 1,52,511 permanent employees as of 30 June 2022.
HDFC Bank provides a number of products and services including wholesale banking, retail banking, treasury, auto loans, two-wheeler loans, personal loans, loans against property, consumer durable loan, lifestyle loan and credit cards. Along with these, its various digital products are Payzapp and SmartBUY.
|
From the passage, identify the various products and services which are provided by HDFC Bank, including the digital products. Display the results in comma separated format.
|
wholesale banking, retail banking, treasury, auto loans, two-wheeler loans, personal loans, loans against property, consumer durable loan, lifestyle loan, credit cards, Payzapp, SmartBUY
|
1908.05969
| false
| null |
Table TABREF34 shows the inference speed of our method when implementing the sequence modeling layer with the LSTM-based, CNN-based, and Transformer-based architecture, respectively. The speed was evaluated by average sentences per second using a GPU (NVIDIA TITAN X). For a fair comparison with Lattice-LSTM and LR-CNN, we set the batch size of our method to 1 at inference time. From the table, we can see that our method has a much faster inference speed than Lattice-LSTM when using the LSTM-based sequence modeling layer, and it was also much faster than LR-CNN, which used a CNN architecture to implement the sequence modeling layer. And as expected, our method with the CNN-based sequence modeling layer showed some advantage in inference speed over those with the LSTM-based and Transformer-based sequence modeling layers.
Table TABREF34 shows the inference speed of our method when implementing the sequence modeling layer with the LSTM-based, CNN-based, and Transformer-based architecture, respectively.
From the table, we can see that our method has a much faster inference speed than Lattice-LSTM when using the LSTM-based sequence modeling layer, and it was also much faster than LR-CNN, which used a CNN architecture to implement the sequence modeling layer. And as expected, our method with the CNN-based sequence modeling layer showed some advantage in inference speed over those with the LSTM-based and Transformer-based sequence modeling layers.
|
What percentage of improvement in inference speed is obtained by the proposed method over the newest state-of-the-art methods?
|
Across 4 datasets, the best performing proposed model (CNN) achieved an average of 363% improvement over the state of the art method (LR-CNN)
|
null | false
| null |
Identify which car manufacturer is Japanese or American: GMC or Nissan
|
Nissan is Japanese, GMC is American
|
|
null | false
| 6
|
ROUGE Evaluation Results.
Table TABREF13 presents summarization performance results for the 4 extractive baselines, for the original and proposed variants of PG-net, and finally for template-summarization. Following BIBREF0, performance is evaluated using ROUGE (1, 2, and $L$) BIBREF22 on F1. The motivation for using domain transfer and data synthesis is our hypothesis (H1). Table TABREF13 supports this hypothesis. All ROUGE scores for PG-net that outperform all extractive baselines (in italics) involve tuning and/or use of synthesised data, except for one R-1 (row 18).
As for our second hypothesis (H2), table TABREF13 shows that it is a valid one. For PG-net, comparing the CNN/DM out-of-domain and Student Reflection in-domain results in rows (5 and 6) and (17 and 18) with their corresponding tuned results in rows 9 and 21, we see that fine tuning improves R-1, R-2, and R-$L$ for all courses (rows 5, 6, 9 and 17, 18, 21). Qualitatively, the examples presented in Table TABREF11 clearly show that tuning yields a more coherent and relevant summary. Over all courses, the tuned version of PG-net consistently outperforms the best baseline result for each metric (rows 9 vs. 1, 2, 3, 4 and 21 vs. 13, 14, 15, 16) except for R-2 in Stat2016.
To validate our next set of hypotheses (H3, H4, H5), we use the synthesized data in two settings: either using it for training (rows 7, 8 and 19, 20) or tuning (rows 10, 11 and 22, 23). Table TABREF13 supports H4 by showing that the proposed synthesis model outperforms the WordNet baseline in training (rows 7, 8 and 19, 20) except Stat2016, and tuning (10, 11 and 22, 23) over all courses. It also shows that while adding synthetic data from the baseline is not always helpful, adding synthetic data from the template model helps to improve both the training and the tuning process. In both CS and ENGR courses, tuning with synthetic data enhances all ROUGE scores compared to tuning with only the original data (rows 9 and 11). As for Stat2015, R-1 and R-$L$ improved, while R-2 decreased. For Stat2016, R-2 and R-$L$ improved, and R-1 decreased (rows 21 and 23). Training with both student reflection data and synthetic data compared to training with only student reflection data yields similar improvements, supporting H3 (rows 6, 8 and 18, 20). While the increase in ROUGE scores is small, our results show that enriching training data with synthetic data can benefit both the training and tuning of other models. In general, the best results are obtained when using data synthesis for both training and tuning (rows 11 and 23), supporting H5.
Finally, while the goal of our template model was to synthesize data, using it for summarization is surprisingly competitive, supporting H6. We believe that training the model with little data is doable due to the small number of parameters (logistic regression classifier only). While rows 12 and 24 are never the best results, they are close to the best involving tuning. This encourages us to enhance our template model and explore templates not so tailored to our data.
Human Evaluation Results. While automated evaluation metrics like ROUGE measure lexical similarity between machine and human summaries, humans can better measure how coherent and readable a summary is. Our evaluation study investigates whether tuning the PG-net model increases summary coherence, by asking evaluators to select which of three summaries for the same document they like most: the PG-net model trained on CNN/DM; the model trained on student reflections; and finally the model trained on CNN/DM and tuned on student reflections. 20 evaluators were recruited from our institution and asked to each perform 20 annotations. Summaries are presented to evaluators in random order. Evaluators are then asked to select the summary they feel to be most readable and coherent. Unlike ROUGE, which measures the coverage of a generated summary relative to a reference summary, our evaluators don't read the reflections or reference summary. They choose the summary that is most coherent and readable, regardless of the source of the summary. For both courses, the majority of selected summaries were produced by the tuned model (49% for CS and 41% for Stat2015), compared to (31% for CS and 30.9% for Stat2015) for CNN/DM model, and (19.7% for CS and 28.5% for Stat2015) for student reflections model. These results again suggest that domain transfer can remedy the size of in-domain data and improve performance.
All ROUGE scores for PG-net that outperform all extractive baselines (in italics) involve tuning and/or use of synthesised data, except for one R-1 (row 18).
|
Do the ROUGE scores for PG-net outperform all extractive baselines?
|
No, they don't.
|
null | false
| null |
Plasma: Free charged particles, usually in equal numbers, such as ions and electrons. Unlike gases, plasma may self-generate magnetic fields and electric currents and respond strongly and collectively to electromagnetic forces. Plasma is very uncommon on Earth (except for the ionosphere), although it is the most common state of matter in the universe.
Supercritical fluid: At sufficiently high temperatures and pressures, the distinction between liquid and gas disappears.
Degenerate matter: matter under very high pressure, supported by the Pauli exclusion principle.
Electron-degenerate matter: found inside white dwarf stars. Electrons remain bound to atoms but can transfer to adjacent atoms.
Neutron-degenerate matter: found in neutron stars. Vast gravitational pressure compresses atoms so strongly that the electrons are forced to combine with protons via inverse beta decay, resulting in a super dense conglomeration of neutrons. (Normally free neutrons outside an atomic nucleus will decay with a half-life of just under fifteen minutes, but in a neutron star, as in the nucleus of an atom, other effects stabilize the neutrons.)
Strange matter: A type of quark matter that may exist inside some neutron stars close to the Tolman–Oppenheimer–Volkoff limit (approximately 2–3 solar masses). May be stable at lower energy states once formed.
Quantum spin Hall state: a theoretical phase that may pave the way for developing electronic devices that dissipate less energy and generate less heat. This is a derivation of the quantum Hall state of matter.
Bose–Einstein condensate: a phase in which a large number of bosons all inhabit the same quantum state, in effect becoming one single wave/particle. This is a low-energy phase that can only be formed in laboratory conditions and at very low temperatures. It must be close to zero kelvin, or absolute zero. Satyendra Nath Bose and Albert Einstein predicted the existence of such a state in the 1920s, but it was not observed until 1995 by Eric Cornell and Carl Wieman.
Fermionic condensate: Similar to the Bose-Einstein condensate but composed of fermions, also known as Fermi-Dirac condensate. The Pauli exclusion principle prevents fermions from entering the same quantum state, but a pair of fermions can behave like a boson, and multiple such pairs can then enter the same quantum state without restriction.
Superconductivity: a phenomenon of exactly zero electrical resistance and expulsion of magnetic fields occurring in certain materials when cooled below a characteristic critical temperature. Superconductivity is the ground state of many elemental metals.
Superfluid: A phase achieved by a few cryogenic liquids at extreme temperature at which they become able to flow without friction. A superfluid can flow up the side of an open container and down the outside. Placing a superfluid in a spinning container will result in quantized vortices.
Supersolid: similar to a superfluid, a supersolid can move without friction but retains a rigid shape.
Quantum spin liquid: A disordered state in a system of interacting quantum spins which preserves its disorder down to very low temperatures, unlike other disordered states.
String-net liquid: Atoms in this state have unstable arrangements, like a liquid, but are still consistent in the overall pattern, like a solid.
Time crystals: A state of matter where an object can have movement even at its lowest energy state.
Rydberg polaron: A state of matter that can only exist at ultra-low temperatures and consists of atoms inside of atoms.
Black superionic ice: A state of matter that can exist under very high pressure while excited by super lasers.
|
Beyond the classical states of matter, what are the modern states of matter?
|
1. Plasma: Plasma is the most common state of matter in the universe, but it is uncommonly found on Earth. This state of matter is comprised of free charged particles, such as ions and electrons. Similar to a gaseous state, however, it may generate a magnetic field.
2. Supercritical fluid: Matter that lacks a clear distinction between being a gas and a liquid. This state is found at high temperatures and pressures.
3. Degenerate matter: A highly dense state of matter due to very high pressures. This state of matter is naturally found in stars in their final evolutionary states.
4. Bose-Einstein condensate: A phase of matter in which a large number of bosons all inhabit the same quantum state. This phase requires temperatures near absolute zero of very low density gas.
5. Fermionic condensate: Similar to Bose-Einstein condensate, except with fermions instead of bosons.
6. Superconductivity: Certain elemental materials may achieve a ground state of superconductivity where there is no electrical resistance and magnetic flux fields are expelled. While most conductors will have reduced resistance as the temperature is lowered, superconductors will abruptly achieve zero resistance at a certain critical temperature.
7. Superfluid: Some fluids may enter this state of matter where, at extremely low temperatures, they become able to flow without friction.
8. Supersolid: Similar to a superfluid, except it can move without friction in a rigid, solid shape.
9. Quantum spin liquid: A state of matter found in certain materials related to the quantum spin. In most magnetic materials, spins tend to form patterns based on how they interact with other neighboring spins. In a quantum spin liquid, however, they remain in a disordered state, never forming a pattern that prevents it from reaching a stable, ordered state.
10. String-net liquid: Atoms in this state have a consistent overall pattern like a solid, but their arrangement is unstable.
11. Time crystals: A quantum system of particles that exhibits non-kinetic motion of particles at their lowest-energy state.
12. Rydberg polaron: An exotic state of matter created at low temperatures in which a very large atom contains other ordinary atoms between the nucleus and electrons.
13. Black superionic ice: This matter is a phase of water that exists at extremely high pressures and temperatures.
|
null | false
| null |
Costco Wholesale Corporation (doing business as Costco Wholesale and also known simply as Costco) is an American multinational corporation which operates a chain of membership-only big-box retail stores (warehouse club). As of 2022, Costco is the fifth largest retailer in the world and is the world's largest retailer of choice and prime beef, organic foods, rotisserie chicken, and wine as of 2016. Costco is ranked #11 on the Fortune 500 rankings of the largest United States corporations by total revenue.
Costco's worldwide headquarters are in Issaquah, Washington, an eastern suburb of Seattle, although its Kirkland Signature house label bears the name of its former location in Kirkland. The company opened its first warehouse (the chain's term for its retail outlets) in Seattle in 1983. Through mergers, however, Costco's corporate history dates back to 1976, when its former competitor Price Club was founded in San Diego, California. As of February 2023, Costco has 848 warehouses worldwide: 584 in the United States, 107 in Canada, 40 in Mexico, 31 in Japan, 29 in the United Kingdom, 18 in South Korea, 14 in Taiwan, 14 in Australia, four in Spain, two each in France and China, and one each in Iceland, New Zealand and Sweden.
|
List the number of warehouses Costco has. Segregate based on location in a descending order. Use the format ‘Number of warehouses : Location’.
|
584 : United States
107 : Canada
40 : Mexico
31 : Japan
29 : United Kingdom
18 : South Korea
14 : Taiwan
14 : Australia
4 : Spain
2 : France
2 : China
1 : Iceland
1 : New Zealand
1 : Sweden
|
null | false
| null |
Classify each of the following as physical or digital: pencil, pen, paper, email, social media, pavers, gutters, cryptocurrency
|
Pencil: Physical
Pen: Physical
Paper: Physical
Email: Digital
Social Media: Digital
Pavers: Physical
Gutters: Physical
Cryptocurrency: Digital
|
|
null | false
| 340
|
* Equal contribution. Listing order is random.
Lemmatization is a core NLP task that involves a string-to-string transduction from an inflected word form to its citation form, known as the lemma. More concretely, consider the English sentence: The bulls are running in Pamplona. A lemmatizer will seek to map each word to a form you may find in a dictionary—for instance, mapping running to run. This linguistic normalization is important in several downstream NLP applications, especially for highly inflected languages. Lemmatization has previously been shown to improve recall for information retrieval BIBREF0 , BIBREF1 , to aid machine translation BIBREF2 , BIBREF3 and is a core part of modern parsing systems BIBREF4 , BIBREF5 .
However, the task is quite nuanced as the proper choice of the lemma is context dependent. For instance, in the sentence A running of the bulls took place in Pamplona, the word running is its own lemma, since, here, running is a noun rather than an inflected verb. Several counter-examples exist to this trend, as discussed in depth in haspelmath2013understanding. Thus, a good lemmatizer must make use of some representation of each word's sentential context. The research question in this work is, then, how do we design a lemmatization model that best extracts the morpho-syntax from the sentential context?
Recent work BIBREF7 has presented a system that directly summarizes the sentential context using a recurrent neural network to decide how to lemmatize. As N18-1126's system currently achieves state-of-the-art results, it must implicitly learn a contextual representation that encodes the necessary morpho-syntax, as such knowledge is requisite for the task. We contend, however, that rather than expecting the network to implicitly learn some notion of morpho-syntax, it is better to explicitly train a joint model to morphologically disambiguate and lemmatize. Indeed, to this end, we introduce a joint model for the introduction of morphology into a neural lemmatizer. A key feature of our model is its simplicity: Our contribution is to show how to stitch existing models together into a joint model, explaining how to train and decode the model. However, despite the model's simplicity, it still achieves a significant improvement over the state of the art on our target task: lemmatization.
Experimentally, our contributions are threefold. First, we show that our joint model achieves state-of-the-art results, outperforming (on average) all competing approaches on a 20-language subset of the Universal Dependencies (UD) corpora BIBREF8 . Second, by providing the joint model with gold morphological tags, we demonstrate that we are far from achieving the upper bound on performance—improvements on morphological tagging could lead to substantially better lemmatization. Finally, we provide a detailed error analysis indicating when and why morphological analysis helps lemmatization. We offer two tangible recommendations: one is better off using a joint model (i) for languages with less training data available and (ii) for languages that have richer morphology.
Our system and pre-trained models on all languages in the latest version of the UD corpora are released at https://sigmorphon.github.io/sharedtasks/2019/task2/.
Lemmatization is a core NLP task that involves a string-to-string transduction from an inflected word form to its citation form, known as the lemma.
|
What is the Lemmatization?
|
Lemmatization is a core NLP task that involves a string-to-string transduction from an inflected word form to its citation form, known as the lemma.
|
null | false
| null |
There has been much speculation over the functions of Triceratops' head adornments. The two main theories have revolved around use in combat and in courtship display, with the latter now thought to be the most likely primary function.
Early on, Lull postulated that the frills may have served as anchor points for the jaw muscles to aid chewing by allowing increased size and thus power for the muscles. This has been put forward by other authors over the years, but later studies do not find evidence of large muscle attachments on the frill bones.
Triceratops were long thought to have used their horns and frills in combat with predators such as Tyrannosaurus, the idea being discussed first by Charles H. Sternberg in 1917 and 70 years later by Robert Bakker. There is evidence that Tyrannosaurus did have aggressive head-on encounters with Triceratops, based on partially healed tyrannosaur tooth marks on a Triceratops brow horn and squamosal; the bitten horn is also broken, with new bone growth after the break. Which animal was the aggressor is not known. Paleontologist Peter Dodson estimates that in a battle against a bull Triceratops, the Triceratops had the upper hand and would successfully defend itself by inflicting fatal wounds to the Tyrannosaurus using its sharp horns. Tyrannosaurus is also known to have fed on Triceratops, as shown by a heavily tooth-scored Triceratops ilium and sacrum.
|
What did the Triceratops use their horns for?
|
The Triceratops used their horns for both courtship and combat, including defending against aggressive predators such as the Tyrannosaurus.
|
null | false
| 154
|
Four publicly available corpora are selected for our analysis. These corpora are based on Wikipedia, so more comparable than the others, and have already been used for the evaluation of several QA systems.
WikiQA BIBREF6 comprises questions selected from the Bing search queries, where user click data give the questions and their corresponding Wikipedia articles. The abstracts of these articles are then extracted to create answer candidates. The assumption is made that if many queries lead to the same article, it must contain the answer context; however, this assumption fails on some occasions, which makes this dataset more challenging. Since the existence of answer contexts is not guaranteed in this task, it is called answer triggering instead of answer selection.
SelQA BIBREF7 is a product of five annotation tasks through crowdsourcing. It consists of about 8K questions where a half of the questions are paraphrased from the other half, aiming to reduce contextual similarities between questions and answers. Each question is associated with a section in Wikipedia where the answer context is guaranteed, and also with five sections selected from the entire Wikipedia where the selection is made by the Lucene search engine. This second dataset does not assume the existence of the answer context, so can be used for the evaluation of answer triggering.
SQuAD BIBREF12 presents 107K+ crowdsourced questions on 536 Wikipedia articles, where the answer contexts are guaranteed to exist within the provided paragraph. It contains annotation of answer phrases as well as the pointers to the sentences including the answer phrases; thus, it can be used for both answer extraction and selection. This corpus also provides human accuracy on those questions, setting up a reasonable upper bound for machines. To avoid overfitting, the evaluation set is not publicly available although system outputs can be evaluated by their provided script.
InfoboxQA BIBREF13 gives 15K+ questions based on the infoboxes from 150 articles in Wikipedia. Each question is crowdsourced and associated with an infobox, where each line of the infobox is considered an answer candidate. This corpus emphasizes the gravity of infoboxes, which summarize arguably the most commonly asked information about those articles. Although the nature of this corpus is different from the others, it can also be used to evaluate answer selection.
SELQA (Jurczyk et al., 2016) is a product of five annotation tasks through crowdsourcing.
|
What is the SELQA?
|
SELQA is a product of five annotation tasks through crowdsourcing.
|
1909.01093
| false
| null |
Sentiment: For each cluster, its overall sentiment score is quantified by the mean of the sentiment scores among all tweets.
Sentiment: For each cluster, its overall sentiment score is quantified by the mean of the sentiment scores among all tweets.
|
How is sentiment polarity measured?
|
The answers are shown as follows:
* For each cluster, its overall sentiment score is quantified by the mean of the sentiment scores among all tweets
|
null | false
| null |
Florida literature is as varied as the state itself. Genres traditionally include fiction, nonfiction, and poetry, and some of it may be considered part of the American regional Southern literature genre. Writers affiliated with the locale of Florida include William Bartram, Elizabeth Bishop, James Branch Cabell, Hart Crane, Stephen Crane, Harry Crews, Nilo Cruz, John Fleming, Ernest Hemingway, Carl Hiaasen, Jay Hopler, Zora Neale Hurston, José Martí, Campbell McGrath, Marjorie Kinnan Rawlings, Wallace Stevens, and Harriet Beecher Stowe.
|
Give me a list of authors from Florida
|
The most well-known authors from Florida include:
- Ernest Hemingway
- John Fleming
- José Martí
- Wallace Stevens
- Harriet Beecher Stowe
- Jay Hopler
- William Bartram
- Elizabeth Bishop
- Stephen Crane
- Hart Crane
- Harry Crews
- Campbell McGrath
- Marjorie Kinnan Rawlings
- James Branch Cabell
- Nilo Cruz
- Zora Neale Hurston
|
null | false
| null |
Round Circuit Date
1 United States Circuit of the Americas, Elroy, Texas 2–3 March
2 United States Virginia International Raceway, Alton, Virginia 27–28 April
3 Canada Canadian Tire Motorsport Park, Bowmanville, Ontario 18–19 May
4 United States Sonoma Raceway, Sonoma, California 8–9 June
5 United States Watkins Glen International, Watkins Glen, New York 31 August–1 September
6 United States Road America, Elkhart Lake, Wisconsin 21–22 September
7 United States Las Vegas Motor Speedway, Las Vegas, Nevada 19–20 October
|
Given this table of races and dates, how many races took place after July? Give the answer as a number.
|
3
|
1909.00170
| true
| null |
To evaluate the influence of our hypersphere feature for off-the-shelf NER systems, we perform the NE recognition on two standard NER benchmark datasets, CoNLL2003 and ONTONOTES 5.0. Our results in Table 6 and Table 7 demonstrate the power of hypersphere features, which contribute to nearly all of the three types of entities as shown in Table 6, except for a slight drop in the PER type of BIBREF22 on a strong baseline. HS features stably enhance all strong state-of-the-art baselines, BIBREF22 , BIBREF21 and BIBREF23 by 0.33/0.72/0.23 $F_1$ point and 0.13/0.3/0.1 $F_1$ point on both benchmark datasets, CoNLL-2003 and ONTONOTES 5.0. We show that our HS feature is also comparable with the previous much more complicated LS feature, and our model surpasses their baseline (without LS feature) by 0.58/0.78 $F_1$ point with only HS features. We establish a new state-of-the-art $F_1$ score of 89.75 on ONTONOTES 5.0, while matching state-of-the-art performance with an $F_1$ score of 92.95 on the CoNLL-2003 dataset.
To evaluate the influence of our hypersphere feature for off-the-shelf NER systems, we perform the NE recognition on two standard NER benchmark datasets, CoNLL2003 and ONTONOTES 5.0.
|
Do they evaluate on NER data sets?
|
Yes.
|
null | false
| null |
Think of all the ways having a diverse group of employees helps an organization.
|
Improves overall employee morale, improves the organization's cultural competence skills, reduces fear amongst employees, increases creativity in the workplace, reduces micro-aggressions in the workplace, improves reputation of the organization, increases workplace innovation
|
|
null | false
| null |
Banksia is a genus of around 170 species in the plant family Proteaceae. These Australian wildflowers and popular garden plants are easily recognised by their characteristic flower spikes, and fruiting "cones" and heads. Banksias range in size from prostrate woody shrubs to trees up to 30 metres (100 ft) tall. They are found in a wide variety of landscapes: sclerophyll forest, (occasionally) rainforest, shrubland, and some more arid landscapes, though not in Australia's deserts.
|
Given the reference text about the genus Banksia, tell me how many species of Banksia there are and what size they can grow to.
|
There are around 170 species of Banksia. Banksias range in size from prostrate woody shrubs to trees up to 30 metres (100 ft) tall.
|
1910.11768
| false
| null |
To create our training dataset, we followed an approach similar to LASER. The dataset contains 6 languages: English, Spanish, German, Dutch, Korean and Chinese Mandarin. These languages use 3 different scripts, 2 different language orderings, and belong to 4 language families.
The dataset was created by using translations provided by Tatoeba and OpenSubtitles BIBREF16. They were chosen for their high availability in multiple languages.
To create our training dataset, we followed an approach similar to LASER. The dataset contains 6 languages: English, Spanish, German, Dutch, Korean and Chinese Mandarin.
The dataset was created by using translations provided by Tatoeba and OpenSubtitles BIBREF16.
|
Which corpus do they use?
|
The answers are shown as follows:
* The dataset was created by using translations provided by Tatoeba and OpenSubtitles BIBREF16.
|
null | false
| null |
Alison Calder (born 21 December 1969) is a Canadian poet, literary critic and educator.
Life and career
Calder was born in London, England on 21 December 1969 and grew up in Saskatoon, Saskatchewan, Canada. She studied at the University of Saskatchewan, where she earned a BA, and at the University of Western Ontario where she earned an MA and a PhD in English Literature. She was also a Distinguished Junior Scholar in Residence at the Peter Wall Institute for Advanced Studies, University of British Columbia.
In 2004, she won the RBC Bronwen Wallace Award for Emerging Writers.
She wrote a collection of essays in 2005 called History, Literature, and the Writing of the Canadian Prairies which examines literary criticism.
Her debut collection of poetry was called Wolf Tree and was published in 2007. It won the 2008 Aqua Books Lansdowne Prize for Poetry and the Eileen McTavish Sykes Award for Best First Book by a Manitoba Author at the 2008 Manitoba Book Awards. It was a finalist for the Pat Lowther Memorial Award and the Gerald Lampert Memorial Award. Her second collection, In the Tiger Park, was published in 2014 and was a finalist for the Lansdowne Prize for Poetry.
She also co-wrote the chapbook Ghost Works: Improvisations in Letters and Poems, with Jeanette Lynes.
She lives in Winnipeg, Manitoba and works at the University of Manitoba where she teaches literature and creative writing. She is married to writer Warren Cariou.
|
Tell me about the life and career of Alison Calder
|
Alison Calder (born 21 December 1969) is a Canadian poet, literary critic and educator.
Calder was born in London, England on 21 December 1969 and grew up in Saskatoon, Saskatchewan, Canada. She studied at the University of Saskatchewan, where she earned a BA, and at the University of Western Ontario where she earned an MA and a PhD in English Literature. She was also a Distinguished Junior Scholar in Residence at the Peter Wall Institute for Advanced Studies, University of British Columbia.
In 2004, she won the RBC Bronwen Wallace Award for Emerging Writers.
She wrote a collection of essays in 2005 called History, Literature, and the Writing of the Canadian Prairies which examines literary criticism.
Her debut collection of poetry was called Wolf Tree and was published in 2007. It won the 2008 Aqua Books Lansdowne Prize for Poetry and the Eileen McTavish Sykes Award for Best First Book by a Manitoba Author at the 2008 Manitoba Book Awards. It was a finalist for the Pat Lowther Memorial Award and the Gerald Lampert Memorial Award. Her second collection, In the Tiger Park, was published in 2014 and was a finalist for the Lansdowne Prize for Poetry.
She also co-wrote the chapbook Ghost Works: Improvisations in Letters and Poems, with Jeanette Lynes.
She lives in Winnipeg, Manitoba and works at the University of Manitoba where she teaches literature and creative writing. She is married to writer Warren Cariou.
|
null | false
| null |
How long does it take our sun to orbit the center of our galaxy?
|
230 million years!
|
|
null | false
| 5
|
Presenting pronunciation data in several languages to the network might create problems because different languages have different pronunciation patterns. For example, the string `real' is pronounced differently in English, German, Spanish, and Portuguese. We solve this problem by prepending each grapheme sequence with an artificial token consisting of the language's ISO 639-3 code enclosed in angle brackets. The English word `real', for example, would be presented to the system as
<eng> r e a l
The artificial token is treated simply as an element of the grapheme sequence. This is similar to the approach taken by johnson2016google in their zero-shot NMT system. However, their source-side artificial tokens identify the target language, whereas ours identify the source language. An alternative approach, used by ostling2017continuous, would be to concatenate a language embedding to the input at each time step. They do not evaluate their approach on grapheme-to-phoneme conversion.
This corpus consists of spelling-pronunciation pairs extracted from Wiktionary.
|
Where is the data collected from?
|
All are extracted from Wiktionary.
|
null | false
| null |
Why do some people like horror films?
|
Horror films can provide a safe way for us to mentally rehearse how we'd handle real danger. Some people actually enjoy the scary emotions that horror films elicit. It can provide a unique adrenaline rush, somewhat similar to other fearful activities, like white water rafting, jumping out of airplanes, amusement park rides and flying in a small aircraft. For many, horror films are their least liked movie genre for those same reasons. They might prefer a romantic comedy or action thriller, without all the blood and diabolical antagonists.
|
|
null | false
| 519
|
We used the same hyperparameters as the ones used in MADDPG, except with a reduced learning rate and tau (lr = 0.001 and τ = 0.005). We used the same parameters for all our experiments.
|
Are there any hyperparameter setting methods?
|
We based our hyperparameters on the ones used in MADDPG, but reduced the learning rate and tau to be able to reproduce their results (lr=0.001; τ=0.005). We used the same parameters for all our experiments. We would like to emphasize that the point we are trying to convey is that all the learned models are significantly below the heuristic & oracle (by as much as an order of magnitude).
|
null | false
| null |
Why and how are people choosing to electrify their homes?
|
An increasingly popular trend is to replace dirty appliances (in other words, those that consume gas or other carbon-based fuel sources) with clean energy appliances (typically those powered by electricity that is in turn generated by solar or wind power). The main motivation for doing so is to reduce a household's carbon footprint and help address global warming. Government incentives, economies of scale and technological innovation have also made it possible for consumers to save money by changing to electric appliances. The most common household appliances that have traditionally been powered by gas are the furnace, water heater, stove, air conditioning unit, and dryer. Replacements for these appliances include heat pumps that can replace both the furnace and air conditioner, electric water heaters and dryers, and induction stoves. To power these electric appliances in environmentally friendly ways, consumers are either generating the electricity on their property by installing rooftop solar panels or they are selecting green energy sources when they pick their plan from their local utility provider.
|
|
1902.00330
| false
| null |
We conduct experiments on several different types of public datasets including news and encyclopedia corpora. The training set consists of the AIDA-Train and Wikipedia datasets, where AIDA-Train contains 18448 mentions and Wikipedia contains 25995 mentions. In order to compare with the previous methods, we evaluate our model on AIDA-B and other datasets. These datasets are well-known and have been used for the evaluation of most entity linking systems. The statistics of the datasets are shown in Table 1.
AIDA-CoNLL BIBREF14 is annotated on Reuters news articles. It contains training (AIDA-Train), validation (AIDA-A) and test (AIDA-B) sets.
ACE2004 BIBREF15 is a subset of the ACE2004 Coreference documents.
MSNBC BIBREF16 contains the top two stories in the ten news categories (Politics, Business, Sports, etc.)
AQUAINT BIBREF17 is a news corpus from the Xinhua News Service, the New York Times, and the Associated Press.
WNED-CWEB BIBREF18 is randomly picked from the FACC1 annotated ClueWeb 2012 dataset.
WNED-WIKI BIBREF18 is crawled from Wikipedia pages with its original hyperlink annotation.
In order to compare with the previous methods, we evaluate our model on AIDA-B and other datasets.
AIDA-CoNLL BIBREF14 is annotated on Reuters news articles. It contains training (AIDA-Train), validation (AIDA-A) and test (AIDA-B) sets.
ACE2004 BIBREF15 is a subset of the ACE2004 Coreference documents.
MSNBC BIBREF16 contains the top two stories in the ten news categories (Politics, Business, Sports, etc.)
AQUAINT BIBREF17 is a news corpus from the Xinhua News Service, the New York Times, and the Associated Press.
WNED-CWEB BIBREF18 is randomly picked from the FACC1 annotated ClueWeb 2012 dataset.
WNED-WIKI BIBREF18 is crawled from Wikipedia pages with its original hyperlink annotation.
|
What datasets used for evaluation?
|
The answers are shown as follows:
* AIDA-B
* ACE2004
* MSNBC
* AQUAINT
* WNED-CWEB
* WNED-WIKI
|
null | false
| null |
How does one state something as an opinion, belief, or preference rather than as if something is a fact?
|
Begin the statement with "I think" or "I believe" or "In my opinion." This will make it clear that you are expressing personal thoughts about something, rather than stating it as if it were a fact.
|
|
null | false
| null |
Is it advisable to dress in costume for a musical theater audition?
|
No, this is usually not advisable. It is often considered "too much" or the sign of an inexperienced actor to come to the call in full costume. It is recommended that an auditioner dress for confidence, and for comfort and movement (especially if there will be a dance call). However, it is always good advice to keep the appearance, style and personality of the character you are most interested in top of mind when choosing how to present yourself at an audition.
|
|
null | false
| null |
Cuzco, Machu Picchu is a famous place located in which country?
|
Peru
|
|
null | false
| null |
The San Diego Film Critics Society Award for Best Cinematography is an award given by the San Diego Film Critics Society to honor the finest cinematography achievements in filmmaking.
Winners
2000s
Year Winner Cinematographer(s)
2000 Gladiator John Mathieson
2001 The Man Who Wasn't There Roger Deakins
2002 Road to Perdition Conrad L. Hall
2003 Girl with a Pearl Earring Eduardo Serra
2004 Hero (Ying xiong) Christopher Doyle
The Phantom of the Opera John Mathieson
2005 The New World Emmanuel Lubezki
2006 The Illusionist Dick Pope
2007 No Country for Old Men Roger Deakins
2008 Slumdog Millionaire Anthony Dod Mantle
2009 The Road Javier Aguirresarobe
2010s
Year Winner Cinematographer(s)
2010 Inception Wally Pfister
2011 The Tree of Life Emmanuel Lubezki
2012 Life of Pi Claudio Miranda
2013 To the Wonder Emmanuel Lubezki
2014 Nightcrawler Robert Elswit
2015 Sicario Roger Deakins
2016 Hell or High Water Giles Nuttgens
2017 Dunkirk Hoyte van Hoytema
2018 The Ballad of Buster Scruggs Bruno Delbonnel
The Rider Joshua James Richards
2019 The Lighthouse Jarin Blaschke
2020s
Year Winner Cinematographer(s)
2020 Nomadland Joshua James Richards
|
Who won the San Diego Film Critics Society Award for Best Cinematography in 2012?
|
Claudio Miranda won the San Diego Film Critics Society Award for Best Cinematography in 2012 as cinematographer for Life of Pi
|
null | false
| 2
|
Adversarial learning has been an effective mechanism to resolve the problem of large divergences between the input features of training and test examples BIBREF27 , BIBREF13 . It has been successfully applied on domain adaptation BIBREF7 , cross-lingual learning BIBREF15 and multi-task learning BIBREF17 . All these settings involve feature shift between training and testing.
In this paper, our setting is different. We are using the annotations from non-experts, which are noisy and can influence the final performance if they are not properly processed. Directly learning from the resulting corpus may adapt the neural feature extraction to the biased annotations. In this work, we assume that individual workers have their own guidelines in mind after short training. For example, a perfect worker can annotate highly consistently with an expert, while common crowdsourcing workers may be confused and have different understandings of certain contexts. Based on this assumption, we make an adaptation of the original adversarial neural network to our setting.
Our adaptation is very simple. Briefly speaking, the original adversarial learning adds an additional discriminator to classify the type of source inputs, for example, the domain category in the domain adaptation setting, while we add a discriminator to classify the annotation workers. The features from the input sentence alone are not enough for worker classification. The annotation result of the worker is also required. Thus the inputs of our discriminator are different. Here we exploit both the source sentences and the crowd-annotated NE labels as basic inputs for the worker discrimination.
In the following, we describe the proposed adversarial learning module, including both the submodels and the training method. As shown by the left part of Figure 1 , the submodel consists of four parts: (1) a common Bi-LSTM over input characters; (2) an additional Bi-LSTM to encode crowd-annotated NE label sequence; (3) a convolutional neural network (CNN) to extract features for worker discriminator; (4) output and prediction.
As shown by the left part of Figure 1, the submodel consists of four parts: (1) a common Bi-LSTM over input characters; (2) an additional Bi-LSTM to encode crowd-annotated NE label sequence; (3) a convolutional neural network (CNN) to extract features for worker discriminator; (4) output and prediction.
|
What does the submodel consist of?
|
The submodel consists of four parts: (1) a common Bi-LSTM over input characters; (2) an additional Bi-LSTM to encode crowd-annotated NE label sequence; (3) a convolutional neural network (CNN) to extract features for worker discriminator; (4) output and prediction.
|
null | false
| 18
|
Training Data. We choose WIT3's TED corpus BIBREF12 as the basis of our experiments since it might be the only high-quality parallel data for many low-resourced language pairs. TED is also multilingual in the sense that it includes numbers of talks which are commonly translated into many languages. In addition, we use a much larger corpus provided freely by WMT organizers when we evaluate the impact of our approach in a real machine translation campaign. It includes the parallel corpus extracted from the digital corpus of European Parliament (EPPS), the News Commentary (NC) and the web-crawled parallel data (CommonCrawl). While the number of sentences in popular TED corpora varies from 13 thousand to 17 thousand, the total number of sentences in those larger corpora is approximately 3 million sentences.
Neural Machine Translation Setup. All experiments have been conducted using the NMT framework Nematus. Following the work of Sennrich2016a, subword segmentation is handled in the preprocessing phase using Byte-Pair Encoding (BPE). Unless stated otherwise in specific experiments, we set the number of BPE merging operations at 39500 on the joint of source and target data. When training all NMT systems, we take out the sentence pairs exceeding 50-word length and shuffle them inside every minibatch. Our short-list vocabularies contain the 40,000 most frequent words while the others are considered rare words and handled by subword translation. We use a 1024-cell GRU layer and 1000-dimensional embeddings with dropout at every layer with the probability of 0.2 in the embedding and hidden layers and 0.1 in the input and output layers. We trained our systems using gradient descent optimization with Adadelta BIBREF13 on minibatches of size 80 and the gradient is rescaled whenever its norm exceeds 1.0. All training runs last approximately seven days if the early-stopping condition is not reached. At a certain time, an external evaluation script on BLEU BIBREF14 is conducted on a development set to decide the early-stopping condition. This evaluation script is also used to choose the model achieving the best BLEU on the development set instead of the maximal log-likelihood between the translations and target sentences while training. In translation, the framework produces $n$-best candidates and we then use a beam search with the beam size of 12 to get the best translation.
Our short-list vocabularies contain the 40,000 most frequent words while the others are considered rare words and handled by subword translation. We use a 1024-cell GRU layer and 1000-dimensional embeddings with dropout at every layer with the probability of 0.2 in the embedding and hidden layers and 0.1 in the input and output layers.
|
What is the probability of dropout in their input layer?
|
0.1.
|
null | false
| null |
Why is Phuket so famous?
|
Because of islands like Phi Phi and Krabi. It's just mountains standing still inside the sea.
|
|
null | false
| null |
Please provide me with ideas for dinner tonight.
|
Sure! Here are 2 ideas.
1) Ordering in:
You can see what restaurants in your area offer home delivery, and select something that works well for you! Some common things to order are Pizza, Sushi, Tacos, Fried Rice, and more!
2) Check in your fridge and pantry, and cook at home! Here are a few easy meal ideas:
- Pasta
- Stir Fry
- Flat bread
- Chicken, rice, and vegetables
- Wraps or sandwiches
|
|
1701.08118
| false
| null |
Participants who were shown the definition were more likely to suggest banning the tweet. In fact, participants in group one very rarely gave different answers to questions one and two (18 of 500 instances or 3.6%). This suggests that participants in that group aligned their own opinion with the definition.
Participants who were shown the definition were more likely to suggest banning the tweet. In fact, participants in group one very rarely gave different answers to questions one and two (18 of 500 instances or 3.6%). This suggests that participants in that group aligned their own opinion with the definition.
|
How did the authors demonstrate that showing a hate speech definition caused annotators to partially align their own opinion with the definition?
|
The answers are shown as follows:
* participants in group one very rarely gave different answers to questions one and two (18 of 500 instances or 3.6%)
|
null | false
| null |
What is the purpose of a shot clock in basketball?
|
The shot clock is intended to increase the pace of play. By having a limited amount of time to shoot, teams are encouraged to shoot more frequently, leading to a faster moving game.
|
|
null | false
| null |
What is a tri-state area?
|
This term is used in multiple parts of the United States to refer to regions where three states come together.
|
|
null | false
| null |
Give me some tips to impress my aspiring Father-in-law.
|
1. Try to be your authentic self. It will cause you less stress in the long run and also give the other person a chance to get to know you better.
2. Be respectful. It is always good to understand the other person's perspective, especially when they are coming from a different generation.
3. Keep the conversation light, and keep yourself up to date with recent events happening in the world. Bonus points if you also know about the other party's topics of interest.
4. Relax and try to enjoy the time together. At the end of the day, both of you would like to get to know each other better. There can always be some judgement in the equation, but with openness and mindful transparency, things should evolve constructively.
|
|
null | false
| null |
Korla, also known as Kurla or from Mandarin Chinese as Ku'erle or Kuerle, is the second largest city in Xinjiang. It is a county-level city and the seat of the Bayingolin Mongol Autonomous Prefecture, the largest prefecture of China.
Korla has existed since at least the Han Dynasty. Korla is known for its production of fragrant pears and is a production center for the Tarim oil fields.
|
What does Korla stand for?
|
Korla is the second largest city in Xinjiang, China. It is also known as Kurla or Kuerle. It is located in the Bayingolin Mongol Autonomous Prefecture. It is known for fragrant pears and oil fields.
|
null | false
| 121
|
We compare our hierarchical model against three systems. For each of them, we report the results of the best performing models presented in each paper.
$\bullet $ Wiseman BIBREF10 is a standard encoder-decoder system with a copy mechanism.
$\bullet $ Li BIBREF28 is a standard encoder-decoder with a delayed copy mechanism: text is first generated with placeholders, which are replaced by salient records extracted from the table by a pointer network.
$\bullet $ Puduppully-plan BIBREF12 acts in two steps: a first standard encoder-decoder generates a plan, i.e. a list of salient records from the table; a second standard encoder-decoder generates text from this plan.
$\bullet $ Puduppully-updt BIBREF13. It consists of a standard encoder-decoder, with an added module aimed at updating record representations during the generation process. At each decoding step, a gated recurrent network computes which records should be updated and what their new representations should be.
We test the importance of the input structure by training different variants of the proposed architecture:
$\bullet $ Flat, where we feed the input sequentially to the encoder, losing all notion of hierarchy. As a consequence, the model uses standard attention. This variant is closest to Wiseman, with the exception that we use a Transformer to encode the input sequence instead of an RNN.
$\bullet $ Hierarchical-kv is our full hierarchical model, with traditional hierarchical attention, i.e. where attention over records is computed on the full record encoding, as in equation (SECREF16).
$\bullet $ Hierarchical-k is our full hierarchical model, with key-guided hierarchical attention, i.e. where attention over records is computed only on the record key representations, as in equation (DISPLAY_FORM17).
From a general point of view, we can see from Table TABREF25 that our scenarios obtain significantly higher results in terms of BLEU over all models; our best model Hierarchical-k reaches $17.5$ vs. $16.5$ for the best baseline. This means that our models learn to generate fluent sequences of words, close to the gold descriptions, adequately picking up on domain lingo. Qualitative metrics are either better or on par with baselines. We show in Figure FIGREF29 a text generated by our best model, which can be directly compared to the gold description in Figure FIGREF1. Generation is fluent and contains domain-specific expressions. As reflected in Table TABREF25, the number of correct mentions (in green) outweighs the number of incorrect mentions (in red). Please note that, as in previous work BIBREF28, BIBREF12, BIBREF13, BIBREF10, generated texts still contain a number of incorrect facts, as well as hallucinations (in blue): sentences that have no basis in the input data (e.g. “[...] he's now averaging 22 points [...]."). While not the direct focus of our work, this highlights that any operation meant to enrich the semantics of structured data can also enrich the data with incorrect facts.
Specifically, regarding all baselines, we can outline the following statements.
$\bullet $ Our hierarchical models achieve significantly better scores on all metrics when compared to the flat architecture of Wiseman, reinforcing the crucial role of structure in data semantics and saliency. The analysis of RG metrics shows that Wiseman seems to be the most naturalistic in terms of the number of factual mentions (RG#), since it is the closest scenario to the gold value (16.83 vs. 17.31 for Wiseman and Hierarchical-k respectively). However, Wiseman achieves only $75.62$% precision, effectively mentioning on average a total of $22.25$ records (wrong or accurate), whereas our model Hierarchical-k scores a precision of $89.46$%, leading to $23.66$ total mentions, just slightly above Wiseman.
$\bullet $ The comparison between the Flat scenario and Wiseman is particularly interesting. Indeed, these two models share the same intuition of flattening the data structure. The only difference lies in the encoder mechanism: bi-LSTM vs. Transformer, for Wiseman and Flat respectively. Results show that our Flat scenario obtains a significantly higher BLEU score (16.7 vs. 14.5) and generates fluent descriptions with accurate mentions (RG-P%) that are also included in the gold descriptions (CS-R%). This suggests that introducing the Transformer architecture is a promising way to implicitly account for data structure.
$\bullet $ Our hierarchical models outperform the two-step decoders of Li and Puduppully-plan on both BLEU and all qualitative metrics, showing that capturing structure in the encoding process is more effective than predicting a structure in the decoder (i.e., planning or templating). While our models appreciably outperform in precision of factual mentions, the baseline Puduppully-plan reaches $34.28$ mentions on average, showing that incorporating modules dedicated to entity extraction leads to over-focusing on entities; this contrasts with our models, which learn to generate more balanced descriptions.
$\bullet $ The comparison with Puduppully-updt shows that dynamically updating the encoding across the generation process can lead to better Content Ordering (CO) and RG-P%. However, this does not help with Content Selection (CS), since our best model Hierarchical-k obtains slightly better scores. Indeed, Puduppully-updt updates representations after each mention, allowing it to keep track of the mention history. This guides the ordering of mentions (CO metric), with each step further limiting the number of candidate mentions (increasing RG-P%). In contrast, our model encodes saliency among records/entities more effectively (CS metric). We note that while our model encodes the data structure once and for all, Puduppully-updt recomputes the encoding at each step via the updates and therefore significantly increases computational complexity. Combined with its RG-# score of $30.11$, we argue that our model is simpler, and obtains fluent descriptions with accurate mentions in a more human-like fashion.
We would also like to draw attention to the number of parameters used by these architectures. We note that our scenarios rely on a lower number of parameters (14 million) compared to all baselines (ranging from 23 to 45 million). This outlines the effectiveness of the design of our model, which relies on structure encoding, in contrast to other approaches that try to learn the structure of data/descriptions from a linearized encoding.
We compare our hierarchical model against three systems.
|
How many systems do they compare their hierarchical model against?
|
Three.
|
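To make the difference between the Hierarchical-kv and Hierarchical-k variants concrete, the sketch below shows key-guided hierarchical attention in PyTorch: attention over records is scored against record key representations only, while the context vector is assembled from the full record encodings. Tensor shapes and names are illustrative assumptions, not the authors' code.

```python
# Key-guided hierarchical attention (Hierarchical-k style), sketched.
# Scores over records come from key embeddings; values are full encodings.
import torch
import torch.nn.functional as F

def key_guided_attention(query, entity_enc, record_keys, record_vals):
    # query:       B x H          (decoder state)
    # entity_enc:  B x E x H      (one vector per entity)
    # record_keys: B x E x R x H  (key embeddings per record)
    # record_vals: B x E x R x H  (full record encodings)
    ent_scores = torch.einsum('bh,beh->be', query, entity_enc)
    alpha = F.softmax(ent_scores, dim=-1)                  # over entities
    rec_scores = torch.einsum('bh,berh->ber', query, record_keys)
    beta = F.softmax(rec_scores, dim=-1)                   # over records
    # weight full record encodings by both attention levels
    ctx = torch.einsum('be,ber,berh->bh', alpha, beta, record_vals)
    return ctx
```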
null | false
| null |
What are some useful ways to repurpose a cardboard box?
|
That depends on the person and what they find useful. I might find a box useful for clearing my desk of all the mess I accumulate throughout the week. Meanwhile, my kids would much prefer to turn the box into a play house, a spaceship, or maybe a jail to imprison each other in. A cat might repurpose it as a hiding place or as a new scratching toy.
|
|
null | false
| 146
|
What would be possible if a person had an oracle that could immediately provide the answer to any question about the visual world? Sight-impaired users could quickly and reliably figure out the denomination of their currency and so whether they spent the appropriate amount for a product BIBREF0 . Hikers could immediately learn about their bug bites and whether to seek out emergency medical care. Pilots could learn how many birds are in their path to decide whether to change course and so avoid costly, life-threatening collisions. These examples illustrate several of the interests from a visual question answering (VQA) system, including tackling problems that involve classification, detection, and counting. More generally, the goal for VQA is to have a single system that can accurately answer any natural language question about an image or video BIBREF1 , BIBREF2 , BIBREF3 .
Entangled in the dream of a VQA system is an unavoidable issue that, when asking multiple people a visual question, sometimes they all agree on a single answer while other times they offer different answers (Figure FIGREF1 ). In fact, as we show in the paper, these two outcomes arise in approximately equal proportions in today's largest publicly-shared VQA benchmark that contains over 450,000 visual questions. Figure FIGREF1 illustrates that human disagreements arise for a variety of reasons including different descriptions of the same concept (e.g., “minor" and “underage"), different concepts (e.g., “ghost" and “photoshop"), and irrelevant responses (e.g., “no").
Our goal is to account for whether different people would agree on a single answer to a visual question to improve upon today's VQA systems. We propose multiple prediction systems to automatically decide whether a visual question will lead to human agreement and demonstrate the value of these predictions for a new task of capturing the diversity of all plausible answers with less human effort.
Our work is partially inspired by the goal to improve how to employ crowds as the computing power at run-time. Towards satisfying existing users, gaining new users, and supporting a wide range of applications, a crowd-powered VQA system should be low cost, have fast response times, and yield high quality answers. Today's status quo is to assume a fixed number of human responses per visual question and so a fixed cost, delay, and potential diversity of answers for every visual question BIBREF2 , BIBREF0 , BIBREF4 . We instead propose to dynamically solicit the number of human responses based on each visual question. In particular, we aim to accrue additional costs and delays from collecting extra answers only when extra responses are needed to discover all plausible answers. We show in our experiments that our system saves 19 40-hour work weeks and $1800 to answer 121,512 visual questions, compared to today's status quo approach BIBREF0 .
Our work is also inspired by the goal to improve how to employ crowds to produce the information needed to train and evaluate automated methods. Specifically, researchers in fields as diverse as computer vision BIBREF2 , computational linguistics BIBREF1 , and machine learning BIBREF3 rely on large datasets to improve their VQA algorithms. These datasets include visual questions and human-supplied answers. Such data is critical for teaching machine learning algorithms how to answer questions by example. Such data is also critical for evaluating how well VQA algorithms perform. In general, “bigger" data is better. Current methods to create these datasets assume a fixed number of human answers per visual question BIBREF2 , BIBREF4 , thereby either compromising on quality by not collecting all plausible answers or cost by collecting additional answers when they are redundant. We offer an economical way to spend a human budget to collect answers from crowd workers. In particular, we aim to actively allocate additional answers only to visual questions likely to have multiple answers.
The key contributions of our work are as follows:
We show in our experiments that our system saves 19 40-hour work weeks and $1800 to answer 121,512 visual questions, compared to today's status quo approach.
|
Did their system save money compared to today's status quo approach?
|
Yes, $1800 has been saved.
|
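The savings quoted above come from allocating extra crowd answers only where disagreement is likely. Below is a minimal sketch of that allocation policy; the predictor, threshold, and answer counts are illustrative assumptions, not the paper's exact settings:

```python
# Dynamic redundancy allocation: spend extra crowd answers only on
# visual questions flagged as likely to yield multiple answers.
def allocate_answers(visual_questions, predict_disagreement,
                     base_n=1, extra_n=4, threshold=0.5):
    plan = {}
    for vq in visual_questions:
        p = predict_disagreement(vq)  # P(crowd gives multiple answers)
        plan[vq] = base_n + (extra_n if p >= threshold else 0)
    return plan

# toy usage: a dummy predictor that flags counting questions
vqs = ["what color is the car?", "how many birds are there?"]
plan = allocate_answers(vqs, lambda q: 0.9 if "how many" in q else 0.2)
print(plan)  # {'what color...': 1, 'how many...': 5}
```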
null | false
| null |
What are fun things to do in New York City
|
1. Plan a trip to Central Park
2. Go shopping on Fifth Avenue
3. Get a New York City Bagel
4. Take the Subway to visit the Statue of Liberty
|
|
null | false
| null |
Who was the first man in space
|
Yuri Gagarin
|
|
null | false
| null |
What kind of activity do good sales people do?
|
Good sales people build good relationships first by asking questions and finding the customer's interests. Then, they start with discovery questions. They don't start the product presentation until they finish discovery and have made space to listen to their customers. When asking discovery questions they often use SPIN: S stands for situational questions, P stands for problem questions, I stands for implication questions, N stands for need questions.
When they talk about their products they always show how the product solves the customer's problems. They mention "You" or the customer's name frequently so that the customer can imagine how they could change with that product.
|
|
null | false
| null |
What are the words of House Frey?
|
"We Stand Together"
|
|
null | false
| null |
Is voluntary childlessness rising? If yes, then what are the major drivers? What could be its impact?
|
The fast-paced life and easy availability of effective contraception have increased the number of couples opting for a childfree life. There are numerous reasons why being childfree is trending. The decision to have a child comes with responsibilities, and many couples do not want to embrace them. Some couples avoid the idea of having children based on past experience in the family, or simply because not having a child opens new avenues to explore and spend time doing other meaningful things in life. Unwillingness to bear the financial burden of raising a child is another reason why some couples avoid the idea.
Nevertheless, the trend reflects a rise in the deterioration of interpersonal relationships. It helps prevent overpopulation and indirectly impacts events like recessions, inflation, and global warming, and in a consumer-centric world the resonance of this trend on the global economy cannot be ignored.
|
|
null | false
| null |
Which is a bird or fish: Redhorse sucker, Red-necked grebe
|
Red-necked grebe is a bird, Redhorse sucker is a fish
|
|
null | false
| null |
What are the words of House Wendwater?
|
"For All Seasons"
|
|
1909.03526
| false
| null |
We view different varieties of Arabic as different domains, and hence introduce a simple, yet effective, `in-domain' training measure where we further pre-train BERT on a dataset closer to task domain (in that it involves dialectal tweet data).
We view different varieties of Arabic as different domains, and hence introduce a simple, yet effective, `in-domain' training measure where we further pre-train BERT on a dataset closer to task domain (in that it involves dialectal tweet data).
|
What in-domain data is used to continue pre-training?
|
The answers are shown as follows:
* dialectal tweet data
|
null | false
| null |
Classify the following animals as mammal or not: whale, kangaroo, mouse, snake, fish, human, koala
|
whale - mammal
kangaroo - mammal
mouse - mammal
snake - not mammal
fish - not mammal
human - mammal
koala - mammal
|
|
1603.07044
| false
| null |
FLOAT SELECTED: Table 4: Compared with other systems (bold is best).
FLOAT SELECTED: Table 4: Compared with other systems (bold is best).
|
How much performance gap between their approach and the strong handcrafted method?
|
0.007 MAP on Task A, 0.032 MAP on Task B, 0.055 MAP on Task C
|
null | false
| null |
Identify which instrument is string or woodwind: Asor, Aulochrome
|
Aulochrome is woodwind, Asor is string.
|
|
null | false
| null |
What are the words of House Bolton?
|
"Our Blades are Sharp"
|
|
1911.01680
| false
| null |
We compare our method with the models trained using Adobe internal NLU tool, Pytext BIBREF18 and Rasa BIBREF19 NLU tools. Table TABREF22 shows the results on the Test set. Our model improves the F1 score by almost 2%, which corresponds to a 12.3% error rate reduction. This improvement proves the effectiveness of using contextual information for the task of slot filling.
We compare our method with the models trained using Adobe internal NLU tool, Pytext BIBREF18 and Rasa BIBREF19 NLU tools.
|
What are the baselines?
|
The answers are shown as follows:
* Adobe internal NLU tool
* Pytext
* Rasa
|
null | false
| null |
Fissures, vents, and plugs are all associated with which geological feature?
|
Volcanos.
|
|
null | false
| 11
|
Our premise is that we can generate informative maps using geolocated information available on social media; therefore, we guide the blog collection process with the constraint that we only accept blogs that have specific location information. Moreover, we aim to find blogs belonging to writers from all 50 U.S. states, which will allow us to build U.S. maps for various dimensions of interest.
We first started by collecting a set of profiles of bloggers that met our location specifications by searching individual states on the profile finder on http://www.blogger.com. Starting with this list, we can locate the profile page for a user, and subsequently extract additional information, which includes fields such as name, email, occupation, industry, and so forth. It is important to note that the profile finder only identifies users that have an exact match to the location specified in the query; we thus built and ran queries that used both state abbreviations (e.g., TX, AL), as well as the states' full names (e.g., Texas, Alabama).
After completing all the processing steps, we identified 197,527 bloggers with state location information. For each of these bloggers, we found their blogs (note that a blogger can have multiple blogs), for a total of 335,698 blogs. For each of these blogs, we downloaded the 21 most recent blog postings, which were cleaned of HTML tags and tokenized, resulting in a collection of 4,600,465 blog posts.
After completing all the processing steps, we identified 197,527 bloggers with state location information. For each of these bloggers, we found their blogs (note that a blogger can have multiple blogs), for a total of 335,698 blogs. For each of these blogs, we downloaded the 21 most recent blog postings, which were cleaned of HTML tags and tokenized, resulting in a collection of 4,600,465 blog posts.
|
Where did the data set the authors use come from?
|
They identified 197,527 bloggers with state location information. For each of these bloggers, they found their blogs, for a total of 335,698 blogs. For each of these blogs, they downloaded the 21 most recent blog postings, which were cleaned of HTML tags and tokenized, resulting in a collection of 4,600,465 blog posts.
|
null | false
| null |
What are the 8 categories of scientific classification?
|
Levels of Classification. The classification system commonly used today is based on the Linnaean system and has eight levels of taxa; from the most general to the most specific, these are domain, kingdom, phylum (plural, phyla), class, order, family, genus (plural, genera), and species.
|
|
null | false
| null |
Hemoglobin (haemoglobin in British English), abbreviated Hb or Hgb, is the iron-containing oxygen-transport metalloprotein present in red blood cells (erythrocytes) of almost all vertebrates (the exception being the fish family Channichthyidae) as well as the tissues of some invertebrates. Hemoglobin in blood carries oxygen from the respiratory organs (e.g. lungs or gills) to the rest of the body (i.e. tissues). There it releases the oxygen to permit aerobic respiration to provide energy to power functions of an organism in the process called metabolism. A healthy individual human has 12 to 20 grams of hemoglobin in every 100 mL of blood.
|
Tell me some facts about hemoglobin with the given text as a reference
|
1. Hemoglobin (Hb/Hgb) is the iron-containing oxygen-transport metalloprotein present in the red blood cells (erythrocytes) of almost all vertebrates (the exception being the fish family Channichthyidae) as well as the tissues of some invertebrates.
2. Hemoglobin in blood carries oxygen from the respiratory organs (e.g. lungs or gills) to the rest of the body (i.e. tissues).
3. There it releases the oxygen to permit aerobic respiration, which provides energy to power the functions of an organism in the process called metabolism.
4. A healthy human has 12 to 20 grams of hemoglobin in every 100 mL of blood.
|
null | false
| null |
Who invented Calculus?
|
Calculus was developed independently by Isaac Newton and Gottfried Wilhelm Leibniz at around the same time. Later work by Cauchy, Weierstrass, and Riemann brought full rigor to Calculus as a distinct branch of mathematics.
|
|
null | false
| null |
What is the fastest way to travel between the United States and Croatia?
|
The fastest way to travel would be by airplane, which would be faster than a boat trip
|
|
null | false
| null |
What are a few uses of pot (THC) gummies?
|
-Gummies are used medicinally for patients to cope with chronic pain
-Gummies are used to self-medicate as a way to treat mental health issues such as depression and anxiety
-Gummies are used to prevent seizures
-Gummies are also used for fun, similar to how people use alcohol
|
|
null | false
| null |
What is the political philosophy of isolationists?
|
The isolationist philosophy is to focus within and not spend time or energy on the affairs of other nations. It was a popular philosophy in the US during the rebuilding after the Depression.
|
|
null | false
| null |
What are the three main states of matter and how does it transition between them?
|
The three main states of matter are solid, liquid, and gas. Matter absorbs energy as it changes state from solid to liquid and liquid to gas. When matter changes from gas to liquid or liquid to solid, it releases energy. Temperature and pressure changes are the key influencers of these state changes.
|
|
null | false
| null |
What are top JavaScript frameworks?
|
- React
- Angular
- Vue.js
- Next.js
- Svelte
|
|
1912.01673
| false
| null |
In the second round, we collected 293 annotations from 12 annotators. After Korektor, there are 4262 unique sentences (including 150 seed sentences) that form the COSTRA 1.0 dataset. Statistics of individual annotators are available in tab:statistics.
In the second round, we collected 293 annotations from 12 annotators. After Korektor, there are 4262 unique sentences (including 150 seed sentences) that form the COSTRA 1.0 dataset.
|
How are possible sentence transformations represented in dataset, as new sentences?
|
Yes, as new sentences.
|
null | false
| 137
|
We build our bilingual LMs, named RAMEN, starting from BERT$_{\textsc {base}}$, BERT$_{\textsc {large}}$, RoBERTa$_{\textsc {base}}$, and RoBERTa$_{\textsc {large}}$ pre-trained models. Using BERT$_{\textsc {base}}$ allows us to compare the results with mBERT model. Using BERT$_{\textsc {large}}$ and RoBERTa allows us to investigate whether the performance of the target LM correlates with the performance of the source LM. We evaluate our models on two cross-lingual zero-shot tasks: (1) Cross-lingual Natural Language Inference (XNLI) and (2) dependency parsing.
Since pretrained models operate on the subword level, we need to estimate subword translation probabilities.
|
Why do authors need to estimate subword translation probabilities?
|
Since pretrained models operate on the subword level.
|
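One simple way to estimate subword translation probabilities, sketched below, is to spread word-level translation probabilities over the subword pairs produced by the source and target tokenizers and renormalize. This is an illustrative approximation, not necessarily the paper's exact estimator; the tokenizer functions and word-pair list are assumed inputs:

```python
# Estimate subword translation probabilities from word-level ones by
# distributing each word pair's probability over its subword pairs.
from collections import defaultdict

def subword_translation_probs(word_pairs, src_tok, tgt_tok):
    counts = defaultdict(lambda: defaultdict(float))
    for src_word, tgt_word, p in word_pairs:  # p = word translation prob
        for s in src_tok(src_word):
            for t in tgt_tok(tgt_word):
                counts[s][t] += p
    probs = {}
    for s, row in counts.items():
        z = sum(row.values())
        probs[s] = {t: c / z for t, c in row.items()}
    return probs
```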
null | false
| null |
What is the fastest car in the world?
|
The fastest production car in the world is currently the Bugatti Chiron Super Sport 300+ thanks to its 304 mph top speed, although the Koenigsegg Jesko Absolut could topple the Bugatti from that position in the near future.
|
|
null | false
| null |
Give me a list of the Adventure motorbikes that I can buy on my LAMS (Learner Approved Motorcycle Scheme) restriction, living in Australia
|
This is not a comprehensive list but you can use this as a starting point:
Kawasaki Versys-X 300
Kawasaki KLR650
Honda CRF300 Rally
BMW G 310 GS
KTM 390 Adventure
Honda CB500XA
Yamaha XT250
Yamaha WR250R
Suzuki V-Strom 650XT
Royal Enfield Himalayan
Suzuki DRZ400/DR650
|
|
null | false
| 27
|
In most other analyses and datasets, the idea of out-of-scope data is not considered, and instead the output classes are intended to cover all possible queries (e.g., TREC BIBREF7). Recent work by BIBREF8 considers a similar problem they call out-of-distribution detection. They use other datasets or classes excluded during training to form the out-of-distribution samples. This means that the out-of-scope samples are from a small set of coherent classes that differ substantially from the in-distribution samples. Similar experiments were conducted for evaluating unknown intent discovery models in BIBREF9. In contrast, our out-of-scope queries cover a broad range of phenomena and are similar in style and often similar in topic to in-scope queries, representing things a user might say given partial knowledge of the capabilities of a system.
Table TABREF20 compares our dataset with other short-query intent classification datasets. The Snips BIBREF0 dataset and the dataset presented in BIBREF10 are the most similar to the in-scope part of our work, with the same type of conversational agent requests. Like our work, both of these datasets were bootstrapped using crowdsourcing. However, the Snips dataset has only a small number of intents and an enormous number of examples of each. Snips does present a low-data variation, with 70 training queries per intent, in which performance drops slightly. The dataset presented in BIBREF10 has a large number of intent classes, yet also contains a wide range of samples per intent class (ranging from 24 to 5,981 queries per intent), and so is not constrained in all cases.
BIBREF11 created datasets with constrained training data, but with very few intents, presenting a very different type of challenge. We also include the TREC query classification datasets BIBREF7, which have a large set of labels, but they describe the desired response type (e.g., distance, city, abbreviation) rather than the action intents we consider. Moreover, TREC contains only questions and no commands. Crucially, none of the other datasets summarized in Table TABREF20 offer a feasible way to evaluate out-of-scope performance.
The Dialog State Tracking Challenge (DSTC) datasets are another related resource. Specifically, DSTC 1 BIBREF12, DSTC 2 BIBREF13, and DSTC 3 BIBREF14 contain “chatbot style" queries, but the datasets are focused on state tracking. Moreover, most if not all queries in these datasets are in-scope. In contrast, the focus of our analysis is on both in- and out-of-scope queries that challenge a virtual assistant to determine whether it can provide an acceptable response.
The Snips (Coucke et al., 2018) dataset and the dataset presented in Liu et al. (2019) are the most similar to the in-scope part of our work, with the same type of conversational agent requests.
|
Is the Snips dataset similar to the in-scope part of the work in this paper?
|
Yes, it is.
|
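A common baseline for the out-of-scope setting described here is confidence thresholding: a query is labeled out-of-scope when the classifier's top softmax probability falls below a threshold. A minimal sketch, with an assumed threshold and toy probabilities:

```python
# Out-of-scope detection by softmax confidence thresholding.
import numpy as np

def predict_with_oos(probs, labels, threshold=0.7):
    # probs: N x K softmax outputs over the K in-scope intents
    top = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    return [labels[i] if c >= threshold else "out_of_scope"
            for i, c in zip(top, conf)]

probs = np.array([[0.9, 0.05, 0.05], [0.4, 0.35, 0.25]])
print(predict_with_oos(probs, ["weather", "alarm", "music"]))
# ['weather', 'out_of_scope']
```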
null | false
| null |
what is rain ?
|
Water from various water bodies like lakes and oceans, along with water from plants and trees, evaporates as vapour due to the heat from the sun. This rises, cools, and changes into tiny droplets of water which form clouds. Multiple tiny droplets join to form bigger water drops, and once they become heavy enough, they come down as rain. Small droplets fall as drizzle and the bigger ones as rain!
|
|
null | false
| null |
Classify each of the following fruits and vegetables as either green or red color: cucumber, radish, watermelon, lettuce, tomato, avocado, strawberry, zucchini, broccoli, pomegranate.
|
Red fruits and vegetables include: pomegranate, radish, strawberry, tomato, watermelon.
Green fruits and vegetables include: avocado, broccoli, cucumber, lettuce, zucchini.
|
|
null | false
| null |
Tell me what to put in a smoothie.
|
A smoothie is primarily made up of fruit or vegetables, liquid, and toppings. The fruit can be frozen or fresh, and good options are bananas, strawberries, blueberries, and raspberries. To add some health benefits, you can also add vegetables like spinach. Adding liquid gives the smoothie a more drinkable consistency, and you can use either water, milk, or milk alternatives. Greek yogurt also adds protein. Finally, for toppings, you can add chia seeds or flaxseeds to give your smoothie some crunch.
|
|
1808.05902
| false
| null |
With the purpose of comparing the proposed model with a popular state-of-the-art approach for image classification, for the LabelMe dataset, the following baseline was introduced:
Bosch 2006 (mv): This baseline is similar to one in BIBREF33 . The authors propose the use of pLSA to extract the latent topics, and the use of a k-nearest neighbor (kNN) classifier on the documents' topic distributions. For this baseline, unsupervised LDA is used instead of pLSA, and the labels from the different annotators for kNN (with INLINEFORM0 ) are aggregated using majority voting (mv).
The results obtained by the different approaches for the LabelMe data are shown in Fig. FIGREF94 , where the svi version is using mini-batches of 200 documents.
Analyzing the results for the Reuters-21578 and LabelMe data, we can observe that MA-sLDAc outperforms all the baselines, with slightly better accuracies for the batch version, especially in the Reuters data. Interestingly, the second best results are consistently obtained by the multi-annotator approaches, which highlights the need for accounting for the noise and biases of the answers of the different annotators.
Both the batch and the stochastic variational inference (svi) versions of the proposed model (MA-sLDAc) are compared with the following baselines:
[itemsep=0.02cm]
LDA + LogReg (mv): This baseline corresponds to applying unsupervised LDA to the data, and learning a logistic regression classifier on the inferred topics distributions of the documents. The labels from the different annotators were aggregated using majority voting (mv). Notice that, when there is a single annotator label per instance, majority voting is equivalent to using that label for training. This is the case of the 20-Newsgroups' simulated annotators, but the same does not apply for the experiments in Section UID89 .
LDA + Raykar: For this baseline, the model of BIBREF21 was applied using the documents' topic distributions inferred by LDA as features.
LDA + Rodrigues: This baseline is similar to the previous one, but uses the model of BIBREF9 instead.
Blei 2003 (mv): The idea of this baseline is to replicate a popular state-of-the-art approach for document classification. Hence, the approach of BIBREF0 was used. It consists of applying LDA to extract the documents' topic distributions, which are then used to train an SVM. Similarly to the previous approach, the labels from the different annotators were aggregated using majority voting (mv).
sLDA (mv): This corresponds to using the classification version of sLDA BIBREF2 with the labels obtained by performing majority voting (mv) on the annotators' answers.
With the purpose of comparing the proposed model with a popular state-of-the-art approach for image classification, for the LabelMe dataset, the following baseline was introduced:
Bosch 2006 (mv): This baseline is similar to one in BIBREF33 . The authors propose the use of pLSA to extract the latent topics, and the use of a k-nearest neighbor (kNN) classifier on the documents' topic distributions. For this baseline, unsupervised LDA is used instead of pLSA, and the labels from the different annotators for kNN (with INLINEFORM0 ) are aggregated using majority voting (mv).
The results obtained by the different approaches for the LabelMe data are shown in Fig. FIGREF94 , where the svi version is using mini-batches of 200 documents.
Analyzing the results for the Reuters-21578 and LabelMe data, we can observe that MA-sLDAc outperforms all the baselines, with slightly better accuracies for the batch version, especially in the Reuters data. Interestingly, the second best results are consistently obtained by the multi-annotator approaches, which highlights the need for accounting for the noise and biases of the answers of the different annotators.
Both the batch and the stochastic variational inference (svi) versions of the proposed model (MA-sLDAc) are compared with the following baselines:
[itemsep=0.02cm]
LDA + LogReg (mv): This baseline corresponds to applying unsupervised LDA to the data, and learning a logistic regression classifier on the inferred topics distributions of the documents. The labels from the different annotators were aggregated using majority voting (mv). Notice that, when there is a single annotator label per instance, majority voting is equivalent to using that label for training. This is the case of the 20-Newsgroups' simulated annotators, but the same does not apply for the experiments in Section UID89 .
LDA + Raykar: For this baseline, the model of BIBREF21 was applied using the documents' topic distributions inferred by LDA as features.
LDA + Rodrigues: This baseline is similar to the previous one, but uses the model of BIBREF9 instead.
Blei 2003 (mv): The idea of this baseline is to replicate a popular state-of-the-art approach for document classification. Hence, the approach of BIBREF0 was used. It consists of applying LDA to extract the documents' topic distributions, which are then used to train an SVM. Similarly to the previous approach, the labels from the different annotators were aggregated using majority voting (mv).
sLDA (mv): This corresponds to using the classification version of sLDA BIBREF2 with the labels obtained by performing majority voting (mv) on the annotators' answers.
|
what are the state of the art approaches?
|
The answers are shown as follows:
* Bosch 2006 (mv)
* LDA + LogReg (mv)
* LDA + Raykar
* LDA + Rodrigues
* Blei 2003 (mv)
* sLDA (mv)
|
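As an illustration of the simplest baseline listed above, LDA + LogReg (mv), here is a hedged scikit-learn sketch: unsupervised LDA topic distributions as features and crowd labels aggregated by majority voting. The hyperparameters and toy data are assumptions for illustration:

```python
# LDA + LogReg (mv) baseline: LDA topics as features, majority-voted labels.
from collections import Counter
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def majority_vote(annotations):
    # annotations: list of label lists, one list per document
    return [Counter(a).most_common(1)[0][0] for a in annotations]

docs = ["the game was great", "stocks fell sharply", "the team won again"]
crowd = [["sports", "sports", "finance"], ["finance"], ["sports", "sports"]]

model = make_pipeline(
    CountVectorizer(),
    LatentDirichletAllocation(n_components=2, random_state=0),
    LogisticRegression(max_iter=1000),
)
model.fit(docs, majority_vote(crowd))
print(model.predict(["the match ended in a draw"]))
```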
null | false
| null |
Give me a bulleted list of ideas for a viral youtube video
|
• Film yourself biking from California to New York
• Create summarizations and synopses of high profile current events and supplement them with high quality editing or animations
• Document your journey of mastering a new skill, like learning an instrument, and provide large-scale time lapses where you go from beginner to advanced at the skill.
• Film yourself attempting to infiltrate and work for a venue or event, like attempting to sneak into a fashion show and pretending to be, or entering a coffee shop and pretending to be a newly-hired barista
|
|
null | false
| 207
|
Word Sense Disambiguation (WSD) is an important problem in Natural Language Processing (NLP), both in its own right and as a stepping stone to other advanced tasks in the NLP pipeline, with applications such as machine translation BIBREF0 and question answering BIBREF1 . WSD specifically deals with identifying the correct sense of a word, among a set of given candidate senses for that word, when it is presented in a brief narrative (surrounding text) which is generally referred to as context. Consider the ambiguous word `cold'. In the sentence “He started to give me a cold shoulder after that experiment”, the possible senses for cold can be cold temperature (S1), a cold sensation (S2), common cold (S3), or a negative emotional reaction (S4). Therefore, the ambiguous word cold is specified along with the sense set {S1, S2, S3, S4}, and our goal is to identify the correct sense S4 (the closest meaning) for this specific occurrence of cold after considering the semantic and syntactic information of its context.
In this effort, we develop our supervised WSD model that leverages a Bidirectional Long Short-Term Memory (BLSTM) network. This network works with neural sense vectors (i.e. sense embeddings), which are learned during model training, and employs neural word vectors (i.e. word embeddings), which are learned through an unsupervised deep learning approach called GloVe (Global Vectors for word representation) BIBREF2 for the context words. By evaluating our one-model-fits-all WSD network over the public gold standard dataset of SensEval-3 BIBREF3 , we demonstrate that the accuracy of our model in terms of F-measure is comparable with the state-of-the-art WSD algorithms'.
We outline the organization of the rest of the paper as follows. In Section 2, we briefly explore earlier efforts in WSD and discuss recent approaches that incorporate deep neural networks and word embeddings. Our main model, which employs a BLSTM with sense and word embeddings, is detailed in Section 3. We then present our experiments and results in Section 4, supported by a discussion of how to avoid some drawbacks of the current model in order to achieve higher accuracies and require less training data, which is desirable. Finally, in Section 5, we conclude with some future research directions for the construction of sense embeddings as well as applications of such a model in other domains such as biomedicine.
This network works with neural sense vectors (i.e. sense embeddings), which are learned during model training, and employs neural word vectors (i.e. word embeddings), which are learned through an unsupervised deep learning approach called GloVe (Global Vectors for word representation) for the context words.
|
What vectors does the BLSTM network use?
|
Neural sense vectors.
|
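A minimal sketch of the scoring idea behind such a model follows: a Bi-LSTM over pretrained (GloVe-style) context word vectors yields a context representation, which is scored against learned sense embeddings of the ambiguous word. The dimensions, the mean pooling, and the dot-product scorer are illustrative assumptions, not the paper's exact design:

```python
# Bi-LSTM WSD sketch: context encoding scored against sense embeddings.
import torch
import torch.nn as nn

class BLSTMSenseScorer(nn.Module):
    def __init__(self, n_senses, word_dim=100, hid=128):
        super().__init__()
        self.lstm = nn.LSTM(word_dim, hid, bidirectional=True,
                            batch_first=True)
        self.sense_emb = nn.Embedding(n_senses, 2 * hid)  # learned in training

    def forward(self, context_vecs, candidate_senses):
        # context_vecs: B x T x word_dim (pretrained GloVe vectors)
        # candidate_senses: B x S sense ids for the target word
        h, _ = self.lstm(context_vecs)
        ctx = h.mean(dim=1)                             # B x 2H
        senses = self.sense_emb(candidate_senses)       # B x S x 2H
        return torch.einsum('bh,bsh->bs', ctx, senses)  # sense scores
```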
null | false
| null |
The Naked Brothers Band is an American musical comedy television series created by Polly Draper, which aired on Nickelodeon from February 3, 2007, to June 13, 2009. It depicts the daily lives of Draper's sons, who lead a faux world-renowned children's rock band in New York City. As a mockumentary, the storyline is an embellished satire of their real lives, and the fictional presence of a camera is often acknowledged. The show stars Nat Wolff and Alex Wolff, the lead singer-songwriter and drummer, respectively. Nat's fictional female interest (Allie DiMeco) and real-life friends Thomas Batuello, David Levi, and Cooper Pillot, as well as Qaasim Middleton—who has no prior acquaintance with the family—are featured as the other band members, with Draper's jazz musician husband Michael Wolff as his sons' widowed accordion-playing dad and her niece Jesse Draper portraying the group's babysitter.
|
Give me a summary of the Naked Brothers Band.
|
The Naked Brothers Band is a TV show about the lives of Draper's sons. The storyline is a satirical version of their real lives, and the show aired on Nickelodeon from 2007 to 2009.
|
null | false
| 151
|
We used sentencepiece BIBREF4 to parse text into sub-word units and reduce the possible out-of-vocabulary terms in the data set. We selected a vocab size of 30,000 and used the byte-pair encoding (BPE) model. To our knowledge this is the first time that BPE tokenization has been used with ULMFiT in a competition model.
To our knowledge this is the first time that BPE tokenization has been used with ULMFiT in a competition model.
|
Is this the first time that the BPE tokenization has been used with ULMFiT in a competition model?
|
Yes, it is.
|
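A hedged sketch of the preprocessing step described above, training a SentencePiece BPE model with a 30,000-token vocabulary; the file paths are placeholder assumptions:

```python
# Train a SentencePiece BPE model (vocab 30,000) and tokenize a sentence.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="train.txt",            # hypothetical corpus file
    model_prefix="bpe30k",
    vocab_size=30000,
    model_type="bpe",
)
sp = spm.SentencePieceProcessor(model_file="bpe30k.model")
print(sp.encode("tokenization reduces out-of-vocabulary terms",
                out_type=str))
```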
null | false
| null |
McKinsey & Company is a global management consulting firm founded in 1926 by University of Chicago professor James O. McKinsey, that offers professional services to corporations, governments, and other organizations. McKinsey is the oldest and largest of the "Big Three" management consultancies (MBB), the world's three largest strategy consulting firms by revenue. The firm mainly focuses on the finances and operations of their clients.
Under the leadership of Marvin Bower, McKinsey expanded into Europe during the 1940s and 1950s. In the 1960s, McKinsey's Fred Gluck—along with Boston Consulting Group's Bruce Henderson, Bill Bain at Bain & Company, and Harvard Business School's Michael Porter—transformed corporate culture. A 1975 publication by McKinsey's John L. Neuman introduced the business practice of "overhead value analysis" that contributed to a downsizing trend that eliminated many jobs in middle management.
|
What is MBB and what does it stand for?
|
MBB is in reference to the "Big Three" management consultancies, the world's three largest strategy consulting firms by revenue. The passage does not spell out the acronym, but given the firms it names, MBB most likely stands for McKinsey, Bain & Company, and Boston Consulting Group. McKinsey is included in MBB as it is the oldest and largest of the group.
|
null | false
| null |
Who are some of the best basketball players of all time?
|
There are often arguments about who the best basketball players of all time are. In most circles, there are three people that come up in this discussion most often: Michael Jordan, Lebron James, and Kareem Abdul-Jabbar. Michael Jordan won 6 championships in two runs of three straight titles, taking a hiatus from basketball between his 3rd and 4th championship. Until recently, Kareem held the all-time scoring record in the NBA with 38,387 points. Recently he was surpassed for most points all-time by Lebron James, who also has 4 championships with 3 different teams. This is an argument that will never be completely closed as everyone has a different opinion, but it's tough to argue against Michael Jordan due to the fact that he won all 6 of his NBA Finals appearances.
|
|
null | false
| null |
Name some plays that were written by Shakespeare
|
Hamlet, Romeo and Juliet, Macbeth, King Lear, Much Ado About Nothing
|
|
null | false
| null |
Tell me whether these are mammals or amphibians: Kangaroo, Hellbender, Horse, Taita African Caecilian, Monkey, Pangolins, Squirrel, Frogs
|
Horse, Kangaroo, Monkey, Pangolin, and Squirrel are mammals. Frogs, Hellbenders, and Taita African Caecilian are Amphibians.
|
|
null | false
| null |
Which of the following car brands produce electric vehicles: Toyota, Tesla, and Ford. List the models that are electric.
|
There are a few types of electric vehicles: battery electric vehicles (BEV), plug-in hybrid electric vehicles (PHEV), and hybrid electric vehicles (HEV). Toyota produces the Prius, which is an HEV with a PHEV variant; they also produce the bZ4X, a fully battery-electric crossover. Tesla solely produces battery electric vehicles with its Model 3, Model S, Model X, and Model Y. Ford produces the Mustang Mach-E and F-150 Lightning as fully battery-electric vehicles.
|
|
null | false
| null |
Think of different ways to make friends
|
Here are some ways to make friends:
1. Join a running club or other sports league
2. Go out to social bars and restaurants
3. Take time to meet your friend's friends
4. Ask coworkers to hang out after work
5. Join different local groups on social media platforms
|
|
1906.07668
| true
| null |
FLOAT SELECTED: Table 1: Topics and top-10 keywords of the corresponding topic
FLOAT SELECTED: Figure 5: Visualization using pyLDAVis. Best viewed in electronic format (zoomed in).
FLOAT SELECTED: Table 1: Topics and top-10 keywords of the corresponding topic
FLOAT SELECTED: Figure 5: Visualization using pyLDAVis. Best viewed in electronic format (zoomed in).
|
Do the authors report results only on English data?
|
Yes.
|
null | false
| null |
Tell me whether each of these are green or yellow: lemon, lime, cheese, spinach, lettuce, spaghetti
|
Lemon: Yellow
Lime: Green
Cheese: Yellow
Spinach: Green
Lettuce: Green
Spaghetti: Yellow
|
|
null | false
| null |
These islands can be divided into three recognizable groups located on the Azores Plateau:
The Eastern Group (Grupo Oriental) of São Miguel, Santa Maria and Formigas Islets
The Central Group (Grupo Central) of Terceira, Graciosa, São Jorge, Pico and Faial
The Western Group (Grupo Ocidental) of Flores and Corvo.
|
Extract the names of the islands located on the Eastern Group of the Azores plateau from the text. Separate them with a comma.
|
São Miguel, Santa Maria and Formigas
|
null | false
| 92
|
Progress in AI has been driven by, among other things, the development of challenging large-scale benchmarks like ImageNet BIBREF0 in computer vision, and SNLI BIBREF1, SQuAD BIBREF2, and others in natural language processing (NLP). Recently, for natural language understanding (NLU) in particular, the focus has shifted to combined benchmarks like SentEval BIBREF3 and GLUE BIBREF4, which track model performance on multiple tasks and provide a unified platform for analysis.
With the rapid pace of advancement in AI, however, NLU benchmarks struggle to keep up with model improvement. Whereas it took around 15 years to achieve “near-human performance” on MNIST BIBREF5, BIBREF6, BIBREF7 and approximately 7 years to surpass humans on ImageNet BIBREF8, BIBREF0, BIBREF9, the GLUE benchmark did not last as long as we would have hoped after the advent of BERT BIBREF10, and rapidly had to be extended into SuperGLUE BIBREF11. This raises an important question: Can we collect a large benchmark dataset that can last longer?
The speed with which benchmarks become obsolete raises another important question: are current NLU models genuinely as good as their high performance on benchmarks suggests? A growing body of evidence shows that state-of-the-art models learn to exploit spurious statistical patterns in datasets BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, instead of learning meaning in the flexible and generalizable way that humans do. Given this, human annotators—be they seasoned NLP researchers or non-experts—might easily be able to construct examples that expose model brittleness.
We propose an iterative, adversarial human-and-model-in-the-loop solution for NLU dataset collection that addresses both benchmark longevity and robustness issues. In the first stage, human annotators devise examples that our current best models cannot determine the correct label for. These resulting hard examples, which should expose additional model weaknesses, can be added to the training set and used to train a stronger model. We then subject the strengthened model to human interference and collect more weaknesses over several rounds. After each round, we both train a new model, and set aside a new test set. The process can be iteratively repeated in a never-ending learning BIBREF18 setting, with the model getting stronger and the test set getting harder in each new round. This process yields a “moving post” dynamic target for NLU systems, rather than a static benchmark that will eventually saturate.
Our approach draws inspiration from recent efforts that gamify collaborative training of machine learning agents over multiple rounds BIBREF19 and pit “builders” against “breakers” to learn better models BIBREF20. Recently, Dinan2019build showed that a similar approach can be used to make dialogue safety classifiers more robust. Here, we focus on natural language inference (NLI), arguably the most canonical task in NLU. We collected three rounds of data, and call our new dataset Adversarial NLI (ANLI).
Our contributions are as follows: 1) We introduce a novel human-and-model-in-the-loop dataset, currently consisting of three rounds that progressively increase in difficulty and complexity, that includes annotator-provided explanations. 2) We show that training models on this new dataset leads to state-of-the-art performance on a variety of popular NLI benchmarks. 3) We provide a detailed analysis of the collected data that sheds light on the shortcomings of current models, categorizes the data by inference type to examine weaknesses, and demonstrates good performance on NLI stress tests. The ANLI dataset is available at github.com/facebookresearch/anli/. A demo of the annotation procedure can be viewed at adversarialnli.com.
We propose an iterative, adversarial human-and-model-in-the-loop solution for NLU dataset collection that addresses both benchmark longevity and robustness issues.
|
What issues do the solution proposed by the authors address?
|
Benchmark longevity and robustness issues.
|
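Schematically, the human-and-model-in-the-loop procedure can be written as a short loop. In the sketch below, collect_hard_examples stands in for human annotators writing examples the current model mislabels; all names and the train/test split are illustrative assumptions:

```python
# Schematic adversarial data-collection rounds: humans write examples that
# fool the current model; half strengthen training, half become a test set.
def adversarial_rounds(train_data, train_model, collect_hard_examples,
                       n_rounds=3):
    test_sets = []
    model = train_model(train_data)
    for _ in range(n_rounds):
        hard = collect_hard_examples(model)   # humans fool current model
        split = len(hard) // 2
        train_data += hard[:split]            # strengthen the training set
        test_sets.append(hard[split:])        # set aside a harder test set
        model = train_model(train_data)
    return model, test_sets
```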
null | false
| null |
I just found out that my child has leukemia, what should I do?
|
First, I'm so sorry that you got the news that your child has leukemia. No child should have to live through cancer, and no parent should have to watch their child suffer such a serious illness. I recommend that you listen closely to your doctor's opinion on treatment. It usually does not hurt to get a second opinion if you feel you need one. Next, I recommend learning all that you can about pediatric leukemia. There are many good texts available and your hospital may even have some on hand. If you can find another parent within your network who has gone through pediatric leukemia with their child, it can be helpful to talk to someone who has been through the experience. Many people also find comfort in support groups on Facebook, or through a local "Leukemia & Lymphoma Society" chapter. It can be hard on caretakers of children with leukemia, so it can be a good idea to get plenty of rest, food and liquids while you are under stress. Finding a good therapist can also be helpful along the journey. I hope that you are able to find peace during this difficult time and that you have the resources you need.
|
|
null | false
| null |
Does Spark support JavaScript?
|
Spark does not have native built-in support for JavaScript, but it is possible to use JavaScript with Spark through 3rd party libraries.
|
|
null | false
| null |
What does the percentage chance of rain in a weather forecast indicate?
|
This provides the probability of rain from models projected by meteorologists. For example, a 50% chance of rain means that meteorologists see rain in 50% of the scenarios with similar air pressure, temperature, time period, and other factors used in their modeling scenarios.
|
|
null | false
| null |
Where is Venezuela?
|
Venezuela is located in South America, next to Colombia, Brazil and Guyana.
|
|
null | false
| null |
Which is a species of fish? Pintano or Pint
|
Pintano
|