| paper_id (string, length 10, nullable) | yes_no (bool, 2 classes) | paper_index (int64, 0-519, nullable) | evidence (string, 0-37.7k chars) | question (string, 4-11.7k chars) | answer (string, 1-26k chars) |
|---|---|---|---|---|---|
null | false
| null |
Which field hockey team has won the Stanley Cup twice in a row?
|
In a stifling game on Sunday, the Pittsburgh Penguins made the difference with a minute and a half left against Nashville (2-0) to pocket their second consecutive Stanley Cup (4-2).
|
|
null | false
| null |
What is a headlamp?
|
A headlamp is a light that attaches to a person’s head, usually with a headband and shines light ahead of the person wearing it. Headlamps are usually powered by batteries.
|
|
null | false
| 222
|
We study human-human negotiations in the buyer-seller bargaining scenario, which has been a key research area in the literature BIBREF0. In this section, we first describe our problem setup and key terminologies by discussing the dataset used. Later, we formalize our problem definition.
Dataset: For our explorations, we use the Craigslist Bargaining dataset (CB) introduced by BIBREF4. Instead of focusing on the previously studied game environments BIBREF5, BIBREF6, the dataset considers a more realistic setup: negotiating the price of products listed on Craigslist. The dataset consists of 6682 dialogues between a buyer and a seller who converse in natural language to negotiate the price of a given product (sample in Table TABREF1). In total, 1402 product ad postings were scraped from Craigslist, belonging to six categories: phones, bikes, housing, furniture, car and electronics. Each ad posting contains details such as Product Title, Category Type and a Listing Price. Moreover, a secret target price is also pre-decided for the buyer. The final price after the agreement is called the Agreed Price, which we aim to predict.
Defining the problem: Say we are provided with a product scenario $S$, a tuple: (Category, Title, Listing Price, Target Price). Define the interactions between a buyer and seller using a sequence of $n$ events $E_n = \langle e_{1}, e_{2}, \ldots, e_{n} \rangle$, where $e_{i}$ occurs before $e_{j}$ iff $i<j$. Event $e_{i}$ is also a tuple: (Initiator, Type, Data). Initiator is either the Buyer or Seller, Type can be one of (message, offer, accept, reject or quit), and Data consists of the corresponding natural language dialogue or the offer price, or can be empty. Nearly $80\%$ of events in the CB dataset are of type `message', each consisting of a textual message as Data. An offer is usually made and accepted at the end of each negotiation. Since the offers directly contain the agreed price (which we want to predict), we only consider `message' events in our models. Given the scenario $S$ and the first $n$ events $E_n$, our problem is then to learn the function $f_{n}$: $A = f_{n}(S, E_n)$, where $A$ refers to the final agreed price between the two negotiating parties.
We use the Craigslist Bargaining dataset (CB) introduced by He et al. (2018).
|
What dataset do the authors use in this paper?
|
The Craigslist Bargaining dataset (CB).
|
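The scenario and event structure described in the Craigslist Bargaining row above (a scenario $S$ plus an event sequence $E_n$ mapped by $f_n$ to an agreed price) can be made concrete with a small data-structure sketch. This is only an editorial illustration, not code from the paper; the names `Scenario`, `Event` and `predict_agreed_price` are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Scenario:
    category: str        # e.g. "bike", one of the six CB ad categories
    title: str           # product ad title
    listing_price: float
    target_price: float  # buyer's secret target price

@dataclass
class Event:
    initiator: str              # "buyer" or "seller"
    type: str                   # "message", "offer", "accept", "reject" or "quit"
    data: Optional[str] = None  # dialogue text, an offer price, or empty

def predict_agreed_price(scenario: Scenario, events: List[Event]) -> float:
    """Hypothetical stand-in for the learned function f_n(S, E_n) -> agreed price."""
    # As described above, only 'message' events would be fed to a model.
    messages = [e.data for e in events if e.type == "message"]
    # A real model would encode `scenario` and `messages` and regress the price;
    # this sketch just returns a trivial anchor (the listing price).
    return scenario.listing_price
```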
null | false
| null |
What do you mean trekking?
|
Trekking - or long-distance hiking - consists of walking for several days or several weeks and being entirely or partially self-sufficient.
|
|
1804.04225
| true
| null |
The overview of our approach is shown in Figure FIGREF6. Within ICU notes (e.g., text example in top-left box in Figure 2), we first identify all abbreviations using regular expressions and then try to find all possible expansions of these abbreviations from a domain-specific knowledge base as candidates. We train word embeddings using the clinical notes data with task-oriented resources such as Wikipedia articles of candidates and medical scientific papers, and compute the semantic similarity between an abbreviation and its candidate expansions based on their embeddings (vector representations of words).
Within ICU notes (e.g., text example in top-left box in Figure 2), we first identify all abbreviations using regular expressions and then try to find all possible expansions of these abbreviations from a domain-specific knowledge base as candidates.
|
Do they use any knowledge base to expand abbreviations?
|
Yes.
|
null | false
| null |
How do I cook spaghetti?
|
Spaghetti is a popular noodle dish. To cook spaghetti, fill a pot with water and add 1-2 teaspoons of salt. Heat the water until it is boiling and then place the spaghetti into the pot. Wait 10-15 minutes, take one noodle out with a fork, and bite into it to see if the noodle is soft. If not, wait 2 minutes and repeat until the spaghetti noodles become soft. At this point, turn off the heat under the pot, and strain the spaghetti to separate it from the water. Place the spaghetti onto a plate or bowl, and add any sauce if desired. Some people add additional seasoning, including salt, pepper, meat, garlic, and mushrooms.
|
|
null | false
| null |
Reid attended the University of Virginia, where she made 83 appearances and scored two goals. Her senior year, she started all 21 games, scored a golden goal at No. 2 West Virginia, and had two game-winning assists. Also in her senior year, she was named to the All-ACC second team and the ACC Women's Soccer All-Academic team.
|
Given a reference text about Megan Reid, American professional soccer player, what college did she attend and how many soccer games did she start in her senior year of college?
|
Reid attended the University of Virginia and during her senior year she started all 21 games.
|
null | false
| null |
What are some of the major cities in Australia?
|
Some of the major cities in Australia are Sydney, Melbourne, Brisbane, and Perth.
|
|
null | false
| 76
|
In this work, we have proposed an automated approach for the novel task of suggesting news articles to Wikipedia entity pages to facilitate Wikipedia updating. The process consists of two stages. In the first stage, article–entity placement, we suggest news articles to entity pages by considering three main factors: entity salience in a news article, relative authority, and novelty of news articles for an entity page. In the second stage, article–section placement, we determine the best-fitting section in an entity page. Here, we remedy the problem of incomplete entity section profiles by constructing section templates for specific entity classes. This allows us to add missing sections to entity pages. We carry out an extensive experimental evaluation on 351,983 news articles and 73,734 entities coming from 27 distinct entity classes. For the first stage, we achieve an overall performance with P=0.93, R=0.514 and F1=0.676, outperforming our baseline competitors significantly. For the second stage, we show that we can learn incrementally to determine the correct section for a news article based on section templates. The overall performance across different classes is P=0.844, R=0.885 and F1=0.860.
In the future, we will enhance our work by extracting facts from the suggested news articles. Results suggest that the news content cited in entity pages comes from the first paragraphs. However, challenging tasks such as the canonicalization and chronological ordering of facts still remain.
In the future, we will enhance our work by extracting facts from the suggested news articles.
|
What will the author do about his future work?
|
They will enhance their work by extracting facts from the suggested news articles.
|
null | false
| null |
What is deep dish pizza?
|
Deep dish pizza is a variant of traditional pizza where the dough ends are rolled and pushed to the ends of a deep pan similar to a dessert pie crust. The crust is then filled with cheese and other toppings and spread with chunky tomato sauce. This type of pizza is also referred to as Chicago-style pizza.
|
|
null | false
| null |
Which of the following are considered to be baked goods? A cake, a pie, chicken fingers, scone, and a scoop of ice cream
|
A cake, a pie, and a scone
|
|
null | false
| 365
|
Words with multiple senses commonly exist in many languages. For example, the word bank can either mean a “financial establishment” or “the land alongside or sloping down to a river or lake”, based on different contexts. Such a word is called a “polyseme”. The task of identifying the meaning of a polyseme in its surrounding context is called word sense disambiguation (WSD). Word sense disambiguation is a long-standing problem in natural language processing (NLP), and has broad applications in other NLP problems such as machine translation BIBREF0. The lexical sample task and the all-word task are the two main branches of the WSD problem. The former focuses on only a pre-selected set of polysemes whereas the latter intends to disambiguate every polyseme in the entire text. Numerous works have been devoted to the WSD task, including supervised, unsupervised, semi-supervised and knowledge-based learning BIBREF1. Our work focuses on using supervised learning to solve the all-word WSD problem.
Most supervised approaches focus on extracting features from words in the context. Early approaches mostly depend on hand-crafted features. For example, IMS by BIBREF2 uses POS tags, surrounding words and collections of local words as features. These approaches were later improved by combining them with word embedding features BIBREF0, which better represent the words' semantic information in a real-valued space. However, these methods neglect the valuable positional information of the words in the sequence BIBREF3. The bi-directional Long Short-Term Memory (LSTM) approach by BIBREF3 provides one way to leverage the order of words. Recently, BIBREF4 improved the performance by pre-training an LSTM language model with a large unlabelled corpus, and using this model to generate sense vectors for further WSD predictions. However, LSTM significantly increases the computational complexity during the training process.
The development of the so-called “fixed-size ordinally forgetting encoding” (FOFE) has enabled us to consider a more efficient method. As first proposed in BIBREF5, FOFE provides a way to encode an entire sequence of words of variable length into an almost unique fixed-size representation, while also retaining the positional information of the words in the sequence. FOFE has been applied to several NLP problems in the past, such as language modelling BIBREF5, named entity recognition BIBREF6, and word embeddings BIBREF7. The promising results demonstrated by the FOFE approach in these areas inspired us to apply FOFE to the WSD problem. In this paper, we will first describe how FOFE is used to encode a sequence of any length into a fixed-size representation. Next, we elaborate on how a pseudo language model is trained with the FOFE encoding from unlabelled data for the purpose of context abstraction, and how a classifier for each polyseme is built from context abstractions of its labelled training data. Lastly, we provide the experimental results of our method on several WSD data sets to demonstrate performance equivalent to the state-of-the-art approach.
We also show that the methodology can be used for transferring to entirely different tasks.
|
Is their method transferable?
|
Yes.
|
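The FOFE row above describes encoding a variable-length word sequence into an almost unique fixed-size vector while keeping positional information. The standard FOFE recursion is $z_0 = \mathbf{0}$ and $z_t = \alpha\, z_{t-1} + e_{w_t}$, where $e_{w_t}$ is the one-hot vector of the $t$-th word and $0<\alpha<1$ is the forgetting factor. A minimal NumPy sketch of that recursion (the function name and toy vocabulary are ours, not from the paper):

```python
import numpy as np

def fofe_encode(word_ids, vocab_size, alpha=0.7):
    """Fixed-size ordinally forgetting encoding of a variable-length word sequence.
    Recent words get weight close to 1; earlier words are exponentially discounted."""
    z = np.zeros(vocab_size)
    for w in word_ids:
        z = alpha * z   # discount everything seen so far
        z[w] += 1.0     # add the one-hot vector of the current word
    return z

# Toy example: the encoding has the same size regardless of sentence length.
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}
code = fofe_encode([vocab[w] for w in "the cat sat on the mat".split()], len(vocab))
print(code.shape)  # (5,)
```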
null | false
| null |
Classify the following Greek philosophers by their schools of thought: Democritus, Anaximander, Protagoras, Epictetus, Chrysippus, Theano, Pythagoras and Heraclides.
|
There were many diverse schools of thought in Ancient Greek philosophy. Democritus was a pre-socratic philosopher who founded the discipline of Atomism. Anaximander was another pre-socratic thinker who was a member of the Milesian school. Epictetus and Chrysippus were both stoic philosophers. Protagoras was a famous sophist who was referenced by Plato. Heraclides was a Platonist and direct student of Plato. Pythagoras and Theano were both of the Pythagorean school, and are believed to have been married.
|
|
1605.08675
| true
| null |
FLOAT SELECTED: Table 3. Question answering accuracy of RAFAEL with different entity recognition strategies: quantities only (Quant), traditional NER (Nerf, Liner2 ), deep entity recognition (DeepER) and their combination (Hybrid).
FLOAT SELECTED: Table 3. Question answering accuracy of RAFAEL with different entity recognition strategies: quantities only (Quant), traditional NER (Nerf, Liner2 ), deep entity recognition (DeepER) and their combination (Hybrid).
|
Do they compare DeepER against other approaches?
|
Yes.
|
null | false
| null |
Why is the Republican Party against gun control legislation?
|
There isn’t one answer to this question, and the answer isn’t strictly associated with the political party a person is affiliated with. But one theory is that many individuals assign more value to the right to own guns for sport or self-defense than to the lives lost to gun violence, because most individuals against gun control legislation are not impacted by gun violence.
|
|
null | false
| null |
How can I self-diagnose burnout?
|
Signs of burnout to look out for include: Inability to stop thinking about work even when you’re not at work; Relationships with either co-workers or family and friends suffer; Feeling unexplained aches and pains; You spend lots of money outsourcing various personal tasks; Feeling mentally strained, with no plans for vacations; You wonder where you’ll be in the future.
|
|
null | false
| null |
Who is the greatest college football quarterback of all time?
|
Different people have different opinions when ranking college football players. Both individual and team metrics are considered when determining the best quarterback. Key individual metrics include individual passing metrics (such as passer rating, number of passing yards, and number of touchdowns thrown) and individual awards won (such as the Heisman trophy). Key team metrics include the team's win percentage with the individual as the starting quarterback and the number of conference and national championships won. A quarterback that excels at passing the football and running the football is commonly referred to as a "dual threat" quarterback. Individual rushing statistics may also be considered when determining their rankings. Many people consider Sam Bradford, Matt Leinart, and Peyton Manning to be among the best pure passing quarterbacks in college football history. Many people consider Cam Newton, Vince Young, and Johnny Manziel to be among the best dual threat quarterbacks in college football history. Opinions vary, but many people consider Tim Tebow to be the greatest college football quarterback of all-time due to his individual accomplishments, his team's accomplishments and revolutionary offensive play style, and his popularity amongst fans and the sports media.
|
|
1611.00514
| false
| null |
In this section we present the results obtained on the protocol provided by NIST on the development set, which is supposed to mirror that of the evaluation set. The results are shown in Table TABREF26. The first part of the table indicates the result obtained by the primary system. As can be seen, the fusion of MFCC and PLP (a simple sum of both MFCC and PLP scores) resulted in a relative improvement of almost 10%, as compared to MFCC alone, in terms of both INLINEFORM0 and INLINEFORM1. In order to quantify the contribution of the different system components we have defined different scenarios. In scenario A, we have analysed the effect of using LDA instead of NDA. As can be seen from the results, LDA outperforms NDA in the case of PLP; however, in fusion we can see that NDA resulted in better performance in terms of the primary metric. In scenario B, we analysed the effect of using the short-duration compensation technique proposed in Section SECREF7. Results indicate superior performance using this technique. In scenario C, we investigated the effects of language normalization on the performance of the system. If we replace LN-LDA with simple LDA, we can see performance degradation in MFCC as well as in fusion; however, PLP seems not to be adversely affected. The effect of using QMF is also investigated in scenario D. Finally, in scenario E, we can see the major improvement obtained through the use of the domain adaptation technique explained in Section SECREF16. For our secondary submission, we incorporated a disjoint portion of the labelled development set (10 out of 20 speakers) in both LN-LDA and in-domain PLDA training. We evaluated the system on almost 6k out of 24k trials from the other portion to avoid any over-fitting, which is particularly important for the domain adaptation technique. This resulted in a relative improvement of 11% compared to the primary system in terms of the primary metric. However, the results can be misleading, since the recording condition may be the same for all speakers in the development set.
FLOAT SELECTED: Table 2. Performance comparison of the Intelligent Voice speaker recognition system with various analysis on the development protocol of NIST SRE 2016.
In this section we present the results obtained on the protocol provided by NIST on the development set which is supposed to mirror that of evaluation set. The results are shown in Table TABREF26 .
FLOAT SELECTED: Table 2. Performance comparison of the Intelligent Voice speaker recognition system with various analysis on the development protocol of NIST SRE 2016.
|
How well does their system perform on the development set of SRE?
|
EER 16.04, Cmindet 0.6012, Cdet 0.6107
|
null | false
| 82
|
Automatic TS is a complicated natural language processing (NLP) task, which consists of lexical and syntactic simplification levels BIBREF12. It has attracted much attention recently as it could make texts more accessible to wider audiences and, used as a pre-processing step, improve the performance of various NLP tasks and systems BIBREF13, BIBREF14, BIBREF15. Usually, hand-crafted, supervised, and unsupervised methods based on resources like English Wikipedia and Simple English Wikipedia (EW-SEW) BIBREF10 are utilized for extracting simplification rules. It is very easy to mix up the automatic TS task and the automatic summarization task BIBREF3, BIBREF16, BIBREF6. TS is different from text summarization as the focus of text summarization is to reduce the length and redundant content.
At the lexical level, lexical simplification systems often substitute difficult words using more common words, which only require a large corpus of regular text to obtain word embeddings to get words similar to the complex word BIBREF1 , BIBREF9 . Biran et al. BIBREF0 adopted an unsupervised method for learning pairs of complex and simpler synonyms from a corpus consisting of Wikipedia and Simple Wikipedia. At the sentence level, a sentence simplification model was proposed by tree transformation based on statistical machine translation (SMT) BIBREF3 . Woodsend and Lapata BIBREF17 presented a data-driven model based on a quasi-synchronous grammar, a formalism that can naturally capture structural mismatches and complex rewrite operations. Wubben et al. BIBREF18 proposed a phrase-based machine translation (PBMT) model that is trained on ordinary-simplified sentence pairs. Xu et al. BIBREF19 proposed a syntax-based machine translation model using simplification-specific objective functions and features to encourage simpler output.
Compared with SMT, neural machine translation (NMT) has been shown to produce state-of-the-art results BIBREF5, BIBREF7. The central approach of NMT is an encoder-decoder architecture implemented by recurrent neural networks, which can represent the input sequence as a vector, and then decode that vector into an output sequence. Therefore, NMT models have been used for the text simplification task, and achieved good results BIBREF8, BIBREF4, BIBREF20. The main limitation of the aforementioned NMT models for text simplification is their dependence on parallel ordinary-simplified sentence pairs. Because ordinary-simplified sentence pairs are expensive and time-consuming to build, the largest available dataset is EW-SEW, which only has 296,402 sentence pairs. This dataset is insufficient for an NMT model to obtain the best parameters. Considering that simplified data plays an important role in boosting fluency for phrase-based text simplification, we investigate the use of simplified data for text simplification. We are the first to show that we can effectively adapt neural translation models for text simplification with simplified corpora.
At the lexical level, lexical simplification systems often substitute difficult words using more common words, which only require a large corpus of regular text to obtain word embeddings to get words similar to the complex word (Paetzold and Specia, 2016; Glavaš and Štajner, 2015).
|
What is the task of lexical simplification systems at the lexical level?
|
Substituting difficult words using more common words
|
null | false
| null |
Conisbrough Castle is a medieval fortification in Conisbrough, South Yorkshire, England. The castle was initially built in the 11th century by William de Warenne, the Earl of Surrey, after the Norman conquest of England in 1066. Hamelin Plantagenet, the illegitimate, parvenu brother of Henry II, acquired the property by marriage in the late 12th century. Hamelin and his son William rebuilt the castle in stone, including its prominent 28-metre (92 ft)-high keep. The castle remained in the family line into the 14th century, despite being seized several times by the Crown. The fortification was then given to Edmund of Langley, passing back into royal ownership in 1461.
|
Extract all of the names of people mentioned in this paragraph and list them using bullets in the format {Name}
|
• William de Warenne
• Hamelin Plantagenet
• Henry II
• William Plantagenet
• Edmund of Langley
|
null | false
| 310
|
The QE module receives as input a tuple $(s, t, \mathcal{A})$, where $s$ is the source sentence, $t$ is the translated sentence, and $\mathcal{A}$ is a set of word alignments. It predicts as output a sequence $q = (q_1, \ldots, q_{|t|})$, with each $q_i$ a quality label for the corresponding target word. The overall architecture is shown in Figure FIGREF2.
CEQE consists of three major components: (1) embedding layers for words and part-of-speech (POS) tags in both languages, (2) convolution encoding of the local context for each target word, and (3) encoding the global context by the recurrent neural network.
CEQE consists of three major components: (1) embedding layers for words and part-of-speech (POS) tags in both languages, (2) convolution encoding of the local context for each target word, and (3) encoding the global context by the recurrent neural network.
|
What are the three major components of CEQE?
|
(1) embedding layers for words and part-of-speech (POS) tags in both languages, (2) convolution encoding of the local context for each target word, and (3) encoding the global context by the recurrent neural network.
|
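The three components listed in the CEQE row above (word and POS embeddings for both languages, a convolution that encodes the local context of each target word, and a recurrent layer that encodes global context) can be sketched roughly as follows. This is our own simplified PyTorch illustration of that pipeline, not the authors' code; all sizes and names are hypothetical, the choice of a GRU is ours, and the source side and word alignments are omitted for brevity.

```python
import torch
import torch.nn as nn

class TinyQE(nn.Module):
    """Toy word-level quality estimator: embeddings -> local conv -> global BiGRU -> tag."""
    def __init__(self, vocab_size, pos_size, emb_dim=64, conv_dim=64, rnn_dim=64, n_tags=2):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.pos_emb = nn.Embedding(pos_size, emb_dim)
        # Convolution over concatenated word+POS embeddings encodes each target
        # word together with a small local context window.
        self.conv = nn.Conv1d(2 * emb_dim, conv_dim, kernel_size=3, padding=1)
        # A bidirectional GRU encodes the global sentence context.
        self.rnn = nn.GRU(conv_dim, rnn_dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * rnn_dim, n_tags)  # per-token quality label

    def forward(self, words, pos):          # both: (batch, seq_len) integer tensors
        x = torch.cat([self.word_emb(words), self.pos_emb(pos)], dim=-1)
        x = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        x, _ = self.rnn(x)
        return self.out(x)                  # (batch, seq_len, n_tags)

# Example forward pass on dummy data
model = TinyQE(vocab_size=1000, pos_size=20)
logits = model(torch.randint(0, 1000, (2, 7)), torch.randint(0, 20, (2, 7)))
print(logits.shape)  # torch.Size([2, 7, 2])
```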
null | false
| 496
|
We evaluate the performance of our model primarily on the MNIST dataset. Our model is able to approximate any conditional distribution. Here we demonstrate how to model missing pixels and the class digits conditioned on the observed pixels, as well as model missing pixels conditioned on class digits and observed pixels. Although MNIST is of course a small-scale problem, the goal here is to demonstrate joint inference and learning over what is still a reasonably-sized joint model, considering the number of hidden units. Nonetheless, the current experiment is admittedly largely a demonstration of the proposed method rather than a full accounting of its performance, a point we highlight in the subsequent section as well. After describing the MNIST experiments, we detail a similar experiment on CIFAR-10 to demonstrate the potential for scaling the approach.
We also show how our inference method differs from the previous ones. On the joint imputation and classification task, we train models using our updates and the updates from prior work, and run inference on each using all three update methods, with and without the monotonicity constraint. We show that our inference method is superior in both performance and convergence speed. Numerically, we also demonstrate that the other two methods can either diverge or fail to converge to the actual mean-field fixed point. All deferred experiments and details can be found in the appendix.
Experiment setup The original MNIST dataset has one channel representing the gray-scale intensity, ranging between 0 and 1. Here we adopt the strategy of prior work to convert this continuous distribution to a discrete one. We bin the intensity evenly into 4 categories {0, ..., 3}, and use a one-hot encoding of the category for each pixel, so that the input data has shape 4×28×28.
We remark that the number of categories is chosen arbitrarily and can be any integer. For images with missing data, if the pixels are randomly masked, we always mask each pixel off independently with probability 60%, such that in expectation only 40% of the pixels are observed. If a whole patch of pixels is masked, we randomly pick a 14 × 14 patch. The patches/masks are chosen differently for every image, similar to query training in prior work. To make the model class richer, in the patch case we lift the monotonicity constraint, and the model converges regardless. We also conduct the same set of MNIST experiments using a 3-layer DBM and include the results in the appendix.
Training the above-described model for 40 epochs with 40% missing pixels leads to a 92.95% test accuracy, and the image reconstruction result is shown in fig. 4d. Based on the reconstructed images and our classification results, our model successfully recovers the unimodal distribution conditioned on the observed pixels. Despite working simultaneously on two different tasks, our model can simply be trained jointly, taking full advantage of existing auto-differentiation frameworks without additional burden. It is worth noting that while the autoregressive models of prior work are comparable to ours, they require sampling pixels one-by-one in a sequential order at inference time, whereas our model does not assume an underlying sequence and can generate all pixels at once. Compared to the differently-parameterized monDEQ in prior work, whose linear module suffers from a drastically increasing condition number (hence in later epochs taking around 20 steps to converge, even with tuned α), our
We additionally conduct an experiment on the simultaneous tasks of image pixel imputation and label prediction given partially observed features. Model architecture and training details are given in the appendix. With 50% of the pixels observed, the model obtains 58% test accuracy, and can impute the missing pixels effectively (see fig.).
Training the above described model for 40 epochs with 40% missing pixels leads to a 92.95% test accuracy, and the image reconstruction result is shown in fig. 4d. Based on the reconstructed images and our classification results, our model successfully recovers the unimodal distribution conditioned on the observed pixels.
|
The figure "92.95% test accuracy" corresponds to the 10-way labels of each digit? Or to the 4-way categories of the pixels?
|
92.95% is the classification accuracy with 40% missing pixels.
|
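The preprocessing described in the MNIST row above (binning gray-scale intensities into 4 categories, one-hot encoding them into a 4×28×28 tensor, and independently masking each pixel with probability 60%) is straightforward to reproduce. A NumPy sketch under those assumptions (the function name and defaults are ours, not from the paper):

```python
import numpy as np

def preprocess(img, n_bins=4, p_missing=0.6, rng=np.random.default_rng(0)):
    """img: (28, 28) float array in [0, 1].
    Returns a (n_bins, 28, 28) one-hot tensor and a (28, 28) observation mask."""
    # Bin intensities evenly into {0, ..., n_bins-1}
    cats = np.minimum((img * n_bins).astype(int), n_bins - 1)
    # One-hot encode along a leading channel axis -> shape (n_bins, 28, 28)
    onehot = (np.arange(n_bins)[:, None, None] == cats[None]).astype(np.float32)
    # Mask each pixel off independently with probability p_missing
    observed = rng.random(img.shape) > p_missing   # True where the pixel is kept
    onehot *= observed[None]                       # missing pixels become all-zero
    return onehot, observed

x, mask = preprocess(np.random.rand(28, 28))
print(x.shape, mask.mean())  # (4, 28, 28), roughly 0.4 of pixels observed
```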
null | false
| null |
An income tax amendment to the Constitution was first proposed by Senator Norris Brown of Nebraska. He submitted two proposals, Senate Resolutions Nos. 25 and 39. The amendment proposal finally accepted was Senate Joint Resolution No. 40, introduced by Senator Nelson W. Aldrich of Rhode Island, the Senate majority leader and Finance Committee Chairman. The amendment was proposed as part of the congressional debate over the 1909 Payne–Aldrich Tariff Act; by proposing the amendment, Aldrich hoped to temporarily defuse progressive calls for the imposition of new taxes in the tariff act. Aldrich and other conservative leaders in Congress largely opposed the actual ratification of the amendment, but they believed that it had little chance of being ratified, as ratification required approval by three quarters of the state legislatures.
On July 12, 1909, the resolution proposing the Sixteenth Amendment was passed by the Congress and was submitted to the state legislatures. Support for the income tax was strongest in the western and southern states, while opposition was strongest in the northeastern states. Supporters of the income tax believed that it would be a much better method of gathering revenue than tariffs, which were the primary source of revenue at the time. From well before 1894, Democrats, Progressives, Populists and other left-oriented parties argued that tariffs disproportionately affected the poor, interfered with prices, were unpredictable, and were an intrinsically limited source of revenue. The South and the West tended to support income taxes because their residents were generally less prosperous, more agricultural and more sensitive to fluctuations in commodity prices. A sharp rise in the cost of living between 1897 and 1913 greatly increased support for the idea of income taxes, including in the urban Northeast. A growing number of Republicans also began supporting the idea, notably Theodore Roosevelt and the "Insurgent" Republicans (who would go on to form the Progressive Party). These Republicans were driven mainly by a fear of the increasingly large and sophisticated military forces of Japan, Britain and the European powers, their own imperial ambitions, and the perceived need to defend American merchant ships. Moreover, these progressive Republicans were convinced that central governments could play a positive role in national economies. A bigger government and a bigger military, they argued, required a correspondingly larger and steadier source of revenue to support it.
Opposition to the Sixteenth Amendment was led by establishment Republicans because of their close ties to wealthy industrialists, although not even they were uniformly opposed to the general idea of a permanent income tax. In 1910, New York Governor Charles Evans Hughes, shortly before becoming a Supreme Court Justice, spoke out against the income tax amendment. Hughes supported the idea of a federal income tax, but believed the words "from whatever source derived" in the proposed amendment implied that the federal government would have the power to tax state and municipal bonds. He believed this would excessively centralize governmental power and "would make it impossible for the state to keep any property".
Between 1909 and 1913, several conditions favored passage of the Sixteenth Amendment. Inflation was high and many blamed federal tariffs for the rising prices. The Republican Party was divided and weakened by the loss of Roosevelt and the Insurgents who joined the Progressive Party, a problem that blunted opposition even in the Northeast. In 1912, the Democrats won the presidency and control of both houses of Congress. The country was generally in a left-leaning mood, with a member of the Socialist Party winning a seat in the U.S. House in 1910 and the party's presidential candidate polling six percent of the popular vote in 1912.
Three advocates for a federal income tax ran in the presidential election of 1912. On February 25, 1913, Secretary of State Philander Knox proclaimed that the amendment had been ratified by three-fourths of the states and so had become part of the Constitution. The Revenue Act of 1913, which greatly lowered tariffs and implemented a federal income tax, was enacted shortly after the Sixteenth Amendment was ratified.
|
Based on the following passage, give 3 reasons why the Sixteenth Amendment was passed and ratified.
|
Reasons the Sixteenth Amendment to the US constitution was passed by Congress in 1909 and ratified by the states in 1913 include:
1. Democrats and progressives believed the income tax to be a fairer method of generating revenue than tariffs (taxes on the import and export of goods).
2. Western and southern states were concerned about commodity prices, which were impacted by tariffs, and therefore preferred an alternative means of revenue.
3. Republicans were split on whether to oppose or support the amendment, which blunted opposition to its ratification.
|
null | false
| 356
|
The Reuters Corpus Volume 2 BIBREF2 , in short RCV2, is a multilingual corpus with a collection of 487,000 news stories. Each news story was manually classified into four hierarchical groups: CCAT (Corporate/Industrial), ECAT (Economics), GCAT (Government/Social) and MCAT (Markets). Topic codes were assigned to capture the major subject of the news story. The entire corpus covers thirteen languages, i.e. Dutch, French, German, Chinese, Japanese, Russian, Portuguese, Spanish, Latin American Spanish, Italian, Danish, Norwegian, and Swedish, written by local reporters in each language. The news stories are not parallel. Single-label stories, i.e. those labeled with only one topic out of the four top categories, are often used for evaluations. However, the class distributions vary significantly across all the thirteen languages (see Table 1 ). Therefore, using random samples to extract evaluation corpora may lead to very imbalanced test sets, i.e. undesired and misleading variability among the languages when the main focus is to evaluate cross-lingual transfer.
Each news story was manually classified into four hierarchical groups: CCAT (Corporate/Industrial), ECAT (Economics), GCAT (Government/Social) and MCAT (Markets).
|
What are the four hierarchical groups of RCV2?
|
CCAT (Corporate/Industrial), ECAT (Economics), GCAT (Government/Social) and MCAT (Markets).
|
null | false
| 36
|
The results for each class type are shown in Table TABREF19, with some specific examples shown in Figure FIGREF18. For Wiki(gold), we quote the micro-averaged F-1 scores for the entire top level entity category. The total F-1 score on the OntoNotes dataset is 88%, and the total F-1 cross-validation score on the 112 class Wiki(gold) dataset is 53%. It is worth noting that one could improve Wiki(gold) results by training directly using this dataset. However, the aim is not to tune our model specifically on this class hierarchy. We instead aim to present a framework which can be modified easily to any domain hierarchy and has acceptable out-of-the-box performance on any fine-grained dataset. The results in Table TABREF19 (OntoNotes) only show the main 7 categories in OntoNotes which map to Wiki(gold), for clarity. The other categories (date, time, norp, language, ordinal, cardinal, quantity, percent, money, law) have F-1 scores between 80-90%, with the exception of time (65%).
It is worth noting that one could improve Wiki(GOLD) results by training directly using this dataset.
|
Could Wiki(GOLD) results be improved?
|
Yes.
|
null | false
| null |
David Oliver Siegmund (born November 15, 1941) is an American statistician who has worked extensively on sequential analysis.
Biography
Siegmund grew up in Webster Groves, Missouri. He received his baccalaureate degree, in mathematics, from Southern Methodist University in 1963, and a doctorate in statistics from Columbia University in 1966. His Ph.D. advisor was Herbert Robbins. After being an assistant and then a full professor at Columbia, he went to Stanford University in 1976, where he is currently a professor of statistics. He has served twice as the chair of Stanford's statistics department. He has also held visiting positions at Hebrew University of Jerusalem, the University of Zurich, the University of Oxford, and the University of Cambridge.
|
Who is David Oliver Siegmund?
|
David Oliver Siegmund (born November 15, 1941) is an American statistician who has worked extensively on sequential analysis.
Siegmund grew up in Webster Groves, Missouri. He received his baccalaureate degree, in mathematics, from Southern Methodist University in 1963, and a doctorate in statistics from Columbia University in 1966. His Ph.D. advisor was Herbert Robbins. After being an assistant and then a full professor at Columbia, he went to Stanford University in 1976, where he is currently a professor of statistics.
He has served twice as the chair of Stanford's statistics department. He has also held visiting positions at Hebrew University of Jerusalem, the University of Zurich, the University of Oxford, and the University of Cambridge.
|
null | false
| null |
Dominant theories of the relationship
Several theories explain different aspects of the relationship between the work and family life. Boundary theory and border theory are the two fundamental theories that researchers have used to study these role conflicts. Other theories are built upon the foundations of these two theories. In the two decades since boundary theory and border theory were first proposed, the rise of Information and Communication Technologies (ICT) has drastically altered the work–life interface. Work can now be completed at any time and in any location, meaning that domains are more likely to be blended and boundaries barely exist.
Seven dominant theories have been utilized to explain this relationship on the boundary-border spectrum. These theories are: structural functionalism, segmentation, compensation, supplemental and reactive compensation, role enhancement, spillover, and the work enrichment model.
Structural functionalism
The roots of this theory can be traced back to the early 20th century, when the industrial revolution was separating economic work from the family home. The 19th century's technological advancements in machinery and manufacturing initiated the separation of work from family. However, it was not until the early 20th century that the first view of work–family theories started to take shape. Structural functionalism, as one of the dominant sociology theories of the early 20th century, was a natural candidate.
The structural functionalism theory, which emerged following WWII, was largely influenced by the industrial revolution and the changes in the social roles of men and women during this period. This theory implies that life is concerned mainly with two separate spheres: productive life, which happens in the workplace, and affective life, which is at home. Structural functionalism theory believes in the existence of a radical separation between work (institution, workplace, or market) and families. According to this theory, these two (workplace and family) work best "when men and women specialize their activities in separate spheres, women at home doing expressive work and men in the workplace performing instrumental tasks" (Kingsbury & Scanzoni, 1993; as cited in MacDermid, 2005: 18).
Greedy institutions
It has been argued that the work–family conflicts, in particular role conflicts, can be interpreted in terms of Lewis A. Coser's concept of "greedy institutions". These institutions are called "greedy" in the sense that they make all-encompassing demands on the commitment and loyalty of individuals, and tend to discourage involvement in other social spheres. Institutions such as religious orders, sects, academia, top level sports, the military and senior management have been interpreted as greedy institutions. On the other hand, also the family has been interpreted as a greedy institution in consideration of the demands placed on a caretaker. When a person is involved in two greedy institutions—be it child care and university, or family and the military, or others—task and role conflicts arise.
Segmentation
Based on this theory work and family do not affect each other, since they are segmented and independent from each other. The literature also reports the usage of the terms compartmentalization, independence, separateness, disengagement, neutrality, and detachment to describe this theory.
Compensation
In 1979, Piotrkowski argued that according to this theory employees "look to their homes as havens, look to their families as sources of satisfaction lacking in the occupational sphere." What distinguishes compensation theory from the previous theories is that, in compensation theory, for the first time, the positive effect of work to family has been recognized.
Supplemental and reactive compensation
Supplemental and reactive compensation theories are two dichotomies of compensation theory which were developed during the late 1980s and the early 1990s. While compensation theory describes the behavior of employees in pursuing an alternative reward in the other sphere, supplemental and reactive compensation theories try to describe the reason behind the work–family compensation behavior of employees.
Role enhancement theory
According to this theory, the combination of certain roles has a positive, rather than a negative, effect on well-being. This theory states that participation in one role is made better or easier by virtue of participation in the other role. Moreover, this theory acknowledges that the work–family relationship can have negative effects, in that overload and distress may occur beyond a certain upper limit; however, the central focus of this perspective is mainly on the positive effects of the work–family relationship, such as resource enhancement.
Work enrichment model
This theory is one of the recent models for explaining the relationship between work and family. According to this model, experience in one role (work or family) will enhance the quality of life in the other role. In other words, this model tries to explain the positive effects of the work–family relationship.
|
Please give me a short bulleted list of the theories that explain different aspects of the relationship between the work and family life made in this passage.
|
- Structural Functionalism: The theory of structural functionalism indicates that life is strictly separated between the workplace, where productive life occurs, and home or family life where affective life takes place.
- Greedy Institutions: The theory of greedy institutions posits that institutions make sweeping demands on individuals, and discourage an individual's participation in other social areas or institutions, leading to conflict when competing demands arise.
- Segmentation: The theory of segmentation argues that work and family life are fully segmented or severed from one another and therefore do not affect each other.
- Compensation: The theory of compensation states that individuals find refuge from work and its deficiencies in their home lives and the satisfaction that comes from their families.
- Supplemental and Reactive Compensation: Supplemental and reactive compensation are two theories that describe the ways in which individuals pursue alternative forms of life satisfaction while caught between the work and family social spheres.
- Role enhancement: The theory of role enhancement posits that participation in one social sphere can enhance satisfaction and participation in the other, up to certain limits beyond which overload may occur.
- Work Enrichment Model: The theory of work enrichment attempts to explain how experience in one role or social sphere can complement or increase satisfaction in another. In this way, the work enrichment model is similar to the theory of role enhancement.
|
1906.06045
| true
| null |
We attack the problem by automatically generating unanswerable questions for data augmentation to improve question answering models. The generated unanswerable questions should not be too easy for the question answering model so that data augmentation can better help the model. For example, a simple baseline method is randomly choosing a question asked for another paragraph, and using it as an unanswerable question. However, it would be trivial to determine whether the retrieved question is answerable by using word-overlap heuristics, because the question is irrelevant to the context BIBREF6 . In this work, we propose to generate unanswerable questions by editing an answerable question and conditioning on the corresponding paragraph that contains the answer. So the generated unanswerable questions are more lexically similar and relevant to the context. Moreover, by using the answerable question as a prototype and its answer span as a plausible answer, the generated examples can provide more discriminative training signal to the question answering model.
In this work, we propose to generate unanswerable questions by editing an answerable question and conditioning on the corresponding paragraph that contains the answer. So the generated unanswerable questions are more lexically similar and relevant to the context. Moreover, by using the answerable question as a prototype and its answer span as a plausible answer, the generated examples can provide more discriminative training signal to the question answering model.
|
Does their approach require a dataset of unanswerable questions mapped to similar answerable questions?
|
Yes.
|
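The simple baseline mentioned in the row above (pair a paragraph with a question drawn from a different paragraph to obtain an "unanswerable" example) can be sketched in a few lines. This is only an illustration of that baseline under our own assumptions about the data layout; the function name and the `paragraph`/`question` keys are hypothetical.

```python
import random

def random_unanswerable(examples, rng=random.Random(0)):
    """examples: list of dicts with 'paragraph' and 'question' keys.
    Yields (paragraph, question) pairs where the question comes from a different
    paragraph, so it is almost surely unanswerable from that paragraph -- but, as
    noted above, such pairs are easy to detect with word-overlap heuristics."""
    if len(examples) < 2:
        return
    for i, ex in enumerate(examples):
        j = rng.randrange(len(examples))
        while j == i:
            j = rng.randrange(len(examples))
        yield ex["paragraph"], examples[j]["question"]
```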
1911.08915
| false
| null |
Statistical characterization of languages has been a field of study for decades BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. Even simple quantities, like letter frequency, can be used to decode simple substitution cryptograms BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10. However, probably the most surprising result in the field is Zipf's law, which states that if one ranks words by their frequency in a large text, the resulting rank frequency distribution is approximately a power law, for all languages BIBREF0, BIBREF11. These kinds of universal results have long piqued the interest of physicists and mathematicians, as well as linguists BIBREF12, BIBREF13, BIBREF14. Indeed, a large amount of effort has been devoted to trying to understand the origin of Zipf's law, in some cases arguing that it arises from the fact that texts carry information BIBREF15, all the way to arguing that it is the result of mere chance BIBREF16, BIBREF17. Another interesting characterization of texts is the Heaps-Herdan law, which describes how the vocabulary - that is, the set of different words - grows with the size of a text and which, empirically, has been found to grow as a power of the text size BIBREF18, BIBREF19. It is worth noting that it has been argued that this law is a consequence of Zipf's law BIBREF20, BIBREF21.
However, probably the most surprising result in the field is Zipf's law, which states that if one ranks words by their frequency in a large text, the resulting rank frequency distribution is approximately a power law, for all languages BIBREF0, BIBREF11.
Another interesting characterization of texts is the Heaps-Herdan law, which describes how the vocabulary - that is, the set of different words - grows with the size of a text and which, empirically, has been found to grow as a power of the text size BIBREF18, BIBREF19. It is worth noting that it has been argued that this law is a consequence of Zipf's law BIBREF20, BIBREF21.
|
How do Zipf and Herdan-Heap's laws differ?
|
Zipf's law describes the rank-frequency distribution of words (word frequency falls off roughly as a power of rank), while the Heaps-Herdan law describes how the number of distinct words grows with the size of a text (and it has been argued that the Heaps-Herdan law is a consequence of Zipf's law).
|
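Both laws mentioned in the row above are easy to measure on any large text: Zipf's law says the frequency of the word of rank $r$ behaves roughly as $f(r) \propto r^{-\alpha}$ (with $\alpha \approx 1$), and the Heaps-Herdan law says the vocabulary size grows roughly as $V(n) \propto n^{\beta}$ with $0<\beta<1$, where $n$ is the number of tokens read. A minimal sketch (the whitespace tokenization and the corpus file are placeholders, not from the paper):

```python
from collections import Counter

def zipf_and_heaps(tokens):
    """Return the rank-frequency list (Zipf) and the vocabulary-growth curve (Heaps)."""
    freqs = sorted(Counter(tokens).values(), reverse=True)   # f(1) >= f(2) >= ...
    vocab, growth = set(), []
    for n, tok in enumerate(tokens, start=1):
        vocab.add(tok)
        growth.append((n, len(vocab)))                       # (n, V(n))
    return freqs, growth

tokens = open("corpus.txt", encoding="utf-8").read().lower().split()  # placeholder corpus
freqs, growth = zipf_and_heaps(tokens)
# Fitting log f(r) ~ -alpha*log r and log V(n) ~ beta*log n would estimate the exponents.
```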
null | false
| null |
Name some birds that are commonly observed in Berkeley, CA.
|
In Berkeley, it is common to see hummingbirds, crows, mourning doves, Steller's jays, song sparrows, and California gulls. It’s also possible to observe a red-tailed hawk or a great horned owl.
|
|
null | false
| null |
Why does oil float on water?
|
Because oil has a lower density than water, it floats on water.
|
|
null | false
| null |
Classify the following as a 'triangle' or 'not a triangle' - equilateral, isosceles, right angled, circle, sphere, scalene, rectangle, square, trapezium.
|
'triangle', 'triangle', 'triangle', 'not a triangle', 'not a triangle', 'triangle', 'not a triangle', 'not a triangle', 'not a triangle'.
|
|
null | false
| null |
What are the most abundant greenhouse gases in Earth's atmosphere?
|
The most abundant greenhouse gases in Earth's atmosphere, listed in decreasing order:
Water vapor
Carbon dioxide
Methane
Nitrous oxide
Ozone
Chlorofluorocarbons
Hydrofluorocarbons
Perfluorocarbons
|
|
null | false
| null |
Robyn Rihanna Fenty (born February 20, 1988) is a Barbadian singer, actress, and businesswoman. Born in Saint Michael and raised in Bridgetown, Barbados, Rihanna auditioned for American record producer Evan Rogers who invited her to the United States to record demo tapes. After signing with Def Jam in 2005, she soon gained recognition with the release of her first two studio albums, Music of the Sun (2005) and A Girl Like Me (2006), both of which were influenced by Caribbean music and peaked within the top ten of the US Billboard 200 chart.
|
Who is Rihanna?
|
Rihanna (Robyn Rihanna Fenty) is a singer, actress, and businesswoman from Barbados. After auditioning with American record producer Evan Rogers, she signed to Def Jam in 2005 and gained worldwide recognition with her first two studio albums, "Music of the Sun" and "A Girl Like Me".
|
null | false
| 200
|
Recent studies have shown the vulnerability of ML models to adversarial attacks, small perturbations which lead to misclassification of inputs. Adversarial example generation in NLP BIBREF0 is more challenging than in common computer vision tasks BIBREF1, BIBREF2, BIBREF3 due to two main reasons: the discrete nature of input space and ensuring semantic coherence with the original sentence. A major bottleneck in applying gradient based BIBREF4 or generator model BIBREF5 based approaches to generate adversarial examples in NLP is the backward propagation of the perturbations from the continuous embedding space to the discrete token space.
Recent works for attacking text models rely on introducing errors at the character level in words BIBREF6, BIBREF7 or adding and deleting words BIBREF8, BIBREF9, BIBREF10, etc. for creating adversarial examples. These techniques often result in adversarial examples which are unnatural looking and lack grammatical correctness, and thus can be easily identified by humans.
TextFooler BIBREF11 is a black-box attack that uses rule-based synonym replacement from a fixed word embedding space to generate adversarial examples. These adversarial examples do not account for the overall semantics of the sentence, and consider only token-level similarity using word embeddings. This can lead to out-of-context and unnaturally complex replacements (see Table ), which are easily identifiable by humans.
The recent advent of powerful language models BIBREF12, BIBREF13 in NLP has paved the way for using them in various downstream applications. In this paper, we present a simple yet novel technique: BAE (BERT-based Adversarial Examples), which uses a language model (LM) for token replacement to best fit the overall context. We perturb an input sentence by either replacing a token or inserting a new token in the sentence, by means of masking a part of the input and using an LM to fill in the mask (see Figure FIGREF1). BAE relies on the powerful BERT masked LM for ensuring grammatical correctness of the adversarial examples. Our attack beats the previous baselines by a large margin and confirms the inherent vulnerabilities of modern text classification models to adversarial attacks. Moreover, BAE produces richer and more natural-looking adversarial examples, as it uses the semantics learned by an LM.
To the best of our knowledge, we are the first to use a LM for adversarial example generation. We summarize our major contributions as follows:
We propose BAE, a novel strategy for generating natural looking adversarial examples using a masked language model.
We introduce 4 BAE attack modes, all of which are almost always stronger than previous baselines on 7 text classification datasets.
We show that, surprisingly, just a few replace/insert operations can reduce the accuracy of even a powerful BERT-based classifier by over $80\%$ on some datasets.
Adversarial example generation in NLP (Zhang et al., 2019) is more challenging than in commonly studied computer vision tasks (Szegedy et al., 2014; Kurakin et al., 2017; Papernot et al., 2017) because of (i) the discrete nature of the input space and (ii) the need to ensure semantic coherence with the original text.
|
Why is adversarial example generation in NLP more challenging than in commonly studied computer vision tasks?
|
Because of the discrete nature of the input space and the need to ensure semantic coherence with the original text.
|
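The core operation behind BAE as described in the row above (mask a token and let a BERT masked LM propose contextually fluent replacements or insertions) can be illustrated with the Hugging Face fill-mask pipeline. This is only a sketch of the token replace/insert step under our assumptions, not the authors' attack code; a full attack would additionally need to check whether the victim classifier's prediction changes and to filter candidates for semantic consistency.

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

sentence = "the movie was absolutely wonderful"
tokens = sentence.split()
i = 4  # position to perturb (here: "wonderful")

# Replace mode: mask the token and let BERT propose context-fitting substitutes.
masked = " ".join(tokens[:i] + ["[MASK]"] + tokens[i + 1:])
print([c["token_str"] for c in unmasker(masked, top_k=5)])

# Insert mode: add a [MASK] next to the token instead of replacing it.
masked_insert = " ".join(tokens[:i + 1] + ["[MASK]"] + tokens[i + 1:])
print([c["token_str"] for c in unmasker(masked_insert, top_k=5)])
```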
null | false
| null |
Although "sauerkraut" is from a German word (Sauerkraut), the dish did not originate in Germany. Some claim fermenting cabbage suan cai was already practised in the days of the building of the Great Wall of China and that the practice was likely transmitted from China to Europe by the Tartars. However, the Romans, as previously noted, pickled forms of cabbage, and were the more likely source of modern-day sauerkraut It then took root in Central and Eastern European cuisines, but also in other countries including the Netherlands, where it is known as zuurkool, and France, where the name became choucroute. The English name is borrowed from German where it means "sour cabbage". The names in Slavic and other Central and Eastern European languages have similar meanings with the German word: "fermented cabbage" (Albanian: lakër turshi, Azerbaijani: kələm turşusu, Belarusian: квашаная капуста, Czech: kysané zelí, Lithuanian: rauginti kopūstai, Russian: квашеная капуста, tr. kvašenaja kapusta, Turkısh: lahana turşusu, Romanian: varză murată, Persian: kalam torş, Ukrainian: квашена капуста) or "sour cabbage" (Bulgarian: кисело зеле, Estonian: hapukapsas, Finnish: hapankaali, Hungarian: savanyúkáposzta, Latvian: skābēti kāposti, Macedonian: расол / кисела зелка, Polish: kapusta kiszona, Russian: кислая капуста, tr. kislaya kapusta, Serbo-Croatian: кисели купус / кисело зелје, kiseli kupus / kiselo zelje, Slovak: kyslá kapusta, Slovene: kislo zelje, Ukrainian: кисла капуста, kysla kapusta).
|
Given this paragraph, where did the German dish sauerkraut originate from?
|
The Romans are believed to be the most likely origin of the German dish called sauerkraut.
|
null | false
| null |
What percentage of the world is left-handed?
|
It's estimated that about ten percent of the global population is left-hand dominant.
|
|
null | false
| null |
What does someone mean when they say “flip him the bird”?
|
The phrase refers to pointing a middle finger at someone in order to swear at them. This colloquial form of profanity was popularized by Robin Williams in the movie Good Morning Vietnam.
|
|
null | false
| null |
Super Bowl LV was an American football game played to determine the champion of the National Football League (NFL) for the 2020 season. The National Football Conference (NFC) champion Tampa Bay Buccaneers defeated the defending American Football Conference (AFC) and Super Bowl champion Kansas City Chiefs, 31–9. The game was played on February 7, 2021, at Raymond James Stadium in Tampa, Florida, the home stadium of the Buccaneers, marking the first time a team played a Super Bowl in its home stadium. Due to COVID-19 protocols limiting the stadium's seating capacity to 25,000 fans, it was the least-attended Super Bowl.
The Buccaneers' victory was their second and made them one of two teams, along with the Baltimore Ravens, to be undefeated in multiple Super Bowls. They finished the regular season with an 11–5 record and a wild card berth to advance to their second Super Bowl appearance through the guidance of several new acquisitions, most notably 21-year veteran quarterback Tom Brady in his first season away from the New England Patriots. The Chiefs, aided by their top-ranked offense, finished the regular season with a league-best 14–2 record to advance to their fourth Super Bowl appearance and were the defending Super Bowl LIV champions, seeking to become the first repeat champions since the Patriots in 2004.
For the first time under quarterback Patrick Mahomes, the Chiefs failed to score a touchdown and lost by double-digits, making them the third Super Bowl team to not score a touchdown. They also committed 11 penalties for 120 yards, including a record eight penalties for 95 yards in the first half, most of which were called against the defense. The Buccaneers capitalized on these struggles to take a commanding 21–6 lead at halftime and dominated the remainder of the game. Brady, who also extended his player records for Super Bowl appearances to 10 and wins to seven, was named Super Bowl MVP for a record fifth time and was the first to receive the award with multiple franchises. He became the oldest player to receive the honor and win a Super Bowl as the starting quarterback at age 43, breaking additional personal records, while Bruce Arians was the oldest head coach to win the Super Bowl at 68.
The game was televised nationally by CBS. Country music singer Eric Church and R&B singer Jazmine Sullivan performed the national anthem, while the halftime show was headlined by Canadian singer The Weeknd. On television, Super Bowl LV was seen by 91.63 million viewers, the lowest ratings for the game since 2006. Combined with viewership on other platforms, viewership was down by 5% overall in comparison to Super Bowl LIV, but with a 69% increase in average streaming viewership.
|
Given the reference text about Super Bowl LV, please answer the following questions:
1. Who performed at the halftime show?
2. Which team won and what was the final score?
3. Who was the MVP?
4. Where was the game played?
|
Given the provided reference text, the answers to the questions are as follows:
1. Who performed at the halftime show?
The halftime show was headlined by Canadian singer The Weeknd.
2. Which team won and what was the final score?
The Tampa Bay Buccaneers won the game by a score of 31–9.
3. Who was the MVP?
Tom Brady was the MVP.
4. Where was the game played?
The game was played at Raymond James Stadium in Tampa, Florida.
|
null | false
| null |
I am an odd number and if you take away a letter, I become even. What number am I?
|
I am the number seven.
|
|
null | false
| null |
Campanula cervicaria, also known as the bristly bellflower, is a species of flowering plant in the bellflower family, Campanulaceae. Native to Scandinavia and central Europe, C. cervicaria has also become naturalised in parts of the U.S. state of Minnesota. Its natural habitat is woodland edges, hillside meadows, dry meadows and banks, and it also flourishes where soil has been disturbed, for example after slash-and-burn or forest clearance. The plant has a hairy stem, with light blue flowers about 1 to 2 centimetres (0.4 to 0.8 in) long and grouped together. This C. cervicaria flower was photographed in Keila, Estonia.
|
Do you know any beautiful flower with purple petals and native to Scandinavia and central Europe?
|
Yes, Campanula cervicaria, also known as the bristly bellflower, is a species of flowering plant in the bellflower family, Campanulaceae. It is native to Scandinavia and central Europe, and it has also become naturalised in parts of the U.S. state of Minnesota.
|
null | false
| 66
|
We evaluate our approach on the large vocabulary task of the Wall Street Journal (WSJ) dataset BIBREF25 , which contains 80 hours of clean read speech, and Librispeech BIBREF26 , which contains 1000 hours with separate train/dev/test splits for clean and noisy speech. Each dataset comes with official textual data to train language models, which contain 37 million tokens for WSJ, 800 million tokens for Librispeech. Our language models are trained separately for each dataset on the official text data only. These datasets were chosen to study the impact of the different components of our system at different scales of training data and in different recording conditions.
The models are evaluated in Word Error Rate (WER). Our experiments use the open source codes of wav2letter for the acoustic model, and fairseq for the language model. More details on the experimental setup are given below.
Baseline Our baseline for each dataset follows BIBREF11 . It uses the same convolutional acoustic model as our approach but a mel-filterbanks front-end and a 4-gram language model.
Training/test splits On WSJ, models are trained on si284. nov93dev is used for validation and nov92 for test. On Librispeech, we train on the concatenation of train-clean and train-other. The validation set is dev-clean when testing on test-clean, and dev-other when testing on test-other.
Acoustic model architecture The architecture for the convolutional acoustic model is the "high dropout" model from BIBREF11 for Librispeech, which has 19 layers in addition to the front-end (mel-filterbanks for the baseline, or the learnable front-end for our approach). On WSJ, we use the lighter version used in BIBREF17, which has 17 layers. Dropout is applied at each layer after the front-end, following BIBREF20. The learnable front-end uses 40 or 80 filters.
Language model architecture As described in Section SECREF8, we use the GCNN-14B model of BIBREF0 with dropout at each convolutional and linear layer on both WSJ and Librispeech. We keep all the words (162K) in the WSJ training corpus. For Librispeech, we only use the most frequent 200K tokens (out of 900K).
Hyperparameter tuning The acoustic models are trained following BIBREF11 , BIBREF17 , using SGD with a decreasing learning rate, weight normalization and gradient clipping at 0.2 and a momentum of 0.9. The language models are trained with Nesterov accelerated gradient BIBREF27 . Following BIBREF0 , we also use weight normalization and gradient clipping.
The parameters of the beam search (see Section SECREF9 ) INLINEFORM0 , INLINEFORM1 and INLINEFORM2 are tuned on the validation set with a beam size of 2500 and a beam score of 26 for computational efficiency. Once INLINEFORM3 are chosen, the test WER is computed with a beam size of 3000 and a beam score of 50.
We evaluate our approach on the large vocabulary task of the Wall Street Journal (WSJ) dataset [24], which contains 80 hours of clean read speech, and Librispeech [25], which contains 1000 hours with separate train/dev/test splits for clean and noisy speech.
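The systems above are compared in Word Error Rate (WER). As a quick illustration of the metric only (this sketch is not from the paper), WER is the word-level Levenshtein distance between hypothesis and reference, divided by the reference length:

```python
# Minimal WER sketch: edit distance over words / number of reference words.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words and first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)  # substitution, deletion, insertion
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # 1 deletion / 6 words ≈ 0.167
```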
|
How large is the dataset used for evaluation?
|
They evaluate their approach on the large vocabulary task of the Wall Street Journal (WSJ) dataset [24], which contains 80 hours of clean read speech, and Librispeech which contains 1000 hours with separate train/dev/test splits for clean and noisy speech.
|
null | false
| null |
Who are the top 5 basketball players of all time?
|
The top 5 basketball players of all time will differ in opinion, depending on who you are talking to and in what region. But a common list includes Michael Jordan, LeBron James, Kobe Bryant, Kareem Abdul-Jabbar and Magic Johnson. Other players that are often mentioned in the conversation are Larry Bird, Bill Russell and Julius Erving.
|
|
null | false
| null |
Spring training is a series of practices and exhibition games preceding the start of the regular season. Teams hold training camps in the states of Arizona and Florida, where the early warm weather allows teams to practice and play without worrying about the late winter cold. Spring training allows new players to compete for roster and position spots and gives existing team players practice time prior to competitive play. The teams that hold spring training in Arizona are grouped into the Cactus League, while teams that hold camp in Florida are known as the Grapefruit League. Spring training has always attracted fan attention, drawing crowds who travel to the warmer climates to enjoy the weather and watch their favorite teams play, and spring training usually coincides with spring break for many college students. Autograph seekers also find greater access to players during spring training.
|
Given this text about the Major League Baseball, where do Major League Baseball teams play spring training games and why.
|
Spring training games are played in the states of Arizona or Florida due to the warm weather, even early in the year.
|
null | false
| null |
Can you explain the Keto diet and ketosis?
|
The keto diet, short for the ketogenic diet, focuses on high-fat and low-carbohydrate consumption in order to achieve a state of ketosis. Ketosis is a metabolic state in which fat stores in the liver produce ketones to use as energy instead of glucose from carbohydrates. The idea here is to force the body to burn fat for energy instead of carbs, which contributes to weight loss and other healthy effects such as enhanced blood sugar and energy levels. It is worth noting that this diet requires a lifestyle that is often difficult to maintain and breaking it can lead to adverse effects. Some professionals advocate for moderation, balance and regular exercise rather than attempting to maintain ketosis.
|
|
null | false
| null |
What are some fun activities to do in Charlotte, NC?
|
Charlotte is the most-visited city in the state, attracting 28.3 million visitors in 2018. Fun area attractions include the Carolina Panthers NFL football team, the Charlotte Hornets basketball team, Carowinds amusement park, Charlotte Motor Speedway, U.S. National Whitewater Center, Discovery Place, Great Wolf Lodge, Sea Life Aquarium, Bechtler Museum of Modern Art, Billy Graham Library, Carolinas Aviation Museum, Harvey B. Gantt Center for African-American Arts + Culture, Levine Museum of the New South, McColl Center for Art + Innovation, Mint Museum, and the NASCAR Hall of Fame.
|
|
null | false
| null |
The movement of electric charge is known as an electric current, the intensity of which is usually measured in amperes. Current can consist of any moving charged particles; most commonly these are electrons, but any charge in motion constitutes a current. Electric current can flow through some things, electrical conductors, but will not flow through an electrical insulator.
By historical convention, a positive current is defined as having the same direction of flow as any positive charge it contains, or to flow from the most positive part of a circuit to the most negative part. Current defined in this manner is called conventional current. The motion of negatively charged electrons around an electric circuit, one of the most familiar forms of current, is thus deemed positive in the opposite direction to that of the electrons. However, depending on the conditions, an electric current can consist of a flow of charged particles in either direction, or even in both directions at once. The positive-to-negative convention is widely used to simplify this situation.
The process by which electric current passes through a material is termed electrical conduction, and its nature varies with that of the charged particles and the material through which they are travelling. Examples of electric currents include metallic conduction, where electrons flow through a conductor such as metal, and electrolysis, where ions (charged atoms) flow through liquids, or through plasmas such as electrical sparks. While the particles themselves can move quite slowly, sometimes with an average drift velocity only fractions of a millimetre per second, the electric field that drives them itself propagates at close to the speed of light, enabling electrical signals to pass rapidly along wires.
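To make the slow-drift claim above concrete, here is a small illustrative calculation using the standard relation v_d = I / (n A q); the copper wire figures are assumed typical values, not taken from the passage:

```python
# Drift velocity of conduction electrons in a copper wire carrying 1 A.
I = 1.0            # current in amperes
A = 1e-6           # cross-sectional area in m^2 (1 mm^2)
n = 8.5e28         # approximate free-electron density of copper, per m^3
q = 1.602e-19      # elementary charge in coulombs

v_d = I / (n * A * q)
print(f"drift velocity ≈ {v_d * 1000:.3f} mm/s")   # on the order of 0.07 mm/s
```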
In engineering or household applications, current is often described as being either direct current (DC) or alternating current (AC). These terms refer to how the current varies in time. Direct current, as produced, for example, by a battery and required by most electronic devices, is a unidirectional flow from the positive part of a circuit to the negative. If, as is most common, this flow is carried by electrons, they will be travelling in the opposite direction. Alternating current is any current that reverses direction repeatedly; almost always this takes the form of a sine wave. Alternating current thus pulses back and forth within a conductor without the charge moving any net distance over time. The time-averaged value of an alternating current is zero, but it delivers energy in first one direction, and then the reverse. Alternating current is affected by electrical properties that are not observed under steady state direct current, such as inductance and capacitance. These properties, however, can become important when circuitry is subjected to transients, such as when first energised.
|
What is electrical conduction?
|
The process by which electric current passes through a material is termed electrical conduction.
|
null | false
| null |
What is Bar Method?
|
Bar Method is a type of exercise routine based on body weight strength movements and stretching. The entire one-hour class is performed at a Ballet Bar and individual instruction and corrections are provided by a certified Bar Method teacher. Most movements are based on physical therapy with a focus on maximizing muscular strength and minimal joint impact.
|
|
null | false
| null |
Imagine you're planning a ski trip with your friends. What are some good areas to go skiing?
|
There are many skiing spots around the world with various levels of difficulty. Some of the most well-known ski areas in the world are in North America, including in Whistler, BC, Lake Tahoe, CA, Park City, UT, various resorts in Colorado, and Mont Tremblant, Quebec.
|
|
null | false
| null |
George Watson Buck (1789–1854) was the engineer of the Montgomeryshire Canal in the early 19th century, and was responsible for the unique lock paddle design.
He was later resident engineer during the building of the London and Birmingham Railway, and also designed Stockport Viaduct and the Dane Valley Viaduct on the Manchester and Birmingham Railway.
Early life
Buck was born on 1 April 1789 at Stoke Holy Cross, near Norwich. His parents were Quakers and sent him, with his two brothers, to the Quaker School at Ackworth, West Yorkshire. With his schooling completed, his father placed him with a wholesaler at Tower Hill, London, as he wanted him to be involved in trade, but the position did not suit Buck, and so he looked for something else. Around 1807, the East London Waterworks Company were building Old Ford pumping station, which had been designed by Ralph Walker, and Buck managed to secure a job there. When the job was finished, Walker moved on to Farlington, to work on a water supply scheme for Portsmouth and Gosport. The scheme had been authorised by an Act of Parliament in 1809, and Buck went to work as resident engineer, reporting to Walker. The scheme suffered from an intermittent water supply, as did a competing scheme engineered by William Nicholson.
|
Given a reference text about George Watson Buck, tell me where he went to Quaker School and where he got a job in 1807.
|
George Watson Buck went to Quaker School at Ackworth, West Yorkshire and got a job at the East London Waterworks Company in 1807.
|
null | false
| 356
|
In this section, we provide comparative results on our new Multilingual Document Classification Corpus. Since the initial work by BIBREF0 many alternative approaches to cross-lingual document classification have been developed. We will encourage the respective authors to evaluate their systems on MLDoc. We believe that a large variety of transfer language pairs will give valuable insights on the performance of the various approaches.
In this paper, we propose initial strong baselines which represent two complementary directions of research: one based on the aggregation of multilingual word embeddings, and another one, which directly learns multilingual sentence representations. Details on each approach are given in section "Multilingual word representations" and "Multilingual sentence representations" respectively. In contrast to previous works on cross-lingual document classification with RCV2, we explore training the classifier on all languages and transferring it to all others, ie. we do not limit our study to the transfer between English and a foreign language.
One can envision several ways to define cross-lingual document classification, in function of the resources which are used in the source and transfer language (see Table 3 ). The first scheme assumes that we have no resources in the transfer language at all, neither labeled nor unlabeled. We will name this case “zero-shot cross-lingual document classification”. To simplify the presentation, we will assume that we transfer from English to German. The training and evaluation protocol is as follows. First, train a classifier using resources in the source language only, eg. the training and development corpus are in English. All meta parameters and model choices are performed using the English development corpus. Once the best performing model is selected, it is applied to the transfer language, eg. the German test set. Since no resources of the transfer language are used, the same system can be applied to many different transfer languages. This type of cross-lingual document classification needs a very strong multilingual representation since no knowledge on the target language was used during the development of the classifier.
In a second class of cross-lingual document classification, we may aim at improving the transfer performance by using a limited amount of resources in the target language. In the framework of the proposed MLDoc we will use the development corpus of the target language for model selection. We will name this method “targeted cross-lingual document classification” since the system is tailored to one particular transfer language. It is unlikely that this system will perform well on languages other than the ones used for training or model selection.
If the goal is to build one document classification system for many languages, it may be interesting to use several languages already during training and model selection. To allow a fair comparison, we will assume that these multilingual resources are the same size as the ones used for zero-shot or targeted cross-language document classification, e.g. a training set composed of five languages with 200 examples each. This type of training is not a cross-lingual approach any more. Consequently, we will refer to this method as “joint multilingual document classification”.
The first scheme assumes that we have no resources in the transfer language at all, neither labeled nor unlabeled.
|
What does the first scheme assume when defining cross-lingual document classification?
|
The first scheme assumes that the authors have no resources in the transfer language at all, neither labeled nor unlabeled.
|
null | false
| null |
Name five NHL teams in the Western Conference.
|
Five NHL teams in the Western Conference are the Dallas Stars, Minnesota Wild, Edmonton Oilers, Seattle Kraken, and Colorado Avalanche.
|
|
null | false
| 148
|
With the availability of rich data on users' locations, profiles and search history, personalization has become the leading trend in large-scale information retrieval. However, efficiency through personalization is not yet the most suitable model when tackling domain-specific searches. This is due to several factors, such as the lexical and semantic challenges of domain-specific data that often include advanced argumentation and complex contextual information, the higher sparseness of relevant information sources, and the more pronounced lack of similarities between users' searches.
A recent study on expert search strategies among healthcare information professionals BIBREF0 showed that, for a given search task, they spend an average of 60 minutes per collection or database, 3 minutes to examine the relevance of each document, and 4 hours of total search time. When written in steps, their search strategy spans over 15 lines and can reach up to 105 lines.
With the abundance of information sources in the medical domain, consumers are more and more faced with a similar challenge, one that needs dedicated solutions that can adapt to the heterogeneity and specifics of health-related information.
Dedicated Question Answering (QA) systems are one of the viable solutions to this problem as they are designed to understand natural language questions without relying on external information on the users.
In the context of QA, the goal of Recognizing Question Entailment (RQE) is to retrieve answers to a premise question ( INLINEFORM0 ) by retrieving inferred or entailed questions, called hypothesis questions ( INLINEFORM1 ) that already have associated answers. Therefore, we define the entailment relation between two questions as: a question INLINEFORM2 entails a question INLINEFORM3 if every answer to INLINEFORM4 is also a correct answer to INLINEFORM5 BIBREF1 .
RQE is particularly relevant due to the increasing numbers of similar questions posted online BIBREF2 and its ability to solve differently the challenging issues of question understanding and answer extraction. In addition to being used to find relevant answers, these resources can also be used in training models able to recognize inference relations and similarity between questions.
Question similarity has recently attracted international challenges BIBREF3 , BIBREF4 and several research efforts proposing a wide range of approaches, including Logistic Regression, Recurrent Neural Networks (RNNs), Long Short Term Memory cells (LSTMs), and Convolutional Neural Networks (CNNs) BIBREF5 , BIBREF6 , BIBREF1 , BIBREF7 .
In this paper, we study question entailment in the medical domain and the effectiveness of the end-to-end RQE-based QA approach by evaluating the relevance of the retrieved answers. Although entailment was attempted in QA before BIBREF8 , BIBREF9 , BIBREF10 , as far as we know, we are the first to introduce and evaluate a full medical question answering approach based on question entailment for free-text questions. Our contributions are:
The next section is dedicated to related work on question answering, question similarity and entailment. In Section SECREF3 , we present two machine learning (ML) and deep learning (DL) methods for RQE and compare their performance using open-domain and clinical datasets. Section SECREF4 describes the new collection of medical question-answer pairs. In Section SECREF5 , we describe our RQE-based approach for QA. Section SECREF6 presents our evaluation of the retrieved answers and the results obtained on TREC 2017 LiveQA medical questions.
In Section 3, we present two machine learning (ML) and deep learning (DL) methods for RQE and compare their performance using open-domain and clinical datasets.
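As a rough illustration of the decision an RQE system makes over a question pair, the sketch below shows a hypothetical similarity-threshold baseline; it is only for illustration and is not one of the ML/DL models evaluated in the paper:

```python
# Toy RQE baseline: TF-IDF vectors for the two questions, cosine similarity, threshold.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

premise = "What are the side effects of taking ibuprofen every day?"
hypothesis = "Does taking ibuprofen every day have side effects?"

vec = TfidfVectorizer().fit([premise, hypothesis])   # fit on just this pair for the sketch
X = vec.transform([premise, hypothesis])
score = cosine_similarity(X[0:1], X[1:2])[0, 0]

# Entailment decision by thresholding the similarity score.
print("entails" if score > 0.5 else "does not entail", f"(similarity = {score:.2f})")
```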
|
Does the author use the deep learning method?
|
Yes.
|
null | false
| null |
What are some common types of snake plants?
|
Snake plants come in many varieties. Some common varieties include the bird's nest snake plant, cylinder snake plant, laurentii sansevieria, twisted sister, white snake plant, rhino grass, and sansevieria desertii.
|
|
null | false
| 26
|
One of the most fundamental topics in natural language processing is how best to derive high-level representations from constituent parts, as natural language meanings are a function of their constituent parts. How best to construct a sentence representation from distributed word embeddings is an example domain of this larger issue. Even though sequential neural models such as recurrent neural networks (RNN) BIBREF0 and their variants including Long Short-Term Memory (LSTM) BIBREF1 and Gated Recurrent Unit (GRU) BIBREF2 have become the de-facto standard for condensing sentence-level information from a sequence of words into a fixed vector, there have been many lines of research towards better sentence representation using other neural architectures, e.g. convolutional neural networks (CNN) BIBREF3 or self-attention based models BIBREF4 .
From a linguistic point of view, the underlying tree structure—as expressed by its constituency and dependency trees—of a sentence is an integral part of its meaning. Inspired by this fact, some recursive neural network (RvNN) models are designed to reflect the syntactic tree structure, achieving impressive results on several sentence-level tasks such as sentiment analysis BIBREF5 , BIBREF6 , machine translation BIBREF7 , natural language inference BIBREF8 , and discourse relation classification BIBREF9 .
However, some recent works BIBREF10, BIBREF11 have proposed latent tree models, which learn to construct task-specific tree structures without explicit supervision, bringing into question the value of linguistically-motivated recursive neural models. Witnessing the surprising performance of the latent tree models on some sentence-level tasks, there arises a natural question: Are linguistic tree structures the optimal way of composing sentence representations for NLP tasks?
In this paper, we demonstrate that linguistic priors are in fact useful for devising effective neural models for sentence representations, showing that our novel architecture based on constituency trees and their tag information obtains superior performance on several sentence-level tasks, including sentiment analysis and natural language inference.
A chief novelty of our approach is that we introduce a small separate tag-level tree-LSTM to control the composition function of the existing word-level tree-LSTM, which is in charge of extracting helpful syntactic signals for meaningful semantic composition of constituents by considering both the structures and linguistic tags of constituency trees simultaneously. In addition, we demonstrate that applying a typical LSTM to preprocess the leaf nodes of a tree-LSTM greatly improves the performance of the tree models. Moreover, we propose a clustered tag set to replace the existing tags on the assumption that the original syntactic tags are too fined-grained to be useful in neural models.
In short, our contributions in this work are as follows:
In this paper, we demonstrate that linguistic priors are in fact useful for devising effective neural models for sentence representations, showing that our novel architecture based on constituency trees and their tag information obtains superior performance on several sentence-level tasks, including sentiment analysis and natural language inference.
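For readers unfamiliar with tree-structured composition, the sketch below implements a standard binary Tree-LSTM composition cell, the kind of word-level building block referred to above; the tag-level control mechanism the work introduces is not shown, so this is background only:

```python
import torch
import torch.nn as nn

class BinaryTreeLSTMCell(nn.Module):
    """Compose left/right child states (h_l, c_l), (h_r, c_r) into a parent state."""
    def __init__(self, hidden_dim):
        super().__init__()
        # One linear map produces all five gates (i, f_l, f_r, o, u) from [h_l; h_r].
        self.gates = nn.Linear(2 * hidden_dim, 5 * hidden_dim)

    def forward(self, h_l, c_l, h_r, c_r):
        i, f_l, f_r, o, u = self.gates(torch.cat([h_l, h_r], dim=-1)).chunk(5, dim=-1)
        i, f_l, f_r, o = torch.sigmoid(i), torch.sigmoid(f_l), torch.sigmoid(f_r), torch.sigmoid(o)
        u = torch.tanh(u)
        c = i * u + f_l * c_l + f_r * c_r     # gated combination of both children's memories
        h = o * torch.tanh(c)
        return h, c

cell = BinaryTreeLSTMCell(hidden_dim=8)
h_l, c_l = torch.zeros(1, 8), torch.zeros(1, 8)   # left child state (e.g., a leaf)
h_r, c_r = torch.zeros(1, 8), torch.zeros(1, 8)   # right child state
h, c = cell(h_l, c_l, h_r, c_r)
print(h.shape)  # torch.Size([1, 8])
```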
|
On which level of tasks does the novel architecture based on constituency trees and their tag information obtain superior performance?
|
On several sentence-level tasks, including sentiment analysis and natural language inference.
|
null | false
| null |
Give me the list of fictional dogs in prose and poetry
|
Fictional dogs in prose and poetry:
- Buck, in Jack London's The Call of the Wild
- Bull's-eye, Bill Sikes' dog in Oliver Twist by Charles Dickens
- Ripper, Marjorie Dursley's pet dog in J.K. Rowling's Harry Potter and the Prisoner of Azkaban
- Jip (short for Gypsy), Dora Spenlow's spaniel in Charles Dickens' David Copperfield
- Jip, resident in the household of Hugh Lofting's Dr. Dolittle
- Baleia, the dog-companion that follows a poor family throughout the hardships of the 1915-drought in Brazil in Vidas secas, by Graciliano Ramos
- Quincas Borba, the dog whose name is the same as his human's in Machado de Assis' Quincas Borba
- Tentação, the dog in the homonymous short-story by Clarice Lispector
- Pingo de Ouro, Miguilim's companion in the novella Campo Geral, by João Guimarães Rosa
- Biruta, the dog in the homonymous short-story by Lygia Fagundes Telles
- Bruno Lichtenstein, the dog in the homonymous short-story by Rubem Braga
- Toto, Dorothy's dog in The Wizard of Oz, by L. Frank Baum
- Sharik, the astray dog who undergoes a transformation surgery in Heart of a Dog, by Mikhail Bulgakov
- Mr. Bones, the companion dog in Timbuktu, by Paul Auster
- Randolph, a Labrador retriever and the narrator of A Dog About Town, by JF Englert
|
|
null | false
| null |
What is a NBA super team?
|
NBA superteam is a term coined in the past decade that involves various all-stars deciding to team up to play for a particular organization. The superteam only works if the organization has enough money to pay everyone - so often superstars will take a pay cut to play with fellow superstars. Some recent superteams, regardless of success, include the Nets (Harden, Durant, Kyrie, even Simmons), the Warriors (Durant, Curry, Green, Thompson), and the Heat (Bosh, Wade, LeBron).
|
|
null | false
| 462
|
Theoretical Analysis of the DG Setting and Algorithms The DG problem setting was first analysed in Blanchard et al. (2011). Since then there have been some attempts to analyse DG algorithms from a generalisation bound perspective (Muandet et al., 2013; Blanchard et al., 2021; Hu et al., 2020; Albuquerque et al., 2020; Rosenfeld et al., 2021). However these studies have theoretical results that are either restricted to specific model classes, such as kernel machines, or make strong assumptions about how the domains seen during training will resemble those seen at test time, e.g., that all domains are convex combinations of a finite pre-determined set of prototypical domains. In contrast, our Rademacher complexity approach can be applied to a broad range of model classes (including neural networks), and makes comparatively milder assumptions about the relationship between domains, i.e., they are i.i.d. samples from another arbitrary distribution over domains.
The majority of the existing work investigating the theoretical foundations of DG follow the initial formalisation of the domain generalisation problem put forth by, where the goal is to minimise the expected error over unseen domains. However, several recent works have also explored the idea of bounding the error on a single unseen domain with the most pathological distribution shift. This type of analysis is typically rooted in methods from causal inference, rather than statistical learning theory. As a consequence, they are able to make stronger claims for the problems they address, but the scope of their analysis is necessarily limited to the scenarios where their assumptions about the underlying causal structures are valid. For example, Janzing (2019) provides bounds that assume problems conform to a specific class of structural equation models, and the analysis is performed under the assumption that infinite training data is available within each of the observed training domains. Throughout the work we address the standard DG formalisation given by, where one is concerned with the expected performance of a model on domains sampled from some distribution over domains.
Others rely on trying to link between domain adaptation objectives (where target domains are observable for alignment to source domains) and domain generalisation (where target domains are not observable and thus cannot correctly be used in a learning objective). proceed by making assumptions on the structure of the distribution over possible domains (i.e., that it has support determined by the convex hull of a finite set of prototypical domains), which allows them to upper bound the domain alignment metric. provide a bound that depends on an unobservable domain distance quantity, which they then approximate in experiments using kernel density estimates. is another piece of work that theoretically investigates the generalisation of ERM in a DG setting. They deal with online DG, where each time-step corresponds to observing a new domain, and the learner must produce a new model capable of generalising to novel domains. Another point of difference between their work and the standard DG problem setting of is that the domain at each time-step is chosen by an adversary. They analyse this game for a finite number of time-steps, but they assume each domain has an infinite amount of data. They also put some limitations on the adversary: e.g., it must choose a domain that is a convex combination of a finite number of pre-determined domains. In contrast, our theoretical analysis is in the more realistic setting where one has a finite amount of data per domain, and the domains we consider are not limited to convex combinations of a set of prototypical domains. Possibly the most similar work to our theoretical contributions is due to, who also provide learning-theoretic generalisation bounds for DG. However, their analysis only applies to finite hypothesis classes (which does not include, e.g., linear models or neural networks), whereas ours can be applied to any class amenable to analysis with Rademacher complexity.
The main existing empirical analysis on DG is, who compared several state of the art DG methods under a common evaluation and hyper-parameter tuning protocol called DomainBed. They ultimately defend Empirical Risk Minimization (ERM) over more sophisticated alternatives on the grounds that no competitor consistently beats it across the benchmark suite. We also broadly defend ERM, and build on the same benchmark, but differently we provide a much deeper analysis into when and why ERM works. More specifically: (i) We provide a new theoretical analysis of ERM's generalisation quality unlike the prior purely empirical evaluation, (ii) We re-use the DomainBed benchmark to directly corroborate this theory under controlled conditions using linear models where model complexity can be tractably and accurately tuned. (iii) We use our complexity-based analysis to explain the previously erratic results of prior DomainBed competitors in terms of model complexity. (iv) We identify, and empirically validate, the preferred model selection criterion for DG, a point which was inconclusive in.
Theoretical Analysis of the DG Setting and Algorithms The DG problem setting was first analysed in Blanchard et al. (2011). Since then there have been some attempts to analyse DG algorithms from a generalisation bound perspective (Muandet et al., 2013; Blanchard et al., 2021; Hu et al., 2020; Albuquerque et al., 2020; Rosenfeld et al., 2021). However these studies have theoretical results that are either restricted to specific model classes, such as kernel machines, or make strong assumptions about how the domains seen during training will resemble those seen at test time—e.g., that all domains are convex combinations of a finite pre-determined set of prototypical domains. In contrast, our Rademacher complexity approach can be applied to a broad range of model classes (including neural networks), and makes comparatively milder assumptions about the relationship between domains—i.e., they are i.i.d. samples from another arbitrary distribution over domains.
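The ERM baseline defended in this line of work amounts to pooling the observed training domains and fitting a single predictor, then testing on a domain never seen during training. A toy sketch with synthetic domains (all numbers made up for illustration; the labelling rule is shared across domains while the input distribution shifts):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_domain(shift):
    X = rng.normal(loc=shift, scale=1.0, size=(200, 5))
    y = (X[:, 0] > X[:, 1]).astype(int)   # domain-invariant labelling rule
    return X, y

train_domains = [make_domain(s) for s in (0.0, 0.5, 1.0)]   # observed domains
X_test, y_test = make_domain(1.5)                            # unseen domain

# ERM: ignore domain identity and minimise average loss over the pooled sample.
X_pool = np.vstack([X for X, _ in train_domains])
y_pool = np.concatenate([y for _, y in train_domains])
clf = LogisticRegression(max_iter=1000).fit(X_pool, y_pool)
print("accuracy on the unseen domain:", clf.score(X_test, y_test))
```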
|
What is the relationship of this work to prior DG theory work?
|
We have expanded the RW section to better contextualize our work w.r.t. existing theory and explain the particular type of DG theory that we are going for to clarify the breadth of our claims (i.e., focus on the general-purpose rather than the causal inference setting). We are happy to cite any additional theory work that we are missing if you have any pointers.
|
null | false
| 187
|
Knowledge graphs (KGs) containing relationship triples (subject, relation, object), denoted as (s, r, o), are the useful resources for many NLP and especially information retrieval applications such as semantic search and question answering BIBREF0 . However, large knowledge graphs, even containing billions of triples, are still incomplete, i.e., missing a lot of valid triples BIBREF1 . Therefore, much research efforts have focused on the knowledge graph completion task which aims to predict missing triples in KGs, i.e., predicting whether a triple not in KGs is likely to be valid or not BIBREF2 , BIBREF3 , BIBREF4 . To this end, many embedding models have been proposed to learn vector representations for entities (i.e., subject/head entity and object/tail entity) and relations in KGs, and obtained state-of-the-art results as summarized by BIBREF5 and BIBREF6 . These embedding models score triples (s, r, o), such that valid triples have higher plausibility scores than invalid ones BIBREF2 , BIBREF3 , BIBREF4 . For example, in the context of KGs, the score for (Melbourne, cityOf, Australia) is higher than the score for (Melbourne, cityOf, United Kingdom).
Triple modeling is applied not only to the KG completion, but also for other tasks which can be formulated as a triple-based prediction problem. An example is in search personalization, one would aim to tailor search results to each specific user based on the user's personal interests and preferences BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . Here the triples can be formulated as (submitted query, user profile, returned document) and used to re-rank documents returned to a user given an input query, by employing an existing KG embedding method such as TransE BIBREF3 , as proposed by BIBREF12 . Previous studies have shown the effectiveness of modeling triple for either KG completion or search personalization. However, there has been no single study investigating the performance on both tasks.
Conventional embedding models, such as TransE BIBREF3 , DISTMULT BIBREF13 and ComplEx BIBREF14 , use addition, subtraction or simple multiplication operators, thus only capture the linear relationships between entities. Recent research has raised interest in applying deep neural networks to triple-based prediction problems. For example, BIBREF15 proposed ConvKB—a convolutional neural network (CNN)-based model for KG completion and achieved state-of-the-art results. Most of KG embedding models are constructed to modeling entries at the same dimension of the given triple, where presumably each dimension captures some relation-specific attribute of entities. To the best of our knowledge, however, none of the existing models has a “deep” architecture for modeling the entries in a triple at the same dimension.
BIBREF16 introduced capsule networks (CapsNet) that employ capsules (i.e., each capsule is a group of neurons) to capture entities in images and then uses a routing process to specify connections from capsules in a layer to those in the next layer. Hence CapsNet could encode the intrinsic spatial relationship between a part and a whole constituting viewpoint invariant knowledge that automatically generalizes to novel viewpoints. Each capsule accounts for capturing variations of an object or object part in the image, which can be efficiently visualized. Our high-level hypothesis is that embedding entries at the same dimension of the triple also have these variations, although it is not straightforward to be visually examined.
To that end, we introduce CapsE to explore a novel application of CapsNet on triple-based data for two problems: KG completion and search personalization. Different from the traditional modeling design of CapsNet where capsules are constructed by splitting feature maps, we use capsules to model the entries at the same dimension in the entity and relation embeddings. In our CapsE, INLINEFORM0 , INLINEFORM1 and INLINEFORM2 are unique INLINEFORM3 -dimensional embeddings of INLINEFORM4 , INLINEFORM5 and INLINEFORM6 , respectively. The embedding triple [ INLINEFORM7 , INLINEFORM8 , INLINEFORM9 ] of (s, r, o) is fed to the convolution layer where multiple filters of the same INLINEFORM10 shape are repeatedly operated over every row of the matrix to produce INLINEFORM11 -dimensional feature maps. Entries at the same dimension from all feature maps are then encapsulated into a capsule. Thus, each capsule can encode many characteristics in the embedding triple to represent the entries at the corresponding dimension. These capsules are then routed to another capsule which outputs a continuous vector whose length is used as a score for the triple. Finally, this score is used to predict whether the triple (s, r, o) is valid or not.
In summary, our main contributions from this paper are as follows:
INLINEFORM0 We propose an embedding model CapsE using the capsule network BIBREF16 for modeling relationship triples. To the best of our knowledge, our work is the first consideration of exploring the capsule network for knowledge graph completion and search personalization.
INLINEFORM0 We evaluate our CapsE for knowledge graph completion on two benchmark datasets WN18RR BIBREF17 and FB15k-237 BIBREF18 . CapsE obtains the best mean rank on WN18RR and the highest mean reciprocal rank and highest Hits@10 on FB15k-237.
INLINEFORM0 We restate the prospective strategy of expanding the triple embedding models to improve the ranking quality of the search personalization systems. We adapt our model to search personalization and evaluate on SEARCH17 BIBREF12 – a dataset of the web search query logs. Experimental results show that our CapsE achieves the new state-of-the-art results with significant improvements over strong baselines.
To that end, we introduce CapsE to explore a novel application of CapsNet on triple-based data for two problems: KG completion and search personalization.
|
What do the authors introduce to explore a novel application of CapsNet on triple-based data?
|
CapsE.
|
1911.03894
| true
| null |
To demonstrate the value of building a dedicated version of BERT for French, we first compare CamemBERT to the multilingual cased version of BERT (designated as mBERT). We then compare our models to UDify BIBREF36. UDify is a multitask and multilingual model based on mBERT that is near state-of-the-art on all UD languages including French for both POS tagging and dependency parsing.
To demonstrate the value of building a dedicated version of BERT for French, we first compare CamemBERT to the multilingual cased version of BERT (designated as mBERT).
|
Was CamemBERT compared against multilingual BERT on these tasks?
|
Yes.
|
null | false
| null |
Kubernetes defines a set of building blocks ("primitives") that collectively provide mechanisms that deploy, maintain, and scale applications based on CPU, memory or custom metrics. Kubernetes is loosely coupled and extensible to meet different workloads. The internal components as well as extensions and containers that run on Kubernetes rely on the Kubernetes API. The platform exerts its control over compute and storage resources by defining resources as Objects, which can then be managed as such.
Kubernetes follows the primary/replica architecture. The components of Kubernetes can be divided into those that manage an individual node and those that are part of the control plane.
Control plane
The Kubernetes master node handles the Kubernetes control plane of the cluster, managing its workload and directing communication across the system. The Kubernetes control plane consists of various components, each its own process, that can run both on a single master node or on multiple masters supporting high-availability clusters. The various components of the Kubernetes control plane are as follows:
etcd is a persistent, lightweight, distributed, key-value data store that CoreOS has developed. It reliably stores the configuration data of the cluster, representing the overall state of the cluster at any given point of time. etcd favors consistency over availability in the event of a network partition (see CAP theorem). The consistency is crucial for correctly scheduling and operating services.
The API server serves the Kubernetes API using JSON over HTTP, which provides both the internal and external interface to Kubernetes. The API server processes and validates REST requests and updates the state of the API objects in etcd, thereby allowing clients to configure workloads and containers across worker nodes. The API server uses etcd's watch API to monitor the cluster, roll out critical configuration changes, or restore any divergences of the state of the cluster back to what the deployer declared. As an example, the deployer may specify that three instances of a particular "pod" (see below) need to be running. etcd stores this fact. If the Deployment Controller finds that only two instances are running (conflicting with the etcd declaration), it schedules the creation of an additional instance of that pod.
The scheduler is the extensible component that selects on which node an unscheduled pod (the basic entity managed by the scheduler) runs, based on resource availability. The scheduler tracks resource use on each node to ensure that workload is not scheduled in excess of available resources. For this purpose, the scheduler must know the resource requirements, resource availability, and other user-provided constraints or policy directives such as quality-of-service, affinity vs. anti-affinity requirements, and data locality. The scheduler's role is to match resource "supply" to workload "demand".
A controller is a reconciliation loop that drives the actual cluster state toward the desired state, communicating with the API server to create, update, and delete the resources it manages (e.g., pods or service endpoints). One kind of controller is a Replication Controller, which handles replication and scaling by running a specified number of copies of a pod across the cluster. It also handles creating replacement pods if the underlying node fails. Other controllers that are part of the core Kubernetes system include a DaemonSet Controller for running exactly one pod on every machine (or some subset of machines), and a Job Controller for running pods that run to completion (e.g., as part of a batch job). Label selectors that are part of the controller's definition specify the set of pods that a controller manages.
The controller manager is a process that manages a set of core Kubernetes controllers.
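The controller behaviour described above follows a simple reconciliation pattern: repeatedly compare declared state with observed state and schedule whatever actions close the gap. The toy sketch below only illustrates the idea; real controllers watch the API server and act on cluster objects, not in-memory dictionaries:

```python
import time

desired = {"web": 3, "worker": 2}      # declared replica counts (as would be stored in etcd)
observed = {"web": 2, "worker": 4}     # replicas actually running

def reconcile_once(desired, observed):
    for name, want in desired.items():
        have = observed.get(name, 0)
        if have < want:
            print(f"{name}: creating {want - have} pod(s)")
            observed[name] = want
        elif have > want:
            print(f"{name}: deleting {have - want} pod(s)")
            observed[name] = want

while desired != observed:
    reconcile_once(desired, observed)
    time.sleep(0.1)  # real controllers re-list/watch rather than poll like this
print("cluster converged to desired state")
```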
|
Summarize the main components of a Kubernetes cluster control plane from this paragraph
|
The Kubernetes cluster control plane runs on one or more master nodes. The components are:
1. etcd - a key value datastore containing the configuration data of the cluster.
2. The API server is a server which serves and updates the state of the Kubernetes cluster. It watches etcd and keeps the cluster's actual state in sync with its desired state.
3. The scheduler is a component which selects on which node an unscheduled pod should run, based on what each pod needs and what the nodes in the cluster can support.
4. The controller manager manages several controllers which keep the cluster in sync with its desired state.
|
null | false
| null |
Agriculture encompasses crop and livestock production, aquaculture, fisheries and forestry for food and non-food products. Agriculture was the key development in the rise of sedentary human civilization, whereby farming of domesticated species created food surpluses that enabled people to live in cities. While humans started gathering grains at least 105,000 years ago, nascent farmers only began planting them around 11,500 years ago. Sheep, goats, pigs and cattle were domesticated around 10,000 years ago. Plants were independently cultivated in at least 11 regions of the world. In the twentieth century, industrial agriculture based on large-scale monocultures came to dominate agricultural output.
|
Based on the information below, when might people have started baking bread?
|
Bread typically requires grains as an ingredient, so it’s feasible that humans could make bread when they started gathering grains 105,000 years ago. However, it may be more likely that they began baking bread around the time they began intentionally planting grains 11,500 years ago.
|
null | false
| 333
|
Distributed representations - dense real-valued vectors that encode the semantics of linguistic units - are ubiquitous in today's NLP research. For single-words or word-like entities, there are established ways to acquire such representations from naturally occurring (unlabelled) training data based on comparatively task-agnostic objectives (such as predicting adjacent words). These methods are well understood empirically BIBREF0 and theoretically BIBREF1 . The best word representation spaces reflect consistently-observed aspects of human conceptual organisation BIBREF2 , and can be added as features to improve the performance of numerous language processing systems BIBREF3 .
By contrast, there is comparatively little consensus on the best ways to learn distributed representations of phrases or sentences. With the advent of deeper language processing techniques, it is relatively common for models to represent phrases or sentences as continuous-valued vectors. Examples include machine translation BIBREF8 , image captioning BIBREF9 and dialogue systems BIBREF10 . While it has been observed informally that the internal sentence representations of such models can reflect semantic intuitions BIBREF11 , it is not known which architectures or objectives yield the `best' or most useful representations. Resolving this question could ultimately have a significant impact on language processing systems. Indeed, it is phrases and sentences, rather than individual words, that encode the human-like general world knowledge (or `common sense') BIBREF12 that is a critical missing part of most current language understanding systems.
We address this issue with a systematic comparison of cutting-edge methods for learning distributed representations of sentences. We constrain our comparison to methods that do not require labelled data gathered for the purpose of training models, since such methods are more cost-effective and applicable across languages and domains. We also propose two new phrase or sentence representation learning objectives - Sequential Denoising Autoencoders (SDAEs) and FastSent, a sentence-level log-linear bag-of-words model. We compare all methods on two types of task - supervised and unsupervised evaluations - reflecting different ways in which representations are ultimately to be used. In the former setting, a classifier or regression model is applied to representations and trained with task-specific labelled data, while in the latter, representation spaces are directly queried using cosine distance.
We observe notable differences in approaches depending on the nature of the evaluation metric. In particular, deeper or more complex models (which require greater time and resources to train) generally perform best in the supervised setting, whereas shallow log-linear models work best on unsupervised benchmarks. Specifically, SkipThought Vectors BIBREF13 perform best on the majority of supervised evaluations, but SDAEs are the top performer on paraphrase identification. In contrast, on the (unsupervised) SICK sentence relatedness benchmark, FastSent, a simple, log-linear variant of the SkipThought objective, performs better than all other models. Interestingly, the method that exhibits strongest performance across both supervised and unsupervised benchmarks is a bag-of-words model trained to compose word embeddings using dictionary definitions BIBREF14 . Taken together, these findings constitute valuable guidelines for the application of phrasal or sentential representation-learning to language understanding systems.
We observe notable differences in approaches depending on the nature of the evaluation metric.
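As an illustration of the shallow, log-linear bag-of-words style of model and the unsupervised (cosine-distance) querying discussed above, here is a minimal averaged bag-of-words sketch; the tiny random embedding table is a stand-in for trained word vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=50) for w in
         "a the dog cat sat on mat chased ball".split()}

def sentence_vector(sentence):
    vecs = [vocab[w] for w in sentence.split() if w in vocab]
    return np.mean(vecs, axis=0)          # average the word vectors

def cosine(x, y):
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

s1 = sentence_vector("the dog chased the ball")
s2 = sentence_vector("a dog chased a ball")
print(f"similarity = {cosine(s1, s2):.2f}")   # space is queried directly with cosine distance
```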
|
Do the notable differences in approaches depend on the nature of the evaluation metric?
|
Yes, they do.
|
null | false
| null |
In a 100 gram amount, smooth peanut butter supplies 597 Calories and is composed of 51% fat, 22% protein, 22% carbohydrates (including 5% dietary fiber), and 1% water (table). Both crunchy and smooth peanut butter are sources of saturated and monounsaturated fats (mainly oleic acid) as 25% of total serving amount, and polyunsaturated fat (12% of total), primarily as linoleic acid.
Peanut butter is a rich source (20% or more of the Daily Value, DV) of dietary fiber, vitamin E, pantothenic acid, folate, niacin, and vitamin B6 (table, USDA FoodData Central). Also high in content are the dietary minerals manganese, magnesium, phosphorus, zinc, copper, and sodium (added as salt during manufacturing). Peanut butter is a moderate source (10–19% DV) of thiamin, riboflavin, iron, and potassium (table).
|
Given this text about the nutrition profile of peanut butter, what vitamins is peanut butter rich in?
|
Vitamins that peanut butter is rich in include vitamin E, pantothenic acid, folate, niacin, and vitamin B6.
|
1809.09795
| false
| null |
The usage of a purely character-based input would allow us to directly recover and model these features. Consequently, our architecture is based on Embeddings from Language Model or ELMo BIBREF10. The ELMo layer allows us to recover a rich 1,024-dimensional dense vector for each word. Using CNNs, each vector is built upon the characters that compose the underlying words. As ELMo also contains a deep bi-directional LSTM on top of these character-derived vectors, each word-level embedding contains contextual information from its surroundings. Concretely, we use a pre-trained ELMo model, obtained using the 1 Billion Word Benchmark, which contains about 800M tokens of news crawl data from WMT 2011 BIBREF24.
Subsequently, the contextualized embeddings are passed on to a BiLSTM with 2,048 hidden units. We aggregate the LSTM hidden states using max-pooling, which in our preliminary experiments offered us better results, and feed the resulting vector to a 2-layer feed-forward network, where each layer has 512 units. The output of this is then fed to the final layer of the model, which performs the binary classification.
Consequently, our architecture is based on Embeddings from Language Model or ELMo BIBREF10
Concretely, we use a pre-trained ELMo model, obtained using the 1 Billion Word Benchmark which contains about 800M tokens of news crawl data from WMT 2011 BIBREF24 .
Subsequently, the contextualized embeddings are passed on to a BiLSTM with 2,048 hidden units. We aggregate the LSTM hidden states using max-pooling, which in our preliminary experiments offered us better results, and feed the resulting vector to a 2-layer feed-forward network, where each layer has 512 units. The output of this is then fed to the final layer of the model, which performs the binary classification.
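A PyTorch sketch of the classifier described above, assuming pre-computed 1,024-dimensional ELMo word vectors as input; whether "2,048 hidden units" means per direction or in total is an assumption here (taken as per direction):

```python
import torch
import torch.nn as nn

class ElmoBiLSTMClassifier(nn.Module):
    def __init__(self, elmo_dim=1024, lstm_hidden=2048, ff_hidden=512):
        super().__init__()
        self.encoder = nn.LSTM(elmo_dim, lstm_hidden, batch_first=True, bidirectional=True)
        self.ffn = nn.Sequential(
            nn.Linear(2 * lstm_hidden, ff_hidden), nn.ReLU(),
            nn.Linear(ff_hidden, ff_hidden), nn.ReLU(),
        )
        self.out = nn.Linear(ff_hidden, 2)            # final layer: binary classification

    def forward(self, elmo_embeddings):               # (batch, seq_len, 1024)
        states, _ = self.encoder(elmo_embeddings)     # (batch, seq_len, 2 * lstm_hidden)
        pooled, _ = states.max(dim=1)                 # max-pooling over time
        return self.out(self.ffn(pooled))             # (batch, 2) logits

model = ElmoBiLSTMClassifier()
logits = model(torch.randn(4, 30, 1024))  # 4 fake sentences of 30 tokens each
print(logits.shape)                        # torch.Size([4, 2])
```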
|
What type of model are the ELMo representations used in?
|
A bi-LSTM with max-pooling on top of it
|
null | false
| 343
|
The use of categorical attributes (e.g., user, topic, aspects) in the sentiment analysis community BIBREF0, BIBREF1, BIBREF2 is widespread. Prior to the deep learning era, these information were used as effective categorical features BIBREF3, BIBREF4, BIBREF5, BIBREF6 for the machine learning model. Recent work has used them to improve the overall performance BIBREF7, BIBREF8, interpretability BIBREF9, BIBREF10, and personalization BIBREF11 of neural network models in different tasks such as sentiment classification BIBREF12, review summarization BIBREF13, and text generation BIBREF8.
In particular, user and product information have been widely incorporated in sentiment classification models, especially since they are important metadata attributes found in review websites. BIBREF12 first showed significant accuracy increase of neural models when these information are used. Currently, the accepted standard method is to use them as additional biases when computing the weights $a$ in the attention mechanism, as introduced by BIBREF7 as:
where $u$ and $p$ are the user and product embeddings, and $h$ is a word encoding from BiLSTM. Since then, most of the subsequent work attempted to improve the model by extending the model architecture to be able to utilize external features BIBREF14, handle cold-start entities BIBREF9, and represent user and product separately BIBREF15.
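Since the equation itself is not reproduced above, the sketch below illustrates one common additive form of this bias-based injection; the exact formulation is an assumption for illustration and the dimensions are arbitrary:

```python
import torch

d_h, d_a, seq_len = 64, 32, 10
h = torch.randn(seq_len, d_h)          # BiLSTM word encodings
u = torch.randn(d_a)                   # user embedding
p = torch.randn(d_a)                   # product embedding

W_h = torch.randn(d_h, d_h)
W_u = torch.randn(d_h, d_a)
W_p = torch.randn(d_h, d_a)
v = torch.randn(d_h)

# User and product enter only as additive biases on the attention scores.
scores = torch.tanh(h @ W_h.T + u @ W_u.T + p @ W_p.T) @ v   # (seq_len,)
a = torch.softmax(scores, dim=0)                              # attention weights
sentence_vec = a @ h                                          # weighted sum of word encodings
print(a.shape, sentence_vec.shape)
```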
Intuitively, however, this method is not the ideal method to represent and inject attributes because of two reasons. First, representing attributes as additional biases cannot model the relationship between the text and attributes. Rather, it only adds a user- and product-specific biases that are independent from the text when calculating the attention weights. Second, injecting the attributes in the attention mechanism means that user and product information are only used to customize how the model choose which words to focus on, as also shown empirically in previous work BIBREF7, BIBREF15. However, we argue that there are more intuitive locations to inject the attributes such as when contextualizing words to modify their sentiment intensity.
We propose to represent user and product information as weight matrices (i.e., $W$ in the equation above). Directly incorporating these attributes into $W$ leads to large increase in parameters and subsequently makes the model difficult to optimize. To mitigate these problems, we introduce chunk-wise importance weight matrices, which (1) uses a weight matrix smaller than $W$ by a chunk size factor, and (2) transforms these matrix into gates such that it corresponds to the relative importance of each neuron in $W$. We investigate the use of this method when injected to several locations in the base model: word embeddings, BiLSTM encoder, attention mechanism, and logistic classifier.
The results of our experiments can be summarized in three statements. First, our preliminary experiments show that doing bias-based attribute representation and attention-based injection is not an effective method to incorporate user and product information in sentiment classification models. Second, despite using only a simple BiLSTM with attention classifier, we significantly outperform previous state-of-the-art models that use more complicated architectures (e.g., models that use hierarchical models, external memory networks, etc.). Finally, we show that these attribute representations transfer well to other tasks such as product category classification and review headline generation.
Thus, we use a simple classifier as our base model, which consists of four parts explained briefly as follows.
|
How many parts does the base model consist of?
|
Four.
|
null | false
| 322
|
NLG is the process of automatically generating coherent NL text from non-linguistic data BIBREF0. Recently, the field has seen an increased interest in the development of NLG systems focusing on verbalizing resources from SW data BIBREF1. The SW aims to make information available on the Web easier to process for machines and humans. However, the languages underlying this vision, i.e., RDF, SPARQL and OWL, are rather difficult to understand for non-expert users. For example, while the meaning of the OWL class expression Class: Professor SubClassOf: worksAt SOME University is obvious to every SW expert, this expression (“Every professor works at a university”) is rather difficult to fathom for lay persons.
Previous works such as SPARQL2NL BIBREF2 and SPARTIQULATION BIBREF3 have already shown the usefulness of verbalizing SPARQL and RDF in areas such as question answering BIBREF4 and the explanation of the output of systems based on SW technologies BIBREF5. However, other SW languages, such as OWL, are rarely investigated.
In this paper, we present an open-source holistic NLG framework for the SW, named LD2NL, which facilitates the verbalization of the three key languages of the SW, i.e., RDF, OWL, and SPARQL, into NL. Our framework is based on a bottom-up paradigm for verbalizing SW data. Additionally, LD2NL builds upon SPARQL2NL, as it is open-source and the paradigm it follows can be reused and ported to RDF and OWL. Thus, LD2NL is capable of generating either a single sentence or a summary of a given resource, rule, or query. To validate our framework, we evaluated LD2NL with 66 experts in NLP and SW as well as 20 non-experts who were lay users or non-users of SW. The results suggest that LD2NL generates texts which can be easily understood by humans. The version of LD2NL used in this paper and all experimental results will be made publicly available.
Natural Language Generation (NLG) is the process of automatically generating coherent Natural Language (NL) text from non-linguistic data (Reiter and Dale, 2000a).
|
What is the Natural Language Generation (NLG)?
|
Natural Language Generation (NLG) is the process of automatically generating coherent Natural Language (NL) text from non-linguistic data.
|
null | false
| null |
Yukigassen (雪合戦) is a snowball-fighting competition from Japan. Today there are annual tournaments in Sōbetsu, Hokkaidō in Japan, Kemijärvi in Finland, Vardø in Norway, Murmansk in Russia, Mount Buller, Victoria in Australia, Luleå in Sweden, Anchorage in Alaska, Aparan in Armenia, Jasper, Alberta and Saskatoon, Saskatchewan in Canada.
|
Where can I find Yukigassen tournaments?
|
Sōbetsu, Hokkaidō, Japan
Kemijärvi, Finland
Vardø, Norway
Murmansk, Russia
Mount Buller, Victoria, Australia
Luleå, Sweden
Anchorage, Alaska
Aparan, Armenia
Jasper, Alberta, Canada
Saskatoon, Saskatchewan, Canada
|
null | false
| null |
Give me a list of the best romantic comedies
|
-My Big Fat Greek Wedding
-How to Lose a Guy in 10 Days
-Clueless
-13 Going On 30
-She's The Man
-Just Go With It
-The Sweetest Thing
-Set It Up
|
|
null | false
| null |
Several factors of the GPU's construction enter into the performance of the card for real-time rendering. Common factors can include the size of the connector pathways in the semiconductor device fabrication, the clock signal frequency, and the number and size of various on-chip memory caches. Performance also depends on the number of Streaming Multiprocessors (SM) for Nvidia GPUs, or Compute Units (CU) for AMD GPUs, which describes the number of core on-silicon processor units within the GPU chip that perform the core calculations, typically working in parallel with other SM/CUs on the GPU. The performance of GPUs is typically measured in floating point operations per second, or FLOPS, with GPUs in the 2010s and 2020s typically delivering performance measured in teraflops (TFLOPS). This is an estimated performance measure, as other factors can impact the actual display rate.
|
Based on this passage, please extract a bulleted list of the factors of GPU's constructions that enter into the performance of the card for real-time rendering.
|
-The size of the connector pathways in the semiconductor device fabrication
-The clock signal frequency
-The number/size of various on-chip memory caches
-The number of Streaming Multiprocessors (SMs) for Nvidia GPUs or Compute Units (CUs) for AMD GPUs
|
null | false
| null |
What are the main football teams in the city of Madrid?
|
There are two world-class football/soccer teams in the city of Madrid, Spain: Real Madrid and Atletico de Madrid.
|
|
null | false
| null |
The third century AD showed some remarkable development for the empire as well as a potential decline and collapse. Emperor Caracalla's edict in 212 AD, known as the Antoninian constitution, delivered Roman citizenship to every free man in the empire, starting the decline of the importance of the city of Rome and indeed of Roman Italy for the Roman Empire. By the mid-third century AD, Pax Romana had come to an end, and several crises hit the empire simultaneously in a period known to history as the Crisis of the Third Century. This period saw the empire exposed to rampant inflation, natural disasters, secession and civil wars taking place over several decades. The secessions ended with emperor Aurelian's conquest of the secessionist Palmyrene Empire and Gallic Empire in 273 and 274 AD, as Aurelian unified a splintered empire which, according to many historians, should have been doomed. However, Aurelian was assassinated in 276 AD, sparking additional civil wars. The crisis came to an end only through the ascension of emperor Diocletian to the throne. The West suffered more heavily from these decades of crisis due to the persistent distinction between the established and more prosperous Hellenised East and the younger, colonized and Romanized West. This cultural division of the Roman Empire into an eastern Greek and western Latin part became increasingly important in later centuries, leading to a gradual estrangement of the two Roman worlds.
Emperor Diocletian created the administrative system known as the Dominate to guarantee security in all endangered regions of his empire. The Dominate was, in comparison with the Principate, a huge imperial bureaucracy, which laid the foundations for the power structure of the later Eastern Roman Empire. With this came a redistricting and reduction of Roman provinces. Diocletian scrapped any pretense of republican rule as the emperor moved away from formally being merely "Princeps" to being "Dominus" - Lord and master. Diocletian also formally finished the process of restructuring the empire, from being a colonial-styled empire ruled by Rome and Roman Italians in the first century AD to a larger imperial entity where the emperor's court was not bound to the city of Rome or Italy, now only one Roman imperial province among many. The norm of emperors being Roman Italians was first broken by emperor Trajan, who came from Hispania. By the time of the Severan dynasty, most emperors originated from outside of Italy. When Diocletian introduced his reforms, the Roman Senate formally lost its already declining imperial influence and became a de facto regional decision-making body.
An early instance of the partition of the empire into East and West occurred in 286, when Emperor Diocletian appointed Maximian as augustus of the West. In 293 AD, the empire went through a devolution under Diocletian known as the Tetrarchy. The empire was split into four, with the two most prominent parts of the empire each ruled by one emperor (Augustus). Each co-emperor then appointed a young colleague as caesar to be a subordinate emperor with control over his own territory. The Caesar shared power with and would eventually succeed the senior partner. Each tetrarch was in charge of a part of the empire, with the divisions based on geographic regions. This devolution outlines the coming split 100 years later when the empire was permanently divided into an eastern and western empire. The tetrarchy however only lasted 20 years, as emperors quickly began fighting each other for power. The whole empire was eventually reunited by Constantine the Great in 324 after he ended the last of these civil wars.
|
Given the referenced text, what Roman Emperor gave Roman Citizenship to all free men?
|
Emperor Caracalla gave Roman Citizenship to all free men in the Roman Empire.
|
null | false
| 14
|
Suppose a user wants to write a sentence “I will be 10 minutes late.” Ideally, she would type just a few keywords such as “10 minutes late” and an autocomplete system would be able to infer the intended sentence (Figure FIGREF1). Existing left-to-right autocomplete systems BIBREF0, BIBREF1 can often be inefficient, as the prefix of a sentence (e.g. “I will be”) fails to capture the core meaning of the sentence. Besides the practical goal of building a better autocomplete system, we are interested in exploring the tradeoffs inherent to such communication schemes between the efficiency of typing keywords, accuracy of reconstruction, and interpretability of keywords.
One approach to learn such schemes is to collect a supervised dataset of keywords-sentence pairs as a training set, but (i) it would be expensive to collect such data from users, and (ii) a static dataset would not capture a real user's natural predilection to adapt to the system BIBREF2. Another approach is to avoid supervision and jointly learn a user-system communication scheme to directly optimize the combination of efficiency and accuracy. However, learning in this way can lead to communication schemes that are uninterpretable to humans BIBREF3, BIBREF4 (see Appendix for additional related work).
In this work, we propose a simple, unsupervised approach to an autocomplete system that is efficient, accurate, and interpretable. For interpretability, we restrict keywords to be subsequences of their source sentences based on the intuition that humans can infer most of the original meaning from a few keywords. We then apply multi-objective optimization approaches to directly control and achieve desirable tradeoffs between efficiency and accuracy.
We observe that naively optimizing a linear combination of efficiency and accuracy terms is unstable and leads to suboptimal schemes. Thus, we propose a new objective which optimizes for communication efficiency under an accuracy constraint. We show this new objective is more stable and efficient than the linear objective at all accuracy levels.
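To make the contrast concrete, here is a toy sketch of the two kinds of objectives (our own illustration with made-up function names; the paper's exact formulation may differ):

```python
# Illustrative sketch: two ways to trade off keyword cost against reconstruction loss.
def linear_objective(cost, loss, lam):
    # Naive weighted combination: a single scalar lam controls the whole trade-off.
    return cost + lam * loss

def constrained_objective(cost, loss, target_loss, penalty=10.0):
    # Optimize efficiency (cost) directly, paying a penalty only when the
    # reconstruction loss exceeds the desired accuracy level.
    return cost + penalty * max(0.0, loss - target_loss)

print(linear_objective(cost=3.0, loss=0.4, lam=2.0))               # 3.8
print(constrained_objective(cost=3.0, loss=0.4, target_loss=0.5))  # 3.0: constraint satisfied
```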
As a proof-of-concept, we build an autocomplete system within this framework which allows a user to write sentences by specifying keywords. We empirically show that our framework produces communication schemes that are 52.16% more accurate than rule-based baselines when specifying 77.37% of sentences, and 11.73% more accurate than a naive, weighted optimization approach when specifying 53.38% of sentences. Finally, we demonstrate that humans can easily adapt to the keyword-based autocomplete system and save nearly 50% of time compared to typing a full sentence in our user study.
We empirically show that our framework produces communication schemes that are 52.16% more accurate than rule-based baselines when specifying 77.37% of sentences, and 11.73% more accurate than a naive, weighted optimization approach when specifying 53.38% of sentences.
|
How much better does the model behave than baselines?
|
It is 52.16% more accurate than rule-based baselines when specifying 77.37% of sentences.
|
1909.00154
| false
| null |
The Swissmetro dataset consists of survey data collected on the trains between St. Gallen and Geneva, Switzerland, during March 1998. According to its description BIBREF0, the respondents provided information in order to analyze the impact of the modal innovation in transportation, represented by the Swissmetro, a revolutionary mag-lev underground system, against the usual transport modes represented by car and train. After discarding respondents for which some variables were not available (e.g. age, purpose), a total of 10469 responses from 1188 individuals were used for the experiments.
The Swissmetro dataset consists of survey data collected on the trains between St. Gallen and Geneva, Switzerland, during March 1998.
|
What datasets are used for evaluation?
|
The answers are shown as follows:
* Swissmetro dataset
|
null | false
| null |
What's the easiest way to make friends when traveling alone?
|
Traveling alone to anywhere in the world can be very intimidating and can also feel lonely at times, but the great news is that it's extremely easy to make friends abroad! A couple of ways to make friends are:
1. Visit your neighborhood bar and sit at the bar. More than likely there will be other folks going alone and you'll be able to chat with them. You can strike a conversation by easily asking them about their order or if they're from the area.
2. Join travel groups on Facebook, Instagram and TikTok. With social media, it's much easier to find people who are traveling in the same location as you are and willing to meet up. A great group is The Travel Squad. You can join their Facebook community with thousands of solo travelers
3. Do walking tours! There are a ton of free walking tours that you can do so make sure to sign up.
4. Excursions are another great way to meet others. Don't be afraid to ask people where they are from or what got them interested in doing that excursion.
5. Hostels. If you don't mind the noise levels, hostels are one of the top ways to meet fellow friends while solo traveling. They always host great music, events and have a common dining room area where people tend to hang out.
Traveling alone can feel scary but remember that all it takes is getting out of your comfort zone and striking up a conversation to make long lasting friendships!
|
|
null | false
| null |
Adobe Inc., originally called Adobe Systems Incorporated, is an American multinational computer software company incorporated in Delaware and headquartered in San Jose, California. It has historically specialized in software for the creation and publication of a wide range of content, including graphics, photography, illustration, animation, multimedia/video, motion pictures, and print. Its flagship products include Adobe Photoshop image editing software; Adobe Illustrator vector-based illustration software; Adobe Acrobat Reader and the Portable Document Format (PDF); and a host of tools primarily for audio-visual content creation, editing and publishing. Adobe offered a bundled solution of its products named Adobe Creative Suite, which evolved into a subscription software as a service (SaaS) offering named Adobe Creative Cloud. The company also expanded into digital marketing software and in 2021 was considered one of the top global leaders in Customer Experience Management (CXM).
Adobe was founded in December 1982 by John Warnock and Charles Geschke, who established the company after leaving Xerox PARC to develop and sell the PostScript page description language. In 1985, Apple Computer licensed PostScript for use in its LaserWriter printers, which helped spark the desktop publishing revolution. Adobe later developed animation and multimedia through its acquisition of Macromedia, from which it acquired Adobe Flash; video editing and compositing software with Adobe Premiere, later known as Adobe Premiere Pro; low-code web development with Adobe Muse; and a suite of software for digital marketing management.
As of 2022, Adobe has more than 26,000 employees worldwide. Adobe also has major development operations in the United States in Newton, New York City, Arden Hills, Lehi, Seattle, Austin and San Francisco. It also has major development operations in Noida and Bangalore in India.
|
From the passage provided, extract the list of software products sold by Adobe in the order they appear in the passage.
|
- Adobe Photoshop
- Adobe Illustrator
- Adobe Acrobat Reader
- Adobe Creative Suite
- Adobe Flash
- Adobe Premiere Pro
- Adobe Muse
|
null | false
| null |
Flesh from these animals is often commercially sold in restaurants as "langostino" or sometimes dishonestly called "lobster" when incorporated in seafood dishes. As well as being used for human consumption, there is demand for squat lobster meat to be used as feed in fish farms and shrimp or prawn farms. This is in part because they contain astaxanthin, a pigment that helps to colour the meat of farmed salmon and trout.
Despite their worldwide distribution and great abundance, there are few functioning fisheries for squat lobsters. Experimental fisheries have occurred in several countries, including Argentina, Mexico, and New Zealand, but commercial exploitation is currently restricted to Latin America, and chiefly to Chile. The main target species are Pleuroncodes monodon, P. planipes, and Cervimunida johni.
In Central America, the primary species of squat lobster targeted by fisheries is a species of Pleuroncodes. There is a great deal of confusion over both scientific names and common names, and the exact species is often unknown. In El Salvador, for instance, the commercial catch is generally referred to as "P. planipes", but is in fact P. monodon. Commercial fishing for squat lobsters in El Salvador began in the early 1980s; production increased markedly in the 2001 season, and has continued to grow, now making up 98% of the demersal resources landed in El Salvador, with annual catches peaking at 13,708 t in 2005. In Costa Rica, aggregations of squat lobsters are avoided, as the fishermen fear the squat lobsters will clog their nets. In Nicaragua, squat lobsters are heavily exploited, especially following a large increase in fishing effort in the 2007 season. In Panama, production reached 492 t in 2008. Chilean squat lobster fisheries initially targeted Cervimunida johni, beginning in 1953. By the mid-1960s, effort had largely switched to P. monodon. In an effort to conserve stocks, the Chilean government instituted quotas for squat lobsters, and the fishery is closely monitored. In New Zealand, Munida gregaria has been considered as a potential fisheries resource, particularly to feed farmed Chinook salmon (Oncorhynchus tshawytscha).
|
Based on the following paragraph on squat lobster fisheries, where do such fisheries exist today?
|
Commercial fisheries for squat lobsters currently exist only in Latin America, chiefly in Chile. There are, however, a number of experimental fisheries in countries such as Argentina, Mexico, and New Zealand.
|
null | false
| 353
|
Automated coding of political event data, or the record of who-did-what-to-whom within the context of political actions, has existed for roughly two decades. The approach has remained largely the same during this time, with the underlying coding procedures not updating to reflect changes in natural language processing (NLP) technology. These NLP technologies have now advanced to such a level, and with accompanying open-source software implementations, that their inclusion in the event-data coding process comes as an obvious advancement. When combined with changes in how news content is obtained, the ability to store and process large amounts of text, and enhancements based on two decades worth of event-data experience, it becomes clear that political event data is ready for a next generation dataset.
In this chapter, I provide the technical details for creating such a next-generation dataset. The technical details lead to a pipeline for the production of the Phoenix event dataset. The Phoenix dataset is a daily updated, near-real-time political event dataset. The coding process makes use of open-source NLP software, an abundance of online news content, and other technical advances made possible by open-source software. This enables a dataset that is transparent and replicable, while providing a more accurate coding process than previously possible. Additionally, the dataset's near-real-time nature also enables many applications that were previously impossible with batch-updated datasets, such as monitoring of ongoing events. Thus, this dataset provides a significant improvement over previous event data generation efforts.
In the following sections I briefly outline the history of computer-generated political event data to this point in history. I then outline what the “next generation” of event data should look like. Following this, I discuss the many facets of creating a real-time political event dataset, mainly from a technological and infrastructure standpoint. Finally, the paper concludes with a brief empirical view of the Phoenix event dataset, which is the output of the previously-discussed technological pipeline.
The Phoenix dataset is a daily updated, near-real-time political event dataset. The coding process makes use of open-source NLP software, an abundance of online news content, and other technical advances made possible by open-source software.
|
In his dataset coding process, what techniques does he use?
|
Open-source NLP software, an abundance of online news content, and other technical advances made possible by open-source software.
|
null | false
| null |
What is MapReduce?
|
MapReduce is a programming model and an associated implementation for processing and generating big data sets with a parallel, distributed algorithm on a cluster. A MapReduce program is composed of a map procedure, which performs filtering and sorting, and a reduce method, which performs a summary operation.
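For a concrete picture, here is a toy word count written in the map/reduce style (plain Python for illustration; not tied to Hadoop or any particular framework):

```python
# Toy MapReduce-style word count: the map step emits (key, value) pairs,
# the reduce step groups by key and sums the values.
from collections import defaultdict

def map_phase(document):
    for word in document.split():
        yield word.lower(), 1

def reduce_phase(pairs):
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

docs = ["the quick brown fox", "the lazy dog"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
print(reduce_phase(pairs))   # {'the': 2, 'quick': 1, 'brown': 1, ...}
```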
|
|
null | false
| null |
Imagine you are a grandparent who wants a grandchild. Write a persuasive argument to your children.
|
As a grandparent, one of my greatest joys has been raising my children. I want you to experience the same in your life. While there is hard work involved, seeing your children grow up is worth it all. I want you to be able to grow old with those memories and be surrounded by family who can spend time with you and help take care of you. I hope you consider having children.
|
|
null | false
| null |
What are some of the ways society can be improved?
|
Each country has its own customs, political systems, economic systems and beliefs. But some of the common ways we can improve society are to give the population of a country clean water, clean air, nutritious food, basic education, health care and security. Additionally, creating an economic system where individuals can work and contribute to their society's cultural, economic, and social growth may also improve society overall.
|
|
null | false
| null |
What is Delta Lake?
|
Delta Lake is an open source storage layer that brings reliability to data lakes. Delta Lake provides ACID transactions, scalable metadata handling, and unifies streaming and batch data processing. Delta Lake runs on top of your existing data lake and is fully compatible with Apache Spark APIs.
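As a small illustrative sketch of the Spark compatibility (this assumes a Spark session with the delta-spark package configured; the path and sample data are made up):

```python
# Minimal sketch: write a DataFrame as a Delta table and read it back with Spark.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-demo").getOrCreate()
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])

df.write.format("delta").mode("overwrite").save("/tmp/delta/demo")
spark.read.format("delta").load("/tmp/delta/demo").show()
```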
|
|
null | false
| 102
|
Rendering natural language descriptions from structured data is required in a wide variety of commercial applications such as generating descriptions of products, hotels, furniture, etc., from a corresponding table of facts about the entity. Such a table typically contains {field, value} pairs where the field is a property of the entity (e.g., color) and the value is a set of possible assignments to this property (e.g., color = red). Another example of this is the recently introduced task of generating one line biography descriptions from a given Wikipedia infobox BIBREF0 . The Wikipedia infobox serves as a table of facts about a person and the first sentence from the corresponding article serves as a one line description of the person. Figure FIGREF2 illustrates an example input infobox which contains fields such as Born, Residence, Nationality, Fields, Institutions and Alma Mater. Each field further contains some words (e.g., particle physics, many-body theory, etc.). The corresponding description is coherent with the information contained in the infobox.
Note that the number of fields in the infobox and the ordering of the fields within the infobox varies from person to person. Given the large size (700K examples) and heterogeneous nature of the dataset which contains biographies of people from different backgrounds (sports, politics, arts, etc.), it is hard to come up with simple rule-based templates for generating natural language descriptions from infoboxes, thereby making a case for data-driven models. Based on the recent success of data-driven neural models for various other NLG tasks BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , one simple choice is to treat the infobox as a sequence of {field, value} pairs and use a standard seq2seq model for this task. However, such a model is too generic and does not exploit the specific characteristics of this task as explained below. First, note that while generating such descriptions from structured data, a human keeps track of information at two levels. Specifically, at a macro level, she would first decide which field to mention next and then at a micro level decide which of the values in the field needs to be mentioned next. For example, she first decides that at the current step, the field occupation needs attention and then decides which is the next appropriate occupation to attend to from the set of occupations (actor, director, producer, etc.). To enable this, we use a bifocal attention mechanism which computes an attention over fields at a macro level and over values at a micro level. We then fuse these attention weights such that the attention weight for a field also influences the attention over the values within it. Finally, we feed a fused context vector to the decoder which contains both field level and word level information. Note that such two-level attention mechanisms BIBREF6 , BIBREF7 , BIBREF8 have been used in the context of unstructured data (as opposed to structured data in our case), where at a macro level one needs to pay attention to sentences and at a micro level to words in the sentences.
Next, we observe that while rendering the output, once the model pays attention to a field (say, occupation) it needs to stay on this field for a few timesteps (till all the occupations are produced in the output). We refer to this as the stay on behavior. Further, we note that once the tokens of a field are referred to, they are usually not referred to later. For example, once all the occupations have been listed in the output we will never visit the occupation field again because there is nothing left to say about it. We refer to this as the never look back behavior. To model the stay on behavior, we introduce a forget (or remember) gate which acts as a signal to decide when to forget the current field (or equivalently, to decide till when to remember the current field). To model the never look back behavior, we introduce a gated orthogonalization mechanism which ensures that once a field is forgotten, subsequent field context vectors fed to the decoder are orthogonal to (or different from) the previous field context vectors.
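One way to picture the orthogonalization step is sketched below (a rough illustration under our own assumptions, not the authors' implementation): the component of the current field context that lies along a previously used field context is subtracted out.

```python
# Illustrative sketch: make the current field context orthogonal to a past context.
import torch
import torch.nn.functional as F

def orthogonalize(current_ctx, past_ctx, eps=1e-8):
    # Project current_ctx onto the direction of past_ctx and remove that component.
    past_dir = F.normalize(past_ctx, dim=-1, eps=eps)
    projection = (current_ctx * past_dir).sum(dim=-1, keepdim=True) * past_dir
    return current_ctx - projection

current_ctx = torch.randn(1, 16)   # hypothetical field context at this timestep
past_ctx = torch.randn(1, 16)      # context of a field that was already covered
new_ctx = orthogonalize(current_ctx, past_ctx)
print((new_ctx * past_ctx).sum())  # ~0: no remaining component along the old field context
```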
We experiment with the WikiBio dataset BIBREF0, which contains around 700K {infobox, description} pairs and has a vocabulary of around 400K words. We show that the proposed model gives a relative improvement of 21% and 20% as compared to current state-of-the-art models BIBREF0, BIBREF9 on this dataset. The proposed model also gives a relative improvement of 10% as compared to the basic seq2seq model. Further, we introduce new datasets for French and German along the same lines as the English WikiBio dataset. Even on these two datasets, our model outperforms the state-of-the-art methods mentioned above.
We show that the proposed model gives a relative improvement of 21% and 20% as compared to current state of the art models (Lebret et al., 2016; Mei et al., 2016) on this dataset.
|
By how much does their best model outperform the state-of-the-art?
|
They show that the proposed model gives a relative improvement of 21% and 20% as compared to current state of the art models (Lebret et al., 2016; Mei et al., 2016) on this dataset.
|
null | false
| null |
Categorize the below names into fishes and insects.
Hilsa, Catla, Maggot, Mite
|
Insects - Maggot, Mite
Fish - Hilsa, Catla
|
|
null | false
| null |
What language is HP GL
|
HP-GL, short for Hewlett-Packard Graphics Language and often written as HPGL, is a printer control language created by Hewlett-Packard (HP). HP-GL was the primary printer control language used by HP plotters. It was introduced with the plotter HP-8972 in 1977 and became a standard for almost all plotters. Hewlett-Packard's printers also usually support HP-GL/2 in addition to PCL.
|
|
null | false
| null |
What Peloton instructor should I take a cycling class with for the following moods and why? Happy, Sad, Angry, Unmotivated
|
There are 48 Peloton instructors who teach a variety of classes including: strength, tread, cycling, yoga, boxing, rowing, running, stretching, meditation, cardio, pilates, barre, and bootcamp. They have both male and female instructors representing a variety of body types. The classes are high energy in nature and feature music from today's hits, genres, decades, and more. There are classes on the platform in the following languages: German, Spanish, and English. There is a workout set to a musical playlist for every type of athlete on the Peloton platform. They have four types of intensity levels for classes: beginner, intermediate, advanced, and all levels. As a rule of thumb, all instructors teach classes across all levels. The classes vary in length from 5 minutes to 60+ minutes long. The most popular class lengths are 10, 20, and 30 minute long classes. Many instructors on the Peloton platform have a large following on social media. Devout fans often delineate their fandom by displaying the instructor's fan hashtag within the Peloton app. An athlete looking to take a class when feeling unmotivated should take a class with the instructor Robin Arzon; she's a no-nonsense instructor who will motivate you from your core, and you'll often find Latino beats and hip hop music as the themes in her playlists, which translate to a high energy environment pulling even the most unmotivated person into a workout. An athlete looking to take a class when sad should take a class with Christine D'Ercole. Her reflection ride classes might bring out a cry if you're feeling sad, but she leaves space for you to feel all your feelings and work through them with soothing tunes and motion. An athlete looking to take a class when they are angry should take a class with Kendall Toole. Kendall's class will help you channel that anger into an intense workout where you will be able to get all of the anger into the movements. Plus, she has an emotions-themed class on anger. An athlete looking to take a class when happy should take a class with Callie Gullickson; her happy-go-lucky personality and goofy jokes will keep the good mood going. She's known for pop playlists with upbeat tunes. She loves Trader Joe's and often makes references to it in her workout classes.
|
|
null | false
| null |
Who founded Amazon?
|
Jeff Bezos
|
|
2001.07209
| false
| null |
To implement the first tier of our framework and detect moral relevance, we complement our morally relevant seed words with a corresponding set of seed words approximating moral irrelevance based on the notion of valence, i.e., the degree of pleasantness or unpleasantness of a stimulus. We refer to the emotional valence ratings collected by BIBREF28 for approximately 14,000 English words, and choose the words with most neutral valence rating that do not occur in the MFD as our set of morally irrelevant seed words, for an equal total number of morally relevant and morally irrelevant words.
To implement the first tier of our framework and detect moral relevance, we complement our morally relevant seed words with a corresponding set of seed words approximating moral irrelevance based on the notion of valence, i.e., the degree of pleasantness or unpleasantness of a stimulus. We refer to the emotional valence ratings collected by BIBREF28 for approximately 14,000 English words, and choose the words with most neutral valence rating that do not occur in the MFD as our set of morally irrelevant seed words, for an equal total number of morally relevant and morally irrelevant words.
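A minimal sketch of this seed-selection step (assuming valence ratings on a 1-9 scale with a neutral midpoint of 5; the function and variable names are illustrative, not taken from the paper):

```python
# Illustrative sketch: pick morally irrelevant seed words as the non-MFD words
# whose valence rating is closest to the neutral midpoint of the scale.
def neutral_seeds(valence_ratings, mfd_words, k, midpoint=5.0):
    candidates = [w for w in valence_ratings if w not in mfd_words]
    candidates.sort(key=lambda w: abs(valence_ratings[w] - midpoint))
    return candidates[:k]

ratings = {"table": 5.1, "joy": 8.2, "murder": 1.5, "paper": 5.0}
print(neutral_seeds(ratings, mfd_words={"murder"}, k=2))  # ['paper', 'table']
```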
|
How do they quantify moral relevance?
|
By complementing morally relevant seed words with a set of morally irrelevant seed words based on the notion of valence
|
null | false
| null |
Demond "Bob" Sanders (born February 24, 1981) is an American former professional football player who was a safety in the National Football League (NFL) for eight seasons. He played college football for the Iowa Hawkeyes and was drafted by the Indianapolis Colts in the second round of the 2004 NFL Draft. He was named NFL Defensive Player of the Year in 2007 and also played for the San Diego Chargers in his final season in 2011.
Sanders was nicknamed "The Sandman" because of his hard hits and tackles, and was also called "The Eraser" by former Colts' coach Tony Dungy because of his tendency to erase the mistakes of his teammates. He spent the majority of his career battling injuries, having played more than six games in a single season only twice, in 2005 and 2007.
|
Extract the teams that Bob Sanders played throughout his career, and put them in a comma-separated list.
|
Iowa Hawkeyes, Indianapolis Colts, San Diego Chargers
|
null | false
| null |
Touch typing (also called blind typing, or touch keyboarding) is a style of typing. Although the phrase refers to typing without using the sense of sight to find the keys—specifically, a touch typist will know their location on the keyboard through muscle memory—the term is often used to refer to a specific form of touch typing that involves placing the eight fingers in a horizontal row along the middle of the keyboard (the home row) and having them reach for specific other keys. (Under this usage, typists who do not look at the keyboard but do not use home row either are referred to as hybrid typists.) Both two-handed touch typing and one-handed touch typing are possible.
|
What does Touch Typing refer to?
|
Touch typing refers to typing without using the sense of sight to find the keys
|
null | false
| null |
Why am I allergic to cats?
|
Allergies to cats are primarily caused by a single protein which is found on the skin and in the saliva. There are at least seven other cat proteins that may cause cat allergies in humans. A combination of these allergy-producing proteins accumulates on the fur and skin of the cat during grooming.
|
|
null | false
| null |
Tardar Sauce (April 4, 2012 – May 14, 2019), nicknamed Grumpy Cat, was an American Internet celebrity cat. She was known for her permanently "grumpy" facial appearance, which was caused by an underbite and feline dwarfism. She came to prominence when a photograph of her was posted on September 22, 2012, on social news website Reddit by Bryan Bundesen, the brother of her owner Tabatha Bundesen. "Lolcats" and parodies created from the photograph by Reddit users became popular. She was the subject of a popular Internet meme in which humorously negative, cynical images were made from photographs of her.
|
Given this paragraph about Grumpy Cat, tell me the cat's real name.
|
Grumpy Cat was actually a nickname for the famous internet celebrity meme cat. Her real name was Tardar Sauce.
|
null | false
| null |
Give me a list of the five best selling consoles of all time with amount of units sold
|
- Playstation 2 - 155 million
- Nintendo DS - 154.02 million
- Nintendo Switch - 122.55 million
- Game Boy - 118.69 million
- Playstation 4 - 117.2 million
|
|
null | false
| 192
|
To estimate the quality of topic models in a real task, we chose the Islamic informational portal "Golos Islama" (Islam Voice), which is in Russian. This portal contains both news articles related to Islam and articles discussing the basics of Islam. We supposed that the thematic analysis of this specialized site could be significantly improved with domain-specific knowledge described in thesaurus form. We extracted the site contents using Open Web Spider and obtained 26,839 pages.
To combine knowledge with a topic model, we used RuThes thesaurus together with the additional block of the Islam thesaurus. The Islam thesaurus contains more than 5 thousand Islam-related terms including single words and expressions.
For each combined model, we ran two experiments with 100 topics and with 200 topics. The generated topics were evaluated by two linguists, who had previously worked on the Islam thesaurus. The evaluation task was formulated as follows: the experts should read the top elements of the generated topics and try to formulate labels of these topics. The labels should be different for each topic in the set generated with a specific model. The experts should also assign scores to the topics' labels:
Then we can sum up all the scores for each model under consideration and compare the total scores. Thus, the maximum values of the topic score are 200 for a 100-topic model and 400 for a 200-topic model. In this experiment we do not measure inter-annotator agreement for each topic, but try to get the experts' general impression.
Due to the complicated character of the Islam portal contents for automatic extraction (numerous words and names are difficult for Russian morphological analyzers), we did not use automatic extraction of multiword expressions and exploited only phrases described in RuThes or in the Islam Thesaurus. We added thesaurus phrases in two ways: the 1000 most frequent phrases (as in BIBREF14, BIBREF16) and phrases with frequency greater than 10 (More10phrases); the number of such phrases is 9351.
The results of the evaluation are shown in Table 5. The table contains the overall expert scores for a topic model (Score), kernel uniqueness as in the previous section (KernU), and perplexity (Prpl). Also, for each model's kernels, we calculated the average number of known relations between topics' elements: thesaurus relations (synonyms and direct relations between concepts) and component-based relations between phrases (Relc).
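As an illustration of a Relc-style measure (our own sketch of counting known relations among kernel elements; the exact computation in the paper may differ):

```python
# Illustrative sketch: average number of known relations among the top ("kernel")
# elements of each topic, given a boolean relation test.
def kernel_relation_score(kernels, related):
    # kernels: list of topic kernels (lists of terms); related(a, b) -> bool.
    per_topic = []
    for kernel in kernels:
        pairs = [(a, b) for i, a in enumerate(kernel) for b in kernel[i + 1:]]
        per_topic.append(sum(related(a, b) for a, b in pairs))
    return sum(per_topic) / len(per_topic)   # average count over topics

REL = {("mosque", "imam"), ("imam", "prayer")}           # toy thesaurus relations
related = lambda a, b: (a, b) in REL or (b, a) in REL
print(kernel_relation_score([["mosque", "imam", "prayer"], ["price", "mosque"]], related))
```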
It can be seen that if we add phrases without accounting for component similarity (Runs 2, 3), the quality of topics decreases: the more phrases are added, the more the quality degrades. The human scores also confirm this fact. But if the similarity between phrase components is considered, then the quality of topics significantly improves and becomes better than for unigram models (Runs 4, 5). All measures are better. Relational coherence between kernel elements also grows. The number of added phrases is not very important.
Adding unary synonyms decreases the quality of the models (Run 6) according to human scores. But all other measures behave differently: kernel uniqueness is high, perplexity decreases, relational coherence grows. The problem with this model is that non-topical, general words are grouped together and reinforce one another, but do not look related to any topic. Adding all thesaurus relations is not very beneficial (Runs 7, 8). If we consider all relations except hyponyms, the human scores are better for the corresponding runs (Runs 9, 10). Relational coherence in topics' kernels reaches very high values: a quarter of all elements have some relations between each other, but it does not help to improve topics. The explanation is the same: general words can be grouped together.
Finally, we removed General Lexicon concepts from the RuThes data, which are top-level, non-thematic concepts that can occur in arbitrary domains BIBREF19, and considered the all-relations and without-hyponyms variants (Runs 11, 12). These last variants achieved the highest human scores because they add thematic knowledge and avoid general knowledge, which can distort topics. Kernel uniqueness is also maximal.
Table 6 shows similar topics obtained with the unigram, phrase-enriched (Run 5) and the thesaurus-enriched topic model (Run 12). The Run-5 model adds thesaurus phrases with frequency more than 10 and accounts for the component similarity between phrases. The Run-12 model accounts both component relations and hypernym thesaurus relations. All topics are of high quality, quite understandable. The experts evaluated them with the same high scores.
Phrase-enriched and thesaurus-enriched topics convey the content using both single words and phrases. It can be seen that phrase-enriched topics contain more phrases. Sometimes the phrases can create not very convincing relations, such as Russian church - Russian language; such a relation is explainable but does not seem very topical in this case.
The thesaurus topics seem to convey the contents in the most concentrated way. In the Syrian topic, the general word country is absent; instead of UN (United Nations), it contains the word rebel, which is closer to the Syrian situation. In the Orthodox church topic, the unigram variant contains the extra word year, and the relations of the words Moscow and Kirill to other words in the topic can be inferred only from encyclopedic knowledge.
The problem of this model is in that non-topical, general words are grouped together, reinforce one another but do not look as related to any topic. Adding all thesaurus relations is not very beneficial (Runs 7, 8).
|
What is the problem of the model?
|
Non-topical, general words are grouped together and reinforce one another, but do not look related to any topic; adding all thesaurus relations is not very beneficial (Runs 7, 8).
|
null | false
| null |
Settlement in what is now Romania began in the Lower Paleolithic followed by written records attesting the kingdom of Dacia, its conquest, and subsequent Romanization by the Roman Empire during late antiquity. The modern Romanian state was formed in 1859 through a personal union of the Danubian Principalities of Moldavia and Wallachia. The new state, officially named Romania since 1866, gained independence from the Ottoman Empire in 1877. During World War I, after declaring its neutrality in 1914, Romania fought together with the Allied Powers from 1916. In the aftermath of the war, Bukovina, Bessarabia, Transylvania, and parts of Banat, Crișana, and Maramureș became part of the Kingdom of Romania. In June–August 1940, as a consequence of the Molotov–Ribbentrop Pact and Second Vienna Award, Romania was compelled to cede Bessarabia and Northern Bukovina to the Soviet Union and Northern Transylvania to Hungary. In November 1940, Romania signed the Tripartite Pact and, consequently, in June 1941 entered World War II on the Axis side, fighting against the Soviet Union until August 1944, when it joined the Allies and recovered Northern Transylvania. Following the war and occupation by the Red Army, Romania became a socialist republic and a member of the Warsaw Pact. After the 1989 Revolution, Romania began a transition towards democracy and a market economy.
|
Summarize Romania's history and founding.
|
Romania began as Dacia and was then conquered by the Roman Empire in late antiquity. The modern Romanian state formed in 1859. After World War I, because it had joined the Allies, Romania gained additional territories, which became part of the Kingdom of Romania. In World War II Romania was forced to cede some of its territories. Following the war, Romania became a socialist republic until the 1989 Revolution, when it began a transition towards democracy.
|
null | false
| null |
House is a music genre characterized by a repetitive four-on-the-floor beat and a typical tempo of 120 beats per minute. It was created by DJs and music producers from Chicago's underground club culture in the early/mid 1980s, as DJs began altering disco songs to give them a more mechanical beat.
House was pioneered by African American DJs and producers in Chicago such as Frankie Knuckles, Ron Hardy, Jesse Saunders, Chip E., Joe Smooth, Steve "Silk" Hurley, Farley "Jackmaster" Funk, Marshall Jefferson, Phuture, and others. House music expanded to other cities such as London, then New York City and became a worldwide phenomenon.
|
Given the following paragraph, where did House music originate?
|
House music originated in Chicago, Illinois in the United States of America by producers/DJs such as Frankie Knuckles, Ron Hardy, and Jesse Saunders.
|
null | false
| null |
What nationality was Morse inventor of the famous code
|
American
|
|
null | false
| null |
Classify each of the following bands as Classic Rock , or not: Led Zeppelin, Jimi Hendrix, Journey, Rolling Stones, Miley Cyrus, Taylor Swift, Aerosmith
|
Led Zeppelin - Classic Rock
Jimi Hendrix - Classic Rock
Journey - Classic Rock
Rolling Stones - Classic Rock
Miley Cyrus - Not Classic Rock
Taylor Swift - Not Classic Rock
Aerosmith - Classic Rock
|
|
null | false
| 41
|
The PA corpus contains several classes of sentences that are of interest. In this paper, we focus on three important classes of sentences viz., sentences that discuss strengths (class STRENGTH), weaknesses of employees (class WEAKNESS) and suggestions for improving her performance (class SUGGESTION). The strengths or weaknesses are mostly about the performance in work carried out, but sometimes they can be about the working style or other personal qualities. The classes WEAKNESS and SUGGESTION are somewhat overlapping; e.g., a suggestion may address a perceived weakness. Following are two example sentences in each class.
STRENGTH:
WEAKNESS:
SUGGESTION:
Several linguistic aspects of these classes of sentences are apparent. The subject is implicit in many sentences. The strengths are often mentioned as either noun phrases (NP) with positive adjectives (Excellent technology leadership) or positive nouns (engineering strength) or through verbs with positive polarity (dedicated) or as verb phrases containing positive adjectives (delivers innovative solutions). Similarly for weaknesses, where negation is more frequently used (presentations are not his forte), or alternatively, the polarities of verbs (avoid) or adjectives (poor) tend to be negative. However, sometimes the form of both the strengths and weaknesses is the same, typically a stand-alone sentiment-neutral NP, making it difficult to distinguish between them; e.g., adherence to timing or timely closure. Suggestions often have an imperative mood and contain secondary verbs such as need to, should, has to. Suggestions are sometimes expressed using comparatives (better process compliance). We built a simple set of patterns for each of the 3 classes on the POS-tagged form of the sentences. We use each set of these patterns as an unsupervised sentence classifier for that class. If a particular sentence matches patterns for multiple classes, we use simple tie-breaking rules for picking the final class. The pattern for the STRENGTH class looks for the presence of positive words / phrases like takes ownership, excellent, hard working, commitment, etc. Similarly, the pattern for the WEAKNESS class looks for the presence of negative words / phrases like lacking, diffident, slow learner, less focused, etc. The SUGGESTION pattern not only looks for keywords like should, needs to but also for POS-based patterns like “a verb in the base form (VB) in the beginning of a sentence”.
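A toy version of such a keyword-driven classifier is sketched below (the word lists are small examples taken from the description above, not the full pattern sets, and the tie-breaking is simplified to a fixed priority order):

```python
# Illustrative sketch of a pattern-based sentence classifier with keyword cues.
import re

PATTERNS = {
    "SUGGESTION": re.compile(r"\b(should|needs? to|has to)\b", re.I),
    "WEAKNESS":   re.compile(r"\b(lacking|diffident|slow learner|less focused)\b", re.I),
    "STRENGTH":   re.compile(r"\b(excellent|hard working|takes ownership|commitment)\b", re.I),
}

def classify(sentence):
    # Simplified tie-breaking: classes are checked in a fixed priority order.
    for label, pattern in PATTERNS.items():
        if pattern.search(sentence):
            return label
    return "OTHER"

print(classify("He needs to improve on process compliance"))  # SUGGESTION
print(classify("Excellent technology leadership"))            # STRENGTH
```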
We randomly selected 2000 sentences from the supervisor assessment corpus and manually tagged them (dataset D1). This labelled dataset contained 705, 103, 822 and 370 sentences having the class labels STRENGTH, WEAKNESS, SUGGESTION or OTHER respectively. We trained several multi-class classifiers on this dataset. Table TABREF10 shows the results of 5-fold cross-validation experiments on dataset D1. For the first 5 classifiers, we used their implementation from the SciKit Learn library in Python (scikit-learn.org). The features used for these classifiers were simply the sentence words along with their frequencies. For the last 2 classifiers (in Table TABREF10), we used our own implementation. The overall accuracy for a classifier is defined as the number of correctly classified sentences divided by the total number of sentences, where the denominator is 2000 for dataset D1. Note that the pattern-based approach is unsupervised, i.e., it did not use any training data. Hence, the results shown for it are for the entire dataset and not based on cross-validation.
The PA corpus contains several classes of sentences that are of interest. In this paper, we focus on three important classes of sentences viz., sentences that discuss strengths (class STRENGTH), weaknesses of employees (class WEAKNESS) and suggestions for improving her performance (class SUGGESTION).
|
What classes of sentences are focused on by them in the paper?
|
Sentences that discuss strengths (class STRENGTH), weaknesses of employees (class WEAKNESS) and suggestions for improving her performance (class SUGGESTION).
|
null | false
| null |
Identify which instrument is string or percussion: Kouxian, Tres cubano
|
Tres cubano is string, Kouxian is percussion.
|