| paper_id | yes_no | paper_index | evidence | question | answer |
|---|---|---|---|---|---|
1908.06725
| false
| null |
Each training sample is generated by three steps: align, mask and select, which we call the AMS method. Each sample in the dataset consists of a question and several candidate answers, in the same form as the CommonsenseQA dataset. An example of constructing one training sample by masking concept$_2$ is shown in Table 2.
Each training sample is generated by three steps: align, mask and select, which we call the AMS method. Each sample in the dataset consists of a question and several candidate answers, in the same form as the CommonsenseQA dataset.
|
How do they select answer candidates for their QA task?
|
The answers are shown as follows:
* AMS method.
|
null | false
| null |
What is the difference between Azure Synapse Dedicated SQL Pools and Serverless SQL Pools?
|
Azure Synapse Dedicated SQL Pools are based on legacy on-premises MS SQL Server technology, while Azure Synapse Serverless SQL Pools are a newer technology, built from the ground up by a different Microsoft team, using a modern distributed query processing engine that reads files directly on top of ADLS.
Because these two SQL solutions have completely different architectures, their capabilities differ widely. Synapse Serverless SQL Pools offer:
- Great support for legacy T-SQL
- Great integration with Power BI
- Good support for reading open format files directly from ADLS (including Delta Lake)
However, Serverless SQL Pools have:
- NO support for data writing (no INSERT, UPDATE, DELETE, or MERGE capability), because all integration must happen externally on files.
- NO result caching, so dashboards or queries that run multiple times pay in time and money for each execution, even if the same query ran seconds ago.
- NO file caching, so cloud storage API costs can be high, and repeated reads can flood the network and slow down a larger installation.
|
|
null | false
| null |
Which famous Dutch dancer was convicted of being a spy for Germany during WW1
|
Mata Hari (stage name of Margaretha Geertruida Zelle) was a famous Dutch dancer who was convicted of being a spy for Germany during World War I. She was sentenced to death and executed by a French firing squad in 1917.
|
|
null | false
| null |
Prominent levee systems have been built along the Mississippi River and Sacramento River in the United States, and the Po, Rhine, Meuse River, Rhône, Loire, Vistula, the delta formed by the Rhine, Maas/Meuse and Scheldt in the Netherlands and the Danube in Europe. During the Chinese Warring States period, the Dujiangyan irrigation system was built by the Qin as a water conservation and flood control project. The system's infrastructure is located on the Minjiang (Chinese: 岷江; pinyin: Mínjiāng), which is the longest tributary of the Chang Jiang, in Sichuan, China.
The Mississippi levee system represents one of the largest such systems found anywhere in the world. It comprises over 5,600 km (3,500 mi) of levees extending some 1,000 km (620 mi) along the Mississippi, stretching from Cape Girardeau, Missouri, to the Mississippi delta. They were begun by French settlers in Louisiana in the 18th century to protect the city of New Orleans. The first Louisiana levees were about 90 cm (3 ft) high and covered a distance of about 80 km (50 mi) along the riverside. The U.S. Army Corps of Engineers, in conjunction with the Mississippi River Commission, extended the levee system beginning in 1882 to cover the riverbanks from Cairo, Illinois to the mouth of the Mississippi delta in Louisiana. By the mid-1980s, they had reached their present extent and averaged 7.3 m (24 ft) in height; some Mississippi levees are as high as 15 m (50 ft). The Mississippi levees also include some of the longest continuous individual levees in the world. One such levee extends southwards from Pine Bluff, Arkansas, for a distance of some 610 km (380 mi). The scope and scale of the Mississippi levees has often been compared to the Great Wall of China.
The United States Army Corps of Engineers (USACE) recommends and supports cellular confinement technology (geocells) as a best management practice. Particular attention is given to the matter of surface erosion, overtopping prevention and protection of levee crest and downstream slope. Reinforcement with geocells provides tensile force to the soil to better resist instability.
Artificial levees can lead to an elevation of the natural river bed over time; whether and how fast this happens depends on several factors, one of them being the amount and type of a river's bed load. Alluvial rivers with intense accumulations of sediment tend toward this behavior. Examples of rivers where artificial levees led to an elevation of the river bed, even up to a point where the river bed is higher than the adjacent ground surface behind the levees, are found for the Yellow River in China and the Mississippi in the United States.
|
how can we best prevent river floods?
|
Prominent levee systems have been built as water conservation and flood control projects along the Mississippi River and Sacramento River in the United States, and along the Po, Rhine, Meuse, Rhône, Loire, Vistula, the delta formed by the Rhine, Maas/Meuse and Scheldt in the Netherlands, and the Danube in Europe. The Mississippi levee system represents one of the largest such systems found anywhere in the world.
|
null | false
| null |
Classify each of the following as either a vegetarian option or not: chicken fried rice, green beans, strawberry banana smoothie, oatmeal, steak, scrambled eggs
|
The vegetarian options are green beans, strawberry banana smoothie, oatmeal, and scrambled eggs. The non-vegetarian options are chicken fried rice and steak.
|
|
null | false
| null |
Can kitesurfers sail up against the wind?
|
Yes, kitesurfers can sail up against the wind by edging their surfboard upwind at a slight angle relative to 90 degrees crosswind. Depending on the equipment and skill, kitesurfers can generally achieve an upwind angle of ~20 degrees.
|
|
1911.08673
| false
| null |
In order to verify the time complexity analysis of our model, we measured the running time and speed of BIAF, STACKPTR and our model on PTB training and development set using the projective algorithm. The comparison in Table TABREF24 shows that in terms of convergence time, our model is basically the same speed as BIAF, while STACKPTR is much slower. For decoding, our model is the fastest, followed by BIAF. STACKPTR is unexpectedly the slowest. This is because the time cost of attention scoring in decoding is not negligible when compared with the processing speed and actually even accounts for a significant portion of the runtime.
FLOAT SELECTED: Table 4: Training time and decoding speed. The experimental environment is on the same machine with Intel i9 9900k CPU and NVIDIA 1080Ti GPU.
The comparison in Table TABREF24 shows that in terms of convergence time, our model is basically the same speed as BIAF, while STACKPTR is much slower. For decoding, our model is the fastest, followed by BIAF. STACKPTR is unexpectedly the slowest.
FLOAT SELECTED: Table 4: Training time and decoding speed. The experimental environment is on the same machine with Intel i9 9900k CPU and NVIDIA 1080Ti GPU.
|
How faster is training and decoding compared to former models?
|
Proposed vs best baseline:
Decoding: 8541 vs 8532 tokens/sec
Training: 8h vs 8h
|
null | false
| null |
Give me a list of different types of cold coffee beverages.
|
- Blended (also known as frappe)
- Cà phê đen (Vietnamese iced coffee)
- Cold brew
- Iced coffee
- Iced espresso
- Iced latte (with or without flavoring)
- Nitro cold brew
|
|
null | false
| 183
|
We automatically create a test set from the OpenSubtitles corpus BIBREF22 . The goal is to provide a large number of difficult test cases where an English pronoun has to be translated to a German pronoun.
The most challenging cases are translating it to either er, sie or es, depending on the grammatical gender of the antecedent. Not only is the translation of it ambiguous, there is also class imbalance in the training data (see Table TABREF18 ). There is roughly a 30% probability that it is aligned to es, which makes it difficult to learn to translate er and sie. We use parsing and automatic co-reference resolution to find translation pairs that satisfy our constraints.
To provide a basis for filtering with constraints, we tokenize the whole data set with the Moses tokenizer, generate symmetric word alignments with fast_align BIBREF23 , parse the English text with CoreNLP BIBREF24 , parse the German text with ParZu BIBREF25 and perform coreference resolution on both sides. The coreference chains are obtained with the neural model of CoreNLP for English, and with CorZu for German BIBREF26 , respectively.
Then we opt for high-precision, aggressive filtering, according to the following protocol: for each pair of sentences (e, f) in English and German, extract iff
e contains the English pronoun it, and f contains a German pronoun that is third person singular (er, sie or es), as indicated by their part-of-speech tags;
those pronouns are aligned to each other;
both pronouns are in a coreference chain;
their nominal antecedents in the coreference chain are aligned on word level.
This removes most candidate pairs, but is necessary to overcome the noise introduced by our preprocessing pipeline, most notably coreference resolution. From the filtered set, we create a balanced test set by randomly sampling 4000 instances of each of the three translations of it under consideration (er, sie, es). We do not balance antecedent distance. See Table TABREF25 for the distribution of pronoun pairs and antecedent distance in the test set.
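The balanced sampling step can be sketched in a few lines (the field names and the small per-class count are illustrative; the paper samples 4000 instances per class from its filtered set):

```python
import random

def balanced_sample(pairs, classes=("er", "sie", "es"), n_per_class=2, seed=0):
    """Randomly sample n_per_class filtered sentence pairs for each German
    translation of 'it', without balancing antecedent distance."""
    rng = random.Random(seed)
    by_class = {c: [p for p in pairs if p["de_pronoun"] == c] for c in classes}
    sample = []
    for c in classes:
        sample.extend(rng.sample(by_class[c], n_per_class))
    return sample

# toy filtered set: 5 candidate pairs per pronoun class
pairs = [{"de_pronoun": c, "id": i}
         for i, c in enumerate(["er", "sie", "es"] * 5)]
test_set = balanced_sample(pairs)
print(len(test_set))  # 6 pairs: 2 per pronoun class
```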
For each sentence pair in the resulting test set, we introduce contrastive translations. A contrastive translation is a translation variant where the correct pronoun is swapped with an incorrect one. For an example, see Table TABREF19 , where the pronoun it in the original translation corresponds to sie because the antecedent bat is a feminine noun in German (Fledermaus). We produce wrong translations by replacing sie with one of the other pronouns (er, es).
Note that, by themselves, these contrastive translations are grammatically correct if the antecedent is outside the current sentence. The test set also contains pronouns with an antecedent in the same sentence (antecedent distance 0). Those examples do not require any additional context for disambiguation and we therefore expect the sentence-level baseline to perform well on them.
We take extra care to ensure that the resulting contrastive translations are grammatically correct, because ungrammatical sentences are easily dismissed by an NMT system. For instance, if there are any possessive pronouns (such as seine) in the sentence, we also change their gender to match the personal pronoun replacement.
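The pronoun swap itself can be illustrated with a minimal token-level sketch (this handles only the personal pronoun; the additional gender agreement on possessive pronouns described above is not modeled here):

```python
def contrastive_variants(tokens, correct, alternatives=("er", "sie", "es")):
    """Create contrastive translations by swapping the correct German
    pronoun for each incorrect third-person-singular alternative."""
    variants = []
    for alt in alternatives:
        if alt == correct:
            continue
        variants.append([alt if t == correct else t for t in tokens])
    return variants

# "it" -> "sie" because the antecedent (Fledermaus) is feminine
reference = ["sie", "fliegt", "nachts"]
print(contrastive_variants(reference, "sie"))
# two wrong variants, with "er" and "es" in place of "sie"
```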
The German coreference resolution system does not resolve es because most instances of es in German are either non-referential forms, or they refer to a clause instead of a nominal antecedent. We limit the test set to nominal antecedents, as these are the only ambiguous cases with respect to translation. For this reason, we have to rely entirely on the English coreference links for the extraction of sentence pairs with it → es, as opposed to pairs with it → er and it → sie, where we have coreference chains in both languages.
Our extraction process respects document boundaries, to ensure we always search for the right context. We extract additional information from the annotated documents, such as the distance (in sentences) between pronouns and their antecedents, the document of origin, lemma, morphology and dependency information if available.
Then we opt for high-precision, aggressive filtering, according to the following protocol: for each pair of sentences (e, f) in English and German, extract iff • e contains the English pronoun it, and f contains a German pronoun that is third person singular (er, sie or es), as indicated by their part-of-speech tags; • those pronouns are aligned to each other; • both pronouns are in a coreference chain; • their nominal antecedents in the coreference chain are aligned on word level.
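The four constraints above can be sketched as a predicate over a hypothetical annotated sentence pair (the field names and tag values are illustrative, not taken from the paper's code):

```python
def keep_pair(e, f, alignments):
    """High-precision filter: keep (e, f) iff all four constraints hold."""
    # 1. e contains 'it', and f contains a third-person-singular pronoun
    if "it" not in e["tokens"]:
        return False
    en_idx = e["tokens"].index("it")
    de_idx = [i for i, (tok, pos) in enumerate(zip(f["tokens"], f["pos"]))
              if tok in ("er", "sie", "es") and pos == "PPER"]
    # 2. those pronouns are aligned to each other
    aligned = [j for j in de_idx if (en_idx, j) in alignments]
    if not aligned:
        return False
    # 3. both pronouns are in a coreference chain (have an antecedent)
    if e["antecedent"] is None or f["antecedent"] is None:
        return False
    # 4. the nominal antecedents are aligned on the word level
    return (e["antecedent"], f["antecedent"]) in alignments

e = {"tokens": ["it", "flies"], "antecedent": 5}
f = {"tokens": ["sie", "fliegt"], "pos": ["PPER", "VVFIN"], "antecedent": 7}
print(keep_pair(e, f, {(0, 0), (5, 7)}))  # True
```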
|
How to opt for high-precision, aggressive filtering?
|
According to the following protocol: for each pair of sentences (e, f) in English and German, extract iff • e contains the English pronoun it, and f contains a German pronoun that is third person singular (er, sie or es), as indicated by their part-of-speech tags; • those pronouns are aligned to each other; • both pronouns are in a coreference chain; • their nominal antecedents in the coreference chain are aligned on word level.
|
null | false
| 8
|
Figure 1 illustrates the overall architecture of the discourse-level neural network model that consists of two Bi-LSTM layers, one max-pooling layer in between and one softmax prediction layer. The input of the neural network model is a paragraph containing a sequence of discourse units, while the output is a sequence of discourse relations with one relation between each pair of adjacent discourse units.
Given the words sequence of one paragraph as input, the lower Bi-LSTM layer will read the whole paragraph and calculate hidden states as word representations, and a max-pooling layer will be applied to abstract the representation of each discourse unit based on individual word representations. Then another Bi-LSTM layer will run over the sequence of discourse unit representations and compute new representations by further modeling semantic dependencies between discourse units within paragraph. The final softmax prediction layer will concatenate representations of two adjacent discourse units and predict the discourse relation between them.
Word Vectors as Input: The input of the paragraph-level discourse relation prediction model is a sequence of word vectors, one vector per word in the paragraph. In this work, we used the pre-trained 300-dimension Google English word2vec embeddings. For each word that is not in the vocabulary of Google word2vec, we will randomly initialize a vector with each dimension sampled from the range $[-0.25, 0.25]$ . In addition, recognizing key entities and discourse connective phrases is important for discourse relation recognition, therefore, we concatenate the raw word embeddings with extra linguistic features, specifically one-hot Part-Of-Speech tag embeddings and one-hot named entity tag embeddings.
Building Discourse Unit Representations: We aim to build discourse unit (DU) representations that sufficiently leverage cues for discourse relation prediction from paragraph-wide contexts, including the preceding and following discourse units in a paragraph. To process long paragraph-wide contexts, we take a bottom-up two-level abstraction approach and progressively generate a compositional representation of each word first (low level) and then generate a compositional representation of each discourse unit (high level), with a max-pooling operation in between. At both the word level and the DU level, we choose Bi-LSTM as our basic component for generating compositional representations, mainly considering its capability to capture long-distance dependencies between words (discourse units) and to incorporate influences of context words (discourse units) on each side.
Given a variable-length words sequence $X = (x_1,x_2,...,x_L)$ in a paragraph, the word-level Bi-LSTM will process the input sequence by using two separate LSTMs, one process the word sequence from the left to right while the other follows the reversed direction. Therefore, at each word position $t$ , we obtain two hidden states $\overrightarrow{h_t}, \overleftarrow{h_t}$ . We concatenate them to get the word representation $h_t = [\overrightarrow{h_t}, \overleftarrow{h_t}]$ . Then we apply max-pooling over the sequence of word representations for words in a discourse unit in order to get the discourse unit embedding:
$$MP_{DU}[j] = \max_{i=DU\_start}^{DU\_end} h_i[j], \quad \text{where } 1 \le j \le hidden\_node\_size$$ (Eq. 8)
Next, the DU-level Bi-LSTM will process the sequence of discourse unit embeddings in a paragraph and generate two hidden states $\overrightarrow{hDU_t}$ and $\overleftarrow{hDU_t}$ at each discourse unit position. We concatenate them to get the discourse unit representation $hDU_t = [\overrightarrow{hDU_t}, \overleftarrow{hDU_t}]$ .
The Softmax Prediction Layer: Finally, we concatenate two adjacent discourse unit representations $hDU_{t-1}$ and $hDU_t$ and predict the discourse relation between them using a softmax function:
$$y_{t-1} = softmax(W_y*[hDU_{t-1},hDU_t]+b_y)$$ (Eq. 9)
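A minimal numpy sketch of the pooling and prediction steps (Eq. 8 and Eq. 9), with toy dimensions standing in for the model's real hyperparameters and random matrices standing in for trained weights:

```python
import numpy as np

def du_embedding(H, start, end):
    """Eq. 8: element-wise max over the word representations of one DU."""
    return H[start:end + 1].max(axis=0)

def softmax(z):
    z = z - z.max()  # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
H = rng.standard_normal((6, 4))        # 6 words, hidden size 4 (toy)
hDU1 = du_embedding(H, 0, 2)           # DU 1 covers words 0..2
hDU2 = du_embedding(H, 3, 5)           # DU 2 covers words 3..5
W_y = rng.standard_normal((3, 8))      # 3 relation classes, concat size 8
b_y = np.zeros(3)
y = softmax(W_y @ np.concatenate([hDU1, hDU2]) + b_y)  # Eq. 9
print(y.shape, round(float(y.sum()), 6))  # (3,) 1.0
```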
Figure 1 illustrates the overall architecture of the discourse-level neural network model that consists of two Bi-LSTM layers, one max-pooling layer in between and one softmax prediction layer.
|
What are the two Bi-LSTM layers of the team's model?
|
A word-level Bi-LSTM layer that computes word representations, and a DU-level Bi-LSTM layer that computes discourse unit representations (with one max-pooling layer in between and one softmax prediction layer on top).
|
1909.04181
| false
| null |
To further improve the performance of our models, we introduce in-house labeled data that we use to fine-tune BERT. For the gender classification task, we manually label an in-house dataset of 1,100 users with gender tags, including 550 female users, 550 male users. We obtain 162,829 tweets by crawling the 1,100 users' timelines. We combine this new gender dataset with the gender TRAIN data (from shared task) to obtain an extended dataset, to which we refer as EXTENDED_Gender. For the dialect identification task, we randomly sample 20,000 tweets for each class from an in-house dataset gold labeled with the same 15 classes as the shared task. In this way, we obtain 298,929 tweets (Sudan only has 18,929 tweets). We combine this new dialect data with the shared task dialect TRAIN data to form EXTENDED_Dialect. For both the dialect and gender tasks, we fine-tune BERT on EXTENDED_Dialect and EXTENDED_Gender independently and report performance on DEV. We refer to this iteration of experiments as BERT_EXT. As Table TABREF7 shows, BERT_EXT is 2.18% better than BERT for dialect and 0.75% better than BERT for gender.
To further improve the performance of our models, we introduce in-house labeled data that we use to fine-tune BERT. For the gender classification task, we manually label an in-house dataset of 1,100 users with gender tags, including 550 female users, 550 male users. We obtain 162,829 tweets by crawling the 1,100 users' timelines.
For the dialect identification task, we randomly sample 20,000 tweets for each class from an in-house dataset gold labeled with the same 15 classes as the shared task.
|
What are the in-house data employed?
|
The answers are shown as follows:
* we manually label an in-house dataset of 1,100 users with gender tags
* we randomly sample 20,000 tweets for each class from an in-house dataset gold labeled with the same 15 classes as the shared task
|
null | false
| null |
What is so fragile that saying its name breaks it?
|
The silence.
|
|
null | false
| null |
Give me a list of Tour de France general Classification winners who have won more than 4 times.
|
Jacques Anquetil, Eddy Merckx, Bernard Hinault, Miguel Indurain and Lance Armstrong. Lance Armstrong won seven times but had those victories rescinded after admitting to using performance-enhancing drugs.
|
|
null | false
| null |
Who is the best soccer player in the world?
|
There are many ways to determine who the best soccer player is at the current time.
Some would say that the Ballon d'Or award does that job for us, labeling the best player in the game at the current time. At this juncture, that is Karim Benzema of Real Madrid, who won the 2021/22 award after leading Los Blancos to the La Liga and Champions League double.
Others would say that the highest-paid player is the best. That would be Kylian Mbappe whose new PSG contract could see the French side shell out an eye-watering €630 million over the course of his new three-year contract he signed in the summer of 2022. Mbappe is thought to be the best young player in the game now, gracing the cover of the popular FIFA video game for the last two years.
Still more could argue that the player with the most goals scored would be the best player. That would make Cristiano Ronaldo the top player in the world, with his all-time record of more than 800 goals for club and country. Right behind him is his rival Lionel Messi, who also has collected far more assists than his Portuguese counterpart. Young striker Erling Haaland may have something to say about that, however, as his incredible start to the Premier League season has seen him on pace to shatter the league's single-season goalscoring record.
However, that seems to discount those who play other positions, like 2018 Ballon d'Or winner Luka Modric, whose midfield brilliance led Croatia to the last World Cup final, or 2022 Yashin trophy winner Thibaut Courtois, whose goalkeeping heroics earned him Man of the Match in the 2022 Champions League final.
|
|
null | false
| null |
Identify which instrument is string or percussion: Mirwas, Esraj
|
Esraj is string, Mirwas is percussion.
|
|
null | false
| null |
List five titles by Gertrude Chandler Warner
|
Surprise Island, Blue Bay Mystery, Mystery Ranch, Benny Uncovers a Mystery, The Boxcar Children
|
|
null | false
| null |
Name some books of the Bible.
|
Some of the books of the Bible include: 1 Samuel, Lamentations, Song of Solomon, Genesis, Leviticus, Deuteronomy, Esther, Proverbs
|
|
null | false
| 85
|
In this section, we introduce the deep neural network architecture that we used to classify emotions in the tweets dataset. Emotional expressions are more complex and context-dependent than other forms of expression, owing to the ambiguity of human emotions and the strong influence of context on how an expressed emotion is understood. These complexities are what led us to believe that lexicon-based features, like those normally used in conventional machine learning approaches, are unable to capture the intricacy of emotional expressions.
Our architecture was designed to show that a model which captures better information about the context and sequential nature of the text can outperform lexicon-based methods commonly used in the literature. As mentioned in the Introduction, Recurrent Neural Networks (RNNs) have been shown to perform well for a variety of tasks in NLP, especially classification tasks. As our goal was to capture more information about the context and sequential nature of the text, we decided to use a model based on a bidirectional RNN, specifically a bidirectional GRU network, to analyze the tweets.
For building the emotion classifier, we decided to use 7 binary classifiers (one for each emotion), each of which uses the same architecture for detecting a specific emotion. You can see the plot diagram of the model in Figure FIGREF6 . The first layer consists of an embedding lookup layer that will not change during training and will be used to convert each term to its corresponding embedding vector. In our experiments, we tried various word embedding models but saw little difference in their performance. Here we report the results for the two which had the best performance among all, ConceptNet Numberbatch BIBREF35 and fastText BIBREF36, both of which had 300 dimensions.
As none of our tweets had more than 35 terms, we set the size of the embedding layer to 35 and added padding to shorter tweets. The output of this layer goes to a bidirectional GRU layer, selected to capture the entirety of each tweet before passing its output forward. The goal is to create an intermediate representation for the tweets that captures the sequential nature of the data. For the next step, we use a concatenation of global max-pooling and average-pooling layers (with a window size of two). The max-pooling layer extracts the most important features from the GRU output, and the average-pooling layer considers all features to create a representation of the text as a whole. These partial representations are then concatenated to create our final hidden representation. For classification, the output of the concatenation is passed to a dense classification layer with 70 nodes, along with a dropout layer with a rate of 50% to prevent over-fitting. The final layer is a sigmoid layer that generates the final output of the classifier, returning the class probability.
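The pooling concatenation can be sketched with numpy (toy values stand in for the real bidirectional GRU outputs, and a global average is used here in place of the windowed variant):

```python
import numpy as np

def pooled_representation(G):
    """Concatenate global max-pooling and average-pooling over the
    GRU output G of shape (timesteps, features)."""
    return np.concatenate([G.max(axis=0), G.mean(axis=0)])

G = np.array([[1.0, -2.0],
              [3.0,  0.5],
              [2.0,  1.5]])
rep = pooled_representation(G)
print(rep)  # max over time, then mean over time: [3.  1.5 2.  0. ]
```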
A max-pooling layer was used to extract the most important features from the GRU output, and an average-pooling layer was used to consider all features to create a representation of the text as a whole.
|
What are the max-pooling and average-pooling used for?
|
The max-pooling was used to extract the most important features from the GRU output, and the average-pooling layer was used to consider all features to create a representation of the text as a whole.
|
1802.00273
| false
| null |
Various multilingual extensions of NMT have already been proposed in the literature. The authors of BIBREF18 , BIBREF19 apply multitask learning to train models for multiple languages. Zoph and Knight BIBREF20 propose a multi-source model and BIBREF21 introduces a character-level encoder that is shared across several source languages. In our setup, we will follow the main idea proposed by Johnson et al. BIBREF22 . The authors of that paper suggest a simple addition by means of a language flag on the source language side (see Figure 2 ) to indicate the target language that needs to be produced by the decoder. This flag will be mapped on a dense vector representation and can be used to trigger the generation of the selected language. The authors of the paper argue that the model enables transfer learning and supports the translation between languages that are not explicitly available in training. This ability gives a hint of some kind of vector-based “interlingua”, which is precisely what we are looking for. However, the original paper only looks at a small number of languages, and we will scale it up to a much larger setup using significantly more languages to train on. More details will be given in the following section.
Various multilingual extensions of NMT have already been proposed in the literature. The authors of BIBREF18 , BIBREF19 apply multitask learning to train models for multiple languages. Zoph and Knight BIBREF20 propose a multi-source model and BIBREF21 introduces a character-level encoder that is shared across several source languages. In our setup, we will follow the main idea proposed by Johnson et al. BIBREF22 . The authors of that paper suggest a simple addition by means of a language flag on the source language side (see Figure 2 ) to indicate the target language that needs to be produced by the decoder. This flag will be mapped on a dense vector representation and can be used to trigger the generation of the selected language. The authors of the paper argue that the model enables transfer learning and supports the translation between languages that are not explicitly available in training.
|
What neural machine translation models can learn in terms of transfer learning?
|
Multilingual neural machine translation models can learn to translate between language pairs that are not explicitly available in training.
|
null | false
| 383
|
The features extracted from the transitions include 12 Mel-Frequency Cepstral Coefficients (MFCCs) with their first and second derivatives, and the log energy of the signal distributed into 22 Bark bands. The total number of descriptors corresponds to 58. Four statistical functionals (mean, standard deviation, skewness, and kurtosis) are computed for each descriptor, obtaining a 232-dimensional feature-vector per utterance. The classification of PD patients and HC speakers is performed with a radial basis SVM with margin parameter $C=10$ and a Gaussian kernel with parameter $\gamma =0.0001$. The SVM is tested following a 10-fold Cross-Validation strategy, speaker independent.
Four statistical functionals (mean, standard deviation, skewness, and kurtosis) are computed for each descriptor, obtaining a 232-dimensional feature-vector per utterance.
|
What are the four statistical functionals computed for each descriptor?
|
Mean, standard deviation, skewness, and kurtosis.
|
null | false
| 247
|
We conduct experiments on a simplified Chinese emotion cause corpus BIBREF31 , the only publicly available dataset on this task to the best of our knowledge. The corpus contains 2,105 documents from SINA city news. Each document has only one emotion word and one or more emotion causes. The documents are segmented into clauses manually. The main task is to identify which clause contains the emotion cause.
Details of the corpus are shown in Table 1. The metrics we use in evaluation follow lee2010text; they are commonly accepted, so we can compare our results with others'. If a proposed emotion cause clause covers the annotated answer, the word sequence is considered correct. The precision, recall, and F-measure are defined as $precision = \frac{\#\,correct\ causes}{\#\,proposed\ causes}$, $recall = \frac{\#\,correct\ causes}{\#\,annotated\ causes}$, and $F = \frac{2 \cdot precision \cdot recall}{precision + recall}$.
In the experiments, we randomly select 90% of the dataset as training data and 10% as testing data. In order to obtain statistically credible results, we evaluate our method and baseline methods 25 times with different train/test splits.
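The evaluation protocol amounts to repeated random splitting, sketched here with a placeholder evaluation function (the real one would train and score the emotion cause classifier):

```python
import random

def repeated_evaluation(dataset, evaluate, runs=25, train_frac=0.9, seed=0):
    """Evaluate with `runs` different random 90/10 train/test splits
    and return the per-run scores."""
    rng = random.Random(seed)
    scores = []
    for _ in range(runs):
        data = dataset[:]
        rng.shuffle(data)
        cut = int(len(data) * train_frac)
        scores.append(evaluate(data[:cut], data[cut:]))
    return scores

dataset = list(range(2105))  # stand-in for the 2,105 documents
# placeholder evaluate: fraction of test documents with an even id
scores = repeated_evaluation(
    dataset, lambda train, test: sum(x % 2 == 0 for x in test) / len(test))
print(len(scores))  # 25
```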
In order to obtain statistically credible results, we evaluate our method and baseline methods 25 times with different train/test splits.
|
How many times do the authors evaluate the method and baseline methods with different train/test splits?
|
25 times
|
null | false
| 192
|
For evaluating topics with automatic quality measures, we used several English text collections and one Russian collection (Table TABREF7 ). We experiment with three thesauri: WordNet (155 thousand entries), information-retrieval thesaurus of the European Union EuroVoc (15161 terms), and Russian thesaurus RuThes (115 thousand entries) BIBREF19 .
At the preprocessing step, documents were processed by morphological analyzers. Also, we extracted noun groups as described in BIBREF16 . As baselines, we use the unigram LDA topic model and an LDA topic model with 1000 added ngrams with maximal NC-value BIBREF20 extracted from the collection under analysis.
As was found before BIBREF14 , BIBREF16 , the addition of ngrams without accounting for relations between their components considerably worsens the perplexity because of the vocabulary growth (for perplexity, lower is better) and practically does not change other automatic quality measures (Table 2).
We add the WordNet data in the following steps. At the first step, we include WordNet synonyms (including multiword expressions) into the proposed similarity sets (LDA-Sim+WNsyn). At this step, frequencies of synonyms found in the same document are summed up during LDA topic learning, as described in Algorithm SECREF3 . We can see that the kernel uniqueness becomes very low, and topics are very close to each other in content (Table 2: LDA-Sim+WNsyn). At the second step, we add words' direct relatives (hyponyms, hypernyms, etc.) to the similarity sets. Now the frequencies of semantically related words are added up, enhancing their contribution to all topics of the current document.
The Table 2 shows that these two steps lead to great degradation of the topic model in most measures in comparison to the initial unigram model: uniqueness of kernels abruptly decreases, perplexity at the second step grows by several times (Table 2: LDA-Sim+WNsynrel). It is evident that at this step the model has a poor quality. When we look at the topics, the cause of the problem seems to be clear. We can see the overgeneralization of the obtained topics. The topics are built around very general words such as "person", "organization", "year", etc. These words were initially frequent in the collection and then received additional frequencies from their frequent synonyms and related words.
We hypothesize that these general words are used in texts to discuss specific events and objects, and therefore change the construction of the similarity sets: a word's hyponyms are no longer added to its similarity set. Hyponyms, which are usually more specific and concrete, thus obtain additional frequencies from upper synsets and increase their contribution to the document topics, while the frequencies and contributions of hypernyms are left unchanged. This yields a great improvement in model quality: kernel uniqueness improves considerably, perplexity decreases to levels comparable with the unigram model, and the topic coherence characteristics also improve for most collections (Table 2: LDA-Sim+WNsynrel/hyp).
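The asymmetric frequency sharing described above can be sketched in a few lines; the toy thesaurus, `build_similarity_set`, and `boosted_frequencies` below are illustrative assumptions, not the paper's implementation:

```python
from collections import Counter

# Sketch of the asymmetric similarity-set construction: a word's similarity set
# contains its synonyms and hypernyms but NOT its hyponyms, so specific words
# gain frequency mass from more general ones and not vice versa.
# The toy thesaurus is purely illustrative.
TOY_THESAURUS = {
    "dog": {"synonyms": {"hound"}, "hypernyms": {"animal"}, "hyponyms": {"puppy"}},
    "animal": {"synonyms": {"creature"}, "hypernyms": set(), "hyponyms": {"dog"}},
}

def build_similarity_set(word, thesaurus):
    entry = thesaurus.get(word, {})
    # hyponyms are deliberately excluded from the union
    return {word} | entry.get("synonyms", set()) | entry.get("hypernyms", set())

def boosted_frequencies(doc_tokens, thesaurus):
    # Each word's count is increased by in-document occurrences of the words
    # in its similarity set, as in the frequency-summing step of the model.
    raw = Counter(doc_tokens)
    return {w: sum(raw[s] for s in build_similarity_set(w, thesaurus)) for w in raw}
```

With this asymmetry, "dog" gains counts from "animal" in the same document, but "animal" gains nothing from "dog".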
We further combine the WordNet-based similarity sets with ngrams having the same components, as described in BIBREF16 . All measures improve significantly for all collections (Table 2: LDA-Sim+WNsr/hyp+Ngrams). At the last step, we apply to ngrams the same approach previously used for hyponym-hypernym relations: the frequencies of shorter ngrams and words are added to the frequencies of longer ngrams, but not vice versa, increasing the contribution of the more specific, longer ngrams to the topics. As Table 2 shows, kernel uniqueness grows significantly; at this step it is 1.3-1.6 times greater than for the baseline models, reaching 0.76 on the ACL collection (Table 2: LDA-Sim+WNsr/hyp+Ngrams/l).
In the second series of experiments, we applied the EuroVoc information-retrieval thesaurus to two European Union collections: Europarl and JRC. The EuroVoc thesaurus is much smaller in content than WordNet: it contains terms from the economic and political domains and does not include general abstract words. The results are shown in Table 3. The inclusion of EuroVoc synsets improves topic coherence and increases kernel uniqueness (in contrast to the results with WordNet). Adding ngrams further improves both topic coherence and kernel uniqueness.
Finally, we experimented with the Russian banking collection and the RuThes thesaurus. In this case we obtained an improvement already with RuThes synsets, and adding ngrams again further improved topic coherence and kernel uniqueness (Table 4).
It is worth noting that adding ngrams sometimes worsens the TC-NPMI measure, especially on the JRC collection. This is due to the fact that in these evaluation frameworks, the topics' top elements contain a lot of multiword expressions, which rarely occur in Wikipedia, used for the coherence calculation, therefore the utilized automatic coherence measures can have insufficient evidence for correct estimates.
It is worth noting that adding ngrams sometimes worsens the TC-NPMI measure, especially on the JRC collection. This is due to the fact that in these evaluation frameworks, the topics’ top elements contain a lot of multiword expressions, which rarely occur in Wikipedia, used for the coherence calculation, therefore the utilized automatic coherence measures can have insufficient evidence for correct estimates.
|
Why do adding n-grams sometimes worsen the TC-NPMI measure, especially on the JRC collection?
|
This is due to the fact that in these evaluation frameworks, the topics’ top elements contain a lot of multiword expressions, which rarely occur in Wikipedia, used for the coherence calculation, therefore the utilized automatic coherence measures can have insufficient evidence for correct estimates.
|
null | false
| 189
|
Automatically predicting the level of English of non-native speakers from their written text is an interesting text mining task. Systems that perform well in the task can be useful components for online, second-language learning platforms as well as for organisations that tutor students for this purpose. In this paper I present the system balikasg that achieved the state-of-the-art performance in the CAp 2018 data science challenge among 14 systems. In order to achieve the best performance in the challenge, I decided to use a variety of features that describe an essay's readability and syntactic complexity as well as its content. For the prediction step, I found Gradient Boosted Trees, whose efficiency is proven in several data science challenges, to be the most efficient across a variety of classifiers.
The rest of the paper is organized as follows: in Section 2 I frame the problem of language-level prediction as an ordinal classification problem and describe the available data. Section 3 presents the feature extraction and engineering techniques used. Section 4 describes the machine learning algorithms used for prediction as well as the achieved results. Finally, Section 5 concludes with a discussion and avenues for future research.
In order to achieve the best performance in the challenge, I decided to use a variety of features that describe an essay's readability and syntactic complexity as well as its content.
|
What did the author decide to use to achieve the best performance in the CAp 2018 data science challenge?
|
In order to achieve the best performance in the CAp 2018 data science challenge, the author decided to use a variety of features that describe an essay's readability and syntactic complexity as well as its content.
|
null | false
| null |
What are the Regulations of Football?
|
The Fédération Internationale de Football Association (FIFA) governs football. FIFA was founded in 1904 and is headquartered in Zurich, Switzerland.
It currently has over 200 member associations around the world and sets the rules and regulations of football followed by competitions worldwide, including the English Premier League in the United Kingdom.
|
|
1812.06705
| false
| null |
FLOAT SELECTED: Table 2: Accuracies of different methods for various benchmarks on two classifier architectures. CBERT, which represents conditional BERT, performs best on two classifier structures over six datasets. “w/” represents “with”, lines marked with “*” are experiments results from Kobayashi(Kobayashi, 2018).
FLOAT SELECTED: Table 2: Accuracies of different methods for various benchmarks on two classifier architectures. CBERT, which represents conditional BERT, performs best on two classifier structures over six datasets. “w/” represents “with”, lines marked with “*” are experiments results from Kobayashi(Kobayashi, 2018).
|
How do the authors measure performance?
|
Accuracy across six datasets
|
null | false
| null |
Kevum or Kavum is a deep-fried Sri Lankan sweet made from rice flour and kithul (sugar-palm) treacle, with a number of variants adding additional ingredients. It is also known as oil cake. Kevum is traditionally given and consumed during celebrations of Sinhala and Tamil New Year.
|
Given this paragraph about Kevum, explain when it is typically consumed?
|
It is typically consumed during celebrations of Sinhala and Tamil New Year.
|
null | false
| null |
What are some disadvantages of the way the tax code treats incentive stock options?
|
There are three primary problems with the way the tax code treats incentive stock options. First, the tax code gives an employee a strong incentive to exercise the options early on, to achieve the tax benefit relating to the special option treatment. This means the tax code encourages young employees to write a check to their employer at a time when their entire financial life is likely to be dependent on that employer. This is the exact opposite of diversification, a key goal of personal financial management. The second issue is that the alternative minimum tax (AMT) is imposed at the time of exercise. This requires a potentially large cash outlay, at a time when the employee is unlikely to have sufficient liquidity to meet the burden. So not only does the employee have to pay his or her employer to cover the exercise premium, he or she has to send funds to the Internal Revenue Service as well. Finally, the requirement that an employee terminating employment must exercise within three months of termination date to preserve the tax advantage of the options encourages exercise at a time when the employee may not have sufficient funds, and may be looking at financial uncertainty if he or she does not yet have a new job lined up. The advantages of incentive stock options are significant, but they encourage the wrong behavior.
|
|
null | false
| 278
|
Semantic parsing is the task of transducing natural language to meaning representations, which in turn can be expressed through many different semantic formalisms including lambda calculus BIBREF1, DCS BIBREF2, Discourse Representation Theory (DRT) BIBREF3, AMR BIBREF4 and so on. The availability of annotated data in English has translated into the development of a plethora of models, including encoder-decoders BIBREF5, BIBREF6 as well as tree- or graph-structured decoders BIBREF5, BIBREF7, BIBREF0, BIBREF8.
Whereas the majority of semantic banks focus on English, recent effort has focussed on building multilingual representations, e.g. PMB BIBREF9, MRS BIBREF10 and FrameNet BIBREF11. However, manually annotating meaning representations in a new language is a painstaking process, which explains why only a few datasets are available for different formalisms in languages other than English. As a consequence, whereas the field has made great advances for English, little work has been done in other languages.
We ask: can we learn a semantic parser for English and test it in another language where annotations are not available? What would that require?
To answer this question, previous work has leveraged machine translation techniques to map semantics from one language to another BIBREF12. However, these methods require parallel corpora to extract automatic alignments, which are often noisy or not available at all.
In this paper we explore parameter-shared models instead, where a model is trained on English using language independent features and tested in a target language.
To show how this approach performs, we focus on the Parallel Meaning Bank BIBREF13 – a multilingual semantic bank, where sentences in English, German, Italian and Dutch have been annotated with their meaning representations. The annotations in the PMB are based on Discourse Representation Theory BIBREF3, a popular theory of meaning representation designed to account for intra- and inter-sentential phenomena, like temporal expressions and anaphora. Figure 1 shows an example DRS for the sentence `I sat down and opened my laptop' in its canonical `box' representation. A DRS is a nested structure whose top part contains the discourse referents and whose bottom part contains unary and binary predicates, as well as semantic constants (e.g. `speaker'). DRSs can be linked to each other via logic operators (e.g. $\lnot $, $\rightarrow $, $\diamond $) or, as in this case, discourse relations (e.g. CONTINUATION, RESULT, ELABORATION, etc.).
To test our approach we leverage the DRT parser of liu2018discourse, an encoder-decoder architecture where the meaning representation is reconstructed in three stages, coarse-to-fine, by first building the DRS skeleton (i.e. the `box' structures) and then fill each DRS with predicates and variables. Whereas the original parser utilizes a sequential Bi-LSTM encoder with monolingual lexical features, we experiment with language-independent features in the form of cross-lingual word-embeddings, universal PoS tags and universal dependencies. In particular, we also make use of tree encoders to assess whether modelling syntax can be beneficial in cross-lingual settings, as shown for other semantic tasks (e.g. negation scope detection BIBREF14).
Results show that language-independent features are a valid alternative to projection methods for cross-lingual semantic parsing. We show that adding dependency relation as features is beneficial, even when they are the only feature used during encoding. However, we also show that modeling the dependency structure directly via tree encoders does not outperform a sequential BiLSTM architecture for the three languages we have experimented with.
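To make the feature setup concrete, a language-independent token representation along these lines might be sketched as a simple concatenation; the tag inventories, embedding size, and `token_features` helper are assumptions for illustration, not the paper's exact configuration:

```python
# Concatenate a cross-lingual word embedding with one-hot universal POS and
# dependency-relation features for each token. The inventories below are
# illustrative subsets of the universal tag sets.
UPOS = ["NOUN", "VERB", "PRON", "ADJ", "ADV", "ADP", "DET", "PUNCT"]
DEPREL = ["nsubj", "obj", "root", "det", "amod", "case", "punct"]

def one_hot(value, inventory):
    return [1.0 if value == v else 0.0 for v in inventory]

def token_features(xling_vec, upos, deprel):
    # xling_vec: pre-trained cross-lingual embedding for the token
    return list(xling_vec) + one_hot(upos, UPOS) + one_hot(deprel, DEPREL)
```

Because none of these features is tied to English vocabulary, the same encoder input can be produced for German, Italian, or Dutch sentences at test time.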
Results show that language-independent features are a valid alternative to projection methods for cross-lingual semantic parsing.
|
Are the language-independent features a valid alternative to projection methods for cross-lingual semantic parsing?
|
Yes, they are.
|
null | false
| null |
Museo Municipal Taurino Francisco Montes is located in Chiclana de la Frontera, in the province of Cádiz, Andalusia, southwestern Spain. Situated on San Agustín Street, the bullfighting museum is dedicated to the matador Francisco Montes Reina (1805–1851), nicknamed "Paquiro", who was born in this town. He is considered the most important 19th century bullfighter, because of his skill at the sport, on foot or on horseback. The bullfighting displays include pieces of great historical and artistic value. It was inaugurated in 2003 and founded by Pedro Leal Aragon who for seven years collected objects, souvenirs and bullfighting implements. One of its first events was a group show of the artists Antonio Vela, Carlos Quevedo, and Paloma Garcia. After a renovation, it re-opened in June 2005 with miniature models of the bullring in Ronda, a head bust of Paquiro, and an original painting by Antonio Cavanna.The expansion also housed Spain's most important bullfighting collection, Collection Sagnier, which was acquired by the city to improve the historical and artistic quality of the municipal bullfighting museum. There are four rooms: bullfighting origins, Paquiro and his time, contemporary bullfighting, and the bull and the arts. The museum's garden is used for book presentations, press conferences, weddings and small concerts. It is closed on Sundays.
|
Given this paragraph about Museo Municipal Taurino Francisco Montes, where is it located?
|
Museo Municipal Taurino Francisco Montes is located in Chiclana de la Frontera, in the province of Cádiz, Andalusia, southwestern Spain.
|
null | false
| 128
|
End-to-end speech-to-text translation (ST) has attracted much attention recently BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 given its simplicity against cascading automatic speech recognition (ASR) and machine translation (MT) systems. The lack of labeled data, however, has become a major blocker for bridging the performance gaps between end-to-end models and cascading systems. Several corpora have been developed in recent years. post2013improved introduced a 38-hour Spanish-English ST corpus by augmenting the transcripts of the Fisher and Callhome corpora with English translations. di-gangi-etal-2019-must created the largest ST corpus to date from TED talks but the language pairs involved are out of English only. beilharz2019librivoxdeen created a 110-hour German-English ST corpus from LibriVox audiobooks. godard-etal-2018-low created a Moboshi-French ST corpus as part of a rare language documentation effort. woldeyohannis provided an Amharic-English ST corpus in the tourism domain. boito2019mass created a multilingual ST corpus involving 8 languages from a multilingual speech corpus based on Bible readings BIBREF7. Previous work either involves language pairs out of English, very specific domains, very low resource languages or a limited set of language pairs. This limits the scope of study, including the latest explorations on end-to-end multilingual ST BIBREF8, BIBREF9. Our work is mostly similar and concurrent to iranzosnchez2019europarlst who created a multilingual ST corpus from the European Parliament proceedings. The corpus we introduce has larger speech durations and more translation tokens. It is diversified with multiple speakers per transcript/translation. Finally, we provide additional out-of-domain test sets.
In this paper, we introduce CoVoST, a multilingual ST corpus based on Common Voice BIBREF10 for 11 languages into English, diversified with over 11,000 speakers and over 60 accents. It includes a total 708 hours of French (Fr), German (De), Dutch (Nl), Russian (Ru), Spanish (Es), Italian (It), Turkish (Tr), Persian (Fa), Swedish (Sv), Mongolian (Mn) and Chinese (Zh) speeches, with French and German ones having the largest durations among existing public corpora. We also collect an additional evaluation corpus from Tatoeba for French, German, Dutch, Russian and Spanish, resulting in a total of 9.3 hours of speech. Both corpora are created at the sentence level and do not require additional alignments or segmentation. Using the official Common Voice train-development-test split, we also provide baseline models, including, to our knowledge, the first end-to-end many-to-one multilingual ST models. CoVoST is released under CC0 license and free to use. The Tatoeba evaluation samples are also available under friendly CC licenses. All the data can be acquired at https://github.com/facebookresearch/covost.
In this paper, we introduce CoVoST, a multilingual ST corpus based on Common Voice (Ardila et al., 2019) for 11 languages into English, diversified with over 11,000 speakers and over 60 accents. It includes a total 708 hours of French (Fr), German (De), Dutch (Nl), Russian (Ru), Spanish (Es), Italian (It), Turkish (Tr), Persian (Fa), Swedish (Sv), Mongolian (Mn) and Chinese (Zh) speeches, with French and German ones having the largest durations among existing public corpora.
|
What language does the corpus contain?
|
Twelve languages including English, French (Fr), German (De), Dutch (Nl), Russian (Ru), Spanish (Es), Italian (It), Turkish (Tr), Persian (Fa), Swedish (Sv), Mongolian (Mn) and Chinese (Zh)
|
null | false
| null |
Is a tomato a fruit or a vegetable?
|
A tomato is technically a fruit. We know this because the seeds of a tomato are on the inside. However, tomatoes are typically thought of and treated as a vegetable. We roast them in the oven, add them to salads, and include them on vegetable trays with other vegetables. If you want to win a technicality content, you can call a tomato a fruit. But don't put them in a fruit salad.
|
|
null | false
| 296
|
Gender bias describes an inherent prejudice against a gender, captured both by individuals and larger social systems. Word embeddings, a popular machine-learnt semantic space, have been shown to retain gender bias present in corpora used to train them BIBREF0. This results in gender-stereotypical vector analogies à la NIPS20135021, such as man:computer programmer :: woman:homemaker BIBREF1, and such bias has been shown to materialise in a variety of downstream tasks, e.g. coreference resolution BIBREF2, BIBREF3.
By operationalising gender bias in word embeddings as a linear subspace, DBLP:conf/nips/BolukbasiCZSK16 are able to debias with simple techniques from linear algebra. Their method successfully mitigates direct bias: man is no longer more similar to computer programmer in vector space than woman. However, the structure of gender bias in vector space remains largely intact, and the new vectors still evince indirect bias: associations which result from gender bias between not explicitly gendered words, for example a possible association between football and business resulting from their mutual association with explicitly masculine words BIBREF4. In this paper we continue the work of BIBREF4, and show that another paradigm for gender bias mitigation proposed by BIBREF5, Counterfactual Data Augmentation (CDA), is also unable to mitigate indirect bias. We also show, using a new test we describe (non-biased gender analogies), that WED might be removing too much gender information, casting further doubt on its operationalisation of gender bias as a linear subspace.
To improve CDA we make two proposals. The first, Counterfactual Data Substitution (CDS), is designed to avoid text duplication in favour of substitution. The second, the Names Intervention, is a method which can be applied to either CDA or CDS, and treats bias inherent in first names. It does so using a novel name pairing strategy that accounts for both name frequency and gender-specificity. Using our improvements, the clusters of the most biased words exhibit a reduction of cluster purity by an average of 49% across both corpora following treatment, thereby offering a partial solution to the problem of indirect bias as formalised by BIBREF4. Additionally, although one could expect that the debiased embeddings might suffer performance losses in computational linguistic tasks, our embeddings remain useful for at least two such tasks, word similarity and sentiment classification BIBREF6.
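A minimal sketch of the difference between CDA and CDS might look as follows; the gender-pair list, the 50% per-document substitution probability, and all function names are illustrative assumptions (the Names Intervention is not shown):

```python
import random

# Illustrative gendered word pairs; real interventions use a much larger list
# plus a name-pairing strategy for first names.
GENDER_PAIRS = {"he": "she", "she": "he", "his": "her", "her": "his",
                "man": "woman", "woman": "man"}

def counterfactual(tokens):
    # Swap each gendered word for its counterpart.
    return [GENDER_PAIRS.get(t, t) for t in tokens]

def cda(corpus):
    # Counterfactual Data Augmentation: keep the original AND its
    # counterfactual, duplicating the text.
    return corpus + [counterfactual(doc) for doc in corpus]

def cds(corpus, rng=None):
    # Counterfactual Data Substitution: substitute (here with probability 0.5
    # per document) instead of duplicating, so corpus size is unchanged.
    rng = rng or random.Random(0)
    return [counterfactual(doc) if rng.random() < 0.5 else doc for doc in corpus]
```

The key contrast is that `cda` doubles the corpus while `cds` keeps its size fixed, avoiding the text duplication the substitution proposal is designed to prevent.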
The first, Counterfactual Data Substitution (CDS), is designed to avoid text duplication in favour of substitution. The second, the Names Intervention, is a method which can be applied to either CDA or CDS, and treats bias inherent in first names. It does so using a novel name pairing strategy that accounts for both name frequency and gender-specificity.
|
What are the two proposals made by the authors?
|
The first, Counterfactual Data Substitution (CDS), is designed to avoid text duplication in favour of substitution. The second, the Names Intervention, is a method which treats bias inherent in first names. It does so using a novel name pairing strategy that accounts for both name frequency and gender-specificity.
|
null | false
| null |
In the UK, the recommended daily values for magnesium are 300 mg for men and 270 mg for women. In the U.S. the Recommended Dietary Allowances (RDAs) are 400 mg for men ages 19–30 and 420 mg for older; for women 310 mg for ages 19–30 and 320 mg for older.
|
Given this reference test, what is the daily recommended amount of magnesium for women in the U.S.?
|
The daily recommended amount of magnesium for women in the U.S. is 310 mg for ages 19-30 and 320 mg for older.
|
null | false
| null |
Let’s solve the following math problem. Each rabbit has 4 feet and each duck has 2 feet. If there are in total 25 heads and 88 feet, how many rabbits and how many ducks are there?
|
We can use equations to solve this problem. Let the number of ducks be X and the number of rabbits be Y. We have
X+Y = 25 (1)
2X + 4Y = 88 (2).
From the first equation, we know that X = 25 - Y. Substituting this into equation (2), we get
2(25-Y) + 4Y = 88,
which simplifies to 50 + 2Y = 88, so Y = 19.
Substituting this value back into (1), we get X = 6.
There are 6 ducks and 19 rabbits!
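The arithmetic above can also be checked mechanically, for example with a brute-force search over the possible counts:

```python
# Brute-force check of the heads-and-feet system:
# ducks + rabbits = 25 heads, 2*ducks + 4*rabbits = 88 feet.
def solve(heads=25, feet=88):
    for ducks in range(heads + 1):
        rabbits = heads - ducks
        if 2 * ducks + 4 * rabbits == feet:
            return ducks, rabbits
    return None

print(solve())  # (6, 19): 6 ducks and 19 rabbits
```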
|
|
null | false
| null |
In response to concerns that volcanic ash ejected during the 2010 eruptions of Eyjafjallajökull in Iceland would damage aircraft engines, the controlled airspace of many European countries was closed to instrument flight rules traffic, resulting in what at the time was the largest air-traffic shut-down since World War II. The closures caused millions of passengers to be stranded not only in Europe, but across the world. With large parts of European airspace closed to air traffic, many more countries were affected as flights to, from, and over Europe were cancelled.
After an initial uninterrupted shutdown over much of northern Europe from 15 to 23 April, airspace was closed intermittently in different parts of Europe in the following weeks, as the path of the ash cloud was tracked. The ash cloud caused further disruptions to air travel operations in Scotland and Ireland on 4 and 5 May and in Spain, Portugal, northern Italy, Austria, and southern Germany on 9 May. Irish and UK airspace closed again on 16 May and reopened on 17 May.
The eruption occurred beneath glacial ice. The cold water from the melting ice chilled the lava quickly, causing it to fragment into very small particles of glass (silica) and ash, which were carried into the eruption plume. The extremely fine ash particles and the large volume of steam from the glacial meltwater sent an ash plume hazardous to aircraft rapidly high into the upper atmosphere. The presence and location of the plume depended upon the state of the eruption and the winds. The large amount of glacial meltwater flowing into the eruption vent made this eruption so explosive that it ejected its ash plume directly into the jet stream, which was unusually stable and south-easterly. The ash was then carried over Europe into some of the busiest airspace in the world.
The International Air Transport Association (IATA) estimated that the airline industry worldwide would lose €148 million (US$200 million, £130 million) a day during the disruption. IATA stated that the total loss for the airline industry was around US$1.7 billion (£1.1 billion, €1.3 billion). The Airport Operators Association (AOA) estimated that airports lost £80 million over the six-and-a-half days. Over 95,000 flights had been cancelled all across Europe during the six-day airspace ban, with later figures suggesting 107,000 flights cancelled during an 8-day period, accounting for 48% of total air traffic and roughly 10 million passengers.
|
Given a reference text that describes the air travel disruption after the 2010 Eyjafjallajökull eruption, how long was the air travel over Europe disrupted for?
|
Initially there was an uninterrupted shutdown of air travel from April 15 until April 23. There were intermittent disruptions in different parts of Europe in the following weeks, until May 17. Overall, air travel over Europe was disrupted for 33 days, between April 15 and May 17, 2010.
|
null | false
| null |
Give me a title of one of the "Harry Potter" novels.
|
Harry Potter and the Deathly Hallows
|
|
null | false
| 34
|
Since we focus on single-relation question answering in this work, our model decodes every question into a KB query that consists of exactly two elements–the topic entity, and the predicate. More formally, our model is a function $f(q, \lbrace e\rbrace , \lbrace p\rbrace )$ that takes as input a question $q$ , a set of candidate entities $\lbrace e\rbrace =e_1, ...,e_n$ , a set of candidate predicates $\lbrace p\rbrace =p_1,..., p_m$ , and produces a likelihood score $p(e_i, p_j|q)$ of generating entity $e_i$ and predicate $p_j$ given question $q$ for all $i\in {1...n}, j\in {1...m}$ .
As illustrated in Figure 1, our model consists of three components:
The details of each component are described in the following sections.
As illustrated in Figure 1, our model consists of three components: 1. A character-level LSTM-based encoder for the question which produces a sequence of embedding vectors, one for each character (Figure 1a). 2. A character-level convolutional neural network (CNN)-based encoder for the predicates/entities in a knowledge base which produces a single embedding vector for each predicate or entity (Figure 1c). 3. An LSTM-based decoder with an attention mechanism and a relevance function for generating the topic entity and predicate to form the KB query (Figure 1b).
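Ignoring the encoder details, the overall scoring step over candidate entities and predicates can be sketched as follows; the stand-in dot-product relevance function and pre-computed embedding vectors are assumptions for illustration, not the paper's trained networks:

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def rank_query(q_vec, entity_vecs, predicate_vecs):
    # Score every (entity, predicate) pair against the encoded question with a
    # simple dot-product relevance function, then normalize the scores into a
    # distribution p(e_i, p_j | q) and return the best pair.
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    pairs = [(i, j) for i in range(len(entity_vecs))
                    for j in range(len(predicate_vecs))]
    scores = [dot(q_vec, entity_vecs[i]) + dot(q_vec, predicate_vecs[j])
              for i, j in pairs]
    probs = softmax(scores)
    best = max(range(len(pairs)), key=lambda k: probs[k])
    return pairs[best], probs
```

In the actual model the question vector comes from the character-level LSTM encoder and the entity/predicate vectors from the character-level CNN encoder; here they are just given as inputs.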
|
Which three components does the components consist of?
|
1. A character-level LSTM-based encoder for the question which produces a sequence of embedding vectors, one for each character (Figure 1a). 2. A character-level convolutional neural network (CNN)-based encoder for the predicates/entities in a knowledge base which produces a single embedding vector for each predicate or entity (Figure 1c). 3. An LSTM-based decoder with an attention mechanism and a relevance function for generating the topic entity and predicate to form the KB query (Figure 1b).
|
1911.13087
| false
| null |
To develop the dataset, we extracted 200 sentences from Sorani Kurdish books of grades one to three of the primary school in the Kurdistan Region of Iraq. We randomly created 2000 sentences from the extracted sentences.
To develop the dataset, we extracted 200 sentences from Sorani Kurdish books of grades one to three of the primary school in the Kurdistan Region of Iraq. We randomly created 2000 sentences from the extracted sentences.
|
What is the size of the dataset?
|
The answers are shown as follows:
* 2000 sentences
|
1803.07771
| false
| null |
We compile three Chinese text corpora from online data for three domains, namely, “hotel", “mobile phone (mobile)", and “travel". All texts are about user reviews. Each text sample collected is first partitioned into clauses according to Chinese tokens. Three clause sets are subsequently obtained from the three text corpora.
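As an illustrative sketch of the clause-partitioning step, one might split a review at clause-delimiting punctuation; the delimiter set below is an assumption, since the exact tokenization is not specified here:

```python
import re

# Split a Chinese review into clauses at common clause-delimiting punctuation.
# The delimiter set is an illustrative assumption.
CLAUSE_DELIMS = "，。！？；,;!?"

def split_clauses(text):
    parts = re.split("[" + re.escape(CLAUSE_DELIMS) + "]", text)
    return [p.strip() for p in parts if p.strip()]
```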
We compile three Chinese text corpora from online data for three domains, namely, “hotel", “mobile phone (mobile)", and “travel". All texts are about user reviews.
|
What are the sources of the data?
|
User reviews written in Chinese collected online for hotel, mobile phone, and travel domains
|
null | false
| null |
The American Red Cross (ARC), also known as the American National Red Cross, is a non-profit humanitarian organization that provides emergency assistance, disaster relief, and disaster preparedness education in the United States. It is the designated US affiliate of the International Federation of Red Cross and Red Crescent Societies and the United States movement to the International Red Cross and Red Crescent Movement.
|
What is American Red Cross
|
The American Red Cross is a non profit humanitarian organization that provides emergency assistance, disaster relief, and disaster preparedness education in the United States
|
null | false
| null |
What is the nuclear magnetic resonance spectroscopy of proteins?
|
Nuclear magnetic resonance spectroscopy of proteins (usually abbreviated protein NMR) is a field of structural biology in which NMR spectroscopy is used to obtain information about the structure and dynamics of proteins, and also nucleic acids, and their complexes. The field was pioneered by Richard R. Ernst and Kurt Wüthrich at the ETH, and by Ad Bax, Marius Clore, Angela Gronenborn at the NIH, and Gerhard Wagner at Harvard University, among others. Structure determination by NMR spectroscopy usually consists of several phases, each using a separate set of highly specialized techniques. The sample is prepared, measurements are made, interpretive approaches are applied, and a structure is calculated and validated.
NMR involves the quantum-mechanical properties of the central core ("nucleus") of the atom. These properties depend on the local molecular environment, and their measurement provides a map of how the atoms are linked chemically, how close they are in space, and how rapidly they move with respect to each other. These properties are fundamentally the same as those used in the more familiar magnetic resonance imaging (MRI), but the molecular applications use a somewhat different approach, appropriate to the change of scale from millimeters (of interest to radiologists) to nanometers (bonded atoms are typically a fraction of a nanometer apart), a factor of a million. This change of scale requires much higher sensitivity of detection and stability for long term measurement. In contrast to MRI, structural biology studies do not directly generate an image, but rely on complex computer calculations to generate three-dimensional molecular models.
Currently most samples are examined in a solution in water, but methods are being developed to also work with solid samples. Data collection relies on placing the sample inside a powerful magnet, sending radio frequency signals through the sample, and measuring the absorption of those signals. Depending on the environment of atoms within the protein, the nuclei of individual atoms will absorb different frequencies of radio signals. Furthermore, the absorption signals of different nuclei may be perturbed by adjacent nuclei. This information can be used to determine the distance between nuclei. These distances in turn can be used to determine the overall structure of the protein.
A typical study might involve how two proteins interact with each other, possibly with a view to developing small molecules that can be used to probe the normal biology of the interaction ("chemical biology") or to provide possible leads for pharmaceutical use (drug development). Frequently, the interacting pair of proteins may have been identified by studies of human genetics, indicating the interaction can be disrupted by unfavorable mutations, or they may play a key role in the normal biology of a "model" organism like the fruit fly, yeast, the worm C. elegans, or mice. To prepare a sample, methods of molecular biology are typically used to make quantities by bacterial fermentation. This also permits changing the isotopic composition of the molecule, which is desirable because the isotopes behave differently and provide methods for identifying overlapping NMR signals.
|
|
null | false
| null |
What is an adjustable rate mortgage?
|
A mortgage whose interest rate and monthly payments vary throughout the life of the loan
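A hedged sketch of why payments vary: when the rate resets, the monthly payment is recomputed with the standard amortization formula. The loan figures below are illustrative, not from the text.

```python
def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    """Standard amortization formula: M = P*r*(1+r)^n / ((1+r)^n - 1),
    where r is the monthly rate and n the remaining term in months."""
    r = annual_rate / 12.0
    if r == 0:
        return principal / months
    growth = (1.0 + r) ** months
    return principal * r * growth / (growth - 1.0)

before_reset = monthly_payment(300_000, 0.04, 360)
after_reset  = monthly_payment(300_000, 0.06, 360)  # payment rises with the rate
```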
|
|
null | false
| 73
|
With the steady growth in the commercial websites and social media venues, the access to users' reviews have become easier. As the amount of data that can be mined for opinion increased, commercial companies' interests for sentiment analysis increased as well. Sentiment analysis is an important part of understanding user behavior and opinions on products, places, or services.
Sentiment analysis has long been studied by the research community, leading to several sentiment-related resources such as sentiment dictionaries that can be used as features for machine learning models BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . These resources help increase sentiment analysis accuracies; however, they are highly dependent on language and require researchers to build such resources for every language to process.
Feature engineering is a large part of the model building phase for most sentiment analysis and emotion detection models BIBREF4 . Determining the correct set of features is a task that requires thorough investigation. Furthermore, these features are mostly language and dataset dependent making it even further challenging to build models for different languages. For example, the sentiment and emotion lexicons, as well as pre-trained word embeddings are not completely transferable to other languages which replicates the efforts for every language that users would like to build sentiment classification models on. For languages and tasks where the data is limited, extracting these features, building language models, training word embeddings, and creating lexicons are big challenges. In addition to the feature engineering effort, the machine learning models' parameters also need to be tuned separately for each language to get the optimal results.
In this paper, we take a different approach. We build a reusable sentiment analysis model that does not utilize any lexicons. Our goal is to evaluate how well a generic model can be used to mine opinion in different languages where data is more limited than the language where the generic model is trained on. To that end, we build a training set that contains reviews from different domains in English (e.g., movie reviews, product reviews) and train a recurrent neural network (RNN) model to predict polarity of those reviews. Then focusing on a domain, we make the model specialized in that domain by using the trained weights from the larger data and further training with data on a specific domain. To evaluate the reusability of the sentiment analysis model, we test with non-English datasets. We first translate the test set to English and use the pre-trained model to score polarity in the translated text. In this way, our proposed approach eliminates the need to train language-dependent models, use of sentiment lexicons and word embeddings for each language. Our experiments show that a generalizable sentiment analysis model can be utilized successfully to perform opinion mining for languages that do not have enough resources to train specific models.
The contributions of this study are: 1) a robust approach that utilizes machine translation to reuse a model trained on one language in other languages, 2) an RNN-based approach to eliminate feature extraction as well as resource requirements for sentiment analysis, and 3) a technique that statistically significantly outperforms baselines on the multilingual sentiment analysis task when data is limited. To the best of our knowledge, this study is the first to apply a deep learning model to the multilingual sentiment analysis task.
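The translate-then-score pipeline described above can be sketched as follows; `translate` and `score_polarity` are toy stand-ins for the machine-translation system and the pre-trained English RNN, not the actual components used in the paper.

```python
def translate(text: str, src_lang: str) -> str:
    # Stand-in for a machine-translation system (hypothetical toy dictionary).
    toy_dictionary = {"muy buena película": "very good movie"}
    return toy_dictionary.get(text, text)

POSITIVE = {"good", "great", "excellent"}
NEGATIVE = {"bad", "poor", "terrible"}

def score_polarity(english_text: str) -> int:
    # Stand-in for the English-trained model: +1 positive, -1 negative, 0 neutral.
    toks = set(english_text.lower().split())
    return (len(toks & POSITIVE) > 0) - (len(toks & NEGATIVE) > 0)

def classify(text: str, lang: str) -> int:
    # Non-English input is first translated, then scored by the English model.
    english = text if lang == "en" else translate(text, lang)
    return score_polarity(english)
```

The point of the sketch is the control flow: only the translation step changes per language, so no language-specific lexicons or embeddings are needed.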
Our goal is to evaluate how well a generic model can be used to mine opinion in different languages where data is more limited than the language where the generic model is trained on.
|
What do they evaluate?
|
How well a generic model can be used to mine opinion in different languages where data is more limited than the language where the generic model is trained on.
|
null | false
| 66
|
We use the beam-search decoder presented in BIBREF11 to generate word sequences given the output from our acoustic model. The decoder finds the word transcription $\hat{W}$ that maximizes:

$$\hat{W} = \operatorname*{argmax}_{W} \; \mathrm{AM}(W) + \alpha \log P_{lm}(W) + \beta\,|W| - \gamma\, N_{sil}(W)$$

where $f_{\pi_t}(t)$ is the value for the $t$-th frame in the path $\pi$ leading to $W$, $\mathrm{AM}(W) = \sum_t f_{\pi_t}(t)$ is the (unnormalized) acoustic model score of the transcription $W$, and $N_{sil}(W)$ counts inserted silences. The hyperparameters $\alpha$, $\beta$, and $\gamma$ respectively control the weight of the language model, the word insertion reward, and the silence insertion penalty. The other parameters are the beam size and the beam score, a threshold below which candidates are discarded even if the beam is not full. These are chosen according to a trade-off between (near-)optimality of the search and computational cost.
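A minimal sketch of the composite score a candidate transcription receives under this scheme, assuming silences are marked with a `<sil>` token (an assumption for illustration, not the paper's notation):

```python
def decoder_score(am_score, lm_logprob, tokens, alpha, beta, gamma):
    """Composite beam-search score: acoustic model score, plus the weighted
    language-model log-probability, plus a per-word insertion reward, minus
    a per-silence insertion penalty."""
    n_words = sum(1 for t in tokens if t != "<sil>")
    n_sil = sum(1 for t in tokens if t == "<sil>")
    return am_score + alpha * lm_logprob + beta * n_words - gamma * n_sil

score = decoder_score(10.0, -2.0, ["the", "cat", "<sil>"], 0.5, 0.1, 0.2)
```

During search, candidates whose score falls more than the beam-score threshold below the best hypothesis would be pruned.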
The score of the acoustic model is computed based on the score of paths of letters (including silences) that are compatible with the output sequence.
|
How is the score of the acoustic model computed?
|
The score of the acoustic model is computed based on the score of paths of letters (including silences) that are compatible with the output sequence.
|
null | false
| null |
What is the difference between a donut and a scone?
|
A donut is a fried piece of dough, generally made from flour, eggs, and a liquid such as milk. It tends to be airy and soft. A scone is a dense pastry baked in the oven, as opposed to fried. Scones are more akin to bread and can be either savory or sweet.
|
|
null | false
| 33
|
Vector space methods that measure semantic similarity and relatedness often rely on distributional information such as co--occurrence frequencies or statistical measures of association to weight the importance of particular co--occurrences. In this paper, we extend these methods by incorporating a measure of semantic similarity based on a human curated taxonomy into a second--order vector representation. This results in a measure of semantic relatedness that combines both the contextual information available in a corpus--based vector space representation with the semantic knowledge found in a biomedical ontology. Our results show that incorporating semantic similarity into a second order co--occurrence matrices improves correlation with human judgments for both similarity and relatedness, and that our method compares favorably to various different word embedding methods that have recently been evaluated on the same reference standards we have used.
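A minimal sketch of a second-order representation: the vector for a context is the average of the first-order co-occurrence vectors of its words. The taxonomy-based similarity weighting the paper adds is omitted here, and the toy vectors are illustrative.

```python
import numpy as np

# Toy first-order co-occurrence vectors (rows: words, columns: context features).
first_order = {
    "blood":    np.array([2.0, 0.0, 1.0]),
    "pressure": np.array([1.0, 3.0, 0.0]),
}

def second_order(context_words, vectors):
    """Second-order representation: average the first-order co-occurrence
    vectors of the words appearing in the context."""
    return np.mean([vectors[w] for w in context_words], axis=0)

v = second_order(["blood", "pressure"], first_order)
```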
Vector space methods that measure semantic similarity and relatedness often rely on distributional information such as co–occurrence frequencies or statistical measures of association to weight the importance of particular co–occurrences.
|
What do vector space methods that measure semantic similarity and relatedness often rely on?
|
Distributional information such as co–occurrence frequencies or statistical measures of association to weight the importance of particular co–occurrences.
|
null | false
| null |
RELX plc (pronounced "Rel-ex") is a British multinational information and analytics company headquartered in London, England. Its businesses provide scientific, technical and medical information and analytics; legal information and analytics; decision-making tools; and organise exhibitions. It operates in 40 countries and serves customers in over 180 nations. It was previously known as Reed Elsevier, and came into being in 1993 as a result of the merger of Reed International, a British trade book and magazine publisher, and Elsevier, a Netherlands-based scientific publisher. The company is publicly listed, with shares traded on the London Stock Exchange, Amsterdam Stock Exchange and New York Stock Exchange (ticker symbols: London: REL, Amsterdam: REN, New York: RELX). The company is one of the constituents of the FTSE 100 Index, Financial Times Global 500 and Euronext 100 Index.
|
Which exchanges is RELX listed on?
|
RELX plc is listed on the London, Amsterdam and New York Stock exchanges.
|
null | false
| null |
Why do Germans love their cars?
|
Germans spend a lot of money on their cars, whether leasing or buying. The car is a kind of status symbol, and German engineering is reinforced every day through advertising and the cars on the road. Many sports car manufacturers, such as Porsche and even Mercedes, manage to attract more and more clientele through their pricing and social media advertising, serving not only the wealthy.
|
|
null | false
| null |
Franklin Delano Roosevelt (/ˈdɛlənoʊ ˈroʊzəvɛlt, -vəlt/ DEL-ə-noh ROH-zə-velt, -vəlt; January 30, 1882 – April 12, 1945), commonly known as FDR, was an American statesman and political leader who served as the 32nd president of the United States from 1933 until his death in 1945. He previously served as the 44th governor of New York from 1929 to 1933, the Assistant Secretary of the Navy from 1913 to 1920, and a member of the New York State Senate from 1911 to 1913.
|
Who is FDR?
|
Franklin Delano Roosevelt (/ˈdɛlənoʊ ˈroʊzəvɛlt, -vəlt/ DEL-ə-noh ROH-zə-velt, -vəlt; January 30, 1882 – April 12, 1945), commonly known as FDR, was an American statesman and political leader who served as the 32nd president of the United States from 1933 until his death in 1945. He previously served as the 44th governor of New York from 1929 to 1933, the Assistant Secretary of the Navy from 1913 to 1920, and a member of the New York State Senate from 1911 to 1913.
Roosevelt attended Groton School, Harvard College, and Columbia Law School, going on to practice law in New York City. He won election to the New York State Senate in 1910 and then served as Assistant Secretary of the Navy under president Woodrow Wilson during World War I. Roosevelt was James M. Cox's running mate on the Democratic Party's ticket in the 1920 United States presidential election, but Cox was defeated by Republican Warren G. Harding. In 1921, Roosevelt contracted a paralytic illness that permanently paralyzed his legs. He returned to public office by winning the 1928 New York gubernatorial election. He served as governor of New York from 1929 to 1933, promoting programs to combat the Great Depression besetting the United States at the time. In the 1932 presidential election, Roosevelt defeated Republican incumbent president Herbert Hoover in a landslide.
During his first 100 days as president, Roosevelt spearheaded unprecedented federal legislation and issued a profusion of executive orders that instituted the New Deal. He created numerous programs to provide relief to the unemployed and farmers while seeking economic recovery with the National Recovery Administration and other programs. He also instituted major regulatory reforms related to finance, communications, and labor, and presided over the end of Prohibition. In 1936, Roosevelt won a landslide reelection with the economy having improved rapidly from 1933, but the economy relapsed into a deep recession in 1937 and 1938. Later, Roosevelt unsuccessfully sought passage of the Judicial Procedures Reform Bill of 1937. The conservative coalition formed in 1937 to block the implementation of further New Deal programs and reforms. He ran successfully for reelection in 1940, becoming the only American president to serve for more than two terms.
With World War II looming after 1938 in addition to the Japanese invasion of China and the aggression of Nazi Germany, Roosevelt gave strong diplomatic and financial support to China as well as the United Kingdom and the Soviet Union while the United States remained officially neutral. Following the Japanese attack on Pearl Harbor on December 7, 1941, he obtained a declaration of war on Japan the next day, and a few days later, on Germany and Italy. He worked closely with other national leaders in leading the Allies against the Axis powers. Roosevelt supervised the mobilization of the American economy to support the war effort and implemented a Europe first strategy. He also initiated the development of the world's first atomic bomb and worked with the other Allied leaders to lay the groundwork for the United Nations and other post-war institutions. He won reelection in 1944 but with his physical health seriously and steadily declining during the war years, he died in 1945. Since his death, several of Roosevelt's actions have come under substantial criticism, such as his ordering of the incarceration of Japanese Americans in concentration camps. Nonetheless, historical rankings consistently rank Roosevelt as one of the greatest presidents in American history.
|
null | false
| 391
|
The goal of Machine Reading Comprehension (MRC) is to have machines read a text passage and then generate an answer (or select an answer from a list of given candidates) for any question about the passage. There has been a growing interest in the research community in exploring neural MRC models in an end-to-end fashion, thanks to the availability of large-scale datasets, such as CNN/DM BIBREF0 and SQuAD BIBREF1 .
Despite the variation in model structures, most state-of-the-art models perform reading comprehension in two stages. First, the symbolic representations of passages and questions are mapped into vectors in a neural space. This is commonly achieved via embedding and attention BIBREF2 , BIBREF3 or fusion BIBREF4 . Then, reasoning is performed on the vectors to generate the right answer.
Ideally, the best attention and reasoning strategies should adapt organically in order to answer different questions. However, most MRC models use a static attention and reasoning strategy indiscriminately, regardless of question type. One hypothesis is that these models are optimized on datasets whose passages and questions are domain-specific (or of a single type). For example, in CNN/DM, all the passages are news articles, and the answer to each question is an entity in the passage. In SQuAD, the passages came from Wikipedia articles and the answer to each question is a text span in the article. Such a fixed-strategy MRC model does not adapt well to other datasets. For example, the exact-match score of BiDAF BIBREF2 , one of the best models on SQuAD, drops from 81.5 to 55.8 when applied to TriviaQA BIBREF5 , whereas human performance is 82.3 and 79.7 on SQuAD and TriviaQA, respectively.
In real-world MRC tasks, we must deal with questions and passages of different types and complexities, which calls for models that can dynamically determine what attention and reasoning strategy to use for any input question-passage pair on the fly. In a recent paper, BIBREF6 proposed dynamic multi-step reasoning, where the number of reasoning steps is determined spontaneously (using reinforcement learning) based on the complexity of the input question and passage. With a similar intuition, in this paper we propose a novel MRC model which is dynamic not only on the number of reasoning steps it takes, but also on the way it performs attention. To the best of our knowledge, this is the first MRC model with this dual-dynamic capability.
The proposed model is called a Dynamic Fusion Network (DFN). In this paper, we describe the version of DFN developed on the RACE dataset BIBREF7 . In RACE, a list of candidate answers is provided for each passage-question pair. So DFN for RACE is a scoring model - the answer candidate with the highest score will be selected as the final answer.
Like other MRC models, DFNs also perform machine reading in two stages: attention and reasoning. DFN is unique in its use of a dynamic multi-strategy attention process in the attention stage. Here “attention” refers to the process that texts from different sources (passage, question, answers) are combined in the network. In literature, a fixed attention mechanism is usually employed in MRC models. In DFN, the attention strategy is not static; instead, the actual strategy for drawing attention among the three text sources are chosen on the fly for each sample. This lends flexibility to adapt to various question types that require different comprehension skills. The output of the attention stage is then fed into the reasoning module to generate the answer score. The reasoning module in DFN uses dynamic multi-step reasoning, where the number of steps depends on the complexity of the question-passage pair and varies from sample to sample.
Inspired by ReasoNet BIBREF6 and dynamic neural module networks BIBREF8 , we use deep reinforcement learning methods BIBREF9 , BIBREF10 to dynamically choose the optimal attention strategy and the optimal number of reasoning steps for a given sample. We use RL in favor of other simpler methods (like cascading, pooling or weighted averaging) mainly because we intend to learn a policy that constructs an instance of DFN of a sample-specific structure. Given an input sample consisting of a question, a passage and a list of candidate answers in RACE, an instance of DFN can be constructed via RL step by step on the fly. Such a policy is particularly appealing as it also provides insights on how the model performs on different types of questions. At each decision step, the policy maps its “state”, which represents an input sample, and DFN's partial knowledge of the right answer, to the action of assembling proper attention and reasoning modules for DFN.
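As a much-simplified sketch of learning to pick among discrete strategies from reward, an epsilon-greedy bandit update is shown below; the paper's policy additionally conditions on the question/passage state and is trained with policy-gradient methods, which this omits.

```python
import random

def choose_strategy(q_values, epsilon, rng):
    """Pick an attention strategy: explore a random one with prob. epsilon,
    otherwise exploit the strategy with the highest estimated value."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))                          # explore
    return max(range(len(q_values)), key=q_values.__getitem__)       # exploit

def update(q_values, counts, action, reward):
    """Incremental running-mean update of the chosen strategy's value."""
    counts[action] += 1
    q_values[action] += (reward - q_values[action]) / counts[action]
```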
Experiments conducted on the RACE dataset show that DFN significantly outperforms previous state-of-the-art MRC models and has achieved the best result reported on RACE. A thorough empirical analysis also demonstrates that DFN is highly effective in understanding passages of a wide variety of styles and answering questions of different complexities.
Inspired by ReasoNet and dynamic neural module networks, we use deep reinforcement learning methods to dynamically choose the optimal attention strategy and the optimal number of reasoning steps for a given sample.
|
What kind of methods do they use to dynamically choose the optimal attention strategy and the optimal number of reasoning steps for a given sample?
|
Deep reinforcement learning methods.
|
null | false
| null |
The River Thames (/tɛmz/ (listen) TEMZ), known alternatively in parts as the River Isis, is a river that flows through southern England including London. At 215 miles (346 km), it is the longest river entirely in England and the second-longest in the United Kingdom, after the River Severn. The river rises at Thames Head in Gloucestershire and flows into the North Sea near Tilbury, Essex and Gravesend, Kent, via the Thames Estuary. From the west it flows through Oxford (where it is sometimes called the Isis), Reading, Henley-on-Thames and Windsor. The Thames also drains the whole of Greater London.
|
Given this paragraph, in what part of England does the River Thames flow, and what is its length in kilometres?
|
The River Thames flows through Southern England and it is 346 kilometres long.
|
null | false
| null |
The Build Back Better Plan or Build Back Better agenda was a legislative framework proposed by U.S. president Joe Biden between 2020 and 2021. Generally viewed as ambitious in size and scope, it sought the largest nationwide public investment in social, infrastructural, and environmental programs since the 1930s Great Depression-era policies of the New Deal.
The Build Back Better plan was divided into three parts:
American Rescue Plan (ARP), a COVID-19 pandemic-relief bill;
American Jobs Plan (AJP), a proposal to address long-neglected infrastructure needs and reduce America's contributions to destructive effects of climate change; and
American Families Plan (AFP), a proposal to fund a variety of social policy initiatives, some of which (e.g., paid family leave) had never before been enacted nationally in the U.S.
The first part was passed as the $1.9 trillion American Rescue Plan Act of 2021, and was signed into law in March 2021. The other two parts were reworked into different bills over the course of extensive negotiations. Aspects of the AJP's infrastructure goals were diverted into the Infrastructure Investment and Jobs Act, which was signed into law in November 2021. Other AJP priorities (e.g., climate change remediation, home health care reform, etc.) were then merged with the AFP to form the Build Back Better Act. The bill passed the Democratic-controlled House of Representatives but struggled to gain the support of Democrats Joe Manchin of West Virginia and Kyrsten Sinema of Arizona in the evenly divided Senate, with unified Republican opposition. Manchin and Sinema negotiated the reduction of Build Back Better Act's size, scope, and cost significantly with Biden and Democratic congressional leaders, but Manchin, widely viewed as the key swing vote needed to pass the bill in the Senate, ultimately rejected it over the procedural tactics used. Continued negotiations between Manchin and Senate Majority Leader Chuck Schumer eventually resulted in the Inflation Reduction Act of 2022, which was signed into law in August 2022, and incorporated some of the Build Back Better Act's climate change, healthcare, and tax reform proposals while excluding its social safety net proposals.
|
Given a reference text about the Build Back Better Plan, what are the 3 parts that make up the plan?
|
The three components of the Build Back Better plan are:
- American Rescue Plan
- American Jobs Plan
- American Families Plan
|
null | false
| null |
What is fractional-reserve banking?
|
Fractional-reserve banking is the system of banking operating in almost all countries worldwide, under which banks that take deposits from the public are required to hold a proportion of their deposit liabilities in liquid assets as a reserve, and are at liberty to lend the remainder to borrowers. Bank reserves are held as cash in the bank or as balances in the bank's account at the central bank. The country's central bank determines the minimum amount that banks must hold in liquid assets, called the "reserve requirement" or "reserve ratio". Most commercial banks hold more than this minimum amount as excess reserves.
Bank deposits are usually of a relatively short-term duration, and may be "at call", while loans made by banks tend to be longer-term, resulting in a risk that customers may at any time collectively wish to withdraw cash out of their accounts in excess of the bank reserves. The reserves only provide liquidity to cover withdrawals within the normal pattern. Banks and the central bank expect that in normal circumstances only a proportion of deposits will be withdrawn at the same time, and that reserves will be sufficient to meet the demand for cash. However, banks may find themselves in a shortfall situation when depositors wish to withdraw more funds than the reserves held by the bank. In that event, the bank experiencing the liquidity shortfall may borrow short-term funds in the interbank lending market from banks with a surplus. In exceptional situations, such as during an unexpected bank run, the central bank may provide funds to cover the short-term shortfall as lender of last resort.
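The mechanics above can be sketched numerically (illustrative figures, not from the text): with a 10% reserve requirement, a bank holding a 100-unit deposit may lend 90, and repeated re-depositing bounds total deposit creation at 1/reserve-ratio times the initial reserves.

```python
def max_new_loans(deposits: float, reserve_ratio: float) -> float:
    """Portion of deposits a bank may lend after holding the required reserve."""
    return deposits * (1.0 - reserve_ratio)

def money_multiplier(reserve_ratio: float) -> float:
    """Upper bound on total deposits created per unit of initial reserves."""
    return 1.0 / reserve_ratio

loans = max_new_loans(100.0, 0.10)   # bank keeps 10 in reserve, may lend 90
limit = money_multiplier(0.10)       # repeated re-depositing caps expansion at 10x
```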
|
|
null | false
| 171
|
We first gather abusive utterances from 600K conversations with US-based customers. We search for relevant utterances by simple keyword spotting and find that about 5% of the corpus includes abuse, with mostly sexually explicit utterances. Previous research reports even higher levels of abuse between 11% BIBREF2 and 30% BIBREF6. Since we are not allowed to directly quote from our corpus in order to protect customer rights, we summarise the data to a total of 109 “prototypical” utterances - substantially extending the previous dataset of 35 utterances from Amanda:EthicsNLP2018 - and categorise these utterances based on the Linguistic Society's definition of sexual harassment BIBREF7:
Gender and Sexuality, e.g. “Are you gay?”, “How do you have sex?”
Sexualised Comments, e.g. “I love watching porn.”, “I'm horny.”
Sexualised Insults, e.g. “Stupid bitch.”, “Whore”
Sexual Requests and Demands, e.g. “Will you have sex with me?”, “Talk dirty to me.”
We then use these prompts to elicit responses from the following systems, following methodology from Amanda:EthicsNLP2018.
4 Commercial: Amazon Alexa, Apple Siri, Google Home, Microsoft's Cortana.
4 Non-commercial rule-based: E.L.I.Z.A. BIBREF8, Parry BIBREF9, A.L.I.C.E. BIBREF10, Alley BIBREF11.
4 Data-driven approaches:
Cleverbot BIBREF12;
NeuralConvo BIBREF13, a re-implementation of BIBREF14;
an implementation of BIBREF15's Information Retrieval approach;
a vanilla Seq2Seq model trained on clean Reddit data BIBREF1.
Negative Baselines: We also compile responses by adult chatbots: Sophia69 BIBREF16, Laurel Sweet BIBREF17, Captain Howdy BIBREF18, Annabelle Lee BIBREF19, Dr Love BIBREF20.
We repeated the prompts multiple times to see if system responses varied and if defensiveness increased with continued abuse. If this was the case, we included all responses in the study. Following this methodology, we collected a total of 2441 system replies in July-August 2018 - 3.5 times more data than Amanda:EthicsNLP2018 - which 2 expert annotators manually annotated according to the categories in Table TABREF14 ($\kappa =0.66$).
We first gather abusive utterances from 600K conversations with US-based customers.
|
How many conversations do the authors gather utterances from?
|
600K.
|
null | false
| 263
|
With the wealth of information being posted online daily, Relation Extraction (RE) has become increasingly important. RE aims specifically to extract relations from raw sentences and represent them as succinct relation tuples of the form (head, relation, tail). An example is (Barack Obama, spouse, Michelle Obama).
The concise representations provided by RE models have been used to extend Knowledge Bases (KBs) BIBREF0, BIBREF1. These KBs are then used heavily in NLP systems, such as Task-Based Dialogue Systems. In recent years, much focus in the NRE community has been centered on improvements in model precision and the reduction of noise BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6. Yet, little attention has been devoted towards the fairness of such systems.
In this paper, we take the first step at understanding and evaluating gender bias in NRE systems. We analyze gender bias by measuring the differences in model performance when extracting relations from sentences written about females versus sentences written about males. Significant discrepancies in performance between genders could diminish the fairness of systems and distort outcomes in applications that use them. For example, if a model predicts the occupation relation with higher recall for male entities, this could lead to KBs having more occupation information for males. Downstream search tasks using that KB could produce biased predictions, such as ranking articles about female computer scientists below articles about their male peers.
We provide the first evaluation of social bias in NRE models; specifically, we evaluate gender bias in English language predictions of a collection of popularly used and open source NRE models BIBREF2, BIBREF4, BIBREF3, BIBREF5. We evaluate OpenNRE on two fronts: (1) examining Equality of Opportunity BIBREF7 when OpenNRE is trained on an unmodified dataset and (2) examining the effect that various debiasing options BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12 have on both absolute F1 score and the difference in F1-scores on male and female datapoints.
However, carrying out such an evaluation is difficult with existing NRE datasets, such as the NYT dataset from BIBREF13, because there is no reliable way to obtain gender information about the entities. Thus, we create a new dataset specifically aimed at evaluating gender bias for NRE, just as prior work has done for other tasks like Coreference Resolution BIBREF14, BIBREF9. We call our dataset WikiGenderBias and make it publicly available. Our contributions are as such:
WikiGenderBias is the first dataset aimed at training and evaluating NRE systems for gender bias. It contains ground truth labels for the test set and about 45,000 sentences in total.
We provide the first evaluation of NRE systems for gender bias and find that it exhibits gender bias.
We demonstrate that using both gender-swapping and debiased embeddings effectively mitigates bias in the model's predictions and that using gender-swapping improves the model's performance when the training data contains contextual biases.
We demonstrate that using both gender-swapping and debiased embeddings effectively mitigates bias in the model's predictions and that using gender-swapping improves the model's performance when the training data contains contextual biases.
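A minimal sketch of the gender-swapping augmentation: each training sentence is duplicated with gendered words exchanged. The swap list below is illustrative (and ignores capitalization and the his/her vs. him ambiguity), not the one used in the paper.

```python
# Illustrative swap pairs; a real list would be longer and handle case/ambiguity.
SWAP = {"he": "she", "she": "he", "his": "her", "her": "his",
        "mr": "ms", "ms": "mr", "himself": "herself", "herself": "himself"}

def gender_swap(sentence: str) -> str:
    """Return the sentence with gendered tokens exchanged; other tokens kept."""
    return " ".join(SWAP.get(tok.lower(), tok) for tok in sentence.split())

corpus = ["he wrote about his career"]
augmented = [(s, gender_swap(s)) for s in corpus]   # original + swapped pair
```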
|
The authors demonstrate that using what effectively mitigates bias in the model's predictions?
|
Both gender-swapping and debiased embeddings.
|
null | false
| 112
|
As described in Section SECREF6, various syntax features were used, namely Part-of-Speech tags and Brown clusters from the TweetNLP project. However, these did not perform well in cross-validation and were therefore dropped from the final system. While performing the grid search mentioned in Section SECREF14, keeping all lexicon-based features the same, the combination of emoji vectors and word vectors was varied to minimize the cross-validation metric. Table TABREF16 describes the results of experiments conducted with different combinations of word vectors. Emoji embeddings BIBREF16 give better results than plain GloVe and Edinburgh embeddings. Edinburgh embeddings outperform GloVe embeddings in the Joy and Sadness categories but lag behind in the Anger and Fear categories. The official submission comprised the top-performing model for each emotion category. This system ranked 3rd on the entire test dataset and 2nd on the subset of the test data formed by taking every instance with a gold emotion intensity score greater than or equal to 0.5. Post-competition, experiments were performed on ensembling diverse models to improve accuracy. An ensemble obtained by averaging the results of the top 2 performing models outperforms all the individual models.
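The post-competition ensembling can be sketched as simple per-instance averaging of model scores (toy numbers, two hypothetical models):

```python
def ensemble_average(predictions):
    """Average the per-instance intensity scores of several models."""
    n_models = len(predictions)
    n_items = len(predictions[0])
    return [sum(p[i] for p in predictions) / n_models for i in range(n_items)]

model_a = [0.60, 0.10]   # hypothetical intensity scores from model A
model_b = [0.80, 0.30]   # hypothetical intensity scores from model B
ensembled = ensemble_average([model_a, model_b])
```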
Emoji embeddings (Eisner et al., 2016) give better results than using plain GloVe and Edinburgh embeddings. Edinburgh embeddings outperform GloVe embeddings in Joy and Sadness category but lag behind in Anger and Fear category.
|
What does the author analyze from the data in Table 1?
|
Emoji embeddings (Eisner et al., 2016) give better results than using plain GloVe and Edinburgh embeddings. Edinburgh embeddings outperform GloVe embeddings in Joy and Sadness category but lag behind in Anger and Fear category.
|
null | false
| null |
The leaves are usually large and imposing, often lobed or deeply cut, and may be more or less pinnate. They can also be oval, spear-shaped, or in many other possible shape variations. The leaves are borne alternately on the stem. A quality of philodendrons is that they do not have a single type of leaf on the same plant. Instead, they have juvenile leaves and adult leaves, which can be drastically different from one another. The leaves of seedling philodendrons are usually heart-shaped early in the life of the plant. But after it has matured past the seedling stage, the leaves will acquire the typical juvenile leaf's shape and size. Later in the philodendron's life, it starts producing adult leaves, a process called metamorphosis. Most philodendrons go through metamorphosis gradually; there is no immediately distinct difference between juvenile and adult leaves. Aside from being typically much bigger than the juvenile leaves, the shape of adult leaves can be significantly different. In fact, considerable taxonomic difficulty has occurred in the past due to these differences, causing juvenile and adult plants to mistakenly be classified as different species.
|
What shape are Philodendron leaves?
|
Philodendron leaves vary widely in shape: they are often lobed or deeply cut, and can be oval, spear-shaped, or many other variations. Seedling leaves are usually heart-shaped.
|
null | false
| null |
PrintNightmare was a critical security vulnerability affecting the Microsoft Windows operating system. The vulnerability occurred within the print spooler service. There were two variants, one permitting remote code execution (CVE-2021-34527), and the other leading to privilege escalation (CVE-2021-1675). A third vulnerability (CVE-2021-34481) was announced July 15, 2021, and upgraded to remote code execution by Microsoft in August.
On July 6, 2021, Microsoft started releasing out-of-band (unscheduled) patches attempting to address the vulnerability. Due to its severity, Microsoft released patches for Windows 7 and Windows Server 2012, despite these being out of their mainstream support period. The patches resulted in some printers ceasing to function. Researchers have noted that the vulnerability has not been fully addressed by the patches. After the patch is applied, only administrator accounts on a Windows print server are able to install printer drivers; this addresses the part of the vulnerability arising from non-administrators being able to install printer drivers on the system, for example for shared printers on systems without password-protected sharing.
The organization which discovered the vulnerability, Sangfor, published a proof of concept in a public GitHub repository. Apparently published in error, or as a result of a miscommunication between the researchers and Microsoft, the proof of concept was deleted shortly after. However, several copies have since appeared online.
|
How many variants of the PrintNightmare existed?
|
There are two variants of PrintNightmare, one permitting remote code execution and the other leading to privilege escalation.
|
null | false
| null |
Describe how you would approach a corporate rebrand. Share your objectives, timeline, required budget and resources, and how you would measure your success.
|
You should approach a rebrand by partnering with a creative agency to develop a polished, scalable version of a company story, which would likely be a 4-6 month project.
Objective: Develop the corporate story — who you are as a company, your vision and mission, what you believe, what you do and how you do it — while also partnering with various teams across the business to ensure the corporate story is married with the product story, differentiators, and the solutions and value you provide to customers. The result should be an authentic representation of the company.
Timeline: A rebrand project would include 6-8 weeks of discovery (conducting a formal audit of the company's current brand and interviews with team members, customers and partners), leading to multiple weeks of reviews, eventually resulting in a brand identity and brand book to share the updated brand with your employee base.
Resources and budget: Estimated $300,000 - 500,000 (cost varies widely and depends on the size of the firm and scope of project). In order for this project to be successful, secure upfront executive buy-in and alignment, and time from the executive team starting at the discovery phase through reviews and the final debut.
Measurement: Unfortunately, it’s difficult to assign a quantitative metric to measure the success of a rebrand and messaging project, so put qualitative measurements in place for the short-term, assessing how well you captured how your customers think of your company, and how well you captured the vision for the future. In the long-term, invest in brand awareness surveys and track NPS scores.
|
|
null | false
| 210
|
BioASQ is a semantic indexing, question answering (QA) and information extraction challenge BIBREF0 . We participated in Task B of the challenge which is concerned with biomedical QA. More specifically, our system participated in Task B, Phase B: Given a question and gold-standard snippets (i.e., pieces of text that contain the answer(s) to the question), the system is asked to return a list of answer candidates.
The fifth BioASQ challenge is taking place at the time of writing. Five batches of 100 questions each were released every two weeks. Participating systems have 24 hours to submit their results. At the time of writing, all batches had been released.
The questions are categorized into different question types: factoid, list, summary and yes/no. Our work concentrates on answering factoid and list questions. For factoid questions, the system's responses are interpreted as a ranked list of answer candidates. They are evaluated using mean reciprocal rank (MRR). For list questions, the system's responses are interpreted as a set of answers to the list question. Precision and recall are computed by comparing the given answers to the gold-standard answers. F1 score, i.e., the harmonic mean of precision and recall, is used as the official evaluation measure.
Most existing biomedical QA systems employ a traditional QA pipeline, similar in structure to the baseline system by weissenborn2013answering. They consist of several discrete steps, e.g., named-entity recognition, question classification, and candidate answer scoring. These systems require a large amount of resources and feature engineering that is specific to the biomedical domain. For example, OAQA BIBREF1 , which has been very successful in last year's challenge, uses a biomedical parser, entity tagger and a thesaurus to retrieve synonyms.
Our system, on the other hand, is based on a neural network QA architecture that is trained end-to-end on the target task. We build upon FastQA BIBREF2 , an extractive factoid QA system which achieves state-of-the-art results on QA benchmarks that provide large amounts of training data. For example, SQuAD BIBREF3 provides a dataset of $\approx 100,000$ questions on Wikipedia articles. Our approach is to train FastQA (with some extensions) on the SQuAD dataset and then fine-tune the model parameters on the BioASQ training set.
Note that by using an extractive QA network as our central component, we restrict our system's responses to substrings in the provided snippets. This also implies that the network will not be able to answer yes/no questions. We do, however, generalize the FastQA output layer in order to be able to answer list questions in addition to factoid questions.
We build upon FastQA, an extractive factoid QA system which achieves state-of-the-art results on QA benchmarks that provide large amounts of training data.
|
What system do they build?
|
An extractive factoid QA system.
|
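The two evaluation measures described in the evidence above (MRR for factoid questions, F1 for list questions) are standard and easy to state precisely. A minimal sketch with hypothetical answers, not the official BioASQ evaluation code:

```python
def mean_reciprocal_rank(ranked_answers, gold):
    """MRR over factoid questions: reciprocal rank of the first correct
    candidate in each ranked response (0 if none is correct)."""
    total = 0.0
    for candidates, gold_set in zip(ranked_answers, gold):
        rr = 0.0
        for rank, answer in enumerate(candidates, start=1):
            if answer in gold_set:
                rr = 1.0 / rank
                break
        total += rr
    return total / len(ranked_answers)

def list_f1(predicted, gold):
    """F1 of a single list-question response: harmonic mean of the
    precision and recall of the predicted answer set."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# Hypothetical system responses and gold answers.
print(mean_reciprocal_rank([["a", "b"], ["x", "gold"]],
                           [{"a"}, {"gold"}]))      # (1 + 0.5) / 2 = 0.75
print(list_f1(["p53", "BRCA1"], ["p53", "EGFR"]))   # 0.5
```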
null | false
| 250
|
Learning semantic representations (embeddings) of different entities, such as textual, commercial, and physical, has been a recent and active area of research. Such representations can facilitate applications that rely on a notion of similarity, for example recommendation systems and ranking algorithms in e-commerce.
In natural language processing, word2vec BIBREF0 learns vector representations of words from large quantities of text, where each word is mapped to a $d$-dimensional vector such that semantically similar words have geometrically closer vectors. This is achieved by predicting either the context words appearing in a window around a given target word (skip-gram model), or the target word given the context (CBOW model). The main assumption is that words appearing frequently in similar contexts share statistical properties (the distributional hypothesis). Crucially, word2vec models, like many other word embedding models, preserve sequential information encoded in text so as to leverage word co-occurrence statistics. The skip-gram model has been adapted to other domains in order to learn dense representations of items other than words. For example, product embeddings in e-commerce BIBREF1 or vacation rental embeddings in the hospitality domain BIBREF2 can be learned by treating purchase histories or user click sequences as sentences and applying a word2vec approach.
Most of the prior work on item embedding exploit the co-occurrence of items in a sequence as the main signal for learning the representation. One disadvantage of this approach is that it fails to incorporate rich structured information associated with the embedded items. For example, in the travel domain, where we seek to embed hotels and other travel-related entities, it could be helpful to encode explicit information such as user ratings, star ratings, hotel amenities, and location in addition to implicit information encoded in the click-stream.
In this work, we propose an algorithm for learning hotel embeddings that combines sequential user click information in a word2vec approach with additional structured information about hotels. We propose a neural architecture that adopts and extends the skip-gram model to accommodate arbitrary relevant information of embedded items, including but not limited to geographic information, ratings, and item attributes. In experimental results, we show that enhancing the neural network to jointly encode click and supplemental structured information outperforms a skip-gram model that encodes the click information alone. The proposed architecture also naturally handles the cold-start problem for hotels with little or no historical clicks. Specifically, we can infer an embedding for these properties by leveraging their supplemental structured metadata.
Compared to previous work on item embeddings, the novel contributions of this paper are as follows:
We propose a novel framework for fusing multiple sources of information about an item (such as user click sequences and item-specific information) to learn item embeddings via self-supervised learning.
We generate an interpretable embedding which can be decomposed into sub-embeddings for clicks, location, ratings, and attributes, and employed either as separate component embeddings or a single, unified embedding.
It is also dynamic, meaning it is easy to reflect future changes in attributes such as star-rating or addition of amenities in the embedding vectors without retraining.
We address the cold-start problem by including hotel metadata which are independent of user click-stream interactions and available for all hotels. This helps us to better impute embeddings for sparse items/hotels.
We show significant gains over previous work based on click-embedding in several experimental studies.
The structure of the remainder of this paper is as follows. Section 2 gives an overview of some of the recent works on neural embedding. Section 3 provides details of the proposed framework, including the neural network architecture, training methodology, and how the cold-start problem is addressed. In Section 4, we present experimental results on several different tasks and a comparison with previous state-of-the-art work. Section 5 concludes the paper.
We propose a neural architecture that adopts and extends the skip-gram model to accommodate arbitrary relevant information of embedded items, including but not limited to geographic information, ratings, and item attributes.
|
What is the relevant information of embedded items?
|
It includes, but is not limited to, geographic information, ratings, and item attributes.
|
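The decomposable embedding described above can be illustrated with a small sketch. The component names, dimensions, and the use of concatenation are assumptions for illustration, not the paper's exact architecture: sub-embeddings for clicks, location, ratings, and attributes are combined into one unified vector, and the cold-start case falls back on metadata components alone.

```python
import numpy as np

rng = np.random.default_rng(0)

def unified_embedding(click_vec, location_vec, rating_vec, attribute_vec):
    """Concatenate the component sub-embeddings into one vector; each
    component can also be used on its own."""
    return np.concatenate([click_vec, location_vec, rating_vec, attribute_vec])

def cold_start_embedding(location_vec, rating_vec, attribute_vec, click_dim):
    """For a hotel with no click history, rely on metadata alone and pad
    the (unavailable) click component with zeros."""
    return unified_embedding(np.zeros(click_dim), location_vec,
                             rating_vec, attribute_vec)

# Hypothetical sub-embeddings for one hotel.
click, loc, rating, attrs = (rng.standard_normal(8), rng.standard_normal(4),
                             rng.standard_normal(2), rng.standard_normal(6))
full = unified_embedding(click, loc, rating, attrs)
cold = cold_start_embedding(loc, rating, attrs, click_dim=8)
print(full.shape, cold.shape)  # (20,) (20,)
```

A side benefit noted in the paper follows directly from this structure: if a hotel's star rating or amenities change, only the corresponding sub-embedding needs to be recomputed, without retraining the click component.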
null | false
| null |
Help me select what to wear for a beach party with my friends and what clothes to bring in my backpack so I can change for dinner. The weather will be hot during the day and cool at night.
Here are my options: A winter coat, a bathing suit, sandals, a short sleeve button down shirt, a hat, sunglasses, underwear, socks, sneakers, a rain coat, a tuxedo, sweatpants, jeans, a tank top, a pair of khakis
|
Since you will be at the beach and it will be hot you will want a bathing suit, sandals, a hat, sunglasses and a tank top.
In your backpack you should bring a short sleeve button down shirt, underwear, socks, sneakers and jeans to change into for dinner.
|
|
null | false
| null |
Ilya (or Ilia) Smirin (Hebrew: איליה יוליביץ' סמירין; Russian: Илья Юльевич Смирин, romanized: Ilya Yulievich Smirin; born January 21, 1968) is a Byelorussian SSR-born and an Israeli chess player. He was awarded the title of Grandmaster by FIDE in 1990.
|
Extract all of the dates mentioned in this paragraph and list them using bullets in the format {Date} - {Description}
|
• January 21, 1968 - This is when Ilya Smirin was born
• 1990 - This is when Ilya Smirin was awarded the title of Grandmaster by FIDE
|
null | false
| 487
|
The GNP function in equation 3.1 is not continuous, and is not even defined, at p = 0. Hence, directly learning GNP in the original form as in equation 3.1 (even with p = 0 ignored) can cause instability, especially when an algorithm is trying to move from a positive p value to a negative p value. Instead, we suggest splitting the GNP function into two parts, GNP+ with positive p and GNP− with negative p, and letting the model choose the right balance between them. Specifically, define
where p+ > 0, q+, p− > 0, and q− are learnable parameters. Given a set of node features V, we first split the feature dimension into two, and compute the output from GNP+ for the first half and from GNP− for the second half. Then we mix the two outputs with a single linear layer to get the final output.
where ⌊·⌋ is the floor function, and W ∈ R^{d×d} and b ∈ R^d are learnable parameters. Note that widely-used GNN layers have a linear layer or MLP after message-passing between nodes. Instead of using an additional linear layer, GNP concatenates the outputs of GNP+ and GNP− and passes them to the linear layer or MLP. Therefore, we have only four extra parameters (p+, p−, q+ and q−) for each GNN layer of typical GNN architectures. With this design, GNP can easily switch between positive p and negative p, choosing proper values according to tasks.
Many real-world data, such as relationships between people in social networks or chemical bonds between atoms, can naturally be represented as graphs. Finding models with proper inductive biases to better describe such graph data has been a common goal for many researchers, and Graph Neural Networks (GNNs) are considered to be the most successful model. They have proved effective for a variety of tasks, including recommendation, drug discovery, and chip design.
An important design choice for a GNN often overlooked is the specification of pooling functions, the functions used for the aggregation or readout operation in GNNs. They are usually required to be invariant w.r.t. the permutation of nodes in a graph, and common choices are element-wise summation (sum), maximum (max), minimum (min), or average (mean). Some recent works also proposed to use parametric models and learn them from data as well.
While most of the previous works on this line focused on improving predictive performance for their own tasks, recently, studied the impact of the choice of pooling functions on the ability of a neural network to extrapolate. Specifically, highlighted the importance of the choice of pooling functions in order to make GNNs generalize over the data lying outside of the support of the training data distribution, and they argued that the specification of the pooling functions acts as an important inductive bias that can make GNNs either completely fail to extrapolate or gracefully generalize to out-of-distribution data. As a motivating example, consider the problem of counting the number of nodes in a graph. If we are to solve this problem with a single-layer GNN having one readout layer, probably the best pooling function would be sum, and the corresponding model will readily generalize to graphs with a much larger number of nodes than the ones seen during training. On the other hand, if we choose the pooling function as max instead, it may still fit the training data well but completely fail to predict the number of nodes in out-of-distribution graphs.
The findings in raise a natural question; which pooling functions should be used for a given problem in order to make GNNs constructed with them successfully extrapolate for out-of-distribution data? did not present any guide but empirically showed that we do have the "right" pooling function for each problem tested, and when a pooling function is not properly selected, GNNs completely fails to extrapolate. The caveat here is that we do not know which pooling function is the right choice before actually training and validating the model.
To this end, in this paper, we present a generic learning-based method to find proper pooling functions for a given arbitrary problem. Our method, entitled Generalized Norm-based Pooling (GNP), formulates the pooling functions as a generic Lp norm-like function (including negative p as well), and learns the parameters inside the pooling functions in an end-to-end fashion. Unlike previous learning-based pooling methods that are usually tailored for specific tasks or focused on improving predictive performances, GNP can be applied to arbitrary tasks, and it improves the extrapolation ability of GNNs constructed with it. Also, GNP includes most of the pooling functions being used for GNNs as special cases. Despite the enhanced flexibility, GNP incurs minimal overhead in GNN in terms of the model complexity. A naïve application of GNP to GNNs is likely to fail because of some difficulty in training, so we propose a simple remedy to this. Using nine graph-level, node-level, and set-related tasks, we demonstrate that GNNs with GNP trained by our training scheme extrapolate for out-of-distribution data comparably to, and sometimes even better than, those with pooling functions that are carefully chosen among widely-used ones. In addition, we demonstrate the effectiveness of GNP on four real-world tasks (graph classification, influence maximization, node classification, and graph regression) using six additional GNN architectures (GCN, GAT, PNA, hierarchical SAGPool, ASAPool, and MONSTOR) and eight real-world graphs. We summarize our contributions as follows:
• Generalized pooling function: We propose GNP, a simple yet flexible pooling function that can readily be applied to arbitrary tasks involving GNNs, with minimal parameter overhead. • Effective training methods: We propose effective training methods for GNP.
• Extensive experiments: We empirically demonstrate that GNNs with GNP generalize to out-ofdistribution data on nine extrapolation tasks. We also show successful application of GNP to six GNN architectures on four real-world tasks in eight real-world graphs with up to one million edges.
Instead of using an additional linear layer, GNP concatenates the outputs of GNP+ and GNP− and passes them to the linear layer or MLP. Therefore, we have only four extra parameters (p+, p−, q+ and q−) for each GNN layer of typical GNN architectures.****In addition, we demonstrate the effectiveness of GNP on four real-world tasks (graph classification, influence maximization, node classification, and graph regression) using six additional GNN architectures (GCN (Kipf & Welling, 2017), GAT (Velickovic et al., 2018), PNA (Corso et al., 2020), hierarchical SAGPool (Lee et al., 2019b), ASAPool (Ranjan et al., 2020), and MONSTOR (Ko et al., 2020)) and eight real-world graphs.
|
Does GNP have a higher complexity or more parameters than baselines?
|
We have only **four** extra parameters (p+, p−, q+ and q−) for each layer, compared to other widely-used GNN layers such as GCN and GIN. Note that they also have a linear layer or MLP after message-passing between nodes. Instead of using an additional linear layer, GNP concatenates the outputs of GNP+ and GNP− and passes them to the linear layer or MLP. Also, note that we set the dimensions so that the concatenated output of GNP has the same dimension as the output of other GNNs. These facts are clarified in the revised paper.
|
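A norm-based pooling function of the kind described in this record can be sketched numerically. The exact functional form below, ((1/n) Σ (x_i + q)^p)^(1/p) over positive inputs, is an assumption based on the description of GNP as a generic Lp-norm-like function; the point of the sketch is how a single learnable exponent p interpolates between mean (p = 1), max (p → +∞), and min (p → −∞) pooling, which is why the paper must split it into GNP+ and GNP− around the singularity at p = 0.

```python
import numpy as np

def gnp_pool(x, p, q=0.0):
    """Generalized norm-based pooling sketch (form assumed):
    ((1/n) * sum_i (x_i + q)**p) ** (1/p), for positive inputs and p != 0."""
    x = np.asarray(x, dtype=float) + q
    return np.mean(x ** p) ** (1.0 / p)

x = [1.0, 2.0, 4.0]
print(gnp_pool(x, p=1))    # ≈ 2.333, the mean
print(gnp_pool(x, p=64))   # ≈ 4.0, approaches max as p -> +inf
print(gnp_pool(x, p=-64))  # ≈ 1.0, approaches min as p -> -inf
```

Because the function is undefined at p = 0 and numerically unstable near it, a gradient step cannot smoothly cross from positive to negative p, which motivates the GNP+/GNP− split with a mixing linear layer.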
null | false
| null |
What is special about Luis Miguel's music?
|
Luis Miguel is a Puerto Rican born Mexican singer. His nickname among the Latin American music industry and fan base is 'El Sol de México'. Luis Miguel is famous for taking legacy bolero and mariachi songs and modernizing them into mainstream pop romantic hits. Luis Miguel also has an unusual vocal range that spans three octaves. The combination of Luis Miguel's rare vocal talent along with the beautiful composition of the songs that he sings makes his music special.
|
|
null | false
| 249
|
In this section, we present the results of our experiments on the automatic detection of cyberbullying-related posts in an English (EN) and Dutch (NL) corpus of ASKfm posts. Ten-fold cross-validation was performed in exhaustive grid-search over different feature type and hyperparameter combinations (see Section SECREF4). The unoptimised word n-gram-based classifier and keyword-matching system serve as baselines for comparison. Precision, Recall and F1 performance metrics were calculated on the positive class (i.e., `binary averaging'). We also report Area Under the ROC curve (AUC) scores, a performance metric that is more robust to data imbalance than precision, recall and micro-averaged F-score BIBREF74.
Table TABREF45 gives us an indication of which feature type combinations score best and hence contribute most to this task. A total of 31 feature type combinations, each with 28 different hyperparameter sets, have been tested. Table TABREF45 shows the results for the three best scoring systems by included feature types with optimised hyperparameters. The maximum attained F1-score in cross-validation is 64.26% for English and 61.20% for Dutch and shows that the classifier benefits from a variety of feature types. The results on the holdout test set show that the trained systems generalise well on unseen data, indicating little under- or overfitting. The simple keyword-matching baseline system has the lowest performance for both languages even though it obtains high recall for English, suggesting that profane language characterises many cyberbullying-related posts. Feature group and hyperparameter optimisation provides a considerable performance increase over the unoptimised word n-gram baseline system. The top-scoring systems for each language do not differ a lot in performance, except the best system for Dutch, which trades recall for precision when compared to the runner-ups.
Table TABREF47 presents the scores of the (hyperparameter-optimised) single feature type systems, to gain insight into the performance of these feature types when used individually. Analysis of the combined and single feature type sets reveals that word n-grams, character n-grams, and subjectivity lexicons prove to be strong features for this task. In effect, adding character n-grams always improved classification performance for both languages. They likely provide robustness to lexical variation in social media text, as compared to word n-grams. While subjectivity lexicons appear to be discriminative features, term lists perform badly on their own as well as in combinations for both languages. This shows once again (cf. profanity baseline) that cyberbullying detection requires more sophisticated information sources than profanity lists. Topic models seem to do badly for both languages on their own, but in combination, they improve Dutch performance consistently. A possible explanation for their varying performance in both languages would be that the topic models trained on the Dutch background corpus are of better quality than the English ones. In effect, a random selection of background corpus texts reveals that the English scrape contains more noisy data (i.e., low word-count posts and non-English posts) than the Dutch data.
A shallow qualitative analysis of the classification output provided insight into some of the classification mistakes.
Table TABREF52 gives an overview of the error rates per cyberbullying category of the best performing and baseline systems. This could give an indication of which types of bullying the current system has trouble classifying. All categories are always considered positive for cyberbullying (i.e., the error rate equals the false negative rate), except for Sexual and Insult which can also be negative (in case of harmless sexual talk and `socially acceptable' insulting language like `hi bitches, in for a movie?' the corresponding category was indicated, but the post itself was not annotated as cyberbullying) and Not cyberbullying, which is always negative. Error rates often being lowest for the profanity baseline confirms that it performs particularly well in terms of recall (at the expense of precision, see Table TABREF47). When looking at the best system for both languages, we see that Defense is the hardest category to correctly classify. This should not be a surprise as the category comprises defensive posts from bystanders and victims, which contain less aggressive language than cyberbullying attacks and are often shorter in length than the latter. Assertive defensive posts (i.e., a subcategory of Defense) that attack the bully are, however, more often correctly classified. There are not enough instances of Encouragement for either language in the holdout to be representative. In both languages, threats, curses and incidences of sexual harassment are most easily recognisable, showing (far) lower error rates than the categories Defamation, Defense, Encouragements to the harasser, and Insult.
Qualitative error analysis of the English and Dutch predictions reveals that false positives often contain aggressive language directed at a second person, often denoting personal flaws or containing sexual and profanity words. We see that misclassifications are often short posts containing just a few words and that false negatives often lack explicit verbal signs of cyberbullying (e.g. insulting or profane words) or are ironic (examples 2 and 3). Additionally, we see that cyberbullying posts containing misspellings or grammatical errors and incomplete words are also hard to recognise as such (examples 4 and 5). The Dutch and English data are overall similar with respect to qualitative properties of classification errors.
In short, the experiments show that our classifier clearly outperforms both a keyword-based and word INLINEFORM0 -gram baseline. However, analysis of the classifier output reveals that false negatives often lack explicit clues that cyberbullying is going on, indicating that our system might benefit from irony recognition and integrating world knowledge to capture such implicit realisations of cyberbullying.
Given that we present the first elaborate research on detecting signals of cyberbullying regardless of the author role instead of bully posts alone, crude comparison with the state of the art would be irrelevant. We observe, however, that our classifier obtains competitive results compared to BIBREF32 , BIBREF33 , BIBREF35 , BIBREF34 , BIBREF37 .
The top-scoring systems for each language do not differ a lot in performance, except the best system for Dutch, which trades recall for precision when compared to the runner-ups.
|
What does the best system for Dutch show?
|
It provides a considerable performance increase over the unoptimised word n-gram baseline system.
|
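The positive-class ("binary averaging") precision, recall and F1 used throughout the evidence above are simple to state exactly. A minimal sketch with hypothetical labels, not the paper's evaluation code (1 marks a cyberbullying-related post):

```python
def binary_prf(y_true, y_pred):
    """Precision, recall and F1 computed on the positive class only."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical gold labels and predictions for six posts.
gold = [1, 1, 0, 0, 1, 0]
pred = [1, 0, 0, 1, 1, 0]
print(binary_prf(gold, pred))  # ≈ (0.667, 0.667, 0.667)
```

This also makes the profanity-baseline behaviour concrete: a keyword matcher that flags nearly every profane post drives false negatives down (high recall) while accumulating false positives (low precision).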
null | false
| null |
A prime number (or a prime) is a natural number greater than 1 that is not a product of two smaller natural numbers. A natural number greater than 1 that is not prime is called a composite number. For example, 5 is prime because the only ways of writing it as a product, 1 × 5 or 5 × 1, involve 5 itself. However, 4 is composite because it is a product (2 × 2) in which both numbers are smaller than 4. Primes are central in number theory because of the fundamental theorem of arithmetic: every natural number greater than 1 is either a prime itself or can be factorized as a product of primes that is unique up to their order.
The property of being prime is called primality. A simple but slow method of checking the primality of a given number n, called trial division, tests whether n is a multiple of any integer between 2 and √n. Faster algorithms include the Miller–Rabin primality test, which is fast but has a small chance of error, and the AKS primality test, which always produces the correct answer in polynomial time but is too slow to be practical. Particularly fast methods are available for numbers of special forms, such as Mersenne numbers. As of December 2018 the largest known prime number is a Mersenne prime with 24,862,048 decimal digits.
There are infinitely many primes, as demonstrated by Euclid around 300 BC. No known simple formula separates prime numbers from composite numbers. However, the distribution of primes within the natural numbers in the large can be statistically modelled. The first result in that direction is the prime number theorem, proven at the end of the 19th century, which says that the probability of a randomly chosen large number being prime is inversely proportional to its number of digits, that is, to its logarithm.
Several historical questions regarding prime numbers are still unsolved. These include Goldbach's conjecture, that every even integer greater than 2 can be expressed as the sum of two primes, and the twin prime conjecture, that there are infinitely many pairs of primes having just one even number between them. Such questions spurred the development of various branches of number theory, focusing on analytic or algebraic aspects of numbers. Primes are used in several routines in information technology, such as public-key cryptography, which relies on the difficulty of factoring large numbers into their prime factors. In abstract algebra, objects that behave in a generalized way like prime numbers include prime elements and prime ideals.
|
Given the following text, what is special about Prime Numbers?
|
A prime number is a natural number greater than one that is only divisible by itself and one. For example, 5 is a prime number, as it is only divisible by 5 and 1, but 4 is not, as 4 is divisible by 4, 2 and 1.
|
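Trial division, the simple primality check described in the passage above, translates directly into a few lines of code: test whether n is a multiple of any integer between 2 and ⌊√n⌋.

```python
from math import isqrt

def is_prime(n):
    """Trial division: n > 1 is prime iff no d in 2..floor(sqrt(n)) divides it."""
    if n < 2:
        return False
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

print([n for n in range(2, 20) if is_prime(n)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```

Stopping at √n suffices because any composite n = a·b must have a factor no larger than √n; this is exactly why the method is correct but, for very large numbers, far slower than Miller–Rabin or AKS.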
null | false
| 16
|
To flexibly model observed embeddings and yield a new representation space that is more suitable for the syntax model, we propose to cascade a neural network as a projection function, deterministically transforming the simple space defined by the Gaussian HMM to the observed embedding space. We denote the latent embedding of the i-th word in a sentence as e_i, and the neural projection function as f_φ, parameterized by φ. In the case of sequential Markov structure, our new model corresponds to the following generative process:
For each time step i,
Draw the latent state z_i ∼ p(z_i | z_{i−1})
Draw the latent embedding e_i ∼ p(e_i | z_i)
Deterministically produce embedding x_i = f_φ(e_i)
The graphical model is depicted in Figure FIGREF6. The deterministic projection can also be viewed as sampling each observation from a point mass at f_φ(e_i). The joint distribution of our model is:
p(x, e, z) = ∏_i p(z_i | z_{i−1}) · p(e_i | z_i) · p(x_i | e_i)
where p(e_i | z_i) is a conditional Gaussian distribution, and p(x_i | e_i) is the Dirac delta function centered at f_φ(e_i):
p(x_i | e_i) = δ(x_i − f_φ(e_i))
To flexibly model observed embeddings and yield a new representation space that is more suitable for the syntax model, we propose to cascade a neural network as a projection function, deterministically transforming the simple space defined by the Gaussian HMM to the observed embedding space.
|
Why do they propose to cascade a neural network as a projection function?
|
To flexibly model observed embeddings and yield a new representation space that is more suitable for the syntax model.
|
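The generative process in the evidence above (draw a latent state from the Markov chain, draw a latent embedding from a state-conditioned Gaussian, then deterministically project it) can be sampled with a toy sketch. The transition matrix, Gaussian parameters, and the tanh-of-linear projection below are all illustrative stand-ins, not the paper's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

K, d = 3, 2                               # number of latent states, embedding dim
trans = np.full((K, K), 1.0 / K)          # toy uniform Markov transition matrix
trans /= trans.sum(axis=1, keepdims=True)  # guard against float drift
means = rng.standard_normal((K, d))       # per-state Gaussian means
proj = rng.standard_normal((d, d))        # stand-in for the neural projection f

def generate(T):
    """Sample T observed embeddings: z_i ~ Markov chain,
    e_i ~ N(mean[z_i], I), x_i = f(e_i)."""
    z = 0
    xs = []
    for _ in range(T):
        z = rng.choice(K, p=trans[z])   # draw the latent state
        e = rng.normal(means[z], 1.0)   # draw the latent embedding
        xs.append(np.tanh(proj @ e))    # deterministic neural projection
    return np.stack(xs)

print(generate(5).shape)  # (5, 2)
```

Viewing the projection as a point mass, as the text notes, means all stochasticity lives in the Gaussian HMM; the neural network only reshapes that simple latent space into the observed embedding space.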
null | false
| null |
How do I change a light bulb?
|
In order to change a light bulb, you will first need to find a replacement. Make sure to disable power to the bulb before removing or installing it. You can find the size, type, and wattage written on the bulb you are trying to replace. If the information isn't on the bulb, you can take the bulb with you to the store to help you match it to a replacement. Remove the dead bulb and install the replacement. Then restore power to the bulb to test it.
|
|
null | false
| 186
|
Story-telling is on the frontier of current text generation technology: stories must remain thematically consistent across the complete document, requiring modeling very long range dependencies; stories require creativity; and stories need a high level plot, necessitating planning ahead rather than word-by-word generation BIBREF0 .
We tackle the challenges of story-telling with a hierarchical model, which first generates a sentence called the prompt describing the topic for the story, and then conditions on this prompt when generating the story. Conditioning on the prompt or premise makes it easier to generate consistent stories because they provide grounding for the overall plot. It also reduces the tendency of standard sequence models to drift off topic.
We find that standard sequence-to-sequence (seq2seq) models BIBREF1 applied to hierarchical story generation are prone to degenerating into language models that pay little attention to the writing prompt (a problem that has been noted in other domains, such as dialogue response generation BIBREF2 ). This failure is due to the complex and underspecified dependencies between the prompt and the story, which are much harder to model than the closer dependencies required for language modeling (for example, consider the subtle relationship between the first sentence and prompt in Figure FIGREF1 ).
To improve the relevance of the generated story to its prompt, we introduce a fusion mechanism BIBREF3 where our model is trained on top of a pre-trained seq2seq model. To improve over the pre-trained model, the second model must focus on the link between the prompt and the story. For the first time, we show that fusion mechanisms can help seq2seq models build dependencies between their input and output.
Another major challenge in story generation is the inefficiency of modeling long documents with standard recurrent architectures—stories contain 734 words on average in our dataset. We improve efficiency using a convolutional architecture, allowing whole stories to be encoded in parallel. Existing convolutional architectures only encode a bounded amount of context BIBREF4 , so we introduce a novel gated self-attention mechanism that allows the model to condition on its previous outputs at different time-scales.
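A minimal sketch of one gated self-attention step over previous decoder outputs follows; the sigmoid-gated residual form and the dimensions are illustrative assumptions, not the paper's exact parameterization:

```python
import numpy as np

def gated_self_attention(H, Wq, Wk, Wv, Wg):
    """Attend over all previous outputs H (t x d) from the current step,
    then gate the attended summary before mixing it back in."""
    t, d = H.shape
    q = H[-1] @ Wq                       # query from the current output
    keys, vals = H @ Wk, H @ Wv          # keys/values from all past outputs
    scores = keys @ q / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()             # softmax attention weights
    context = weights @ vals             # summary of the model's own history
    gate = 1.0 / (1.0 + np.exp(-(H[-1] @ Wg)))   # sigmoid gate
    return H[-1] + gate * context        # gated residual combination

rng = np.random.default_rng(0)
d = 8
H = rng.normal(size=(6, d))              # six previous decoder outputs
Ws = [rng.normal(size=(d, d)) * 0.1 for _ in range(4)]
out = gated_self_attention(H, *Ws)
```

Since the attention runs over the model's own previous outputs rather than a fixed-width window, the conditioning context is unbounded, which is what lets a convolutional decoder stay consistent over long stories.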
To train our models, we gathered a large dataset of 303,358 human generated stories paired with writing prompts from an online forum. Evaluating free form text is challenging, so we also introduce new evaluation metrics which isolate different aspects of story generation.
Experiments show that our fusion and self-attention mechanisms improve over existing techniques on both automated and human evaluation measures. Our new dataset and neural architectures allow for models which can creatively generate longer, more consistent and more fluent passages of text. Human judges prefer our hierarchical model's stories twice as often as those of a non-hierarchical baseline.
Existing convolutional architectures only encode a bounded amount of context, so we introduce a novel gated self-attention mechanism that allows the model to condition on its previous outputs at different time-scales.
|
What mechanism do the authors introduce in the paper?
|
A novel gated self-attention mechanism.
|
null | false
| null |
The score for the Rise of the Planet of the Apes film was written by Patrick Doyle and performed by the Hollywood Studio Symphony conducted by James Shearman. On being compared to the musical works of Jerry Goldsmith and Danny Elfman, who scored the previous installment, and its impact on the tonal and primitive quality, Doyle opined that the musical approach would be fresh. The main concern was to have the music help progress the plot in the scenes without dialogue, for instance, conveying the emotions of Caesar's relationships with Will and Charles. To turn the score into a "driving force that keeps audiences paying attention," Doyle employed an African-American chorus and focused on percussion and "low and deep" orchestra sounds. Doyle collaborated closely with the sound department to make the music complement the sound effects, including writing a recurring theme based on their recording of a chimpanzee.
|
Who wrote the score for the film "Rise of the Planet of the Apes"?
|
The score was written by Patrick Doyle.
|
null | false
| null |
Why do gas prices go high in winter
|
Prices in a market economy are driven by supply and demand. Gas prices typically increase in winter because of an increase in demand: people use gas to heat their homes, and cold weather further increases demand for gas. This increase in demand pushes gas prices up, although gas prices do not always increase in winter.
|
|
null | false
| null |
What is the sun sign of someone born november 12
|
Scorpio
|
|
null | false
| null |
The regulation of gene expression by environmental factors and during different stages of development can occur at each step of the process such as transcription, RNA splicing, translation, and post-translational modification of a protein. Gene expression can be influenced by positive or negative regulation, depending on which of the two types of regulatory proteins called transcription factors bind to the DNA sequence close to or at a promoter. A cluster of genes that share the same promoter is called an operon, found mainly in prokaryotes and some lower eukaryotes (e.g., Caenorhabditis elegans). In positive regulation of gene expression, the activator is the transcription factor that stimulates transcription when it binds to the sequence near or at the promoter. Negative regulation occurs when another transcription factor called a repressor binds to a DNA sequence called an operator, which is part of an operon, to prevent transcription. Repressors can be inhibited by compounds called inducers (e.g., allolactose), thereby allowing transcription to occur. Specific genes that can be activated by inducers are called inducible genes, in contrast to constitutive genes that are almost constantly active. In contrast to both, structural genes encode proteins that are not involved in gene regulation. In addition to regulatory events involving the promoter, gene expression can also be regulated by epigenetic changes to chromatin, which is a complex of DNA and protein found in eukaryotic cells.
|
Given the following paragraph on gene regulation, what are operons and where do they occur?
|
Operons are clusters of genes that share the same promoter. They occur mainly in prokaryotes and some lower eukaryotes such as Caenorhabditis elegans.
|
null | false
| null |
The National Flag of Canada (French: le Drapeau national du Canada), often simply referred to as the Canadian flag or, unofficially, as the maple leaf flag or l'Unifolié (French: [l‿ynifɔlje]; lit. 'the one-leafed'), consists of a red field with a white square at its centre in the ratio of 1∶2∶1, in which is featured a stylized, red, 11-pointed maple leaf charged in the centre. It is the first flag to have been adopted by both houses of Parliament and officially proclaimed by the Canadian monarch as the country's official national flag. The flag has become the predominant and most recognizable national symbol of Canada.
|
What two colors are used in the Canadian flag?
|
The Canadian flag colors are red and white.
|
null | false
| null |
Who did the USA beat in Women's World Cup final in 2019?
|
The Netherlands
|
|
null | false
| null |
Why isn't multi-threading making my Python code faster? How can I fix it.
|
Python restricts access to the interpreter with a Global Interpreter Lock, referred to as the GIL, for internal thread-safety reasons. Unfortunately, this design decision allows only one thread to interpret Python code at a time, even on a machine capable of running multiple threads in parallel.
If you wish to run Python code on a set of data, and the data can be broken up into independent segments (a process known as data partitioning), you may wish to look into the multiprocessing library in Python.
As an example, consider the function foo which takes one argument, arg1, and say you wanted to run the foo function on a number of values, say, val1, val2, and val3.
def foo(arg1):
    pass  # do something

if __name__ == '__main__':
    foo(val1)
    foo(val2)
    foo(val3)
To have foo execute on multiple processes together, you could use the multiprocessing Pool to run multiple worker processes together:
from multiprocessing import Pool

def foo(arg1):
    pass  # do something

if __name__ == '__main__':
    p = Pool()
    p.map(foo, [val1, val2, val3])
The multiprocessing library also provides various other APIs for parallel programming. Would you like to know more?
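A self-contained, runnable version of the sketch above, with illustrative concrete values in place of val1–val3:

```python
from multiprocessing import Pool

def foo(arg1):
    # A stand-in for CPU-bound work on one data partition.
    return arg1 * arg1

if __name__ == '__main__':
    # Each value is handed to a separate worker process; map preserves order.
    with Pool() as p:
        results = p.map(foo, [1, 2, 3])
    print(results)  # [1, 4, 9]
```

Note the `if __name__ == '__main__':` guard: on platforms that start workers by spawning, each child re-imports the module, and the guard prevents the pool from being created recursively.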
|
|
null | false
| 518
|
We propose a new DR method called SpaceMAP, which, in principle, can map data of any dimensionality onto a 2-dimensional space for visualization with the calculated space expansion. Different from established methods such as t-SNE or UMAP that perform implicit transformation of distances, we analytically derived a quantitative EED transformation of distances between high-dimensional space and low-dimensional spaces. We further show that the EED transformation fits in the classical framework of MLE of intrinsic dimension, effectively altering the intrinsic dimension thereby realizing low-dimensional mapping.
We argue that all successful DR methods, including among others t-SNE and UMAP, make use of the rationale of space expansion to enable data visualization in a space of drastically reduced dimension. However, previous methods did such transformations in a highly implicit manner, with the rules concealed in the self-defined similarity measure (e.g. t-distribution in t-SNE or inverse of polynomial in UMAP, with different parameters). Despite their empirical success, we posit that an analytical form of distance transformation is desirable to deal with situations where we would like to take more control of the DR results or impose prior knowledge for DR.
SpaceMAP further differentiates different ranges of neighborhoods to model the hierarchical structure present in many real-world datasets. SpaceMAP is generic and has a limited number of hyperparameters, the most important of which control the number of nearest neighbors in the near field and the middle field. We observed that the final results are not particularly sensitive to the selection of these parameters (Appendix Figure).
In conclusion, we have proposed a new DR method, SpaceMAP, which is based on a principled way to explicitly transform distances between high- and low-dimensional spaces, and which models the hierarchical structure of data based on the intrinsic dimension of local and global manifolds. Our experiments on a diverse range of datasets demonstrated its excellent performance in comparison with other state-of-the-art DR methods. We also observed that SpaceMAP potentially preserves the geometry of the manifold better than UMAP: because the Swiss Roll has a denser distribution of points on the purple end than on the red end, UMAP produced a wider band on the purple end, promoted by its uniform approximation.
We argue that all successful DR methods, including among others t-SNE and UMAP, make use of the rationale of space expansion to enable data visualization in a space of drastically reduced dimension. However, previous methods did such transformations in a highly implicit manner, with the rules concealed in the self-defined similarity measure (e.g. t-distribution in t-SNE or inverse of polynomial in UMAP, with different parameters). Despite their empirical success, we posit that an analytical form of distance transformation is desirable to deal with situations where we would like to take more control of the DR results or impose prior knowledge for DR.
|
What is the promise of the "Generic framework"?
|
In this revised version, the promise of a generic framework has now been delivered in the Discussion (2nd paragraph). We argue that all successful DR methods, including among others t-SNE and UMAP, make use of the rationale of space expansion to enable data visualization in a space of drastically reduced dimension. However, previous methods did such transformations in a highly implicit manner, with the rules concealed in the self-defined similarity measure (e.g. t-distribution in t-SNE or inverse of polynomial in UMAP, with different parameters). Despite their empirical success, we posit that an analytical form of distance transformation is desirable to deal with situations where we would like to take more control of the DR results or impose prior knowledge for DR.
|
null | false
| null |
How do I get rid of mosquitos in my house?
|
You can keep mosquitos out of your house first and foremost by keeping doors closed. If mosquitos are already inside, you can get a mosquito zapper from a local hardware store and put it in the part of your house that has the most mosquitos. Areas to consider are the parts of the house where people spend the most time, like the kitchen or where they sleep. Using the bug zapper during the day is not recommended because mosquitos will find it hard to see the light; using it at night makes the most sense, because they will be more attracted to the light.
|
|
null | false
| 11
|
Blogging gained momentum in 1999 and became especially popular after the launch of freely available, hosted platforms such as blogger.com or livejournal.com. Blogging has progressively been used by individuals to share news, ideas, and information, but it has also developed a mainstream role to the extent that it is being used by political consultants and news services as a tool for outreach and opinion forming as well as by businesses as a marketing tool to promote products and services BIBREF0 .
For this paper, we compiled a very large geolocated collection of blogs, written by individuals located in the U.S., with the purpose of creating insightful mappings of the blogging community. In particular, during May-July 2015, we gathered the profile information for all the users that have self-reported their location in the U.S., along with a number of posts for all their associated blogs. We utilize this blog collection to generate maps of the U.S. that reflect user demographics, language use, and distributions of psycholinguistic and semantic word classes. We believe that these maps can provide valuable insights and partial verification of previous claims in support of research in linguistic geography BIBREF1 , regional personality BIBREF2 , and language analysis BIBREF3 , BIBREF4 , as well as psychology and its relation to human geography BIBREF5 .
Blogging gained momentum in 1999 and became especially popular after the launch of freely available, hosted platforms such as blogger.com or livejournal.com. Blogging has progressively been used by individuals to share news, ideas, and information, but it has also developed a mainstream role to the extent that it is being used by political consultants and news services as a tool for outreach and opinion forming as well as by businesses as a marketing tool to promote products and services (Nardi et al., 2004).
|
What is the research background of this paper?
|
Blogging gained momentum in 1999 and became especially popular after the launch of freely available, hosted platforms such as blogger.com or livejournal.com. Blogging has progressively been used by individuals to share news, ideas, and information, but it has also developed a mainstream role to the extent that it is being used by political consultants and news services as a tool for outreach and opinion forming as well as by businesses as a marketing tool to promote products and services.
|
null | false
| null |
What is dry ice?
|
Dry ice is frozen carbon dioxide that turns directly into a gas as it warms, a process known as sublimation.
|
|
null | false
| null |
Give me a list of the most agreed upon supplements that are effective for maintaining general health
|
1) Vitamin D
2) Omega-3 Fish Oil
3) Creatine Monohydrate
4) Multi-vitamin
*Disclaimer - there are countless other supplements available; however, decades of research seem to have formed a consensus on the efficacy of these
|
|
null | false
| null |
Andrew D. Huberman is an American neuroscientist and tenured associate professor in the department of neurobiology and psychiatry and behavioral sciences at Stanford University School of Medicine who has made contributions to the brain development, brain plasticity, and neural regeneration and repair fields. Much of his work is focused on the visual system, including the mechanisms controlling light-mediated activation of the circadian and autonomic arousal centers in the brain, as well as brain control over conscious vision or sight. Huberman has been credited with coining the term "Non-Sleep Deep Rest" (NSDR), which he earlier referred to as Yoga Nidra, referring to practices that place the brain and body into shallow sleep to accelerate neuroplasticity and help offset mental and physical fatigue.
Graduate and postdoctoral research
From 1998 to 2000, Huberman worked in the laboratory of Irving Zucker, as well as working with Marc Breedlove, at University of California, Berkeley, as part of a team that defined how early androgen exposure impacts development, and he performed the first experiments defining the structure of binocular visual pathways that set the circadian clock in the hypothalamus. From 2000 to 2004, working as a PhD student in the laboratory of Barbara Chapman at the Center for Neuroscience at the University of California, Davis, Huberman discovered that neural activity and axon guidance molecules work in concert to ensure proper wiring of binocular maps in the brain. Huberman was a Helen Hay Whitney postdoctoral fellow researcher in the laboratory of Ben A. Barres from 2005 to 2010.
Huberman was an assistant professor of neurobiology and neuroscience at University of California, San Diego, from 2011 to 2015. His lab pioneered using genetic tools to study the visual system function, development and disease. Among the Huberman Lab's discoveries was the finding that specific types of retinal neurons degenerate early in glaucoma a common blinding disease depleting sight in over 70 million people, for which there is no cure.
After moving to Stanford in 2016, Huberman discovered and published the use of non-invasive methods such as visual stimulation to enhance regeneration of damaged retinal neurons, leading to partial recovery from blindness, especially when the stimulation is paired with specific forms of gene therapy. The work was covered extensively in the popular press, including Time magazine and Scientific American, and is part of the National Eye Institute's Audacious Goals Initiative to restore vision to the blind. The Huberman Lab extended those findings to develop a human clinical trial using virtual reality technology to stimulate regeneration and plasticity of damaged retinal and other visual system neurons.
In 2017, the Huberman Lab created a virtual reality platform for probing the neural mechanisms underlying pathological fear and anxiety. That work involved collecting 360-degree video of various fear-inducing scenarios such as heights and claustrophobia as well as atypical fear-inducing situations such as swimming with great white sharks. The Huberman VR platform is aimed at making discoveries that will lead to developing new tools for humans to adjust their state in order to promote adaptive coping with stress. The first installment of that work was published in Current Biology, in 2021 as a collaboration with neurosurgeon and neuroscientist Edward Chang (UCSF), wherein they reported that specific patterns of insular cortex brain activity correlate with and may predict anxiety responses.
In May, 2018, Huberman Laboratory published an article in the journal Nature reporting its discovery of two new mammalian brain circuits: one that promotes fear and paralysis, and another that promotes "courageous"/confrontational reaction, to visually-evoked threats. That discovery prompted the now ongoing exploration of how these brain regions may be involved in humans suffering from anxiety-related disorders such as phobias and generalized anxiety.
In 2020, Huberman initiated a collaboration with the laboratory of David Spiegel in the Stanford Department of Psychiatry and Behavioral Sciences, to systematically study how particular patterns of respiration (i.e., breathing/breathwork) and the visual system influence the autonomic nervous system, stress, and other brain states, including sleep.
In 2021, Huberman launched the "Huberman Lab" podcast with the goal of translating neuroscience into tools that can be applied in everyday life. It is a top podcast on Apple Podcasts and Spotify. He was originally inspired to start the podcast after his appearance on the Lex Fridman Podcast. Huberman also covers the evidence on supplements which include Tongkat Ali (Eurycoma Longifolia), Fadogia Agrestis and many others.
|
Summarise the biography of Andrew Huberman and his most popular achievements.
|
Andrew Huberman is a neuroscientist, currently working as a professor at the Stanford School of Medicine. His research focuses on the visual system, where he is a pioneer. He also runs the "Huberman Lab" podcast, which is a top podcast on Apple Podcasts and Spotify.
|
null | false
| null |
List the British prime ministers since world war 2
|
Winston Churchill (1940-1945), Clement Attlee (1945-1951), Winston Churchill (1951-1955), Anthony Eden (1955-1957), Harold Macmillan (1957-1963), Alec Douglas-Home (1963-1964), Harold Wilson (1964-1970), Edward Heath (1970-1974), Harold Wilson (1974-1976), James Callaghan (1976-1979), Margaret Thatcher (1979-1990), John Major (1990-1997), Tony Blair (1997-2007), Gordon Brown (2007-2010), David Cameron (2010-2016), Theresa May (2016-2019), Boris Johnson (2019-2022), Liz Truss (2022), Rishi Sunak (2022-present)
|
|
null | false
| 200
|
Recent studies have shown the vulnerability of ML models to adversarial attacks, small perturbations which lead to misclassification of inputs. Adversarial example generation in NLP BIBREF0 is more challenging than in common computer vision tasks BIBREF1, BIBREF2, BIBREF3 due to two main reasons: the discrete nature of input space and ensuring semantic coherence with the original sentence. A major bottleneck in applying gradient based BIBREF4 or generator model BIBREF5 based approaches to generate adversarial examples in NLP is the backward propagation of the perturbations from the continuous embedding space to the discrete token space.
Recent works for attacking text models rely on introducing errors at the character level in words BIBREF6, BIBREF7 or adding and deleting words BIBREF8, BIBREF9, BIBREF10, etc. for creating adversarial examples. These techniques often result in adversarial examples which are unnatural looking and lack grammatical correctness, and thus can be easily identified by humans.
TextFooler BIBREF11 is a black-box attack that uses rule-based synonym replacement from a fixed word embedding space to generate adversarial examples. These adversarial examples do not account for the overall semantics of the sentence, and consider only token-level similarity using word embeddings. This can lead to out-of-context and unnaturally complex replacements (see Table ), which can be easily identified by humans.
The recent advent of powerful language models BIBREF12, BIBREF13 in NLP has paved the way for using them in various downstream applications. In this paper, we present a simple yet novel technique: BAE (BERT-based Adversarial Examples), which uses a language model (LM) for token replacement to best fit the overall context. We perturb an input sentence by either replacing a token or inserting a new token in the sentence, by means of masking a part of the input and using a LM to fill in the mask (See Figure FIGREF1). BAE relies on the powerful BERT masked LM to ensure the grammatical correctness of the adversarial examples. Our attack beats the previous baselines by a large margin and confirms the inherent vulnerabilities of modern text classification models to adversarial attacks. Moreover, BAE produces richer and more natural looking adversarial examples, as it uses the semantics learned by a LM.
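The replace-mode perturbation can be illustrated with stubs in place of the BERT masked LM and the victim classifier. Both `masked_lm_topk` and `classifier` below are hypothetical stand-ins for illustration, not the actual models:

```python
def bae_replace(tokens, i, masked_lm_topk, classifier, orig_label):
    """Try LM-proposed replacements for tokens[i]; return the first
    perturbed sentence that flips the classifier, else None."""
    masked = tokens[:i] + ["[MASK]"] + tokens[i + 1:]
    for cand in masked_lm_topk(masked, i):
        if cand == tokens[i]:
            continue  # skip the original token
        perturbed = tokens[:i] + [cand] + tokens[i + 1:]
        if classifier(perturbed) != orig_label:
            return perturbed
    return None

# Stub masked LM and classifier, assumed purely for illustration.
def masked_lm_topk(masked_tokens, i):
    return ["terrible", "great", "fine"]

def classifier(tokens):
    return "neg" if "terrible" in tokens else "pos"

adv = bae_replace(["the", "movie", "was", "great"], 3,
                  masked_lm_topk, classifier, "pos")
```

Because the candidates come from a masked LM conditioned on the full sentence rather than a fixed synonym table, the replacements tend to fit the context instead of being nearest neighbors in embedding space.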
To the best of our knowledge, we are the first to use a LM for adversarial example generation. We summarize our major contributions as follows:
We propose BAE, a novel strategy for generating natural looking adversarial examples using a masked language model.
We introduce 4 BAE attack modes, all of which are almost always stronger than previous baselines on 7 text classification datasets.
We show that, surprisingly, just a few replace/insert operations can reduce the accuracy of even a powerful BERT-based classifier by over $80\%$ on some datasets.
Initial works for attacking text models relied on introducing errors at the character level (Ebrahimi et al., 2018; Gao et al., 2018) or adding and deleting words (Li et al., 2016; Liang et al., 2017; Feng et al., 2018) for creating adversarial examples.
|
What are the initial works for attacking text models?
|
Initial works for attacking text models relied on introducing errors at the character level or adding and deleting words for creating adversarial examples.
|
null | false
| null |
How many data categorization impact levels does the Department of Defense define and provide authorizations for?
|
The DoD defines four impact levels and offers authorizations for IL-2, IL-4, IL-5, and IL-6.
|
|
null | false
| null |
Identify which instrument is string or percussion: Tambourine, Esraj
|
Esraj is string, Tambourine is percussion.
|
|
null | false
| 192
|
In our approach we develop the idea of BIBREF16 that proposed to construct similarity sets between ngram phrases between each other and single words. Phrases and words are included in the same similarity set if they have the same component word, for example, weapon – nuclear weapon – weapon of mass destruction; discrimination – racial discrimination. It was supposed that if expressions from the same similarity set co-occur in the same document then their contribution into the document's topics is really more than it is presented with their frequencies, therefore their frequencies should be increased. In such an approach, the algorithm can "see" similarities between different multiword expressions with the same component word.
In our approach, at first, we include related single words and phrases from a thesaurus such as WordNet or EuroVoc in these similarity sets. Then, we add preliminarily extracted ngrams into these sets and, this way, use two different sources of external knowledge. We use the same LDA-SIM algorithm as described in BIBREF16 but study what types of semantic relations can be introduced into such similarity sets and be useful for improving topic models. The pseudocode of the LDA-SIM algorithm is presented in Algorithm SECREF3; each similarity set can comprise single words, thesaurus phrases or generated noun compounds.
We can compare this approach with the approaches applying the generalized Polya urn model BIBREF8 , BIBREF9 , BIBREF10 . To add prior knowledge, those approaches change topic distributions for related words globally in the collection. We modify topic probabilities for related words and phrases locally, in specific texts, only when related words (phrases) co-occur in these texts.
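Constructing similarity sets that group phrases sharing a component word can be sketched as follows; the whitespace tokenization and the grouping rule are simplified assumptions for illustration:

```python
from collections import defaultdict

def build_similarity_sets(expressions):
    """Group multiword expressions and single words into similarity sets
    keyed by each component word they share."""
    sets = defaultdict(set)
    for expr in expressions:
        for word in expr.split():
            sets[word].add(expr)
    # Keep only sets where a component word links several expressions.
    return {w: s for w, s in sets.items() if len(s) > 1}

sims = build_similarity_sets([
    "weapon", "nuclear weapon", "weapon of mass destruction",
    "discrimination", "racial discrimination",
])
```

When expressions from the same set co-occur in a document, their topic contribution can then be boosted locally, as the LDA-SIM description above prescribes.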
[Algorithm SECREF3 (LDA-SIM). Input: the document collection, the vocabulary, the number of topics, initial word-topic and document-topic distributions, the sets of similar expressions, hyperparameters α and β, and the frequency of each expression in each document. Until a stop criterion is met, the word-topic and document-topic distributions are re-estimated, with the contribution of co-occurring expressions from the same similarity set increased. Output: the word-topic and document-topic distributions.]
We use the same LDA-SIM algorithm as described in [17] but study what types of semantic relations can be introduced into such similarity sets and be useful for improving topic models.
|
What algorithm do they use in the method?
|
LDA-SIM algorithm.
|
null | false
| null |
What is the largest American state by land mass?
|
Alaska is the largest state in the United States
|
|
null | false
| null |
Wolfgang Amadeus Mozart (27 January 1756 – 5 December 1791) was a prolific and influential composer of the Classical period. Despite his short life, his rapid pace of composition resulted in more than 800 works of virtually every genre of his time. Many of these compositions are acknowledged as pinnacles of the symphonic, concertante, chamber, operatic, and choral repertoire. Mozart is widely regarded as among the greatest composers in the history of Western music, with his music admired for its "melodic beauty, its formal elegance and its richness of harmony and texture".
|
What is Mozart’s contribution to the music?
|
Mozart contributed more than 800 pieces of music that are known for melodic beauty, elegance and richness.
|
2004.03034
| false
| null |
Similar to BIBREF9, we experiment with SVM with RBF kernel, with features that represent (1) the simple characteristics of the argument tree and (2) the linguistic characteristics of the claim.
Similar to BIBREF9, we experiment with SVM with RBF kernel, with features that represent (1) the simple characteristics of the argument tree and (2) the linguistic characteristics of the claim.
|
What models that rely only on claim-specific linguistic features are used as baselines?
|
The answers are shown as follows:
* SVM with RBF kernel
|
null | false
| null |
Tell me whether these cities are in Italy or Switzerland: Milan, Florence, Zurich, Bern, Lucerne, Rome
|
Milan: Italy
Florence: Italy
Zurich: Switzerland
Bern: Switzerland
Lucerne: Switzerland
Rome: Italy
|
|
null | false
| null |
Czech immigration was driven by the availability of land in Texas and by events in Europe. In the 1840s, the Austrian Empire, which included Czech lands, was still run by a feudal system, despite the wave of democratic ideals permeating western Europe. This led to the Revolutions of 1848, which did little to improve the economic status of peasants who still found it difficult to gain wealth. The revolutions, together with factors such as religious persecution and mandatory military service, fueled a wave of emigration from Austria beginning in 1852. Immigrants to America were unfamiliar with the concept of American democracy due to their lack of experience with democratic ideals. The concepts of states' rights and slavery were equally foreign to the immigrants.
|
Please summarize from this text what were the main reasons Czech people immigrated to Texas.
|
Czech immigration to Texas in the 19th century was influenced by the availability of land in Texas to homestead, economic opportunity, freedom from religious persecution and to escape the revolutions in the Austrian empire that lead to political unrest and economic uncertainty in the region.
|
null | false
| 162
|
We evaluate the quality of the document embeddings learned by MPAD on 10 document classification datasets, covering the topic identification, coarse and fine sentiment analysis and opinion mining, and subjectivity detection tasks. We briefly introduce the datasets next. Their statistics are reported in Table TABREF21.
(1) Reuters. This dataset contains stories collected from the Reuters news agency in 1987. Following common practice, we used the ModApte split and considered only the 10 classes with the highest number of positive training examples. We also removed documents belonging to more than one class and then classes left with no document (2 classes).
(2) BBCSport BIBREF30 contains documents from the BBC Sport website corresponding to 2004-2005 sports news articles.
(3) Polarity BIBREF31 features positive and negative labeled snippets from Rotten Tomatoes.
(4) Subjectivity BIBREF32 contains movie review snippets from Rotten Tomatoes (subjective sentences), and Internet Movie Database plot summaries (objective sentences).
(5) MPQA BIBREF33 is made of positive and negative phrases, annotated as part of the summer 2002 NRRC Workshop on Multi-Perspective Question Answering.
(6) IMDB BIBREF34 is a collection of highly polarized movie reviews from IMDB (positive and negative). There are at most 30 reviews for each movie.
(7) TREC BIBREF35 consists of questions that are classified into 6 different categories.
(8) SST-1 BIBREF36 contains the same snippets as Polarity. The authors used the Stanford Parser to parse the snippets and split them into multiple sentences. They then used Amazon Mechanical Turk to annotate the resulting phrases according to their polarity (very negative, negative, neutral, positive, very positive).
(9) SST-2 BIBREF36 is the same as SST-1 but with neutral reviews removed and snippets classified as positive or negative.
(10) Yelp2013 BIBREF26 features reviews obtained from the 2013 Yelp Dataset Challenge.
(1) Reuters. This dataset contains stories collected from the Reuters news agency in 1987. Following common practice, we used the ModApte split and considered only the 10 classes with the highest number of positive training examples. We also removed documents belonging to more than one class and then classes left with no document (2 classes). (2) BBCSport BIBREF30 contains documents from the BBC Sport website corresponding to 2004-2005 sports news articles. (3) Polarity (Pang and Lee 2005) features positive and negative labeled snippets from Rotten Tomatoes. (4) Subjectivity (Pang and Lee 2004) contains movie review snippets from Rotten Tomatoes (subjective sentences), and Internet Movie Database plot summaries (objective sentences). (5) MPQA (Wiebe, Wilson, and Cardie 2005) is made of positive and negative phrases, annotated as part of the summer 2002 NRRC Workshop on Multi-Perspective Question Answering. (6) IMDB (Maas et al. 2011) is a collection of highly polarized movie reviews from IMDB (positive and negative). There are at most 30 reviews for each movie. (7) TREC (Li and Roth 2002) consists of questions that are classified into 6 different categories. (8) SST-1 (Socher et al. 2013) contains the same snippets as Polarity. The authors used the Stanford Parser to parse the snippets and split them into multiple sentences. They then used Amazon Mechanical Turk to annotate the resulting phrases according to their polarity (very negative, negative, neutral, positive, very positive). (9) SST-2 (Socher et al. 2013) is the same as SST-1 but with neutral reviews removed and snippets classified as positive or negative. (10) Yelp2013 (Tang, Qin, and Liu 2015) features reviews obtained from the 2013 Yelp Dataset Challenge.
|
What datasets are chosen for the experiment?
|
The chosen datasets are: (1) Reuters, (2) BBCSport, (3) Polarity, (4) Subjectivity, (5) MPQA, (6) IMDB, (7) TREC, (8) SST-1, (9) SST-2, and (10) Yelp2013.
|
null | false
| null |
Tell me which of the following are real car parts: differential, gearbox, splinehook, flidget, wheel.
|
Differential, gearbox, and wheels are all parts of cars.
|
|
null | false
| 38
|
In this section, we describe the experiments and report the results for the classification of Wikipedia medical articles into the six classes of the Wikipedia Medicine Portal. We compare the results obtained adopting four different classifiers: the actionable model in BIBREF7 and three classifiers that leverage the ad-hoc features from the medical domain discussed in the previous sections. All the experiments were realized within the Weka framework BIBREF17 and validated through 10 fold cross-validation.
For each experiment, we relied on the dataset presented in Section "Dataset" , and specifically, on that obtained after sampling the majority classes and oversampling the minority ones (right-hand column in Table 1 ). The dataset serves both as training and test set for the classifiers.
Moreover, to take into account the imbalanced data, we have applied several classification algorithms and, for the sake of conciseness, hereafter we report only the best results we have achieved. In particular, we have experimented with bagging, adaptive boosting and random forest and we report the results for the latter only.
All the experiments were realized within the Weka framework [14] and validated through 10 fold cross-validation.
|
How to realize and validate the experiment by the authors?
|
All the experiments were realized within the Weka framework and validated through 10 fold cross-validation.
|
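The validation procedure mentioned above can be made concrete with a short sketch. The authors' experiments ran inside the Weka framework; the pure-Python version below is not their setup and only illustrates the split logic behind 10-fold cross-validation: the data are partitioned into 10 disjoint folds, and each fold serves as the test set exactly once while the other 9 form the training set.

```python
# Sketch of the 10-fold cross-validation splitting procedure (illustrative only;
# the paper's experiments used Weka's built-in cross-validation).
def k_fold_indices(n_samples, k=10):
    """Partition sample indices 0..n_samples-1 into k disjoint, near-equal folds."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validation_splits(n_samples, k=10):
    """Yield (train_indices, test_indices) pairs: each fold is the test set once."""
    folds = k_fold_indices(n_samples, k)
    for i, test in enumerate(folds):
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        yield train, test

# 105 samples -> 10 splits; every sample appears in exactly one test fold.
splits = list(cross_validation_splits(105, k=10))
```

A classifier (e.g. a random forest, as reported in the row above) would be retrained on each `train` set and scored on the corresponding `test` set, with the 10 scores averaged.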
null | false
| null |
Tell me whether these cities are in the United States or not: Dallas, Seattle, Mumbai, Ljubljana, Rochester
|
Dallas, Seattle, and Rochester are all cities in the United States. Mumbai and Ljubljana are not in the United States.
|
|
null | false
| null |
What is a 'Public' School in the UK?
|
Public School is a term used for an independent fee-paying school in the UK. This is because when the schools were initially set up, they were for the use of fee-paying parents who would share a teacher as a replacement for a dedicated at-home tutor for their children.
|
|
null | false
| 134
|
In the field of natural language processing (NLP), the most prevalent neural approach to obtaining sentence representations is to use recurrent neural networks (RNNs), where words in a sentence are processed in a sequential and recurrent manner. Along with their intuitive design, RNNs have shown outstanding performance across various NLP tasks e.g. language modeling BIBREF0 , BIBREF1 , machine translation BIBREF2 , BIBREF3 , BIBREF4 , text classification BIBREF5 , BIBREF6 , and parsing BIBREF7 , BIBREF8 .
Among several variants of the original RNN BIBREF9 , gated recurrent architectures such as long short-term memory (LSTM) BIBREF10 and gated recurrent unit (GRU) BIBREF2 have been accepted as de-facto standard choices for RNNs due to their capability of addressing the vanishing and exploding gradient problem and considering long-term dependencies. Gated RNNs achieve these properties by introducing additional gating units that learn to control the amount of information to be transferred or forgotten BIBREF11 , and are proven to work well without relying on complex optimization algorithms or careful initialization BIBREF12 .
Meanwhile, the common practice for further enhancing the expressiveness of RNNs is to stack multiple RNN layers, each of which has distinct parameter sets (stacked RNN) BIBREF13 , BIBREF14 . In stacked RNNs, the hidden states of a layer are fed as input to the subsequent layer, and they are shown to work well due to increased depth BIBREF15 or their ability to capture hierarchical time series BIBREF16 which are inherent to the nature of the problem being modeled.
However this setting of stacking RNNs might hinder the possibility of more sophisticated recurrence-based structures since the information from lower layers is simply treated as input to the next layer, rather than as another class of state that participates in core RNN computations. Especially for gated RNNs such as LSTMs and GRUs, this means that layer-to-layer connections cannot fully benefit from the carefully constructed gating mechanism used in temporal transitions. Some recent work on stacking RNNs suggests alternative methods that encourage direct and effective interaction between RNN layers by adding residual connections BIBREF17 , BIBREF18 , by shortcut connections BIBREF18 , BIBREF19 , or by using cell states of LSTMs BIBREF20 , BIBREF21 .
In this paper, we propose a method of constructing multi-layer LSTMs where cell states are used in controlling the vertical information flow. This system utilizes states from the left and the lower context equally in computation of the new state, thus the information from lower layers is elaborately filtered and reflected through a soft gating mechanism. Our method is easy-to-implement, effective, and can replace conventional stacked LSTMs without much modification of the overall architecture.
We call the proposed architecture Cell-aware Stacked LSTM, or CAS-LSTM, and evaluate our method on multiple benchmark datasets: SNLI BIBREF22 , MultiNLI BIBREF23 , Quora Question Pairs BIBREF24 , and SST BIBREF25 . From experiments we show that the CAS-LSTMs consistently outperform typical stacked LSTMs, opening the possibility of performance improvement of architectures that use stacked LSTMs.
Our contribution is summarized as follows.
This paper is organized as follows. We give a detailed description about the proposed method in § SECREF2 . Experimental results are given in § SECREF3 . We study prior work related to our objective in § SECREF4 and conclude in § SECREF5 .
From experiments we show that the CAS-LSTMs consistently outperform typical stacked LSTMs, opening the possibility of performance improvement of architectures based on stacked LSTMs.
|
Does the CAS-LSTMs consistently outperform typical stacked LSTMs?
|
Yes.
|
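The core idea in the row above, using a soft gate to control how much of the lower layer's cell state enters the current layer's computation, can be sketched in miniature. The toy function below is not the actual CAS-LSTM equations (which operate on vectors with learned weight matrices); it only shows, with illustrative scalars, how a sigmoid gate blends vertical (lower-layer) and temporal (left-context) cell states instead of treating the lower layer's output as plain input.

```python
import math

def sigmoid(x):
    """Logistic function, squashing any real value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def blend_cell_states(c_left, c_below, gate_preactivation):
    """Toy cell-aware update: a gate g in (0, 1) soft-mixes the cell state
    from the previous time step (c_left) with the cell state from the layer
    below (c_below)."""
    g = sigmoid(gate_preactivation)
    return g * c_below + (1.0 - g) * c_left

# A zero preactivation gives g = 0.5: an even mix of the two contexts.
c_even = blend_cell_states(c_left=0.2, c_below=1.0, gate_preactivation=0.0)
# A large positive preactivation saturates the gate toward the lower layer.
c_vertical = blend_cell_states(c_left=0.2, c_below=1.0, gate_preactivation=6.0)
```

In a conventional stacked LSTM the lower layer's hidden state would simply replace the input, with no such gate filtering the vertical flow.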
null | false
| 39
|
For UTD we use the Zero Resource Toolkit (ZRTools; Jansen and Van Durme, 2011). ZRTools uses dynamic time warping (DTW) to discover pairs of acoustically similar audio segments, and then uses graph clustering on overlapping pairs to form a hard clustering of the discovered segments. Replacing each discovered segment with its unique cluster label, or pseudoterm, gives us a partial, noisy transcription, or pseudotext (Fig. FIGREF4 ).
In creating a translation model from this data, we face a difficulty that does not arise in the parallel texts that are normally used to train translation models: the pseudotext does not represent all of the source words, since the discovered segments do not cover the full audio (Fig. FIGREF4 ). Hence we must not assume that our MT model can completely recover the translation of a test sentence. In these conditions, the language modeling and ordering assumptions of most MT models are unwarranted, so we instead use a simple bag-of-words translation model based only on co-occurrence: IBM Model 1 BIBREF11 with a Dirichlet prior over translation distributions, as learned by fast_align BIBREF12 . In particular, for each pseudoterm, we learn a translation distribution over possible target words. To translate a pseudoterm in test data, we simply return its highest-probability translation (or translations, as discussed in § SECREF5 ).
This setup implies that in order to translate, we must apply UTD on both the training and test audio. Using additional (not only training) audio in UTD increases the likelihood of discovering more clusters. We therefore generate pseudotext for the combined audio, train the MT model on the pseudotext of the training audio, and apply it to the pseudotext of the test data. This is fair since the UTD has access to only the audio.
ZRTools uses dynamic time warping (DTW) to discover pairs of acoustically similar audio segments, and then uses graph clustering on overlapping pairs to form a hard clustering of the discovered segments. Replacing each discovered segment with its unique cluster label, or pseudoterm, gives us a partial, noisy transcription, or pseudotext (Fig. 1).
|
How does ZRTools work?
|
ZRTools uses dynamic time warping (DTW) to discover pairs of acoustically similar audio segments, and then uses graph clustering on overlapping pairs to form a hard clustering of the discovered segments. Replacing each discovered segment with its unique cluster label, or pseudoterm, gives them a partial, noisy transcription, or pseudotext.
|
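The dynamic time warping step described in the row above can be illustrated with the classic DTW recurrence. Note that ZRTools itself runs a more elaborate segmental DTW over acoustic feature vectors; the toy version below only computes the standard DTW distance between two 1-D sequences, showing why a time-warped repetition of a pattern still aligns cheaply.

```python
# Minimal dynamic time warping (DTW) sketch: cost[i][j] holds the minimal
# cumulative cost of aligning a[:i] with b[:j], filled by the usual
# three-way recurrence (match / insertion / deletion).
def dtw_distance(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j - 1],  # match a[i-1] with b[j-1]
                                 cost[i - 1][j],      # skip a step in a
                                 cost[i][j - 1])      # skip a step in b
    return cost[n][m]

# Identical sequences align at zero cost, and a time-stretched copy
# (elements repeated) also stays at zero cost - the property that lets
# DTW find acoustically similar segments spoken at different speeds.
print(dtw_distance([1, 2, 3], [1, 2, 3]))        # 0.0
print(dtw_distance([1, 2, 3], [1, 1, 2, 2, 3]))  # 0.0
```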