| paper_id (string, len 10, nullable) | yes_no (bool, 2 classes) | paper_index (int64, 0–519, nullable) | evidence (string, 0–37.7k chars) | question (string, 4–11.7k chars) | answer (string, 1–26k chars) |
|---|---|---|---|---|---|
1901.00570
| true
| null |
The main contributions of this paper are (1) to overcome the Twitter challenges of acronyms, short text, ambiguity, and synonyms, (2) to identify the set of word-pairs to be used as features for live event detection, and (3) to build an end-to-end framework that can detect events live according to word counts. This work can be applied to similar problems where specific tweets can be associated with real-life events such as disease outbreaks or stock market fluctuations. This work can be extended to predict future events one day in advance, where we will use the same method for feature selection in addition to time series analysis of the historical patterns of the word-pairs.
|
Do the authors suggest any future extensions to this work?
|
Yes.
|
null | false
| null |
Tell me which of the eight planets in our solar system have moons and which do not. Classify the planets into 'No Moon', 'One Moon', and 'Multiple Moons'. The planets are: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune.
|
No Moon: Mercury and Venus.
One Moon: Earth and Mars.
Multiple Moons: Jupiter, Saturn, Uranus, and Neptune.
|
|
null | false
| 158
|
In this paper, we introduce the Query-based Attention CNN (QACNN) for Text Similarity Map, an end-to-end neural network for question answering. This network is composed of a compare mechanism, a two-stage CNN architecture with an attention mechanism, and a prediction layer. First, the compare mechanism compares the given passage, query, and multiple answer choices to build similarity maps. Then, the two-stage CNN architecture extracts features at the word level and sentence level. At the same time, the attention mechanism helps the CNN focus on the important parts of the passage based on the query information. Finally, the prediction layer finds the most probable answer choice. We evaluate this model on the MovieQA dataset using Plot Synopses only, and achieve 79.99% accuracy, the state of the art on the dataset.
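To make the architecture concrete, here is a minimal, hypothetical PyTorch sketch of the similarity-map idea described above. This is not the authors' code: the layer sizes, pooling choices, and the way the query attention re-weights the maps are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QACNNSketch(nn.Module):
    def __init__(self, n_filters=64):
        super().__init__()
        # Stage 1: word-level convolution over each similarity map.
        self.word_cnn = nn.Conv2d(1, n_filters, kernel_size=(3, 3), padding=1)
        # Stage 2: sentence-level convolution over pooled word features.
        self.sent_cnn = nn.Conv1d(n_filters, n_filters, kernel_size=3, padding=1)
        self.scorer = nn.Linear(n_filters, 1)  # prediction layer

    def similarity_map(self, passage, other):
        # Cosine similarity of every passage word against every word of the
        # query or an answer choice: (P, D) x (Q, D) -> (P, Q).
        p = F.normalize(passage, dim=-1)
        q = F.normalize(other, dim=-1)
        return p @ q.t()

    def forward(self, passage, query, choices):
        # Query attention: how strongly each passage word matches the query.
        attn = self.similarity_map(passage, query).max(dim=1).values  # (P,)
        scores = []
        for choice in choices:
            m = self.similarity_map(passage, choice) * attn.unsqueeze(1)
            h = self.word_cnn(m.unsqueeze(0).unsqueeze(0))  # word level
            h = h.max(dim=-1).values                        # pool over choice words
            h = self.sent_cnn(h)                            # sentence level
            h = h.max(dim=-1).values                        # pool over passage
            scores.append(self.scorer(h))
        return F.softmax(torch.cat(scores, dim=-1), dim=-1)

model = QACNNSketch()
p, q = torch.randn(30, 300), torch.randn(7, 300)
choices = [torch.randn(5, 300) for _ in range(5)]
print(model(p, q, choices))  # probability over the 5 answer choices
```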
|
What's the function of the two-stage CNN architecture with an attention mechanism?
|
The two-stage CNN architecture extracts features at the word level and sentence level. At the same time, the attention mechanism helps the CNN focus on the important parts of the passage based on the query information.
|
null | false
| 137
|
Pre-trained models have demonstrated their effectiveness in many downstream natural language processing (NLP) tasks. The availability of multilingual pre-trained models enables zero-shot transfer of NLP tasks from high resource languages to low resource ones. However, recent research in improving pre-trained models focuses heavily on English. While it is possible to train the latest neural architectures for other languages from scratch, it is undesirable due to the required amount of compute. In this work, we tackle the problem of transferring an existing pre-trained model from English to other languages under a limited computational budget. With a single GPU, our approach can obtain a foreign BERT base model within a day and a foreign BERT large within two days. Furthermore, evaluating our models on six languages, we demonstrate that our models are better than multilingual BERT on two zero-shot tasks: natural language inference and dependency parsing.
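As a rough illustration of the limited-budget transfer recipe implied above, one could re-learn the input embeddings for the target language while keeping the English transformer body fixed. This is a sketch under assumptions, not the authors' exact procedure; the checkpoint name and vocabulary size are placeholders.

```python
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")  # placeholder checkpoint
for param in model.parameters():
    param.requires_grad = False  # keep the English-trained body fixed at first

# Swap in a fresh embedding table sized for the target-language vocabulary
# (30,000 is an arbitrary placeholder) and train only these parameters.
model.resize_token_embeddings(new_num_tokens=30000)
model.embeddings.word_embeddings.weight.requires_grad = True
```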
|
What do the authors demonstrate by evaluating their models on six languages?
|
Their models are better than multilingual BERT on two zero-shot tasks: natural language inference and dependency parsing.
|
null | false
| 301
|
This study uses the original and updated VADER (Valence Aware Dictionary and Sentiment Reasoner) to calculate the compound sentiment scores for about 14,000 Nigerian Pidgin tweets. The updated VADER lexicon (updated with 300 Pidgin tokens and their sentiment scores) performed better than the original VADER lexicon. The labelled sentiments from the updated VADER were then compared with sentiment labels by expert Pidgin English speakers.
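A minimal sketch of this pipeline with the off-the-shelf vaderSentiment package; the two Pidgin entries and their valence scores below are made-up placeholders standing in for the study's 300-token update.

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
# Extend the lexicon with token -> valence entries (placeholder values,
# not the study's actual scores).
analyzer.lexicon.update({"sabi": 1.5, "wahala": -2.0})
print(analyzer.polarity_scores("dem sabi work")["compound"])
```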
|
What were the labelled sentiments from the updated VADER compared with?
|
Sentiment labels by expert Pidgin English speakers.
|
2002.06424
| false
| null |
To further demonstrate the effectiveness of the additional task-specific BiRNN layers in our architecture, we conducted an ablation study using the CoNLL04 dataset. We trained and evaluated in the same manner described above, using the same hyperparameters, with the following exceptions:
We used either (i) zero NER-specific BiRNN layers, (ii) zero RE-specific BiRNN layers, or (iii) zero task-specific BiRNN layers of any kind.
We increased the number of shared BiRNN layers to keep the total number of model parameters consistent with the number of parameters in the baseline model.
We averaged the results for each set of hyperparameters across three trials with random weight initializations, as sketched below.
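The ablation grid can be pictured as follows; this is a hypothetical sketch in which `run_trial` stands in for the authors' training and evaluation loop on CoNLL04, and the shared-layer counts are illustrative choices to keep total parameters roughly constant.

```python
import statistics

# (i)-(iii) from the list above.
ABLATIONS = {
    "no_ner_birnn":  dict(ner_layers=0, re_layers=1, shared_layers=2),
    "no_re_birnn":   dict(ner_layers=1, re_layers=0, shared_layers=2),
    "no_task_birnn": dict(ner_layers=0, re_layers=0, shared_layers=3),
}

def run_trial(config, seed):
    """Placeholder for training and evaluating the model; returns an F1."""
    return 0.0  # stub value so the sketch runs end to end

for name, config in ABLATIONS.items():
    scores = [run_trial(config, seed) for seed in (0, 1, 2)]
    print(name, statistics.mean(scores))  # average over three random inits
```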
|
What were the variables in the ablation study?
|
The answers are shown as follows:
* (i) zero NER-specific BiRNN layers, (ii) zero RE-specific BiRNN layers, or (iii) zero task-specific BiRNN layers of any kind
|
null | false
| null |
Which is the best place to have Masala Dosa in Bangalore
|
The top 5 best places to have Masala Dosa in Bangalore are
1. Dose Adda near Devegowda Petrol Bunk, Padmanabhanagar
2. Rameshwaram Cafe, multiple outlets across Bangalore
3. Samrat Cafe, Near MG Road
4. Vidyarthi Bhavan, Gandhi Bazar
5. Chikkanna Dose, Jayanagar 3rd Block
|
|
null | false
| null |
How does Windows Fast Startup Work?
|
Fast Startup is a Windows feature that allows you to boot your computer in a few seconds rather than a minute. Rather than going through the cold boot path, Fast Startup uses a minimal hiberfile to resume the system. When the feature is enabled, selecting “Shutdown” in the Windows UI doesn’t actually shut down the system. Instead, it closes all user applications, logs the current user out, and then creates a hiberfile. Because this hiberfile only includes the kernel, device drivers, and a subset of applications, it is small and can be reloaded quickly.
By contrast, the cold boot path requires loading the kernel and drivers from disk, initializing the kernel and drivers, and launching various user-mode applications. This can be especially slow on computers that use spinning hard drives.
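On a Windows machine you can inspect the setting yourself. This short Python sketch reads the HiberbootEnabled registry value that controls Fast Startup; treat it as illustrative rather than an official interface.

```python
import winreg  # Windows-only standard-library module

key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\CurrentControlSet\Control\Session Manager\Power",
)
value, _ = winreg.QueryValueEx(key, "HiberbootEnabled")
print("Fast Startup enabled:", bool(value))
```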
|
|
null | false
| null |
If the sky is the ocean then what is a bird?
|
In that analogy, a bird would be a fish. There are a lot of similarities between birds and fish. They are both vertebrates, are part of the animal kingdom, and belong to the phylum Chordata. They both consume nutrients to survive.
|
|
null | false
| 204
|
Recent years have witnessed increased performance in many computational linguistics tasks such as syntactic and semantic parsing BIBREF0 , BIBREF1 , emotion classification BIBREF2 , and sentiment analysis BIBREF3 , BIBREF4 , BIBREF5 , especially concerning the applicability of such tools to noisy online data. Moreover, the field has made substantial progress in developing multilingual models and extending semantic annotation resources to languages beyond English BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 .
Concurrently, it has been argued for mental health research that it would constitute a `valuable critical step' BIBREF10 to analyse first-hand accounts by individuals with lived experience of severe mental health issues in blog posts, tweets, and discussion forums. Several severe mental health difficulties, e.g., bipolar disorder (BD) and schizophrenia, are considered chronic, and clinical recovery, defined as being relapse- and symptom-free for a sustained period of time BIBREF11, is considered difficult to achieve BIBREF12, BIBREF13, BIBREF14. Moreover, clinically recovered individuals often do not regain full social and educational/vocational functioning BIBREF15, BIBREF16. Therefore, research originating from initiatives by people with lived experience of mental health issues has advocated emphasis on the individual's goals in recovery BIBREF17, BIBREF18. This movement gave rise to the concept of personal recovery BIBREF19, BIBREF20, loosely defined as a `way of living a satisfying, hopeful, and contributing life even with limitations caused by illness' BIBREF18. The aspects of personal recovery have been conceptualised in various ways BIBREF21, BIBREF22, BIBREF23. According to the frequently used CHIME model BIBREF24, its main components are Connectedness, Hope and optimism, Identity, Meaning and purpose, and Empowerment. Here, we focus on BD, which is characterised by recurring episodes of depressed and elated (hypomanic or manic) mood BIBREF25, BIBREF12. Bipolar spectrum disorders were estimated to affect approximately 2% of the UK population BIBREF13, with rates ranging from 0.1% to 4.4% across 11 other European, American, and Asian countries BIBREF26. Moreover, BD is associated with a high risk of suicide BIBREF27, making its prevention and treatment important tasks for society. BD-specific personal recovery research is motivated mainly by two facts: First, the pole of positive/elevated mood and ongoing mood instability constitute core features of BD and pose special challenges compared to other mental health issues, such as unipolar depression BIBREF25. Second, unlike for some other severe mental health difficulties, return to normal functioning is achievable given appropriate treatment BIBREF28, BIBREF16, BIBREF29.
A substantial body of qualitative and quantitative research has shown the importance of personal recovery for individuals diagnosed with BD BIBREF22, BIBREF25, BIBREF30, BIBREF31, BIBREF23. Qualitative evidence mainly comes from (semi-)structured interviews and focus groups and has been criticised for small numbers of participants BIBREF10, lacking complementary quantitative evidence from larger samples BIBREF32. Some quantitative evidence stems from the standardised bipolar recovery questionnaire BIBREF30 and a randomised control trial for recovery-focused cognitive-behavioural therapy BIBREF31. Critically, previous research has taken place only in structured settings. What is more, the recovery concept emerged from research primarily conducted in English-speaking countries, mainly involving researchers and participants of Western ethnicity. This might have led to a lack of non-Western notions of wellbeing in the concept, such as those found among indigenous peoples BIBREF32, limiting its applicability to a general population. Indeed, the variation in BD prevalence rates from 0.1% in India to 4.4% in the US is striking. It has been shown that culture is an important factor in the diagnosis of BD BIBREF33, as well as in the causes attributed to mental health difficulties in general and the treatments considered appropriate BIBREF34, BIBREF35. While approaches to mental health classification from texts have long ignored the cultural dimension BIBREF36, first studies show that the online language of individuals affected by depression or related mental health difficulties differs significantly across cultures BIBREF37, BIBREF36.
Hence, it seems timely to take into account the wealth of accounts of mental health difficulties and recovery stories from individuals of diverse ethnic and cultural backgrounds that are available in a multitude of languages on the internet. Corpus and computational linguistic methods are explicitly designed for processing large amounts of linguistic data BIBREF38 , BIBREF39 , BIBREF40 , BIBREF41 , and as discussed above, recent advances have made it feasible to apply them to noisy user-generated texts from diverse domains, including mental health BIBREF42 , BIBREF43 . Computer-aided analysis of public social media data enables us to address several shortcomings in the scientific underpinning of personal recovery in BD by overcoming the small sample sizes of lab-collected data and including accounts from a more heterogeneous population.
In sum, our research questions are as follows: (1) How is personal recovery discussed online by individuals meeting criteria for BD? (2) What new insights do we get about personal recovery and factors that facilitate or hinder it? We will investigate these questions in two parts, looking at English-language data by westerners and at multilingual data by individuals of diverse ethnicities.
BD-specific personal recovery research is motivated by mainly two facts: First, the pole of positive/elevated mood and ongoing mood instability constitute core features of BD and pose special challenges compared to other mental health issues, such as unipolar depression (Jones et al., 2010).
|
What are the core features of bipolar disorder (BD)?
|
The pole of positive/elevated mood and ongoing mood instability constitute core features of bipolar disorder (BD).
|
null | false
| null |
Identify which instrument is string or percussion: Sounding stone, Cobza
|
Cobza is string, Sounding stone is percussion.
|
|
null | false
| 185
|
Learning natural language generation (NLG) models heavily relies on annotated training data. However, most available datasets are collected in a single language (typically English), which restricts deploying the applications to other languages. In this work, we aim at transferring the supervision of a monolingual NLG dataset to unseen languages, so that we can boost performance for the low-resource settings.
Various methods have been proposed over the years to learn universal cross-lingual word embeddings BIBREF0, BIBREF1, BIBREF2 or sentence encoders BIBREF3, BIBREF4, BIBREF5, which try to encode multilingual texts into a single shared vector space. Despite achieving promising results on cross-lingual classification problems, cross-lingual pre-trained models purposed for NLG tasks remain relatively understudied.
The cross-lingual generation problem is challenging for the following reasons. First, it requires the model to understand multilingual input texts and generate multilingual target sequences, so the encoder and decoder should be pre-trained together. Second, the many-to-many nature of cross-lingual NLG means that the number of language pairs grows with the square of the number of languages. Third, the prediction space of cross-lingual NLG is much larger than that of classification tasks, which makes the knowledge transfer of decoders quite critical.
Previous work mainly relies on machine translation (MT) systems to map texts to different languages. The first strand of research directly uses MT in a pipeline manner BIBREF6. For example, input written in another language is first translated to English and fed into the NLG model that is trained on English data. Then the generated English text is translated back to the target language. Another strand of work employs MT to generate pseudo training data for language pairs that lack annotations BIBREF7, BIBREF8. However, such methods have to use multiple MT systems, which leaves them suffering from error propagation. Moreover, because the pipeline-based methods do not explicitly share the same parameter space across languages, we cannot directly transfer the task-specific supervision to other low-resource languages.
In this paper, we propose a cross-lingual pre-trained model (named Xnlg) in order to transfer monolingual NLG supervision to other pre-trained languages by fine-tuning. Specifically, Xnlg shares the same sequence-to-sequence model across languages, and is pre-trained with both monolingual and cross-lingual objectives. The model not only learns to understand multilingual input, but is also able to generate specific languages by conditioning on the encoded semantics. Figure FIGREF2 demonstrates how to use Xnlg to perform cross-lingual transfer for downstream tasks. The proposed model enables us to fine-tune the pre-trained model on monolingual NLG training data, and then evaluate it beyond a single language, including zero-shot cross-lingual generation. Besides, we explore several fine-tuning strategies to strike a compromise between cross-lingual ability and task ability. In addition, we introduce two cross-lingual NLG datasets (i.e., question generation and abstractive summarization) for evaluation, which cover three languages, namely English, Chinese, and French. Experimental results on the NLG tasks show that Xnlg achieves competitive performance compared with the machine-translation-based pipeline model in zero-shot cross-lingual settings.
Previous work mainly relies on machine translation (MT) to map texts to different languages. The first strand of research directly uses MT in a pipeline manner (Wan, Li, and Xiao 2010). For example, inputs written in other languages are first translated to English and fed into the NLG model that is trained on English data. Then the generated English texts are translated back to the target language. Another strand of work uses MT to generate pseudo training data for language pairs that lack annotations (Shen et al. 2018; Duan et al. 2019). However, such methods have to use multiple MT systems, which leaves them suffering from error propagation. Moreover, because the pipeline-based methods do not explicitly share the same parameter space across languages, we cannot directly transfer the task-specific supervision to other low-resource languages.
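The criticized pipeline baseline reduces to the following shape; this is a hypothetical sketch where `translate` and `english_nlg` are placeholder functions, not a real API.

```python
def translate(text, src, tgt):
    """Placeholder for an off-the-shelf MT system."""
    raise NotImplementedError

def english_nlg(text):
    """Placeholder for an NLG model trained on English data only."""
    raise NotImplementedError

def pipeline_nlg(text, lang):
    # Two MT hops wrap the English-only model; each hop can inject and
    # propagate translation errors, which is the weakness noted above.
    english_input = translate(text, src=lang, tgt="en")
    english_output = english_nlg(english_input)
    return translate(english_output, src="en", tgt=lang)
```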
|
What shortcomings do the previous methods have?
|
They mainly rely on MT to map texts to different languages, so multiple MT systems have to be used, which leaves them suffering from error propagation. In addition, task-specific supervision cannot be directly transferred to other low-resource languages because the pipeline-based methods do not explicitly share the same parameter space across languages.
|
null | false
| null |
António de Serpa Pimentel (1825 in Coimbra – 1900) was Prime Minister of Portugal from 14 January to 11 October 1890. His term in office began as a reaction to the British ultimatum concerning Portuguese colonial policy in southeast Africa. The signing of the Anglo-Portuguese Treaty later that year, which was intended as a step to resolve the crisis, was viewed as further appeasement of a powerful Britain. This led to his resignation and the fall of his government.
|
Extract the beginning and ending term of António de Serpa Pimentel from the following text
|
António de Serpa Pimentel was the Prime Minister of Portugal from 14 January 1890 to 11 October 1890.
|
null | false
| null |
How would you book the cheapest rental car in a foreign country?
|
To book the cheapest and best rental car, you could start by using popular aggregator websites such as Kayak and Expedia to find the best prices from large domestic car rental companies with foreign operations. Next, you can search for local car rental companies in the area you are travelling to, and call them or visit their websites. This may require you to speak the local language. Finally, you can use alternative car rental companies such as Turo if they are available in the region you are travelling to.
|
|
null | false
| 136
|
Table TABREF14 shows the results of our main experiments on the 2016 and 2018 test sets for French and German. We use Meteor BIBREF31 as the main metric, as in the WMT tasks BIBREF25 . We compare our transformer baseline to transformer models enriched with image information, as well as to the deliberation models, with or without image information.
We first note that our multimodal models achieve state-of-the-art performance for transformer networks (constrained models) on the English-German dataset, as compared to BIBREF30. Second, our deliberation models lead to significant improvements over this baseline across test sets (average INLINEFORM0, INLINEFORM1).
Transformer-based models enriched with image information (base+sum, base+att and base+obj), on the other hand, show no major improvements with respect to the base performance. This is also the case for deliberation models with image information (del+sum, del+att, del+obj), which do not show significant improvement over the vanilla deliberation performance (del).
However, as has been shown in the WMT shared tasks on MMT BIBREF23, BIBREF24, BIBREF25, automatic metrics often fail to capture nuances in translation quality, such as the ones we expect the visual modality to help with, which – according to human perception – lead to better translations. To test this assumption in our setting, we performed a human evaluation involving professional translators and native speakers of both French and German (three annotators).
The annotators were asked to rank randomly selected test samples according to how well they convey the meaning of the source, given the image (50 samples per language pair per annotator). For each source segment, the annotator was shown the outputs of three systems: base+att (the current MMT state of the art BIBREF30), del, and del+obj. A rank could be assigned from 1 to 3, allowing ties BIBREF32. Annotators could assign rank zero to all translations if they were judged incomprehensible. Following the common practice in WMT BIBREF32, each system was then assigned a score reflecting the proportion of times it was judged to be better than or equal to the other systems.
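A minimal sketch of that "better than or equal" score, under our reading of the WMT convention; the rankings below are made up for illustration.

```python
def better_or_equal_score(system, others, rankings):
    # rankings: one dict per sample, mapping system name -> rank
    # (1 is best, ties allowed, 0 marks an incomprehensible output).
    wins = total = 0
    for ranks in rankings:
        if ranks[system] == 0:
            continue
        for other in others:
            total += 1
            if ranks[system] <= ranks[other]:
                wins += 1
    return wins / total

rankings = [
    {"del+obj": 1, "del": 1, "base+att": 2},
    {"del+obj": 2, "del": 1, "base+att": 3},
]
print(better_or_equal_score("del+obj", ["del", "base+att"], rankings))  # 0.75
```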
Table TABREF19 shows the human evaluation results. They are consistent with the automatic evaluation results when it comes to the preference of humans towards the deliberation-based setups, but show a more positive outlook regarding the addition of visual information (del+obj over del) for French.
Manual inspection of translations suggests that deliberation setups tend to improve both the grammaticality and adequacy of the first-pass outputs. For German, the most common modifications performed by the second-pass decoder are substitutions of adjectives and verbs (for test 2016, 15% and 12%, respectively, of all edit-distance operations). Changes to adjectives are mainly grammatical; changes to verbs are contextual (e.g., changing laufen to rennen: both verbs mean run, but the second refers to running very fast). For French, 15% of all changes are substitutions of nouns (for test 2016). These are again very contextual. For example, the French word travailleur (worker) is replaced by ouvrier (manual worker) in contexts where tools, machinery, or buildings are mentioned. For our analysis we again used spaCy.
The information on detected objects is particularly helpful for specific adequacy issues. Figure FIGREF15 demonstrates some such cases. In the first case, the base+att model misses the translation of race car: the German word Rennen translates only the word race. del introduces the word car (Auto) into the translation. Finally, del+obj correctly translates the expression race car (Rennwagen) by exploiting the object information. For French, del translates the source phrase in a body of water, which is missing from the base+att translation. del+obj additionally translates the word paddling according to the detected object Paddle.
We use Meteor (Denkowski and Lavie, 2014) as the main metric, as in the WMT tasks (Barrault et al., 2018).
|
What is their main metric?
|
Meteor.
|
null | false
| null |
Boo-Boo Bear is a Hanna-Barbera cartoon character on The Yogi Bear Show. Boo-Boo is an anthropomorphic dwarf bear who wears a blue bowtie. Boo-Boo is Yogi Bear's constant companion (not his son, as sometimes believed), and often acts as his conscience. He tries (usually unsuccessfully) to keep Yogi from doing things he should not do, and also to keep Yogi from getting into trouble with Ranger Smith – often saying, "Mr. Ranger isn't gonna like this, Yogi." It is not readily apparent whether Boo-Boo is a juvenile bear with a precocious intellect, or simply an adult bear who is short of stature.
|
How old is Boo Boo Bear?
|
It is unclear how old Boo Boo Bear is. He may be an adolescent or just a short adult.
|
null | false
| null |
What is Tableau?
|
Tableau is a software company based in America. It offers visualization and dashboarding software that connects to a company's data in their systems of record. Tableau has a strong presence in the Business Intelligence community as a leader in the visualization space, and it was founded in 2003. Salesforce has owned Tableau since 2019, paying around 16 billion dollars to acquire it.
|
|
1910.00825
| false
| null |
Our encoder-decoder framework employs separate encoding for different speakers in the dialog. User utterances $x_t^{usr}$ and system utterances $x_t^{sys}$ are fed into a user encoder and a system encoder separately to obtain encoder hidden states $h_{i}^{usr}$ and $h_{i}^{sys}$ . The attention distributions and context vectors are calculated as described in section SECREF1. In order to merge these two encoders in our framework, the decoder's hidden state $s_0$ is initialized as:
We integrate the semantic slot scaffold by performing delexicalization on the original dialogs. Delexicalization is a common pre-processing step in dialog modeling. Specifically, delexicalization replaces slot values with their semantic slot names (e.g., replacing 18:00 with [time]). It is easier for language models to process delexicalized texts, as they have a reduced vocabulary size. However, the generated sentences lack semantic information due to the delexicalization. Some previous dialog system research ignored this issue BIBREF30 or completed a single delexicalized utterance BIBREF31 as the generated response. We propose to perform delexicalization in dialog summary, since delexicalized utterances can simplify dialog modeling. We fill the slots in the generated templates using the copy and pointing mechanism.
We integrate the dialog domain scaffold through a multi-task framework. The dialog domain indicates the content of different conversation tasks, for example, booking a hotel, restaurant, or taxi in the MultiWOZ dataset. Generally, the content in different domains varies, so multi-domain summarization is more difficult than single-domain summarization. We include domain classification as an auxiliary task to incorporate the prior that different domains have different content. Feedback from the domain classification task provides domain-specific information for the encoder to learn better representations. For domain classification, we feed the concatenated encoder hidden state through a binary classifier with two linear layers, producing domain probability $d$. The $i^{th}$ element $d_i$ of $d$ represents the probability of the $i^{th}$ domain:
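A hypothetical sketch of the auxiliary domain head follows; the sizes and the encoder states are placeholders, not SPNet's implementation, and we read "binary classifier" as one sigmoid output per domain.

```python
import torch
import torch.nn as nn

class DomainClassifier(nn.Module):
    def __init__(self, hidden_size=256, n_domains=5):
        super().__init__()
        # Two linear layers over the concatenated user/system encoder
        # states, with a sigmoid per domain.
        self.net = nn.Sequential(
            nn.Linear(2 * hidden_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, n_domains),
            nn.Sigmoid(),
        )

    def forward(self, h_usr, h_sys):
        # Returns d, where d[i] is the probability of the i-th domain.
        return self.net(torch.cat([h_usr, h_sys], dim=-1))

clf = DomainClassifier()
d = clf(torch.randn(1, 256), torch.randn(1, 256))
```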
|
How does SPNet utilize additional speaker role, semantic slot and dialog domain annotations?
|
The answers are shown as follows:
* Our encoder-decoder framework employs separate encoding for different speakers in the dialog.
* We integrate semantic slot scaffold by performing delexicalization on original dialogs.
* We integrate dialog domain scaffold through a multi-task framework.
|
null | false
| null |
Lætitia Sadier (born 6 May 1968, sometimes known as Seaya Sadier) is a French musician, best known as a founding member of the London-based avant-pop band Stereolab. In 1996, while Stereolab was still active, she formed the side project Monade. In 2009 – the same year Stereolab became inactive – she ended the Monade project and began to perform solo work under her own name; her current band is known as the Lætitia Sadier Source Ensemble. She has frequently performed guest vocals and collaborations with other artists.
|
From the passage provided, extract the alternative name of Lætitia Sadier.
|
Lætitia Sadier also goes by Seaya Sadier.
|
1803.03786
| false
| null |
General lexical features are often used in natural language processing as they are somewhat task-independent and reasonably effective in terms of classification accuracy. In our experiments, we used TF.IDF-based features over the title and over the content of the article we wanted to classify. We had these features twice – once for the title and once for the content of the article – as we wanted to have two different representations of the same article. Thus, we used a total of 1,100 TF.IDF-weighted features (800 content + 300 title), limiting the vocabulary to the top 800 and 300 words, respectively (which occurred in more than five articles). We should note that TF.IDF features should be used with caution as they may not remain relevant over time or in different contexts without retraining.
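A sketch of this feature setup with scikit-learn; the two toy articles are placeholders, and on real data min_df would be 6 to enforce the "occurred in more than five articles" threshold.

```python
import scipy.sparse as sp
from sklearn.feature_extraction.text import TfidfVectorizer

titles = ["example title one", "example title two"]           # placeholders
contents = ["example body text one", "example body text two"]

title_vec = TfidfVectorizer(max_features=300, min_df=1)    # min_df=6 on real data
content_vec = TfidfVectorizer(max_features=800, min_df=1)

# 800 content + 300 title = 1,100 TF.IDF-weighted features per article.
X = sp.hstack([content_vec.fit_transform(contents),
               title_vec.fit_transform(titles)])
print(X.shape)
```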
|
what lexical features did they experiment with?
|
The answers are shown as follows:
* TF.IDF-based features
|
null | false
| null |
How many rounds are played in a professional golf tournament?
|
Four rounds are played in a professional golf tournament.
|
|
null | false
| null |
The 35th Filipino Academy of Movie Arts and Sciences Awards Night was held on May 6, 1988 at the Manila Hotel Fiesta Pavilion in the Philippines, honoring the best films of 1987.
Saan Nagtatago ang Pag-ibig won the most awards with four wins, including the most coveted FAMAS Award for Best Picture. Three people were elevated to "Hall of Fame" status after winning their respective categories five times. They were Fernando Poe Jr. for best actor, Augusto Salvador for editing, and George Canseco for musical score. This was the second time for Canseco, who had earlier been inducted into the Hall of Fame for winning five times in the Theme Song category.
|
Given a reference text about the 35th Filipino Academy of Movie Arts and Sciences Awards Night, tell me who won the best actor award.
|
Fernando Poe Jr. won the best actor award at the 35th Filipino Academy of Movie Arts and Sciences Awards Night.
|
null | false
| 99
|
In recent years there has been increasing interest in the issue of disinformation spreading on online social media. Global concern over false (or "fake") news as a threat to modern democracies has been frequently raised–ever since the 2016 US Presidential elections–in correspondence with events of political relevance, where the proliferation of manipulated and low-credibility content attempts to drive and influence people's opinions BIBREF0, BIBREF1, BIBREF2, BIBREF3.
Researchers have highlighted several drivers for the diffusion of such malicious phenomenon, which include human factors (confirmation bias BIBREF4, naive realism BIBREF5), algorithmic biases (filter bubble effect BIBREF0), the presence of deceptive agents on social platforms (bots and trolls BIBREF6) and, lastly, the formation of echo chambers BIBREF7 where people polarize their opinions as they are insulated from contrary perspectives.
The problem of automatically detecting online disinformation news has typically been formulated as a binary classification task (i.e. credible vs non-credible articles), and tackled with a variety of different techniques, based on traditional machine learning and/or deep learning, which mainly differ in the dataset and the features they employ to perform the classification. We may distinguish three approaches: those built on content-based features, those based on features extracted from the social context, and those which combine both aspects. A few main challenges hinder the task, namely the impossibility of manually verifying all news items, the lack of gold-standard datasets, and the adversarial setting in which malicious content is created BIBREF3, BIBREF6.
In this work we follow the direction pointed out in a few recent contributions on the diffusion of disinformation compared to traditional and objective information. These have shown that false news spread faster and deeper than true news BIBREF8, and that social bots and echo chambers play an important role in the diffusion of malicious content BIBREF6, BIBREF7. Therefore we focus on the analysis of spreading patterns which naturally arise on social platforms as a consequence of multiple interactions between users, due to the increasing trend in online sharing of news BIBREF0.
A deep learning framework for detection of fake news cascades is provided in BIBREF9, where the authors refer to BIBREF8 in order to collect Twitter cascades pertaining to verified false and true rumors. They employ geometric deep learning, a novel paradigm for graph-based structures, to classify cascades based on four categories of features, such as user profile, user activity, network and spreading, and content. They also observe that a few hours of propagation are sufficient to distinguish false news from true news with high accuracy. Diffusion cascades on Weibo and Twitter are analyzed in BIBREF10, where authors focus on highlighting different topological properties, such as the number of hops from the source or the heterogeneity of the network, to show that fake news shape diffusion networks which are highly different from credible news, even at early stages of propagation.
In this work, we consider the results of BIBREF11 as our baseline. The authors use off-the-shelf machine learning classifiers to accurately classify news articles leveraging Twitter diffusion networks. To this aim, they consider a set of basic features which can be qualitatively interpreted w.r.t to the social behavior of users sharing credible vs non-credible information. Their methodology is overall in accordance with BIBREF12, where authors successfully detect Twitter astroturfing content, i.e. political campaigns disguised as spontaneous grassroots, with a machine learning framework based on network features.
In this paper, we propose a classification framework based on a multi-layer formulation of Twitter diffusion networks. For each article we disentangle different social interactions on Twitter, namely tweets, retweets, mentions, replies, and quotes, to accordingly build a diffusion network composed of multiple layers (one for each type of interaction), and we compute structural features separately for each layer. We pick a set of global network properties from the network science toolbox which can be qualitatively explained in terms of social dimensions and allow us to encode different networks with a tuple of features. These include traditional indicators, e.g. network density, number of strong/weak connected components, and diameter, and more elaborate ones such as the main K-core number BIBREF13 and structural virality BIBREF14. Our main research question is whether the use of a multi-layer, disentangled network yields a significant advance in terms of classification accuracy over a conventional single-layer diffusion network. Additionally, we are interested in understanding which of the above features, and in which layer, are most effective in the classification task.
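As an illustration, the per-layer features could be extracted along these lines with networkx. This is a sketch rather than the authors' code; by assumption, metrics that require connectivity are computed on the largest weakly connected component.

```python
import networkx as nx

def layer_features(G: nx.DiGraph):
    und = G.to_undirected()
    feats = {
        "density": nx.density(G),
        "n_weak_cc": nx.number_weakly_connected_components(G),
        "n_strong_cc": nx.number_strongly_connected_components(G),
        # Main K-core number of the undirected projection.
        "main_k_core": max(nx.core_number(und).values()),
    }
    # Diameter and structural virality (mean pairwise distance) on the
    # largest connected component, so they are well defined.
    giant = und.subgraph(max(nx.connected_components(und), key=len))
    feats["diameter"] = nx.diameter(giant)
    feats["structural_virality"] = nx.average_shortest_path_length(giant)
    return feats

G = nx.gnp_random_graph(30, 0.1, seed=1, directed=True)
print(layer_features(G))
```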
We perform classification experiments with an off-the-shelf Logistic Regression model on two different datasets of mainstream and disinformation news shared on Twitter respectively in the United States and in Italy during 2019. In the former case we also account for political biases inherent to different news sources, referring to the procedure proposed in BIBREF2 to label different outlets. Overall we show that we are able to classify credible vs non-credible diffusion networks (and consequently news articles) with high accuracy (AUROC up to 94%), even when accounting for the political bias of sources (and training only on left-biased or right-biased articles). We observe that the layer of mentions alone conveys useful information for the classification, denoting a different usage of this functionality when sharing news belonging to the two news domains. We also show that most discriminative features, which are relative to the breadth and depth of largest cascades in different layers, are the same across the two countries.
The outline of this paper is the following: we first formulate the problem and describe data collection, network representation and structural properties employed for the classification; then we provide experimental results–classification performances, layer and feature importance analyses and a temporal classification evaluation–and finally we draw conclusions and future directions.
In this paper, we propose a classification framework based on a multi-layer formulation of Twitter diffusion networks.
|
What is the classification framework based on in this paper?
|
A multi-layer formulation of Twitter diffusion networks.
|
null | false
| 514
|
We have described SoftHebb, a highly biologically plausible neural algorithm that is founded on a Bayesian ML-theoretic framework. The model consists of elements fully compatible with conventional ANNs. It was previously not known which plasticity rule should be used to learn a Bayesian generative model of the input distribution in ANN WTA networks. Moreover, we showed that hard WTA networks and neurons with other activation functions can be described within the same framework as variations of the probabilistic model. This theory could provide a new foundation for normative Hebbian ANN designs with practical significance. For example, SoftHebb's properties are sought after for efficient neuromorphic learning chips. It is unsupervised, local, and requires no error or other feedback currents from upper layers, thus solving hardware inefficiencies and bio-implausibilities of backpropagation such as weight transport and update locking. Surprisingly, it surpasses backpropagation even in accuracy when training time and network size are limited. In a demonstration that goes beyond the common greedy-training approach to such networks, we achieved update-unlocked operation in practice, by updating the first layer before the input's full processing by the next layer. It is intriguing that, through its biological plausibility, properties commonly associated with biological intelligence emerge, such as speed of learning, and robustness to noise and adversarial attacks. Significant robustness emerges without specialized defences. Furthermore, SoftHebb tends not merely to be robust to attacks, but to actually deflect them, as specialized SOTA defences aim to do.
Here, we explored SoftHebb's applicability to several datasets. We measured its accuracy on MNIST, Fashion-MNIST, and CIFAR-10 in preliminary results, and we reported a functional convolutional SoftHebb network that improves accuracy on the significantly harder CIFAR-10 dataset. The convolutional implementation could become the foundation for deeper networks and complex problems. Ultimately, this could provide insights into the role of WTA microcircuits in larger networks in cortex with localized receptive fields, similar to area V1 of cortex. All in all, the algorithm has several properties that are individually interesting and novel, and worth future extension. Combined, however, SoftHebb's properties shown in this work may already enable certain small-scale but previously impossible applications. For example, fast, on-line, unsupervised learning of simple tasks by edge sensing devices, operating in noisy conditions, with a small battery and only local processing, requires exactly the algorithmic properties that we demonstrated here.
REPRODUCIBILITY STATEMENT
Some of our main contributions are theoretical and their proof is fully reproducible by following the rigorous derivations in the main text and in the appendices. Our experimental results are based on standard machine-learning techniques, while we also describe the hyperparameters and experimental protocols. Moreover, we provide Python code with specific instructions to reproduce the main experiments.
The cosine similarity between the input vector and each centroid's parameters underpins the model (Eq. 4). This similarity is precisely computed by a linear neuron that receives normalized inputs $x^* := x / \|x\|$ and that normalizes its vector of synaptic weights: $w_k^* := w_k / \|w_k\|$. Specifically, the neuron's summed weighted input $u_k = w_k^* \cdot x^*$ then determines the cosine similarity of an input sample to the weight vector, thus computing the likelihood function of each component of the input mixture (Eq. 2). It should be noted that even though $u_k$ depends on the weights of all input synapses, the weight values of other synapses do not need to be known to each updated synapse. Therefore, in the SoftHebb plasticity rule that we will present (Eq. 8), the term $u_k$ is a local, postsynaptic variable that does not undermine the locality of the plasticity. The bias term of each neuron can store the parameter $w_{0k}$ of the prior $Q(C_k; w_{0k})$. Based on these, it can also be shown that a set of $K$ such neurons can actually compute the Bayesian posterior, if the neurons are connected in a configuration that implements softmax. Softmax has a biologically-plausible implementation through lateral inhibition (divisive normalization) between neurons. Specifically, based on the model of Definition 2.2, the posterior probability is

$$Q(C_k|x; w) = \frac{e^{u_k + w_{0k}}}{\sum_{k'=1}^{K} e^{u_{k'} + w_{0k'}}}. \quad \text{(Eq. 7)}$$
But in the neural description, $u_k + w_{0k}$ is the activation of the $k$-th linear neuron. That is, Eq. 7 shows that the result of Bayesian inference of the hidden cause from the input, $Q(C_k|x)$, is found by a softmax operation on the linear neural activations. In this equivalence, we will use $y_k := Q(C_k|x; w)$ to symbolize the softmax output of the $k$-th neuron, i.e. the output after the WTA operation, interchangeably with $Q(C_k|x)$. It can be seen in Eq. 7 that the probabilistic model has one more, alternative, but equivalent neural interpretation. Specifically, $Q(C_k|x)$ can be described as the output of a neuron with an exponential activation function (the numerator in Eq. 7) that is normalized by its layer's total output (the denominator). This is equally accurate, and more directly analogous to the biological description. It shows that the exponential activation of each individual neuron $k$ directly equals the $k$-th exponential component distribution of the generative mixture model (Eq. 4). Therefore, the softmax-configured linear neurons, or equivalently the normalized exponential neurons, fully implement the generative model of Definition 2.2, and also infer the Bayesian posterior probability given an input and the model parameters. However, the problem of calculating the model's parameters from data samples is a difficult one if the input distribution's parameters are unknown. In the next sections we show that this neural network can find these optimal parameters through Bayesian inference, in an unsupervised and on-line manner, based only on local Hebbian plasticity.
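To illustrate, here is a toy soft winner-take-all layer in this spirit. The exact plasticity is the paper's Eq. 8; the Oja-like update below is an assumption for illustration only, as are all sizes and the learning rate.

```python
import numpy as np

rng = np.random.default_rng(0)
K, D, lr = 10, 784, 0.05
W = rng.normal(size=(K, D))
W /= np.linalg.norm(W, axis=1, keepdims=True)  # normalized weights w*_k

def softhebb_step(x):
    global W
    x = x / (np.linalg.norm(x) + 1e-9)  # normalized input x*
    u = W @ x                           # cosine similarities u_k
    y = np.exp(u - u.max())
    y /= y.sum()                        # softmax posterior Q(C_k | x)
    # Hebbian pull toward the input, weighted by the soft posterior
    # (assumed rule, in place of the paper's Eq. 8).
    W += lr * y[:, None] * (x[None, :] - u[:, None] * W)

for _ in range(100):
    softhebb_step(rng.random(D))  # random vectors as stand-in data
```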
State-of-the-art (SOTA) artificial neural networks (ANNs) achieve impressive results in a variety of machine intelligence tasks. However, they largely rely on mechanisms that diverge from the original inspiration from biological neural networks. As a result, only a small part of this prolific field also contributes to computational neuroscience. In fact, this biological implausibility is also an important issue for machine intelligence. For their impressive performance, ANNs trade off other desired properties, which are present in biological systems. For example, ANN training often demands very large and labelled datasets. When labels are unavailable, self-supervised learning schemes exist, where supervisory error signals generated by the network itself are exploited and backpropagated from the output towards the input to update the network's parameters. However, this global propagation of signals in deep networks introduces another limitation. Namely, it prevents the implementation of efficient distributed computing hardware that would be based on only local signals from neighbouring physical nodes in the network, and is in contrast to the local synaptic plasticity rules that partly govern biological learning. Several pieces of work have addressed parts of the biological implausibility and hardware inefficiency of backpropagation in ANNs, such as the need for exactly symmetric forward and backward weights or the waiting time caused by the network's forward-backward pass between two training updates in a layer (the weight-transport and update-locking problems). Recently, an approximation to backpropagation that is mostly Hebbian, i.e. relies on mostly pre- and post-synaptic activity of each synapse, has been achieved by reducing the global error requirements to 1-bit information. Two schemes that further localize the signal that is required for a weight update are Equilibrium Propagation and Predictive Coding. Both methods approximate backpropagation through Hebbian-like learning, by delegating the global aspect of the computation from a global error signal to a global convergence of the network state to an equilibrium. This equilibrium is reached through several iterative steps of feed-forward and feed-back communication throughout the network, before the ultimate weight update by one training example. The biological plausibility and hardware efficiency of this added iterative process of signal propagation are open questions that are beginning to be addressed.
Moreover, learning through backpropagation, and presumably also its approximations, has another indication of biological implausibility, which also significantly limits ANN applicability. Namely, it produces networks that are confused by small adversarial perturbations of the input, which are imperceptible by humans. It has recently been proposed that a defence strategy of "deflection" of adversarial attacks may be the ultimate solution to that problem. Through this strategy, to cause confusion in the network's inferred class, the adversary is forced to generate such a changed input that really belongs to the distribution of a different input class. Intuitively, but also strictly by definition, this deflection is achieved if a human assigns to the perturbed input the same label that the network does. Deflection of adversarial attacks in ANNs has been demonstrated by an elaborate scheme that is based on detecting the attacks. However, the human ability to deflect adversarial perturbations likely does not rely on detecting them, but rather on effectively ignoring them, making the deflecting type of robustness an emergent property of biological computation rather than a defence mechanism. The biological principles that underlie this property of robustness are unclear, but it might emerge from the distinct algorithms that govern learning in the brain.
Therefore, what is missing is a biologically plausible model that can learn from fewer data points, without labels, through local plasticity, and without feedback from distant layers. This model could then be tested for emergent adversarial robustness. A good candidate category of biological networks and learning algorithms is that of competitive learning. Neurons that compete for their activation through lateral inhibition are a common connectivity pattern in the superficial layers of the cerebral cortex. This pattern is described as winner-take-all (WTA), because competition suppresses the activity of weakly activated neurons and emphasizes strong ones. Combined with Hebbian-like plasticity rules, WTA connectivity gives rise to competitive-learning algorithms. These networks and learning schemes have long been studied, and a large literature based on simulations and analyses describes their functional properties. A WTA neuronal layer, depending on its specifics, can restore missing input signals, perform decision making, i.e. winner selection, and generate oscillations such as those that underlie brain rhythms. Perhaps more importantly, its neurons can learn to become selective to different input patterns, such as the orientation of visual bars in models of the primary visual cortex, MNIST handwritten digits, CIFAR-10 objects, and spatiotemporal spiking patterns, and can adapt dynamically to model changing objects. The WTA model is indeed biologically plausible, Hebbian plasticity is local, and learning is input-driven, relying on only feed-forward communication of neurons, properties that seem to address several of the limitations of ANNs. However, the model's applicability is limited to simple tasks. That is partly because the related theoretical literature remains surprisingly unsettled, despite its long history and the strong and productive community interest. Prior work described a very related theory, but for a model that is largely incompatible with ANNs and thus less practical. It uses spiking and stochastic neurons, input has to be discretized, and each input feature must be encoded through multiple binary neurons. Moreover, it was only proven for neurons with an exponential activation function. It therefore remains unclear which specific plasticity rule and structure could optimize an ANN WTA for Bayesian inference. It is also unclear how to minimize a common loss function such as cross-entropy despite unsupervised learning, and how a WTA could represent varying families of probability distributions. In summary, on the theoretical side, an algorithm that is simultaneously normative, based on WTA networks and Hebbian unsupervised plasticity, performs Bayesian inference, and, importantly, is composed of conventional, i.e. non-spiking, ANN elements and is rigorously linked to modern ANN tools such as cross-entropy loss, would be an important advance but has been missing. On the practical side, evidence that Hebbian WTA networks could be useful for presently pertinent issues of modern ANNs, such as adversarial robustness, generation of synthetic images, or faster learning, has remained limited. Here we aim to fill these gaps. Recently, when WTA networks were studied in a theoretical framework compatible with conventional machine learning (ML), but in the context of short-term as opposed to long-term Hebbian plasticity, this resulted in surprising practical advantages over supervised ANNs.
A similar theoretical approach could also reveal unknown advantages of long-term Hebbian plasticity in WTA networks. In addition, it could provide insights into how a WTA microcircuit could participate in larger-scale computation by deeper cortical or artificial networks.
Here we construct "SoftHebb", a biologically plausible WTA model that is based on standard rate-based neurons as in ANNs, can accommodate various activation functions, and learns without labels, using local plasticity and only feed-forward communication, i.e. the properties we seek in an ANN. Importantly, it is equipped with a simple normalization of the layer's activations, and an optional temperature-scaling mechanism, producing a soft WTA instead of selecting a single "hard" winner neuron. This allows us to prove formally that a SoftHebb layer is a generative mixture model that objectively minimizes its Kullback-Leibler (KL) divergence from the input distribution through Bayesian inference, thus providing a new formal ML-theoretic perspective on these networks. We complement our main results, which are theoretical, with experiments that are small-scale but produce intriguing results. As a generative model, SoftHebb has a broader scope than classification, but we test it on image classification tasks. Surprisingly, in addition to overcoming several inefficiencies of backpropagation, the unsupervised WTA model also outperforms a supervised two-layer perceptron in several aspects: learning speed and accuracy in the first presentation of the training dataset, and robustness to noisy data and to one of the strongest white-box adversarial attacks, i.e. projected gradient descent (PGD), all without any explicit defence. Interestingly, the SoftHebb model also exhibits inherent deflection of adversarial attacks, and generates object interpolations.
It is unsupervised, local, and requires no error or other feedback currents from upper layers, thus solving hardware inefficiencies and bio-implausibilities of backpropagation such as weight transport and update locking.

Therefore, in the SoftHebb plasticity rule that we will present (Eq. 8), the term $u_k$ is a local, postsynaptic variable that does not undermine the locality of the plasticity.

Some of our main contributions are theoretical and their proof is fully reproducible by following the rigorous derivations in the main text and in the appendices.

Surprisingly, in addition to overcoming several inefficiencies of backpropagation, the unsupervised WTA model also outperforms a supervised two-layer perceptron in several aspects: learning speed and accuracy in the first presentation of the training dataset, robustness to noisy data and to one of the strongest white-box adversarial attacks, i.e. projected gradient descent (PGD) (Madry et al., 2017), and without any explicit defence. Interestingly, the SoftHebb model also exhibits inherent properties of deflection (Qin et al., 2020) of the adversarial attacks, and generates object interpolations.
|
What is the advantage of softHebb?
|
**SoftHebb has multiple advantages that are detailed in the revised paper's introduction and discussion.** Summary: in the studied conditions, SoftHebb
- solves simultaneously four inefficiencies and bio-implausibilities of backpropagation that have been under heavy research:
  - non-locality
  - update-locking
  - weight-transport
  - detailed feedback-dependence
- is normative and backed by rigorous theory, contrary to many approaches to Hebbian learning that are heuristic
- contrary to the standards of the biologically-plausible learning literature, even outperforms backpropagation in terms of accuracy in certain conditions
- is surprisingly and significantly robust to noise and to adversarial attacks *without any defence mechanism*.
|
1803.09745
| false
| null |
Fig 5. (A) The average verb regularization fraction by county for the lower 48 states, along with (B) residuals and (C) Gi* z-score. A higher Gi* z-score means a county has a greater regularization fraction than expected. Counties colored black did not have enough data. We used the dataset in row (IV) of Table 1.
|
Which regions of the United States do they consider?
|
all counties in the lower 48 states, except those colored black, which did not have enough data
|
2001.05284
| false
| null |
We conduct our experiments on $\sim $ 8.7M annotated anonymised user utterances. They are annotated and derived from requests across 23 domains.
We conduct our experiments on $\sim $ 8.7M annotated anonymised user utterances. They are annotated and derived from requests across 23 domains.
|
Over which datasets/corpora is this work evaluated?
|
The answers are shown as follows:
* $\sim $ 8.7M annotated anonymised user utterances
|
null | false
| null |
Which player has scored the most World Cup goals?
|
After playing in four different World Cup editions, Germany's Miroslav Klose currently holds the record for most goals scored in the competition's history. Klose scored 16 times across the 2002, 2006, 2010, and 2014 tournaments combined, at an average of 0.67 goals per game.
|
|
null | false
| null |
How do I get better at bouldering?
|
Usually the best way to get better at bouldering is to boulder as much as you can. You can also check out YouTube videos for techniques to practice while on the wall, and try routes above your current grade for fun.
|
|
null | false
| null |
Classify these as things you need to peel in order to eat or things that you don't need to peel: orange, apple, oats, banana, potato
|
orange: peel, apple: don't peel, oats: don't peel, banana: peel, potato: don't peel
|
|
null | false
| 26
|
One of the most fundamental topics in natural language processing is how best to derive high-level representations from constituent parts, as natural language meanings are a function of their constituent parts. How best to construct a sentence representation from distributed word embeddings is an example domain of this larger issue. Even though sequential neural models such as recurrent neural networks (RNN) BIBREF0 and their variants including Long Short-Term Memory (LSTM) BIBREF1 and Gated Recurrent Unit (GRU) BIBREF2 have become the de-facto standard for condensing sentence-level information from a sequence of words into a fixed vector, there have been many lines of research towards better sentence representation using other neural architectures, e.g. convolutional neural networks (CNN) BIBREF3 or self-attention based models BIBREF4 .
From a linguistic point of view, the underlying tree structure—as expressed by its constituency and dependency trees—of a sentence is an integral part of its meaning. Inspired by this fact, some recursive neural network (RvNN) models are designed to reflect the syntactic tree structure, achieving impressive results on several sentence-level tasks such as sentiment analysis BIBREF5 , BIBREF6 , machine translation BIBREF7 , natural language inference BIBREF8 , and discourse relation classification BIBREF9 .
However, some recent works BIBREF10 , BIBREF11 have proposed latent tree models, which learn to construct task-specific tree structures without explicit supervision, bringing into question the value of linguistically-motivated recursive neural models. Witnessing the surprising performance of the latent tree models on some sentence-level tasks, there arises a natural question: Are linguistic tree structures the optimal way of composing sentence representations for NLP tasks?
In this paper, we demonstrate that linguistic priors are in fact useful for devising effective neural models for sentence representations, showing that our novel architecture based on constituency trees and their tag information obtains superior performance on several sentence-level tasks, including sentiment analysis and natural language inference.
A chief novelty of our approach is that we introduce a small separate tag-level tree-LSTM to control the composition function of the existing word-level tree-LSTM, which is in charge of extracting helpful syntactic signals for meaningful semantic composition of constituents by considering both the structures and linguistic tags of constituency trees simultaneously. In addition, we demonstrate that applying a typical LSTM to preprocess the leaf nodes of a tree-LSTM greatly improves the performance of the tree models. Moreover, we propose a clustered tag set to replace the existing tags on the assumption that the original syntactic tags are too fine-grained to be useful in neural models.
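For concreteness, the sketch below shows one way such tag-level control could enter a word-level composition: a tiny tag composer produces a control vector that is concatenated into the gate computation of a simplified binary tree-LSTM cell. The dimensions, the single-layer tag composer, and the omission of forget gates are all simplifying assumptions for illustration; this is not the paper's exact cell.

```python
import torch
import torch.nn as nn

class TagControlledCell(nn.Module):
    """Composes two children while a tag-level signal steers the gates (sketch)."""
    def __init__(self, d_word, d_tag):
        super().__init__()
        self.tag_comp = nn.Linear(2 * d_tag, d_tag)              # tiny tag-level composer
        self.gates = nn.Linear(2 * d_word + d_tag, 3 * d_word)   # input, output, update gates

    def forward(self, h_l, c_l, h_r, c_r, t_l, t_r):
        t = torch.tanh(self.tag_comp(torch.cat([t_l, t_r], dim=-1)))  # tag state of the node
        i, o, u = self.gates(torch.cat([h_l, h_r, t], dim=-1)).chunk(3, dim=-1)
        c = torch.sigmoid(i) * torch.tanh(u) + c_l + c_r         # simplified: forget gates omitted
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c, t                                           # t is passed up as the node's tag state
```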
In short, our contributions in this work are as follows:
• We propose a new linguistically-motivated neural model which generates high-quality sentence representations by considering all the information extracted from constituency parse trees.
• In addition, we demonstrate the superiority of the proposed models, achieving new state-of-the-art performance within the similar model class on 4 out of 5 sentence classification benchmarks, as well as showing competitive results compared to other types of neural models.
• We empirically show that another key point to the success of tree-structured models is to contextualize input word embeddings so that the corresponding input for each word in a sentence can better reflect the meaning of the whole sentence.
|
What are the authors' contributions to this work?
|
• They propose a new linguistically-motivated neural model. • They demonstrate the benefits of the proposed model, achieving new state-of-the-art performance within the similar model class. • They empirically show that another key to the success of tree-structured models is to contextualize input word embeddings so that the input corresponding to each word in a sentence better reflects the meaning of the entire sentence.
|
1909.01093
| false
| null |
We also did a qualitative study on the Starbucks (SBUX) stock movement during this event. Figure FIGREF12 shows the daily percentage change of SBUX and the NASDAQ index between April 11th and April 20th. SBUX did not follow the upward trend of the whole market before April 17th, and then its change on April 20th, INLINEFORM0 , is quite significant relative to historical norms. We collected the historical 52-week stock prices prior to this event and calculated the daily stock price change. The distribution of the daily price change of the previous 52 weeks is shown in Figure FIGREF13 with a mean INLINEFORM1 and standard deviation INLINEFORM2 . The INLINEFORM3 drop is almost two standard deviations below the mean. Our observation is that, plausibly, the notable decline in Starbucks stock price was a negative aftereffect of the major public relations crisis.
We also did a qualitative study on the Starbucks (SBUX) stock movement during this event. Figure FIGREF12 shows the daily percentage change of SBUX and the NASDAQ index between April 11th and April 20th. SBUX did not follow the upward trend of the whole market before April 17th, and then its change on April 20th, INLINEFORM0 , is quite significant relative to historical norms. We collected the historical 52-week stock prices prior to this event and calculated the daily stock price change. The distribution of the daily price change of the previous 52 weeks is shown in Figure FIGREF13 with a mean INLINEFORM1 and standard deviation INLINEFORM2 .
|
How does the method measure the impact of the event on market prices?
|
The answers are shown as follows:
* We collected the historical 52 week stock prices prior to this event and calculated the daily stock price change. The distribution of the daily price change of the previous 52 weeks is Figure FIGREF13 with a mean INLINEFORM1 and standard deviation INLINEFORM2 .
|
null | false
| 160
|
Learning the distributed representation for long spans of text from its constituents has been a key step for various natural language processing (NLP) tasks, such as text classification BIBREF0 , BIBREF1 , semantic matching BIBREF2 , BIBREF3 , and machine translation BIBREF4 . Existing deep learning approaches take a compositional function with different forms to compose word vectors recursively until obtaining a sentential representation. Typically, these compositional functions involve recurrent neural networks BIBREF5 , BIBREF6 , convolutional neural networks BIBREF7 , BIBREF8 , and tree-structured neural networks BIBREF9 , BIBREF10 .
Among these methods, tree-structured neural networks (Tree-NNs) show their superior performance in many NLP tasks BIBREF11 , BIBREF12 . Following the syntactic tree structure, Tree-NNs assign a fixed-length vector to each word at the leaves of the tree, and combine word and phrase pairs recursively to create intermediate node vectors, eventually obtaining one final vector to represent the whole sentence.
However, these models have a major limitation in their inability to fully capture the richness of compositionality BIBREF13 . The same parameters are used for all kinds of semantic compositions, even though the compositions have different characteristics in nature. For example, the composition of the adjective and the noun differs significantly from the composition of the verb and the noun. Moreover, many semantic phenomena, such as semantic idiomaticity or transparency, call for more powerful compositional mechanisms BIBREF14 . Therefore, Tree-NNs suffer from the underfitting problem.
To alleviate this problem, some researchers propose to use multiple compositional functions, which are arranged beforehand according to some partition criterion BIBREF11 , BIBREF13 , BIBREF15 . Intuitively, using different parameters for different types of compositions has the potential to greatly reduce underfitting. BIBREF13 defined different compositional functions in terms of syntactic categories, and a suitable compositional function is selected based on the syntactic categories. BIBREF15 introduced multiple compositional functions; during the compositional phase, a proper one is selected based on the input information. Although these models accomplished their mission to a certain extent, they still suffer from the following three challenges. First, the predefined compositional functions cannot cover all the compositional rules; Second, they require more learnable parameters, suffering from the problem of overfitting; Third, it is difficult to determine a universal criterion for semantic composition based solely on syntactic categories.
In this paper, we propose dynamic compositional neural networks over tree structure, in which a meta network is used to generate the context-specific parameters of a dynamic compositional network. Specifically, we construct our models based on two kinds of tree-structured neural networks: recursive neural network (Tree-RecNN) BIBREF11 and tree-structure long short-term memory neural network (Tree-LSTM) BIBREF9 . Our work is inspired by recent work on dynamic parameter prediction BIBREF16 , BIBREF17 , BIBREF18 . The meta network is used to extract the shared meta-knowledge across different compositional rules and to dynamically generate the context-specific compositional function. Thus, the compositional function of our models varies with positions, contexts and samples. The dynamic compositional network then applies those context-specific parameters to the current input information. Both meta and dynamic networks are differentiable such that the overall networks can be trained in an end-to-end fashion. Additionally, to reduce the complexity of the whole networks, we define the dynamic weight matrix in a manner simulating low-rank matrix decomposition.
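As a rough illustration of the low-rank construction mentioned above, the sketch below has a meta network emit the two factors of a rank-r weight matrix from a context vector; the dimensions, the rank, and the single-linear meta network are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class DynamicLowRankLinear(nn.Module):
    def __init__(self, d_in, d_out, d_ctx, rank=4):
        super().__init__()
        self.meta_u = nn.Linear(d_ctx, d_out * rank)   # generates factor U (d_out x rank)
        self.meta_v = nn.Linear(d_ctx, rank * d_in)    # generates factor V (rank x d_in)
        self.d_in, self.d_out, self.rank = d_in, d_out, rank

    def forward(self, x, ctx):
        U = self.meta_u(ctx).view(self.d_out, self.rank)
        V = self.meta_v(ctx).view(self.rank, self.d_in)
        # Context-specific weight W = U V: the meta network outputs
        # (d_out + d_in) * rank values instead of d_out * d_in.
        return (U @ V) @ x
```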
We evaluate our models on two typical tasks: text classification and text semantic matching. The results show that our models are more expressive due to their learning-to-learn nature, yet without increasing the number of model parameters. Moreover, we find certain composition operations can be learned implicitly by the meta TreeNN, such as the composition of noun phrases and verb phrases.
The contributions of the paper can be summed up as follows.
Specifically, we construct our models based on two kinds of tree-structured neural networks: recursive neural network (Tree-RecNN) and tree-structure long short-term memory neural network (Tree-LSTM) .
|
In constructing their models, which two kinds of tree-structured neural networks do the authors build on?
|
Recursive neural network (Tree-RecNN) and tree-structure long short-term memory neural network (Tree-LSTM).
|
null | false
| null |
How many full-length "The X-Files" movies are there?
|
Two: "The X-Files: Fight the Future" and "The X-Files: I Want to Believe"
|
|
null | false
| null |
According to its co-founders, Doug Cutting and Mike Cafarella, the genesis of Hadoop was the Google File System paper that was published in October 2003. This paper spawned another one from Google – "MapReduce: Simplified Data Processing on Large Clusters". Development started on the Apache Nutch project, but was moved to the new Hadoop subproject in January 2006. Doug Cutting, who was working at Yahoo! at the time, named it after his son's toy elephant. The initial code that was factored out of Nutch consisted of about 5,000 lines of code for HDFS and about 6,000 lines of code for MapReduce.
In March 2006, Owen O'Malley was the first committer to add to the Hadoop project; Hadoop 0.1.0 was released in April 2006. It continues to evolve through contributions that are being made to the project. The very first design document for the Hadoop Distributed File System was written by Dhruba Borthakur in 2007.
|
From the passage provided, extract the names of the co-founders of Hadoop. Separate them with a comma.
|
Doug Cutting, Mike Cafarella
|
null | false
| null |
Fußballclub Gelsenkirchen-Schalke 04 e. V., commonly known as FC Schalke 04 (German: [ɛf tseː ˈʃalkə nʊl fiːɐ̯] (listen)), Schalke 04 (German: [ˌʃalkə nʊl ˈfiːɐ̯] (listen)), or abbreviated as S04 (German: [ˈɛs nʊl fiːɐ̯] (listen)), is a professional German football and multi-sports club originally from the Schalke district of Gelsenkirchen, North Rhine-Westphalia. The "04" in the club's name derives from its formation in 1904. Schalke have been one of the most popular professional football teams in Germany, even though the club's heyday was in the 1930s and 1940s. Schalke have played in the Bundesliga, the top tier of the German football league system, since 2022, following promotion from the 2. Bundesliga in 2021–22. As of 2022, the club has 160,000 members, making it the second-largest football club in Germany and the fourth-largest club in the world in terms of membership. Other activities offered by the club include athletics, basketball, handball, table tennis, winter sports and eSports.
|
What is Schalke 04 famous for?
|
FC Schalke 04 (full name Fußballclub Gelsenkirchen-Schalke 04 e. V.) is a German football and multi-sports club that was founded in 1904 in Gelsenkirchen, North Rhine-Westphalia. The club won the German Championship seven times, the German Cup / DFB-Pokal five times, and the UEFA Cup in 1997, when it beat Inter Milan in the final. The club is renowned among football fans all over the world for its youth academy, which constantly develops famous players. Benedikt Höwedes, Manuel Neuer and Mesut Özil, who won the World Cup with Germany in 2014, were formed in the Schalke 04 youth academy. Since 2011 the youth academy has been called Knappenschmiede in reference to the mining industry that was historically very strong in the area of Western Germany. A Knappe is a miner who has completed his apprenticeship.
|
1905.13413
| false
| null |
Our training method in sec:ours could potentially be used with any probabilistic open IE model, since we make no assumptions about the model and only the likelihood of the extraction is required for iterative rank-aware learning. As a concrete instantiation in our experiments, we use RnnOIE BIBREF3 , BIBREF9 , a stacked BiLSTM with highway connections BIBREF10 , BIBREF11 and recurrent dropout BIBREF12 . Input of the model is the concatenation of the word embedding and another embedding indicating whether this word is the predicate: $\mathbf{x}_t = [\mathbf{W}_{\text{emb}}(w_t), \mathbf{W}_{\text{mask}}(w_t = v)]$.
Input of the model is the concatenation of the word embedding and another embedding indicating whether this word is the predicate: $\mathbf{x}_t = [\mathbf{W}_{\text{emb}}(w_t), \mathbf{W}_{\text{mask}}(w_t = v)]$.
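A minimal sketch of this input construction, assuming illustrative vocabulary and embedding sizes (the paper's actual hyperparameters are not given in this excerpt):

```python
import torch
import torch.nn as nn

W_emb = nn.Embedding(30_000, 300)   # word embedding table
W_mask = nn.Embedding(2, 50)        # predicate-indicator embedding table

def make_inputs(word_ids, predicate_mask):
    # predicate_mask: LongTensor of 0/1 flags, 1 where the token is the predicate (w_t = v)
    return torch.cat([W_emb(word_ids), W_mask(predicate_mask)], dim=-1)  # x_t for every token
```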
|
What's the input representation of OpenIE tuples into the model?
|
the concatenation of word embeddings and predicate-indicator embeddings
|
null | false
| null |
Sean Patrick Conley (born 1980) is an American physician and United States Navy officer who served as the Physician to the President from 2018 to 2021. Conley served as physician to president Donald Trump during the COVID-19 pandemic, often serving as the president's medical advisor, and treating the president when he tested positive for COVID-19.
|
Based on this paragraph, who was Donald Trump's physician during 2018 to 2021?
|
Sean Patrick Conley
|
null | false
| null |
Categorize each of the following as a golf term or soccer term: slice, nutmeg, goal, stroke, putt, drive, tackle, striker.
|
Golf: slice, stroke, putt, drive
Soccer: nutmeg, goal, tackle, striker
|
|
null | false
| null |
Was Jesus an actual person?
|
Yes. Jesus was an historical person.
|
|
null | false
| 140
|
We consider two datasets, representing related but slightly different forecasting tasks. The first dataset is an expanded version of the annotated Wikipedia conversations dataset from BIBREF9. This dataset uses carefully-controlled crowdsourced labels, strictly filtered to ensure the conversations are civil up to the moment of a personal attack. This is a useful property for the purposes of model analysis, and hence we focus on this as our primary dataset. However, we are conscious of the possibility that these strict labels may not fully capture the kind of behavior that moderators care about in practice. We therefore introduce a secondary dataset, constructed from the subreddit ChangeMyView (CMV) that does not use post-hoc annotations. Instead, the prediction task is to forecast whether the conversation will be subject to moderator action in the future.
Wikipedia data. BIBREF9's `Conversations Gone Awry' dataset consists of 1,270 conversations that took place between Wikipedia editors on publicly accessible talk pages. The conversations are sourced from the WikiConv dataset BIBREF59 and labeled by crowdworkers as either containing a personal attack from within (i.e., hostile behavior by one user in the conversation directed towards another) or remaining civil throughout.
A series of controls are implemented to prevent models from picking up on trivial correlations. To prevent models from capturing topic-specific information (e.g., political conversations are more likely to derail), each attack-containing conversation is paired with a clean conversation from the same talk page, where the talk page serves as a proxy for topic. To force models to actually capture conversational dynamics rather than detecting already-existing toxicity, human annotations are used to ensure that all comments preceding a personal attack are civil.
For more effective model training, we elected to expand the `Conversations Gone Awry' dataset, using the original annotation procedure. Since we found that the original data skewed towards shorter conversations, we focused this crowdsourcing run on longer conversations: ones with 4 or more comments preceding the attack. Through this additional crowdsourcing, we expand the dataset to 4,188 conversations, which we are publicly releasing as part of the Cornell Conversational Analysis Toolkit (ConvoKit).
We perform an 80-20-20 train/dev/test split, ensuring that paired conversations end up in the same split in order to preserve the topic control. Finally, we randomly sample another 1 million conversations from WikiConv to use for the unsupervised pre-training of the generative component.
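A minimal sketch of a pair-preserving split, where `pairs` is a hypothetical list of (awry, clean) conversation tuples; the fractions below are one plausible reading of the "80-20-20" description, which the excerpt does not spell out.

```python
import random

def split_pairs(pairs, frac_train=0.8, frac_dev=0.1, seed=0):
    """Shuffle and split at the pair level, so paired conversations stay together."""
    pairs = pairs[:]                        # don't mutate the caller's list
    random.Random(seed).shuffle(pairs)
    n_train = int(frac_train * len(pairs))
    n_dev = int(frac_dev * len(pairs))
    train = [c for p in pairs[:n_train] for c in p]               # flatten pairs
    dev = [c for p in pairs[n_train:n_train + n_dev] for c in p]
    test = [c for p in pairs[n_train + n_dev:] for c in p]
    return train, dev, test
```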
Reddit CMV data. The CMV dataset is constructed from conversations collected via the Reddit API. In contrast to the Wikipedia-based dataset, we explicitly avoid the use of post-hoc annotation. Instead, we use as our label whether a conversation eventually had a comment removed by a moderator for violation of Rule 2: “Don't be rude or hostile to other users”.
Though the lack of post-hoc annotation limits the degree to which we can impose controls on the data (e.g., some conversations may contain toxic comments not flagged by the moderators) we do reproduce as many of the Wikipedia data's controls as we can. Namely, we replicate the topic control pairing by choosing pairs of positive and negative examples that belong to the same top-level post, following BIBREF12; and enforce that the removed comment was made by a user who was previously involved in the conversation. This process results in 6,842 conversations, to which we again apply a pair-preserving 80-20-20 split. Finally, we gather over 600,000 conversations that do not include any removed comment, for unsupervised pre-training.
We consider two datasets, representing related but slightly different forecasting tasks. The first dataset is an expanded version of the annotated Wikipedia conversations dataset from Zhang et al. (2018a). This dataset uses carefully-controlled crowdsourced labels, strictly filtered to ensure the conversations are civil up to the moment of a personal attack. This is a useful property for the purposes of model analysis, and hence we focus on this as our primary dataset. However, we are conscious of the possibility that these strict labels may not fully capture the kind of behavior that moderators care about in practice. We therefore introduce a secondary dataset, constructed from the subreddit ChangeMyView (CMV) that does not use posthoc annotations. Instead, the prediction task is to forecast whether the conversation will be subject to moderator action in the future.
|
How many datasets are considered in their work?
|
Two. The first is an expanded version of the annotated Wikipedia conversations dataset from Zhang et al. (2018a) and the second is one constructed from the subreddit ChangeMyView (CMV) that does not use posthoc annotations.
|
null | false
| null |
What was Lieutenant Columbo's catch phrase on his TV show?
|
Columbo frequently said "just one more thing" in episodes of the show.
|
|
null | false
| null |
Classify each tree as an evergreen or fruit tree: Blue Spruce, Jack Pine, Sweet Cherry, Coconut, Mountain Hemlock.
|
Blue Spruce, Jack Pine, and Mountain Hemlock are all evergreen trees. Sweet Cherry and Coconut trees are fruit trees.
|
|
null | false
| null |
The Apache Software Foundation /əˈpætʃi/ (ASF) is an American nonprofit corporation (classified as a 501(c)(3) organization in the United States) to support a number of open source software projects. The ASF was formed from a group of developers of the Apache HTTP Server, and incorporated on March 25, 1999. As of 2021, it includes approximately 1000 members.
The Apache Software Foundation is a decentralized open source community of developers. The software they produce is distributed under the terms of the Apache License, a permissive open-source license for free and open-source software (FOSS). The Apache projects are characterized by a collaborative, consensus-based development process and an open and pragmatic software license, which is to say that it allows developers who receive the software freely to re-distribute it under nonfree terms. Each project is managed by a self-selected team of technical experts who are active contributors to the project. The ASF is a meritocracy, implying that membership of the foundation is granted only to volunteers who have actively contributed to Apache projects. The ASF is considered a second generation open-source organization, in that commercial support is provided without the risk of platform lock-in.
Among the ASF's objectives are: to provide legal protection to volunteers working on Apache projects; to prevent the Apache brand name from being used by other organizations without permission.
The ASF also holds several ApacheCon conferences each year, highlighting Apache projects and related technology.
|
From the passage provided, extract the year Apache Software Foundation was incorporated on.
|
Apache Software Foundation was incorporated on March 25, 1999
|
null | false
| null |
Create an itinerary for 3 days in Park City Utah that is good for children and adults. Include hotels, food options, one day of skiing and activities.
|
Ok, here are three days for a family trip to Park City.
Day one:
- Go grocery shopping at Whole Foods Market to pick up snacks and food for lunches.
- Pick up rental skis at Christy's for all adults and children who are skiing.
- Eat dinner at River Horse Tavern on Main street.
Day two:
- Go skiing at Park City Resort. Purchase day passes in person or pre-purchase Epic passes online.
- Stop with the kids at Rocky Mountain Chocolate Company for a treat.
- Eat dinner at High Saloon on Park Avenue.
Day three:
- Go to Park City Library which has a kids' section.
- Shop at galleries, thrift stores, or gift shops on Main Street.
- Eat dinner at Butcher House Chops and Grill.
|
|
1804.08139
| false
| null |
We use 16 different datasets from several popular review corpora used in BIBREF20 . These datasets consist of 14 product review datasets and two movie review datasets.
We use CoNLL 2000 BIBREF22 sequence labeling dataset for both POS Tagging and Chunking tasks. There are 8774 sentences in training data, 500 sentences in development data and 1512 sentences in test data. The average sentence length is 24 and has a total vocabulary size as 17k.
We use 16 different datasets from several popular review corpora used in BIBREF20 . These datasets consist of 14 product review datasets and two movie review datasets.
We use CoNLL 2000 BIBREF22 sequence labeling dataset for both POS Tagging and Chunking tasks.
|
What datasets did they use?
|
The answers are shown as follows:
* 16 different datasets from several popular review corpora used in BIBREF20
* CoNLL 2000 BIBREF22
|
null | false
| null |
Tell me the category for these movies: "Star Trek, Star Wars, John Wick, Indiana Jones"
|
Star Trek and Star Wars are sci-fi movies. John Wick is an action movie, while Indiana Jones is an adventure movie.
|
|
null | false
| null |
Russia's invasion of Ukraine prompted Finland and Sweden to apply for NATO membership in May 2022, and the ratification process for Sweden is in progress. Finland officially joined on 4 April 2023. Ukraine applied for NATO membership in September 2022 after Russia claimed to annex part of its territory. Two other states have formally informed NATO of their membership aspirations: Bosnia and Herzegovina and Georgia. Kosovo also aspires to join NATO. Joining the alliance is a debate topic in several other European countries outside the alliance, including Austria, Ireland, Malta, Moldova, and Serbia.
|
When did Finland and Sweden join NATO?
|
Finland officially joined NATO on 4 April 2023, while the ratification process for Sweden is in progress.
|
null | false
| null |
John Quincy Adams (/ˈkwɪnzi/ (listen); July 11, 1767 – February 23, 1848) was an American politician, diplomat, lawyer, and diarist who served as the sixth president of the United States, from 1825 to 1829. He previously served as the eighth United States Secretary of State from 1817 to 1825. During his long diplomatic and political career, Adams also served as an ambassador, and as a member of the United States Congress representing Massachusetts in both chambers. He was the eldest son of John Adams, who served as the second president of the United States from 1797 to 1801, and First Lady Abigail Adams. Initially a Federalist like his father, he won election to the presidency as a member of the Democratic-Republican Party, and in the mid-1830s became affiliated with the Whig Party.
|
Who is John Quincy Adams?
|
John Quincy Adams (/ˈkwɪnzi/ (listen); July 11, 1767 – February 23, 1848) was an American politician, diplomat, lawyer, and diarist who served as the sixth president of the United States, from 1825 to 1829. He previously served as the eighth United States Secretary of State from 1817 to 1825. During his long diplomatic and political career, Adams also served as an ambassador, and as a member of the United States Congress representing Massachusetts in both chambers. He was the eldest son of John Adams, who served as the second president of the United States from 1797 to 1801, and First Lady Abigail Adams. Initially a Federalist like his father, he won election to the presidency as a member of the Democratic-Republican Party, and in the mid-1830s became affiliated with the Whig Party.
Born in Braintree, Massachusetts, Adams spent much of his youth in Europe, where his father served as a diplomat. After returning to the United States, Adams established a successful legal practice in Boston. In 1794, President George Washington appointed Adams as the U.S. ambassador to the Netherlands, and Adams would serve in high-ranking diplomatic posts until 1801, when Thomas Jefferson took office as president. Federalist leaders in Massachusetts arranged for Adams's election to the United States Senate in 1802, but Adams broke with the Federalist Party over foreign policy and was denied re-election. In 1809, President James Madison, a member of the Democratic-Republican Party, appointed Adams as the U.S. ambassador to Russia. Multilingual, Adams held diplomatic posts for the duration of Madison's presidency, and he served as part of the American delegation that negotiated an end to the War of 1812. In 1817, President James Monroe selected Adams as his Secretary of State. In that role, Adams negotiated the Adams–Onís Treaty, which provided for the American acquisition of Florida. He also helped formulate the Monroe Doctrine, which became a key tenet of U.S. foreign policy. In 1818, Adams was elected a member of the American Philosophical Society in Philadelphia.
Adams, Andrew Jackson, William H. Crawford, and Henry Clay—all members of the Democratic-Republican Party—competed in the 1824 presidential election. Because no candidate won a majority of electoral votes, the House of Representatives held a contingent election, which Adams won with the support of Speaker of the House Clay, whom Adams would go on to controversially appoint as his Secretary of State. As president, Adams called for an ambitious agenda that included federally funded infrastructure projects, the establishment of a national university, and engagement with the countries of Latin America, but Congress refused to pass many of his initiatives. During Adams's presidency, the Democratic-Republican Party split into two major camps: the National Republican Party, which supported President Adams, and Andrew Jackson's Democratic Party. The Democrats proved to be more effective political organizers than Adams and his National Republican supporters, and Jackson soundly defeated Adams in the 1828 presidential election, making Adams the second president to fail to win re-election (his father being the first).
Rather than retiring from public service, Adams won election to the House of Representatives, where he would serve from 1831 until his death in 1848. He remains the only former president to be elected to the chamber. After narrowly losing his bids for Governor of Massachusetts and Senate re-election, Adams joined the Anti-Masonic Party in the early 1830s before joining the Whig Party, which united those opposed to President Jackson. During his time in Congress, Adams became increasingly critical of slavery and of the Southern leaders whom he believed controlled the Democratic Party. He was particularly opposed to the annexation of Texas and the Mexican–American War, which he saw as a war to extend slavery and its political grip on Congress. He also led the repeal of the "gag rule", which had prevented the House of Representatives from debating petitions to abolish slavery. Historians concur that Adams was one of the greatest diplomats and secretaries of state in American history; they typically rank him as an average president, as he had an ambitious agenda but could not get it passed by Congress. By contrast, historians also view Adams in a more positive light during his post-presidency because of his vehement stance against slavery, as well as his fight for the rights of women and Native Americans.
|
null | false
| null |
List the top 5 people with the most medals in the Olympics
|
1. Michael Phelps - 28 medals
2. Larisa Latynina - 18 medals
3. Marit Bjorgen - 15 medals
4. Nikolai Andrianov - 15 medals
5. Ole Einar Bjorndalen - 13 medals
|
|
null | false
| null |
Write a haiku about how much I love pickles.
|
Caressed by sweet brine,
you are crispy and crunchy.
Oh, won’t you be mine?
|
|
null | false
| null |
List the best toys for toddlers.
|
1. Tonies Box
2. Fisher Price Little People Sets
3. Dress up
4. Music, drums, keyboard, microphones etc
5. Play food
6. Play Dough
7. Doctor
8. Fishing
9. Sandbox
10. Slides
|
|
1910.00912
| false
| null |
Following BIBREF7, we then evaluated a metric that combines intent and entities, computed by simply summing up the two confusion matrices (Table TABREF23). Results highlight the contribution of the entity tagging task, where HERMIT outperforms the other approaches. Paired-samples t-tests were conducted to compare the HERMIT combined F1 against the other systems. The statistical analysis shows a significant improvement over Rasa $[Z=-2.803, p = .005]$, Dialogflow $[Z=-2.803, p = .005]$, LUIS $[Z=-2.803, p = .005]$ and Watson $[Z=-2.803, p = .005]$.
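As a rough sketch of this combined metric, the snippet below sums two confusion matrices and reads a micro-averaged F1 off the result; it assumes both matrices are indexed over a shared label set whose first entry is a null class, a detail the excerpt does not specify.

```python
import numpy as np

def combined_micro_f1(cm_intent, cm_entity, null_idx=0):
    cm = cm_intent + cm_entity             # sum the two confusion matrices
    tp = np.diag(cm).astype(float)         # correct predictions per class
    fp = cm.sum(axis=0) - tp               # predicted as the class, but wrong
    fn = cm.sum(axis=1) - tp               # belongs to the class, but missed
    keep = np.arange(len(tp)) != null_idx  # exclude the null/'O' class, if any
    p = tp[keep].sum() / (tp[keep] + fp[keep]).sum()
    r = tp[keep].sum() / (tp[keep] + fn[keep]).sum()
    return 2 * p * r / (p + r)
```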
FLOAT SELECTED: Table 4: Comparison of HERMIT with the results in (Liu et al., 2019) by combining Intent and Entity.
In this section we report the experiments performed on the ROMULUS dataset (Table TABREF27). Together with the evaluation metrics used in BIBREF7, we report the span F1, computed using the CoNLL-2000 shared task evaluation script, and the Exact Match (EM) accuracy of the entire sequence of labels. It is worth noticing that the EM Combined score is computed as the conjunction of the three individual predictions – e.g., a match is when all the three sequences are correct.
Results in terms of EM reflect the complexity of the different tasks, motivating their position within the hierarchy. Specifically, dialogue act identification is the easiest task ($89.31\%$) with respect to frame ($82.60\%$) and frame element ($79.73\%$), due to the shallow semantics it aims to catch. However, when looking at the span F1, its score ($89.42\%$) is lower than the frame element identification task ($92.26\%$). What happens is that even though the label set is smaller, dialogue act spans are supposed to be longer than frame element ones, sometimes covering the whole sentence. Frame elements, instead, are often one or two tokens long, that contribute in increasing span based metrics. Frame identification is the most complex task for several reasons. First, lots of frame spans are interlaced or even nested; this contributes to increasing the network entropy. Second, while the dialogue act label is highly related to syntactic structures, frame identification is often subject to the inherent ambiguity of language (e.g., get can evoke both Commerce_buy and Arriving). We also report the metrics in BIBREF7 for consistency. For dialogue act and frame tasks, scores provide just the extent to which the network is able to detect those labels. In fact, the metrics do not consider any span information, essential to solve and evaluate our tasks. However, the frame element scores are comparable to the benchmark, since the task is very similar.
Following BIBREF7, we then evaluated a metric that combines intent and entities, computed by simply summing up the two confusion matrices (Table TABREF23). Results highlight the contribution of the entity tagging task, where HERMIT outperforms the other approaches. Paired-samples t-tests were conducted to compare the HERMIT combined F1 against the other systems.
FLOAT SELECTED: Table 4: Comparison of HERMIT with the results in (Liu et al., 2019) by combining Intent and Entity.
Together with the evaluation metrics used in BIBREF7, we report the span F1, computed using the CoNLL-2000 shared task evaluation script, and the Exact Match (EM) accuracy of the entire sequence of labels. It is worth noticing that the EM Combined score is computed as the conjunction of the three individual predictions – e.g., a match is when all the three sequences are correct.
We also report the metrics in BIBREF7 for consistency. For dialogue act and frame tasks, scores provide just the extent to which the network is able to detect those labels. In fact, the metrics do not consider any span information, essential to solve and evaluate our tasks.
|
What metrics other than entity tagging are compared?
|
The answers are shown as follows:
* We also report the metrics in BIBREF7 for consistency
* we report the span F1
* Exact Match (EM) accuracy of the entire sequence of labels
* metric that combines intent and entities
|
null | false
| null |
Tell me whether these are artists whose music appears in "In The Groove" or "Dance Dance Revolution:" NAOKI, Monolithic, Inspector K, Lady Gaga, Julie Ann Frost, MC Frontalot, DJ Doo, Banzai, m-flo
|
In The Groove: Inspector K, MC Frontalot, Banzai, DJ Doo
Dance Dance Revolution: m-flo, NAOKI, Monolithic, Lady Gaga, Julie Ann Frost
|
|
null | false
| null |
Where is the home country of Joan of Arc?
|
France
|
|
null | false
| null |
Is it possible to grow muscle after an age of 35?
|
As we age, our bodies undergo various changes. One of the most significant changes is the decrease in muscle mass and strength. This loss of muscle mass is known as sarcopenia and can start as early as age 30, with a more rapid decline after age 60. This may lead many people to believe that gaining muscle after 35 is impossible, but is that really the case?
Yes, it's possible to build muscle after 35
The short answer is yes, it is possible to build muscle after 35. In fact, strength training can be just as effective for older adults as it is for younger ones. It's never too late to start working on your fitness goals and building the body you want.
The benefits of building muscle after 35
As we age, building muscle becomes increasingly important for several reasons. First and foremost, it can help us maintain our independence and mobility as we age. Strong muscles can help us with everyday activities like carrying groceries or getting up from a chair. Building muscle can also help prevent injuries and falls, which can be more dangerous as we get older.
Additionally, building muscle can help improve our overall health. Muscle is more metabolically active than fat, which means it burns more calories at rest. This can help us maintain a healthy weight and lower our risk of developing chronic diseases like type 2 diabetes, heart disease, and certain types of cancer.
How to build muscle after 35
So, how exactly do you go about building muscle after 35? The key is to focus on strength training exercises that target multiple muscle groups. Compound exercises like squats, deadlifts, and bench presses are great for building overall strength and muscle mass.
It's also important to gradually increase the weight and intensity of your workouts over time. This will help you continue to make progress and avoid plateauing. Additionally, be sure to give your muscles time to rest and recover between workouts.
Finally, proper nutrition is crucial for building muscle at any age. Make sure you're getting enough protein, as this is essential for muscle growth and repair. Aim for at least 1 gram of protein per pound of body weight per day.
Building muscle after 35 is definitely possible, and it comes with a variety of benefits. Strength training can help improve our overall health, prevent injury, and maintain our independence as we age. The key is to focus on compound exercises, gradually increase the intensity of your workouts, and make sure you're getting enough protein. So, what are you waiting for? It's time to hit the gym and start building the body you want!
|
|
null | false
| 43
|
Deep learning, a sub-field of machine learning research, has driven the rapid progress in artificial intelligence research, leading to astonishing breakthroughs on long-standing problems in a plethora of fields such as computer vision and natural language processing. Tools powered by deep learning are changing the way movies are made, diseases are diagnosed, and play a growing role in understanding and communicating with humans.
Such development is made possible by deep learning frameworks, such as Caffe BIBREF0 , Chainer BIBREF1 , CNTK BIBREF2 , Apache (incubating) MXNet BIBREF3 , PyTorch BIBREF4 , TensorFlow BIBREF5 , and Theano BIBREF6 . These frameworks have been crucial in disseminating ideas in the field. Specifically, imperative tools, arguably spearheaded by Chainer, are easy to learn, read, and debug. Such benefits have led to the quick adoption of imperative programming interfaces by the Gluon API of MXNet (which can seamlessly switch to symbolic programming for high performance), PyTorch, and TensorFlow Eager.
Leveraging the imperative Gluon API of MXNet, we design and develop the GluonCV and GluonNLP (referred to as GluonCV/NLP hereinafter) toolkits for deep learning in computer vision and natural language processing. To the best of our knowledge, GluonCV/NLP are the first open source toolkits for deep learning in both computer vision and natural language processing that simultaneously i) provide modular APIs to allow customization by re-using efficient building blocks; ii) provide pre-trained state-of-the-art models, training scripts, and training logs to enable fast prototyping and promote reproducible research; iii) leverage the MXNet ecosystem so that models can be deployed in a wide variety of programming languages including C++, Clojure, Java, Julia, Perl, Python, R, and Scala.
Leveraging the imperative Gluon API of MXNet, we design and develop the GluonCV and GluonNLP (referred to as GluonCV/NLP hereinafter) toolkits for deep learning in computer vision and natural language processing. GluonCV/NLP simultaneously i) provide modular APIs to allow customization by re-using efficient building blocks; ii) provide pre-trained state-of-the-art models, training scripts, and training logs to enable fast prototyping and promote reproducible research; iii) provide models that can be deployed in a wide variety of programming languages including C++, Clojure, Java, Julia, Perl, Python, R, and Scala (via the MXNet ecosystem).
|
Are there any pretrained models available in the GluonCV/NLP toolkits?
|
Yes.
|
null | false
| null |
Who were the "Big Four" of Thrash Metal?
|
Anthrax, Metallica, Megadeth, and Slayer
|
|
null | false
| null |
Give me a list of the major food allergens I should be aware of when cooking.
|
You should be aware of milk, eggs, shellfish, fish, peanuts, tree nuts, wheat and soy.
|
|
null | false
| null |
The song was written by Sumney with American singer-songwriter Tom Gallo. Sumney also solely produced the track.
|
Who produced the song Polly?
|
The song was solely produced by American singer-songwriter Moses Sumney, who wrote it with Tom Gallo.
|
null | false
| null |
What type of music are you into?
|
I'm into a variety of genres of music, such as rock, pop, hip-hop, alternative, electronic, and country.
|
|
null | false
| null |
Who was declared as Person of the Year by Time Magazine in 2019?
|
Greta Thunberg, Swedish environmental activist who is known for challenging world leaders to take immediate action for climate change mitigation.
|
|
null | false
| null |
What did Special Agent Fox Mulder study when he was in university?
|
Fox Mulder, a main character on the TV show "The X-Files," studied at Oxford University and majored in psychology.
|
|
null | false
| null |
What is the difference between coalesce and repartition when it comes to shuffle partitions in Spark?
|
Coalesce essentially groups multiple partitions into larger partitions. So use coalesce when you want to reduce the number of partitions (and also tasks) without impacting sort order, e.g., when you want to write out a single CSV file instead of multiple part files.
Use repartition when you want to cause a shuffle that changes the number of partitions. A common use case for repartition is to remove skew in file sizes or to start out with a smaller/different number of partitions than the default in Spark.
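A minimal PySpark sketch of the difference; the paths, sizes, and partition counts here are placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("coalesce-vs-repartition").getOrCreate()
df = spark.range(1_000_000)  # example DataFrame with the default partitioning

# coalesce: merge existing partitions into fewer, larger ones.
# No full shuffle is triggered, so it is cheap, but it can only
# reduce the partition count -- handy for a single-file write-out.
df.coalesce(1).write.mode("overwrite").csv("/tmp/single_part_output")

# repartition: trigger a full shuffle that redistributes rows evenly,
# so it can increase or decrease the count and is the usual fix for
# skewed partition/file sizes.
df.repartition(200).write.mode("overwrite").parquet("/tmp/even_output")
```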
|
|
1907.10738
| false
| null |
The OpenBookQA dataset has a collection of questions and four answer choices for each question. The dataset comes with 1326 facts representing an open book. It is expected that answering each question requires at least one of these facts. In addition it requires common knowledge. To obtain relevant common knowledge we use an IR system BIBREF6 front end to a set of knowledge rich sentences. Compared to reading comprehension based QA (RCQA) setup where the answers to a question is usually found in the given small paragraph, in the OpenBookQA setup the open book part is much larger (than a small paragraph) and is not complete as additional common knowledge may be required. This leads to multiple challenges. First, finding the relevant facts in an open book (which is much bigger than the small paragraphs in the RCQA setting) is a challenge. Then, finding the relevant common knowledge using the IR front end is an even bigger challenge, especially since standard IR approaches can be misled by distractions. For example, Table 1 shows a sample question from the OpenBookQA dataset. We can see the retrieved missing knowledge contains words which overlap with both answer options A and B. Introduction of such knowledge sentences increases confusion for the question answering model. Finally, reasoning involving both facts from open book, and common knowledge leads to multi-hop reasoning with respect to natural language text, which is also a challenge.
Compared to reading comprehension based QA (RCQA) setup where the answers to a question is usually found in the given small paragraph, in the OpenBookQA setup the open book part is much larger (than a small paragraph) and is not complete as additional common knowledge may be required.
|
How is OpenBookQA different from other natural language QA?
|
The answers are shown as follows:
* in the OpenBookQA setup the open book part is much larger
* the open book part is much larger (than a small paragraph) and is not complete as additional common knowledge may be required
|
null | false
| null |
What is C++?
|
C++ (pronounced "C plus plus") is a high-level, general-purpose programming language created by Danish computer scientist Bjarne Stroustrup. First released in 1985 as an extension of the C programming language, it has since expanded significantly over time; modern C++ currently has object-oriented, generic, and functional features, in addition to facilities for low-level memory manipulation. It is almost always implemented as a compiled language, and many vendors provide C++ compilers, including the Free Software Foundation, LLVM, Microsoft, Intel, Embarcadero, Oracle, and IBM.
|
|
null | false
| 309
|
We train our MTNet with the Adam optimizer for 20 epochs. We set the batch size as 8 and weight decay as 1e-5, and we adopt 3 branched layers in MTNet. The detailed configuration is shown in Table TABREF21 . We use ReLU in shared layers, and ReLU6 in branched layers to prevent information loss. Our proposed MTNet achieves 0.2250 MAE and 0.3216 RMSE.
We compare MTNet with other mainstream regression algorithms BIBREF14 (linear regression, KNN, SVR, Random Forest and MLP). The architecture of MLP is 15-16-8-8-1, where each number represents the number of neurons in each layer. We try three kinds of kernels (RBF kernel, linear kernel, and poly kernel) with SVR in our experiments for fair comparison.
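A minimal sketch of the 15-16-8-8-1 MLP baseline described above, assuming 15 input features and a scalar regression target; the framework choice and activation placement are illustrative assumptions, not the paper's exact setup.

```python
import torch.nn as nn

mlp = nn.Sequential(
    nn.Linear(15, 16), nn.ReLU(),   # input layer: 15 features -> 16 hidden units
    nn.Linear(16, 8), nn.ReLU(),    # 16 -> 8
    nn.Linear(8, 8), nn.ReLU(),     # 8 -> 8
    nn.Linear(8, 1),                # 8 -> 1 scalar regression output
)
```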
The results are listed in Table TABREF37 . Our method achieves the best performance in contrast to the compared baseline regressors.
We train our MTNet with the Adam optimizer for 20 epochs. We set the batch size as 8 and weight decay as 1e-5, and we adopt 3 branched layers in MTNet.
|
What batch size do they use?
|
8.
|
null | false
| 489
|
Problem. We apply HARI to modeling a human-assisted navigation (HAN) problem. In HAN, a human requests an agent to find an object in an indoor environment. Each task request asks the agent to go to a room of type r and find an object of type o (e.g., find a mug in a kitchen). The agent is equipped with a camera and shares its camera view with the human. We assume that the human is sufficiently familiar with the environment that they can recognize the agent's location by looking at its current view. Before issuing a task request, the human imagines a goal location (but does not reveal it to the agent). We are primarily interested in evaluating success in goal-finding, i.e. whether the agent can arrive at the human's intended goal location. Even though there could be multiple locations that match a request, the agent only succeeds if it arrives exactly at the chosen goal location. We also determine success in request-fulfilling, where the agent successfully fulfills a request if it navigates to any node that is within two meters of an object that matches the request.
While an agent is performing a task, it may request the human to provide additional information via telecommunication (e.g., a chat app). Specifically, it can ask for a description of its current location (CUR), the goal location (GOAL), or a subgoal location that is on the path from its current location to the goal location (SUB). Details about how the subgoals are determined are in the Appendix.
Operation Environment. We construct the operation environments using the environment graphs provided by the Matterport3D simulator. Each environment graph is generated from a 3D model of a house where each node is a location in the house and each edge connects two nearby unobstructed locations. Each operation state s corresponds to a node in the graph. At any time, the agent's operation action space A consists of traversing to any of the nodes that are adjacent to its current node.
We employ a discrete bag-of-features representation for state descriptions. A bag of features represents the information that the agent extracts from the raw input that it perceives (e.g., an image, a language sentence). Working with this intermediate input allows us to easily vary the type and amount of information given to the agent. Specifically, we simulate two settings of descriptions: dense and sparse. At evaluation time, the agent perceives sparse descriptions and requests dense descriptions from the assistant. A dense description of a current location contains the room name at the location, and the features of M objects restricted to be within δ meters of the location. The features of each object consist of (i) its name, (ii) horizontal and vertical angles (relative to the current viewpoint), and (iii) distance (in meters) from the object to the current location. A dense description of a goal follows the same representation scheme. In the sparse setting, the current-location description does not include the room name. Moreover, we remove the features of objects that are not in the top 100 most frequent objects, emulating an imperfect object detector module. The sparse goal description (the task request) has only features of the target object and the room name where the object is located. Especially, if a subgoal location is adjacent to or coincides with the agent's current location, instead of describing room and object features, the human specifies the ground-truth action to go to the subgoal (an action is specified by its horizontal and vertical angles, and travel distance). Experimental Procedure. We conduct our experiments in three phases. In the pre-training phase, we learn an operation policy π with dense descriptions of the current location and the goal. In the training phase, the agent perceives a sparse description of its current location and is given a sparse initial goal description. We use advantage actor-critic to learn an interaction policy ψ_θ that controls how the agent communicates with the human and navigates in an environment. The human always returns dense descriptions. The interaction policy is trained in environments that are previously seen as well as unseen during pre-training. Finally, in the evaluation phase, the interaction policy is tested on three conditions: seen environment and target object type but starting from a new room (UNSEENSTR), seen environment but new target object type (UNSEENOBJ), and new environment (UNSEENENV).
We introduce an assistant into the operation environment, who can provide information about the environment's states. We assume the agent possesses a pre-learned operation policy π. This policy serves as the common ground between the agent and the assistant, which is a prerequisite for communication between them to occur. For example, this policy represents a set of basic tasks that the agent has mastered and that the assistant can ask the agent to perform. In general, the more knowledge encoded in this policy, the more effectively the agent can communicate with and leverage help from the assistant. Our goal is to learn an interaction policy ψ_θ (parametrized by θ) that controls how the agent communicates with the assistant to gather additional information. The operation policy π will be invoked by the interaction policy if the latter decides that the agent does not need new information and wants to take an operating action.
The assistant aids the agent by giving new (current or goal) state descriptions, connecting the agent to situations on which it can make better decisions. Consider an object-finding navigation problem, where a robot has been trained to reliably navigate to the kitchen from the living room of a house. Suppose the robot is then asked to "find a mug", an object that it has never heard of. The assistant can help the robot accomplish this task by giving a more informative goal description "find a mug in the kitchen", relating the current task to the kitchen-finding task that the robot is familiar with. The robot may also have problems with localization: it knows how to get to the kitchen from the living room, but it may not realize that it is currently in the living room. In this case, giving a current-state description that specifies this information provides the robot with a useful hint on what actions to take next.
Our framework allows the assistant to convey any form of information that the agent can incorporate into its input. As discussed in §2, the notion of "state description" in our framework is general, capturing various types of information, including but not limited to visual perception and verbal description. Communication between the agent and the assistant can be flexibly enriched by designing the agent's operation policy to be able to consume the forms of information of interest (e.g., a policy that takes natural language as input).
Communication with the Assistant. The assistant is present all the time and knows the agent's current state s_t and the goal state g_t. It is specified by two functions: a description function ρ_A : S × D → ∆(D) and a subgoal function ω_A : S × S → ∆(S). ρ_A(d′ | s, d) specifies the probability of giving d′ as the new description of state s given a current description d. ω_A(g′ | s, g) indicates the probability of proposing g′ as a subgoal given a current state s and a goal state g. By selecting among these actions, the interaction policy essentially decides when to ask the assistant for additional information, and what types of information to ask for. Our formulation does not specify the input space of the interaction policy, as this space depends on how the agent implements its goal memory (i.e. how it stores and retrieves the subgoals). In the next section, we introduce an instantiation where the agent uses a stack data structure to manage (sub)goals.
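To make the stack-based goal management concrete, here is a minimal sketch of an interaction loop; every name (the agent, assistant, and environment APIs, the action strings) is hypothetical, and the real interaction policy is learned rather than hard-coded as it is here.

```python
def run_episode(agent, assistant, env, initial_goal, max_steps=100):
    goal_stack = [initial_goal]           # top of stack = goal pursued right now
    state_desc = env.describe(env.state)  # sparse current-state description
    for _ in range(max_steps):
        choice = agent.interaction_policy(state_desc, goal_stack)
        if choice == "ASK_CUR":           # ask for a dense current-state description
            state_desc = assistant.describe_current(env.state)
        elif choice == "ASK_SUB":         # push an assistant-proposed subgoal
            goal_stack.append(assistant.propose_subgoal(env.state, goal_stack[-1]))
        elif choice == "ASK_GOAL":        # refresh the description of the top goal
            goal_stack[-1] = assistant.describe_goal(goal_stack[-1])
        else:                             # operate: act toward the current (sub)goal
            state_desc = env.step(agent.operation_policy(state_desc, goal_stack[-1]))
            if env.reached(goal_stack[-1]):
                goal_stack.pop()          # subgoal done; resume the goal beneath it
                if not goal_stack:
                    return True           # original goal reached
    return False
```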
Our model has two components: an encoder and a decoder; both are implemented as Transformer models (Vaswani et al., 2017).****Operation Environment. We construct the operation environments using the environment graphs provided by the Matterport3D simulator (Anderson et al., 2018). Each environment graph is generated from a 3D model of a house where each node is a location in the house and each edge connects two nearby unobstructed locations.****We introduce an assistant into the operation environment, who can provide information about the environment's states. We assume the agent possesses a pre-learned operation policy π̂.
|
A computational experiment is used to show that the ability to request information was useful to the agent. However, it seems obvious that an agent with less uncertainty will do better. Section 6 describes the model comparison as comparing the agent that requests assistance to an agent that takes random actions. However, with missing annotations a rational agent that can not request information should still be able to rationally search the graph until the object is found - which seems a more reasonable performance benchmark. Alternatively, the authors could compare the current framework to a state-of-the-art agent that can solve the same benchmark problem. Was the indoor object-finding task used in previous work, or as part of benchmark problem sets?
|
We are not sure what "rationally search the graph" means. Our "no assistance" baselines search the graph with pre-learned knowledge. An approach that exhaustively searches the graph overly exploits the structure of the simulator and may not scale to real continuous environments. We impose a time constraint so the agent cannot simply visit an entire house for each task. Even if the agent performed an exhaustive search, it would still need to detect objects (i.e., learn when to call DONE). This navigation problem is newly proposed in our paper. We implement a transformer-based model trained with imitation learning, which is a strong baseline for the popular VLN problem (https://arxiv.org/abs/2011.13922). Our task is harder than VLN in that the task command is high-level ("find O in room R" vs. "turn left... go to X... stop there").
|
null | false
| 72
|
We tested the proposed UTCNN on two different datasets: FBFans and CreateDebate. FBFans is a privately-owned, single-topic, Chinese, unbalanced, social media dataset, and CreateDebate is a public, multiple-topic, English, balanced, forum dataset. Results on these two datasets show the model's applicability and superiority across different topics, languages, data distributions, and platforms.
The FBFans dataset contains data from anti-nuclear-power Chinese Facebook fan groups from September 2013 to August 2014, including posts and their author and liker IDs. There are a total of 2,496 authors, 505,137 likers, 33,686 commenters, and 505,412 unique users. Two annotators were asked to take into account only the post content to label the stance of the posts in the whole dataset as supportive, neutral, or unsupportive (hereafter denoted as Sup, Neu, and Uns). Sup/Uns posts were those in support of or against anti-reconstruction; Neu posts were those evincing a neutral standpoint on the topic, or were irrelevant. Raw agreement between annotators is 0.91, indicating high agreement. Specifically, Cohen’s Kappa for Neu and not Neu labeling is 0.58 (moderate), and for Sup or Uns labeling is 0.84 (almost perfect). Posts with inconsistent labels were filtered out, and the development and testing sets were randomly selected from what was left. Posts in the development and testing sets involved at least one user who appeared in the training set. The number of posts for each stance is shown on the left-hand side of Table TABREF12 . About twenty percent of the posts were labeled with a stance, and the number of supportive (Sup) posts was much larger than that of the unsupportive (Uns) ones: this is thus highly skewed data, which complicates stance classification. On average, 161.1 users were involved in one post. The maximum was 23,297 and the minimum was one (the author). For comments, on average there were 3 comments per post. The maximum was 1,092 and the minimum was zero.
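For reference, the Cohen's kappa statistic cited here is defined as (the standard definition, not taken from the paper):

```latex
\kappa = \frac{p_o - p_e}{1 - p_e}
```

where $p_o$ is the observed agreement between the two annotators and $p_e$ is the agreement expected by chance given their label distributions.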
To test whether the assumption of this paper – that posts attract users who hold the same stance to like them – is reliable, we examine the likes from authors of different stances. Posts in the FBFans dataset are used for this analysis. We calculate the like statistics of each distinct author from these 32,595 posts. As the numbers of authors in the Sup, Neu and Uns stances are largely imbalanced, these numbers are normalized by the number of users of each stance. Table TABREF13 shows the results. Posts with stances (i.e., not neutral) attract users of the same stance. Neutral posts also attract both supportive and neutral users, similar to what we observe for supportive posts, but neutral posts attract even more neutral likers. These results do suggest that users prefer posts of the same stance, or at least posts with no obvious stance, since posts of an opposing stance might cause annoyance when reading; they hence support the user modeling in our approach.
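A sketch of this per-stance normalization, assuming a hypothetical pandas DataFrame of (post, liker) records with columns post_stance, liker_stance, and liker_id (the real analysis runs over all 32,595 posts):

```python
import pandas as pd

# Hypothetical like records: one row per (post, liker) pair.
likes = pd.DataFrame({
    "post_stance":  ["Sup", "Sup", "Neu", "Uns", "Uns"],
    "liker_stance": ["Sup", "Neu", "Neu", "Uns", "Uns"],
    "liker_id":     [1, 2, 2, 3, 4],
})

# Raw like counts broken down by post stance and liker stance.
counts = likes.groupby(["post_stance", "liker_stance"]).size().unstack(fill_value=0)

# Normalize each liker-stance column by the number of distinct users
# holding that stance, since the stance populations are heavily imbalanced.
users_per_stance = likes.groupby("liker_stance")["liker_id"].nunique()
normalized = counts / users_per_stance

print(normalized)
```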
The CreateDebate dataset was collected from an English online debate forum discussing four topics: abortion (ABO), gay rights (GAY), Obama (OBA), and marijuana (MAR). The posts are annotated as for (F) and against (A). Replies to posts in this dataset are also labeled with stance and hence use the same data format as posts. The labeling results are shown in the right-hand side of Table TABREF12. We observe that the dataset is more balanced than the FBFans dataset. In addition, there are 977 unique users in the dataset. To compare with Hasan and Ng's work, we conducted five-fold cross-validation and report the results as the average over all folds BIBREF9, BIBREF5.
The FBFans dataset has more integrated functions than the CreateDebate dataset; thus our model can utilize all linguistic and extra-linguistic features. For the CreateDebate dataset, on the other hand, the like and comment features are not available (as there is a stance label for each reply, replies are evaluated as posts, as in previous work), but we still implemented our model using the content, author, and topic information.
To test whether the assumption of this paper – that posts attract users who hold the same stance to like them – is reliable, we examine the likes from authors of different stances.
|
What is the assumption of this paper?
|
That posts attract users who hold the same stance to like them.
|
null | false
| null |
You are a product manager at an enterprise software company Y. Your CEO has asked you to write a report on how well the company supports X capabilities. Provide a template email to send to your CEO that includes a TLDR and a link to the report. Use “<” and “>” to denote placeholder text you need to customize.
|
<CEO name>:
Please see the following overview of Y’s support on X capabilities:
<link to report>
TLDR
* <The most important takeaway you want the CEO to get from this report>
* <A critical gap you identified in Y’s ability to support X capabilities>
* <A key learning you were surprised by after performing your research for the report>
Let me know if you have any comments/questions. We aim to use this report to inform our product strategy.
<Your name>
|
|
null | false
| null |
How much did the United States pay for Alaska when it bought the land from Russia?
|
$7.2 million.
|
|
null | false
| null |
What is the difference between a banana and a plantain?
|
While bananas and plantains share a lot of similar physical attributes, their uses are quite different. For example, bananas are typically used in sweeter dishes as they are considered to be a fruit and contain higher sugar amounts than plantains. Plantains are more starchy which is why they are not typically eaten raw and are often used in more savory dishes. Plantains are also typically larger in size than bananas. Bananas contain high levels of potassium which are excellent for our health and can help reduce the risk of heart disease and high blood pressure. Plantains also have health benefits but the benefits greatly depend on how they are prepared.
|
|
null | false
| null |
With the onset of the Industrial Revolution in Britain, an unprecedented growth in urbanisation took place, and the number of High Streets (the primary street for retail in Britain) rapidly grew. London was the world's largest city from about 1831 to 1925, with a population density of 325 per hectare. In addition to the growing number of stores selling goods such as Harding, Howell & Co. on Pall Mall—a contender for the first department store—the streets had scores of street sellers loudly advertising their goods and services. London's overcrowded conditions led to cholera epidemics, claiming 14,000 lives in 1848, and 6,000 in 1866. Rising traffic congestion led to the creation of the world's first local urban rail network. The Metropolitan Board of Works oversaw infrastructure expansion in the capital and some surrounding counties; it was abolished in 1889 when the London County Council was created out of county areas surrounding the capital.
|
Given this paragraph about London, give me one reason why epidemics were spread in London.
|
London's overcrowded conditions were one reason why epidemics spread in London.
|
null | false
| null |
I'm looking to learn a new programming language. Can you recommend one?
|
Which programming language to learn depends on what type of programming you want to do. Python is an easy to learn, general purpose language that is quite popular. If you are aspiring to do front end development work, JavaScript would be a good choice. If you are an experienced developer looking for a high performance language, Rust might be an appropriate choice as it's gaining popularity.
|
|
null | false
| null |
What is IMPS?
|
Immediate Payment Service (IMPS) is an instant payment inter-bank electronic funds transfer system in India. IMPS offers an inter-bank electronic fund transfer service through mobile phones. The service is available 24x7 throughout the year including bank holidays. NEFT was also made available 24x7 from December 2019. RTGS was also made available 24x7 from 14th December 2020.
It is managed by the National Payments Corporation of India (NPCI) and is built upon the existing National Financial Switch network. In 2010, the NPCI initially carried out a pilot for the mobile payment system with 4 member banks (State Bank of India, Bank of India, Union Bank of India and ICICI Bank), and expanded it to include Yes Bank, Axis Bank and HDFC Bank later that year. IMPS was publicly launched on 22 November 2010. Currently, there are 53 commercial banks, 101 rural/district/urban and cooperative banks, and 24 PPIs (prepaid payment instruments) signed up for the IMPS service.
Around 200 million IMPS transactions, amounting to roughly US$20 billion in transaction value, happen every month in India. The sender needs to know the bank account number and the Indian Financial System Code of the beneficiary to transfer money.
|
|
null | false
| null |
What are some examples of countries to visit for an amazing culinary experience?
|
Italy, India, Peru, Japan, Singapore, Thailand, Spain, Vietnam and Portugal
|
|
null | false
| 462
|
Theoretical Analysis of the DG Setting and Algorithms. The DG problem setting was first analysed in Blanchard et al. (2011). Since then there have been some attempts to analyse DG algorithms from a generalisation-bound perspective. However, these studies have theoretical results that are either restricted to specific model classes, such as kernel machines, or make strong assumptions about how the domains seen during training will resemble those seen at test time, e.g., that all domains are convex combinations of a finite pre-determined set of prototypical domains. In contrast, our Rademacher complexity approach can be applied to a broad range of model classes (including neural networks), and makes comparatively milder assumptions about the relationship between domains, i.e., that they are i.i.d. samples from another arbitrary distribution over domains.
The majority of the existing work investigating the theoretical foundations of DG follows the initial formalisation of the domain generalisation problem put forth by Blanchard et al. (2011), where the goal is to minimise the expected error over unseen domains. However, several recent works have also explored the idea of bounding the error on a single unseen domain with the most pathological distribution shift. This type of analysis is typically rooted in methods from causal inference, rather than statistical learning theory. As a consequence, they are able to make stronger claims for the problems they address, but the scope of their analysis is necessarily limited to the scenarios where their assumptions about the underlying causal structures are valid. For example, Janzing (2019) provides bounds that assume problems conform to a specific class of structural equation models, and the analysis is performed under the assumption that infinite training data is available within each of the observed training domains. Throughout the work we address the standard DG formalisation given by Blanchard et al. (2011), where one is concerned with the expected performance of a model on domains sampled from some distribution over domains.
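As a sketch of this standard objective (our notation, not drawn verbatim from any of the cited papers): given a distribution $\mathcal{P}$ over domains, the learner seeks

```latex
\min_{f \in \mathcal{F}} \;\; \mathbb{E}_{D \sim \mathcal{P}} \; \mathbb{E}_{(x, y) \sim D} \left[ \ell(f(x), y) \right]
```

which in practice is estimated from $n$ training domains $D_1, \dots, D_n$ drawn i.i.d. from $\mathcal{P}$, each observed only through a finite sample.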
Others try to link domain adaptation objectives (where target domains are observable for alignment to source domains) to domain generalisation (where target domains are not observable and thus cannot correctly be used in a learning objective). One line of work proceeds by making assumptions on the structure of the distribution over possible domains (i.e., that it has support determined by the convex hull of a finite set of prototypical domains), which allows the domain alignment metric to be upper bounded. Another provides a bound that depends on an unobservable domain-distance quantity, which is then approximated in experiments using kernel density estimates. A further piece of work theoretically investigates the generalisation of ERM in a DG setting: it deals with online DG, where each time-step corresponds to observing a new domain, and the learner must produce a new model capable of generalising to novel domains. Another point of difference between that work and the standard DG problem setting of Blanchard et al. (2011) is that the domain at each time-step is chosen by an adversary. They analyse this game for a finite number of time-steps, but they assume each domain has an infinite amount of data. They also put some limitations on the adversary: e.g., it must choose a domain that is a convex combination of a finite number of pre-determined domains. In contrast, our theoretical analysis is in the more realistic setting where one has a finite amount of data per domain, and the domains we consider are not limited to convex combinations of a set of prototypical domains. Possibly the most similar work to our theoretical contributions also provides learning-theoretic generalisation bounds for DG. However, their analysis only applies to finite hypothesis classes (which does not include, e.g., linear models or neural networks), whereas ours can be applied to any class amenable to analysis with Rademacher complexity.
The main existing empirical analysis on DG compared several state-of-the-art DG methods under a common evaluation and hyper-parameter tuning protocol called DomainBed. It ultimately defends Empirical Risk Minimization (ERM) over more sophisticated alternatives on the grounds that no competitor consistently beats it across the benchmark suite. We also broadly defend ERM, and build on the same benchmark, but differently we provide a much deeper analysis into when and why ERM works. More specifically: (i) We provide a new theoretical analysis of ERM's generalisation quality, unlike the prior purely empirical evaluation. (ii) We re-use the DomainBed benchmark to directly corroborate this theory under controlled conditions using linear models, where model complexity can be tractably and accurately tuned. (iii) We use our complexity-based analysis to explain the previously erratic results of prior DomainBed competitors in terms of model complexity. (iv) We identify, and empirically validate, the preferred model selection criterion for DG, a point which was left inconclusive in that prior study.
However, several recent works have also explored the idea of bounding the error on a single unseen domain with the most pathological distribution shift (Janzing, 2019). This type of analysis is typically rooted in methods from causal inference, rather than statistical learning theory. As a consequence, they are able to make stronger claims for the problems they address, but the scope of their analysis is necessarily limited to the scenarios where their assumptions about the underlying causal structures are valid. For example, Janzing (2019) provides bounds that assume problems conform to a specific class of structural equation models, and the analysis is performed under the assumption that infinite training data is available within each of the observed training domains. Throughout the work we address the standard DG formalisation given by Blanchard et al. (2011), where one is concerned with the expected performance of a model on domains sampled from some distribution over domains.
|
How does this work relate to Janzing (2019)?
|
There are several significant differences between the scope of our work and that of Janzing. Janzing assumes access to infinite data in each domain (we assume a finite training set), uses a non-standard measure of model complexity (we use Rademacher complexity), and restricts analysis to problems with a specific causal structure, while we make no causal-structure assumption. The former means that the wide range of existing tools to measure or constrain Rademacher complexity for diverse model classes can be used with ours, but not Janzing's. The latter means that our analysis is more generally applicable to diverse problems. Moreover, it is unclear whether the object they bound (expected risk under the interventional distribution) is equivalent to what we bound (expected risk across all domains). We have added discussion of this paper (and others) to our related work section.
|
null | false
| null |
The Instituto de Enseñanza Secundaria Lope de Vega (IES Lope de Vega) is a public high school in Madrid, Spain. It is located in the calle de San Bernardo. It is operated by the regional administration of the Community of Madrid.
|
In which city is this high school located?
|
Madrid
|
null | false
| null |
Richard Louis Proenneke (/ˈprɛnəkiː/; May 4, 1916 – April 20, 2003) was an American self-educated naturalist, conservationist, writer, and wildlife photographer who, from the age of about 51, lived alone for nearly thirty years (1968–1998) in the mountains of Alaska in a log cabin that he constructed by hand near the shore of Twin Lakes. Proenneke hunted, fished, raised and gathered much of his own food, and also had supplies flown in occasionally. He documented his activities in journals and on film, and also recorded valuable meteorological and natural data. The journals and film were later used by others to write books and produce documentaries about his time in the wilderness.
|
When he lived in the mountains of Alaska, was Richard Proenneke fully self-sufficient?
|
No; although he hunted, fished, raised, and gathered much of his own food, he also had supplies flown in occasionally, so he was not fully self-sufficient.
|
null | false
| null |
Frost became interested in sport at a young age and was an all-round athlete. He won medals for running and swimming and was a prominent member of the Winchester Harriers sports club. He particularly excelled as a cyclist, entering competitive events at an early age and holding several amateur cycling records. In 1898, Frost won, within the space of a fortnight, Britain's two leading cycle racing trophies, the Bath Road 100 and the Carwardine Challenge Cup, as well as the national championship. By the time he retired he had won every important amateur cycling trophy in the nation.
He is particularly remembered for winning the Carwardine Challenge Cup, a 100-mile race held annually at the Herne Hill race track. This huge cup was donated by the vice-president of the Anerley Bicycle Club in 1895 and was made from 650 ounces of silver, with ornamentation of solid gold. The tradition at the time was that if someone won a trophy three times they could keep it. Frost, riding for the Bath Road club in Hayes, first won it in 1896, repeated the victory in 1898 and was victorious for the third time in 1900. The cup is occasionally exhibited in Winchester.
|
Which trophy was Frost able to keep and why?
|
Frost was a prominent cyclist. In the late 1800s and early 1900s, when he was active, he won the Carwardine Challenge Cup and was able to keep it. The tradition at the time was that if one could win a trophy 3 times, one could keep the trophy. Frost won this trophy in 1896, 1898 and 1900.
|
null | false
| null |
Among those animals, what are mammals? platypus, Greenland shark, puffer fish, rattlesnake, orca, kangaroo, ostrich
|
From the list above, the mammals are: platypus, orca, and kangaroo. The Greenland shark and puffer fish are fish, not mammals. The rattlesnake is a reptile. The ostrich is a bird. Note that although the orca lives in the sea and resembles large fish such as sharks, it is in fact a mammal.
|
|
1911.03243
| false
| null |
Our pool of annotators is selected after several short training rounds, with up to 15 predicates per round, in which they received extensive personal feedback. One out of three participants was selected after exhibiting good performance, tested against expert annotations.
Our pool of annotators is selected after several short training rounds, with up to 15 predicates per round, in which they received extensive personal feedback.
|
How are workers trained?
|
The answers are shown as follows:
* extensive personal feedback
|
null | false
| null |
Warren Edward Buffett (/ˈbʌfɪt/ BUF-it; born August 30, 1930) is an American business magnate, investor, and philanthropist. He is currently the chairman and CEO of Berkshire Hathaway. He is one of the best-known fundamental investors in the world as a result of his immense investment success possessing a net worth of $104 billion as of March 2023, making him the fifth-richest person in the world.
Buffett was born in Omaha, Nebraska. He developed an interest in business and investing during his youth, eventually entering the Wharton School of the University of Pennsylvania in 1947 before transferring to and graduating from the University of Nebraska at 19. He went on to graduate from Columbia Business School, where he molded his investment philosophy around the concept of value investing pioneered by Benjamin Graham. He attended New York Institute of Finance to focus on his economics background and soon after began various investment business partnerships, including one with Graham. He created Buffett Partnership, Ltd in 1956 and his investment firm eventually acquired a textile manufacturing firm called Berkshire Hathaway, assuming its name to create a diversified holding company, and later as the company's chairman and majority shareholder in 1970. In 1978, Charlie Munger joined Buffett as vice-chairman.
Since 1970, Buffett has presided as the chairman and largest shareholder of Berkshire Hathaway, one of America's largest holding companies and world's leading corporate conglomerates. He has been referred to as the "Oracle" or "Sage" of Omaha by global media as a result of having accumulated a massive fortune derived from his business and investment success. He is noted for his adherence to the principles of value investing, and his personal frugality despite his vast wealth.
Buffett is also a noted philanthropist, having pledged to give away 99 percent of his fortune to philanthropic causes, primarily via the Bill & Melinda Gates Foundation. He founded The Giving Pledge in 2010 with Bill Gates, whereby billionaires pledge to give away at least half of their fortunes.
|
Who is Warren Buffet?
|
Warren Edward Buffett is an American business magnate, investor, and philanthropist. He is currently the chairman and CEO of Berkshire Hathaway. He is one of the best-known fundamental investors in the world as a result of his immense investment success possessing a net worth of $104 billion as of March 2023, making him the fifth-richest person in the world.
Buffett was born in Omaha, Nebraska. He developed an interest in business and investing during his youth, eventually entering the Wharton School of the University of Pennsylvania in 1947 before transferring to and graduating from the University of Nebraska at 19. He went on to graduate from Columbia Business School, where he molded his investment philosophy around the concept of value investing pioneered by Benjamin Graham. He attended New York Institute of Finance to focus on his economics background and soon after began various investment business partnerships, including one with Graham. He created Buffett Partnership, Ltd in 1956 and his investment firm eventually acquired a textile manufacturing firm called Berkshire Hathaway, assuming its name to create a diversified holding company, and later as the company's chairman and majority shareholder in 1970. In 1978, Charlie Munger joined Buffett as vice-chairman.
Since 1970, Buffett has presided as the chairman and largest shareholder of Berkshire Hathaway, one of America's largest holding companies and world's leading corporate conglomerates.
Buffett is also a noted philanthropist, having pledged to give away 99 percent of his fortune to philanthropic causes, primarily via the Bill & Melinda Gates Foundation. He founded The Giving Pledge in 2010 with Bill Gates, whereby billionaires pledge to give away at least half of their fortunes.
|
null | false
| null |
Identify which instrument is string or percussion: Handbell, Dobro
|
Dobro is string, Handbell is percussion.
|
|
null | false
| null |
Why is Santa Cruz, California a great place to live?
|
Santa Cruz, California has one of the best climates in the continental U.S. – being close to the Pacific Ocean, temperatures stay warmer in the winters and cooler in the summers, but without the dreaded fog that nearby cities like San Francisco or Monterey experience. The small city is world-famous for surfing, and the relaxed vibe that comes with it. Natural beauty is abundant, with the mountains and beach only a few miles apart.
|
|
null | false
| 267
|
In this work we present an ensemble classifier for detecting hate speech in short text, such as tweets. The input to the base-classifiers consists of not only the standard word uni-grams, but also a set of features describing each user's historical tendency to post abusive messages. Our main innovations are: i) a deep learning architecture that uses word frequency vectorisation for implementing the above features, ii) an experimental evaluation of the above model on a public dataset of labeled tweets, iii) an open-sourced implementation built on top of Keras.
The results show that our approach outperforms the current state of the art, and to the best of our knowledge, no other model has achieved better performance in classifying short messages. The approach does not rely on pre-trained vectors, which provides a serious advantage when dealing with short messages of this kind. More specifically, users will often prefer to obfuscate their offensive terms using shorter slang words or create new words by `inventive' spelling and word concatenation. For instance, the word `Islamolunatic' is not available in the popular pre-trained word embeddings (Word2Vec or GloVe), even though it appears with a rather high frequency in racist postings. Hence, word frequency vectorization is preferable to the pre-trained word embeddings used in prior works if one aims to build a language-agnostic solution.
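As a rough sketch of this word-frequency vectorisation (our illustration, not the authors' released code), using the Keras tokenizer that the implementation is built on; the tiny tweet list and vocabulary size are placeholder values:

```python
from tensorflow.keras.preprocessing.text import Tokenizer

tweets = [
    "some obfuscated slang here",
    "a perfectly ordinary tweet",
]

# Build a vocabulary from the training tweets themselves, so invented or
# concatenated words are kept instead of being dropped as out-of-vocabulary,
# which is what happens with pre-trained Word2Vec/GloVe lookups.
tokenizer = Tokenizer(num_words=10_000)
tokenizer.fit_on_texts(tweets)

# mode="count" gives raw word-frequency vectors; mode="freq" normalizes
# by tweet length. Either can feed the downstream classifier.
X = tokenizer.texts_to_matrix(tweets, mode="count")
print(X.shape)  # (num_tweets, vocabulary_size)
```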
We believe that deep learning models have high potential with respect to classifying text or analyzing sentiment in general. In our opinion there is still room for further improving the classification algorithms.
For future work we plan to investigate other sources of information that can be utilized to detect hateful messages. In addition, we intend to generalize the output received in the current experiment, with evaluation over other datasets, including analyzing texts written in different languages.
Our main innovations are: i) a deep learning architecture that uses word frequency vectorisation for implementing the above features, ii) an experimental evaluation of the above model on a public dataset of labeled tweets, iii) an open-sourced implementation built on top of Keras.
|
What are their main innovations?
|
i) a deep learning architecture that uses word frequency vectorisation for implementing the above features, ii) an experimental evaluation of the above model on a public dataset of labeled tweets, iii) an open-sourced implementation built on top of Keras.
|
null | false
| null |
What are some recommendations for European cities to visit?
|
Berlin, Amsterdam and Vienna are all considered great European vacation spots to visit.
|
|
null | false
| null |
Renal compensation is a mechanism by which the kidneys can regulate the plasma pH. It is slower than respiratory compensation, but has a greater ability to restore normal values. Kidneys maintain the acid-base balance through two mechanisms: the secretion of H+ ions and the reabsorption of bicarbonate (HCO3−) from urine, which is then released into the blood. The regulation of H+ ions and bicarbonate (HCO3−) is determined by the concentration of the two released within the urine. These mechanisms of secretion and reabsorption balance the pH of the bloodstream. A restored acid-base-balanced bloodstream thus leads to a restored acid-base balance throughout the entire body.
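For reference, the plasma pH that these mechanisms regulate is described by the Henderson-Hasselbalch equation for the bicarbonate buffer system (a standard physiology relation, not stated in the passage above):

```latex
\mathrm{pH} \;=\; \mathrm{p}K_a + \log_{10}\!\left( \frac{[\mathrm{HCO_3^-}]}{0.03 \times P_{\mathrm{CO_2}}} \right), \qquad \mathrm{p}K_a \approx 6.1
```

Renal reabsorption of bicarbonate raises the numerator, and H+ secretion regenerates bicarbonate, so both mechanisms push a low plasma pH back toward normal.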
|
What is renal compensation?
|
Renal compensation is the mechanism that the kidneys use to regulate plasma pH. It is a slower process than respiratory compensation, but increases the body's ability to restore normal values.
|
1910.04887
| false
| null |
For training, we aggregated (query, image) pairs using the region descriptions from the VG dataset and referring expressions from the ReferIt dataset. Our VG training set consists of 85% of the data: 16k images and 740k corresponding region descriptions. The ReferIt training data consists of 9k images and 54k referring expressions.
Our VG training set consists of 85% of the data: 16k images and 740k corresponding region descriptions.
|
How big is data provided by this research?
|
The answers are shown as follows:
* 16k images and 740k corresponding region descriptions
|
null | false
| null |
Tell me which of the following activities are types of workouts: walking, running, weightlifting, HIIT, cardio, yoga, sleeping, sitting at your chair, standing, looking into the distance, watching TV.
|
Working out typically involves movement of the body and muscles. It often involves stretching or contracting muscles and can raise the heart rate. Walking, running, weightlifting, HIIT, cardio, and yoga are all types of workouts.
|
|
null | false
| null |
What is more soothing - greenery or water bodies?
|
There is no one right answer to this; it depends. No one finds a rough ocean or a wind-whipped tree soothing. Some people like still bodies of water, especially large ones: despite being enormous (oceans, lakes), they give a sense of calmness behind the scale. Just because you are calm does not mean you are weak. People find still water relaxing. On the other hand, nature with greenery everywhere, as in spring, gives a good feeling. Winters are chilly, with bare trees and no colors, whereas when spring starts you see fresh green leaves everywhere, the temperature improves, and nature looks more colorful. This brings a happy feeling to people. The sounds of birds, small water canals, and gentle wind through the trees are a welcome distraction for people used to living in cities surrounded by tall buildings, noise, and traffic. This is one of the reasons why climate change is increasingly becoming a hot topic.
|
|
null | false
| null |
Give me a bulleted list of movie ideas.
|
- The President of the United States and a gorilla become best friends. The gorilla lives in the White House and causes all sorts of scandals. Eventually the American people learn to love the gorilla and elect him as the next president.
- An undercover cop has a great idea for an invention (a voice box that makes you talk in an accent). He tries to get his idea patented and marketed, all while not blowing his cover.
- A war movie where everyone in a platoon deserts together and goes and starts a new community on an isolated island.
|
|
1909.03544
| false
| null |
The POS tagging and lemmatization results are presented in Table TABREF44. The word2vec word embeddings (WE) considerably increase performance compared to the baseline, especially in POS tagging. When only Flair embeddings are added to the baseline, we also observe an improvement, but not as high. We hypothesise that the lower performance (in contrast with the results reported in BIBREF2) is caused by the size of the training data, because we train the word2vec WE on a considerably larger dataset than the Czech Flair model. However, when WE and Flair embeddings are combined, performance moderately increases, demonstrating that the two embedding methods produce at least partially complementary representations.
Table TABREF44 compares our best model with state-of-the-art results on PDT 2.0 (note that some of the related work used only a subset of PDT 2.0 and/or utilized gold morphological annotation). To the best of our knowledge, research on PDT parsing was performed mostly in the first decade of this century; therefore, even our baseline model substantially surpasses previous works. Our best model with contextualized embeddings achieves nearly 50% error reduction both in UAS and LAS.
Table TABREF47 shows the performance of analyzed embedding methods in a joint model performing POS tagging, lemmatization, and dependency parsing on Czech PDT UD 2.3 treebank. This treebank is derived from PDT 3.5 a-layer, with original POS tags kept in XPOS, and the dependency trees and lemmas modified according to UD guidelines.
Table TABREF47 shows NER results (F1 score) on CNEC 1.1 and CNEC 2.0. Our sequence-to-sequence (seq2seq) model which captures the nested entities, clearly surpasses the current Czech NER state of the art. Furthermore, significant improvement is gained when adding the contextualized word embeddings (BERT and Flair) as optional input to the LSTM encoder. The strongest model is a combination of the sequence-to-sequence architecture with both BERT and Flair contextual word embeddings.
FLOAT SELECTED: Table 2. POS tagging and lemmatization results (accuracy) on PDT 3.5. Bold indicates the best result, italics related work. †Reported on PDT 2.0, which has the same underlying corpus, with minor changes in morphological annotation (our model results differ at 0.1% on PDT 2.0).
FLOAT SELECTED: Table 4. Dependency tree parsing results on PDT 2.0 a-layer. Bold indicates the best result, italics related work. †Possibly using gold POS tags. ‡Results as of 23 Mar 2019.
FLOAT SELECTED: Table 5. Czech PDT UD 2.3 results for POS tagging (UPOS: universal POS, XPOS: language-specific POS, UFeats: universal morphological features), lemmatization and dependency parsing (UAS, LAS, MLAS, and BLEX scores). Bold indicates the best result, italics related work.
FLOAT SELECTED: Table 6. Named entity recognition results (F1) on the Czech Named Entity Corpus. Bold indicates the best result, italics related work.
The POS tagging and lemmatization results are presented in Table TABREF44.
Table TABREF44 compares our best model with state-of-the-art results on PDT 2.0 (note that some of the related work used only a subset of PDT 2.0 and/or utilized gold morphological annotation).
Table TABREF47 shows the performance of analyzed embedding methods in a joint model performing POS tagging, lemmatization, and dependency parsing on Czech PDT UD 2.3 treebank.
Table TABREF47 shows NER results (F1 score) on CNEC 1.1 and CNEC 2.0.
FLOAT SELECTED: Table 2. POS tagging and lemmatization results (accuracy) on PDT 3.5. Bold indicates the best result, italics related work. †Reported on PDT 2.0, which has the same underlying corpus, with minor changes in morphological annotation (our model results differ at 0.1% on PDT 2.0).
FLOAT SELECTED: Table 4. Dependency tree parsing results on PDT 2.0 a-layer. Bold indicates the best result, italics related work. †Possibly using gold POS tags. ‡Results as of 23 Mar 2019.
FLOAT SELECTED: Table 5. Czech PDT UD 2.3 results for POS tagging (UPOS: universal POS, XPOS: language-specific POS, UFeats: universal morphological features), lemmatization and dependency parsing (UAS, LAS, MLAS, and BLEX scores). Bold indicates the best result, italics related work.
FLOAT SELECTED: Table 6. Named entity recognition results (F1) on the Czech Named Entity Corpus. Bold indicates the best result, italics related work.
|
What previous approaches did this method outperform?
|
The answers are shown as follows:
* Table TABREF44
* Table TABREF44
* Table TABREF47
* Table TABREF47
|
null | false
| null |
What is the fastest or speediest animal?
|
The fastest or speediest animal is the peregrine falcon. The peregrine falcon can achieve speeds of 242 mph (389 km/h) in a dive. However, the fastest land animal is the cheetah. The cheetah can achieve speeds of 75 mph (120 km/h).
|
|
null | false
| null |
What are the main components of Apache Kafka?
|
Apache Kafka is a distributed system that is composed of several components. The main components of Apache Kafka are:
- Topics: A topic is a category or feed name to which producers send messages and from which consumers read. It's append-only and immutable. Consumers can use an offset to track which messages they have read.
- Brokers: A Kafka cluster is composed of one or more brokers that manage the storage of messages in topics and serve as intermediaries between producers and consumers.
- Producers: Applications that produce data and publish it to one or more topics in a Kafka cluster. They control how events are assigned to partitions within a topic.
- Consumers: Applications that subscribe to one or more topics and consume published messages. The only metadata retained on a per-consumer basis is the offset or position of that consumer in a topic. This offset is controlled by the consumer.
- Partitions: Kafka uses partitions to split a topic's messages across multiple brokers for scalability and performance. Events with the same event key, such as the same customer identifier or vehicle ID, are written to the same partition, and Kafka guarantees that any consumer of a given topic partition will always read that partition’s events in exactly the same order as they were written.
- Replication: Multiple brokers always have a copy of the data to guarantee high availability and fault tolerance. This replication is performed at the topic partition level.
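A minimal sketch of how these components fit together, assuming the kafka-python client and a broker at localhost:9092 (the topic name, group id, and messages are illustrative):

```python
from kafka import KafkaProducer, KafkaConsumer

BROKER = "localhost:9092"
TOPIC = "vehicle-events"

# Producer: messages with the same key (e.g., a vehicle ID) land in the
# same partition, so per-key ordering is preserved.
producer = KafkaProducer(bootstrap_servers=BROKER)
producer.send(TOPIC, key=b"vehicle-42", value=b"engine_started")
producer.flush()

# Consumer: tracks its own position via per-partition offsets.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    group_id="telemetry-readers",
    auto_offset_reset="earliest",  # start from the beginning if no stored offset
)
for message in consumer:
    print(message.partition, message.offset, message.key, message.value)
    break  # read a single message for this sketch
```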
|
|
null | false
| null |
Which one of these does not belong to the group: Feeling, Joy, Anxiety, Anger, Sorrow
|
Feeling is the odd one out; the other terms are specific emotions.
|
|
null | false
| null |
Why do people fly on airplanes?
|
People fly on airplanes to travel between places faster. Driving is an alternative to flying, but when time is an issue, flying is the quickest way to get to your destination. Flying is also necessary when you need to get to an island or another continent that is not attached to the continent where you live; taking a boat may be too costly and slow in some cases, even if you don't like flying. People fly to take trips, visit family and friends, meet new people, interact with co-workers, and sight-see.
|
|
null | false
| 250
|
Learning semantic representations (embeddings) of different entities, such as textual, commercial, and physical, has been a recent and active area of research. Such representations can facilitate applications that rely on a notion of similarity, for example recommendation systems and ranking algorithms in e-commerce.
In natural language processing, word2vec BIBREF0 learns vector representations of words from large quantities of text, where each word is mapped to a $d$-dimensional vector such that semantically similar words have geometrically closer vectors. This is achieved by predicting either the context words appearing in a window around a given target word (skip-gram model), or the target word given the context (CBOW model). The main assumption is that words appearing frequently in similar contexts share statistical properties (the distributional hypothesis). Crucially, word2vec models, like many other word embedding models, preserve sequential information encoded in text so as to leverage word co-occurrence statistics. The skip-gram model has been adapted to other domains in order to learn dense representations of items other than words. For example, product embeddings in e-commerce BIBREF1 or vacation rental embeddings in the hospitality domain BIBREF2 can be learned by treating purchase histories or user click sequences as sentences and applying a word2vec approach.
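For illustration (the hotel IDs and hyperparameters are made up; this is not the paper's pipeline), click sessions can be fed to an off-the-shelf skip-gram implementation as if they were sentences:

```python
from gensim.models import Word2Vec

# Each "sentence" is one user's click session; each "word" is a hotel ID.
click_sessions = [
    ["hotel_12", "hotel_7", "hotel_7", "hotel_91"],
    ["hotel_3", "hotel_12", "hotel_44"],
]

model = Word2Vec(
    sentences=click_sessions,
    vector_size=32,   # embedding dimension d
    window=3,         # context window within a session
    sg=1,             # skip-gram (predict context items from the target)
    negative=5,       # negative sampling
    min_count=1,
)
print(model.wv["hotel_12"][:5])  # learned click-based embedding
```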
Most of the prior work on item embedding exploits the co-occurrence of items in a sequence as the main signal for learning the representation. One disadvantage of this approach is that it fails to incorporate rich structured information associated with the embedded items. For example, in the travel domain, where we seek to embed hotels and other travel-related entities, it could be helpful to encode explicit information such as user ratings, star ratings, hotel amenities, and location in addition to the implicit information encoded in the click-stream.
In this work, we propose an algorithm for learning hotel embeddings that combines sequential user click information in a word2vec approach with additional structured information about hotels. We propose a neural architecture that adopts and extends the skip-gram model to accommodate arbitrary relevant information of embedded items, including but not limited to geographic information, ratings, and item attributes. In experimental results, we show that enhancing the neural network to jointly encode click and supplemental structured information outperforms a skip-gram model that encodes the click information alone. The proposed architecture also naturally handles the cold-start problem for hotels with little or no historical clicks. Specifically, we can infer an embedding for these properties by leveraging their supplemental structured metadata.
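A rough PyTorch sketch of this fusion idea; the sub-embedding dimensions, input features, and concatenation scheme are our assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HotelEncoder(nn.Module):
    """Unified hotel embedding: the concatenation of click, location,
    rating, and attribute sub-embeddings, keeping the result
    interpretable and decomposable."""

    def __init__(self, n_hotels, n_attrs, d_click=24, d_loc=8, d_rate=4, d_attr=12):
        super().__init__()
        self.click = nn.Embedding(n_hotels, d_click)  # learned from click co-occurrence
        self.loc = nn.Linear(2, d_loc)                # (lat, lon) -> location sub-embedding
        self.rate = nn.Linear(2, d_rate)              # (star rating, mean user rating)
        self.attr = nn.Linear(n_attrs, d_attr)        # multi-hot amenity vector

    def forward(self, hotel_id, latlon, ratings, attrs):
        parts = [self.click(hotel_id), self.loc(latlon),
                 self.rate(ratings), self.attr(attrs)]
        return torch.cat(parts, dim=-1)

def skipgram_loss(encoder, target, context, negatives):
    """Skip-gram objective with negative sampling: a hotel should score
    higher with a co-clicked hotel than with randomly sampled ones.
    Each argument is a dict of the four encoder inputs; `negatives`
    carries an extra K dimension for the sampled hotels."""
    t = encoder(**target)                                         # (B, D)
    pos = F.logsigmoid((t * encoder(**context)).sum(-1))          # (B,)
    neg_scores = (t.unsqueeze(1) * encoder(**negatives)).sum(-1)  # (B, K)
    neg = F.logsigmoid(-neg_scores).sum(1)                        # (B,)
    return -(pos + neg).mean()
```

Under this sketch, a cold-start hotel with no click history still gets meaningful location, rating, and attribute sub-embeddings computed directly from its metadata; only the click sub-embedding would need to be imputed.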
Compared to previous work on item embeddings, the novel contributions of this paper are as follows:
We propose a novel framework for fusing multiple sources of information about an item (such as user click sequences and item-specific information) to learn item embeddings via self-supervised learning.
We generate an interpretable embedding which can be decomposed into sub-embeddings for clicks, location, ratings, and attributes, and employed either as separate component embeddings or a single, unified embedding.
It is also dynamic, meaning it is easy to reflect future changes in attributes such as star-rating or addition of amenities in the embedding vectors without retraining.
We address the cold-start problem by including hotel metadata which are independent of user click-stream interactions and available for all hotels. This helps us to better impute embeddings for sparse items/hotels.
We show significant gains over previous work based on click-embedding in several experimental studies.
The structure of the remainder of this paper is as follows. Section 2 gives an overview of some of the recent works on neural embedding. Section 3 provides details of the proposed framework, including the neural network architecture, training methodology, and how the cold-start problem is addressed. In Section 4, we present experimental results on several different tasks and a comparison with previous state-of-the-art work. Section 5 concludes the paper.
We propose a neural architecture that adopts and extends the skip-gram model to accommodate arbitrary relevant information of embedded items, including but not limited to geographic information, ratings, and item attributes.
|
What model does the neural architecture adopt and extend?
|
the skip-gram model.
|
null | false
| null |
What is the oldest musical composition in the world?
|
The Seikilos Epitaph is the oldest surviving complete musical composition in the world.
|
|
1810.06743
| true
| null |
Because our conversion rules are interpretable, we identify shortcomings in both resources, using each as validation for the other. We were able to find specific instances of incorrectly applied UniMorph annotation, as well as specific instances of cross-lingual inconsistency in both resources. These findings will harden both resources and better align them with their goal of universal, cross-lingual annotation.
Because our conversion rules are interpretable, we identify shortcomings in both resources, using each as validation for the other. We were able to find specific instances of incorrectly applied UniMorph annotation, as well as specific instances of cross-lingual inconsistency in both resources.
|
Do they look for inconsistencies between different languages' annotations in UniMorph?
|
Yes.
|
null | false
| 183
|
Even though machine translation has improved considerably with the advent of neural machine translation (NMT) BIBREF0 , BIBREF1 , the translation of pronouns remains a major issue. They are notoriously hard to translate since they often require context outside the current sentence.
As an example, consider the sentences in Figure FIGREF1 . In both languages, there is a pronoun in the second sentence that refers to the European Central Bank. When the second sentence is translated from English to German, the translation of the pronoun it is ambiguous. This ambiguity can only be resolved with context awareness: if a translation system has access to the previous English sentence, the previous German translation, or both, it can determine the antecedent the pronoun refers to. In this German sentence, the antecedent Europäische Zentralbank dictates the feminine gender of the pronoun sie.
It is unfortunate, then, that current NMT systems generally operate on the sentence level BIBREF2 , BIBREF3 , BIBREF4 . Documents are translated sentence-by-sentence for practical reasons, such as line-based processing in a pipeline and reduced computational complexity. Furthermore, improvements of larger-context models over baselines in terms of document-level metrics such as BLEU or RIBES have been moderate, so that their computational overhead does not seem justified, and so that it is hard to develop more effective context-aware architectures and empirically validate them.
To address this issue, we present an alternative way of evaluating larger-context models on a test set that allows us to specifically measure a model's capability to correctly translate pronouns. The test suite consists of pairs of source and target sentences, in combination with contrastive translation variants (for evaluation by model scoring) and additional linguistic and contextual information (for further analysis). The resource is freely available. Additionally, we evaluate several context-aware models that have recently been proposed in the literature on this test set, and extend existing models with parameter tying.
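Contrastive evaluation by model scoring can be sketched generically as follows, assuming only that a model exposes a score(source, translation) log-probability (a hypothetical interface, not the released evaluation code):

```python
def contrastive_accuracy(model, test_suite):
    """test_suite: iterable of (source, reference, contrastive_variants),
    where each variant differs from the reference only in how the
    pronoun is translated."""
    correct = 0
    total = 0
    for source, reference, variants in test_suite:
        ref_score = model.score(source, reference)
        # The model passes an example if the correct translation
        # outscores every contrastive (wrong-pronoun) variant.
        if all(ref_score > model.score(source, variant) for variant in variants):
            correct += 1
        total += 1
    return correct / total
```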
The main contributions of our paper are:
Section SECREF2 explains how our paper relates to existing work on context-aware models and the evaluation of pronoun translation. Section SECREF3 describes our test suite. The context-aware models we use in our experiments are detailed in Section SECREF4 . We discuss our experiments in Section SECREF5 and the results in Section SECREF6 .
To address this issue, we present an alternative way of evaluating larger-context models on a test set that allows us to specifically measure a model's capability to correctly translate pronouns.
|
What is the alternative way presented in this paper?
|
Evaluating larger-context models on a test set that allows one to specifically measure a model's capability to correctly translate pronouns.
|