paper_id: string (length 10)
yes_no: bool (2 classes)
paper_index: int64 (0–519)
evidence: string (0–37.7k chars)
question: string (4–11.7k chars)
answer: string (1–26k chars)
1910.11768
false
null
The functional dissimilarity score was computed using sentences from the test set of the CoNLL 2017 Universal Dependencies task BIBREF20 for the relevant languages, with the provided UPOS sequences. Furthermore, none of the evaluated models, including the proposed method, were trained with CoNLL 2017 data. We computed the nearest neighbours experiment for all languages in the training data for the above models. The results are shown in Table TABREF27. The results show that general-purpose language models do capture syntax information, which varies greatly across languages and models.
Which evaluation metrics do they use for language modelling?
The answers are shown as follows: * functional dissimilarity score * nearest neighbours experiment
null
false
null
What are some good Indian dishes?
There are many popular Indian dishes. Tandoori chicken, chicken tikka masala, and biryani are popular main courses. Samosas and naan bread are popular side dishes. Gulab jamun is a popular dessert.
null
false
null
Who was the original singer of Static-X?
The original singer of the metal band Static-X was Wayne Static.
1905.10044
false
null
Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words, and which are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable” if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes” or “no”. Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
how was the dataset built?
Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable” if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes” or “no”.
null
false
145
Language modeling is a probabilistic description of language phenomena. It provides essential context to distinguish words which sound similar and therefore has some of the most useful applications in Natural Language Processing (NLP), especially in downstream tasks like Automatic Speech Recognition (ASR). Recurrent Neural Networks (RNNs), especially Long Short-Term Memory (LSTM) networks BIBREF0, have been the typical solution to language modeling and do achieve strong results. In spite of these results, their fundamental sequential computation constraint has restricted their use in modeling long-term dependencies in sequential data. To address these issues, the Transformer architecture was introduced. Transformers rely completely on an attention mechanism to form global dependencies between input and output. They also offer more parallelization and have achieved SOTA results in language modeling, outperforming LSTM models BIBREF1. In recent years, we have seen a lot of development based on this standard Transformer model, particularly on unsupervised pre-training (BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7), which has set state-of-the-art results on multiple NLP benchmarks. One such model architecture has been the Bidirectional Encoder Representations from Transformers (BERT) model, which uses a deep bidirectional Transformer architecture. Another architecture of interest is the Transformer-XL, which introduces the notion of recurrence into a self-attention model. The primary research focus, though, has mostly been on English, for which abundant data is available. It is interesting to see the performance of these models for an agglutinative language like Finnish, which is morphologically richer than English. In this project, we explore the implementation of Transformer-based models (BERT and Transformer-XL) in language modeling for Finnish.
We will use the same training data as in BIBREF8 so that we can make fair comparisons with the performance of the LSTM models. Also, as the BERT model is a bidirectional Transformer, we will have to approximate the conditional probabilities given a sequence of words. We also experiment with using sub-word units with Transformer-XL to cope with the large-vocabulary problems associated with the Finnish language. With smaller units, the modeled sequences are longer, and we hope that the recursive XL architecture can still allow us to model long-term effects. To the best of our knowledge, this is the first work with the Finnish language to use the following: • Approximation of perplexity using a BERT architecture • Using the Transformer-XL architecture with sub-word units • Comparison of Transformer and LSTM models as language models in the same comparable settings with an agglutinative language.
In how many aspects, this is the first work with the Finnish language to use?
Three.
null
false
105
Since humans amass more and more generally available data in the form of unstructured text, it would be very useful to teach machines to read and comprehend such data and then use this understanding to answer our questions. A significant amount of research has recently focused on answering one particular kind of question, the answer to which depends on understanding a context document. These are cloze-style questions BIBREF0, which require the reader to fill in a missing word in a sentence. An important advantage of such questions is that they can be generated automatically from a suitable text corpus, which allows us to produce a practically unlimited amount of them. That opens the task to notoriously data-hungry deep-learning techniques, which now seem to outperform all alternative approaches. Two such large-scale datasets have recently been proposed by researchers from Google DeepMind and Facebook AI: the CNN/Daily Mail dataset BIBREF1 and the Children's Book Test (CBT) BIBREF2, respectively. These have attracted a lot of attention from the research community BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, with a new state-of-the-art model coming out every few weeks. However, if our goal is a production-level system actually capable of helping humans, we want the model to use all available resources as efficiently as possible. Given that, we believe that if the community is striving to push performance as far as possible, it should move its work to larger data. This thinking goes in line with recent developments in the area of language modelling. For a long time, models were compared on several "standard" datasets, with publications often presenting minuscule improvements in performance. Then the large-scale One Billion Word corpus appeared BIBREF15, and it allowed Jozefowicz et al. to train much larger LSTM models BIBREF16 that almost halved the state-of-the-art perplexity on this dataset.
We think it is time to make a similar step in the area of text comprehension. Hence we are introducing the BookTest, a new dataset very similar to the Children's Book Test but more than 60 times larger, to enable training larger models even in the domain of text comprehension. Furthermore, the methodology used to create our data can later be used to create even larger datasets when the need arises thanks to further technological progress. We show that if we evaluate a model trained on the new dataset on the now-standard Children's Book Test dataset, we see an improvement in accuracy much larger than other research groups achieved by enhancing the model architecture itself (while still using the original CBT training data). By training on the new dataset, we reduce the prediction error by almost one third. On the named-entity version of CBT, this brings the ensemble of our models to the level of the human baseline as reported by Facebook BIBREF2. However, in the final section we show in our own human study that there is still room for improvement on the CBT beyond the performance of our model.
What do the authors find if evaluating a model trained on the new dataset on the now-standard Children's Book Test dataset?
An improvement in accuracy much larger than other research groups achieved by enhancing the model architecture itself (while still using the original CBT training data).
null
false
null
Classify each of these breeds as either a dog or a cat breed: Siamese, Ragdoll, Yorkshire Terrier, Birman, Cocker Spaniel, Saint Bernard.
Siamese = Cat Ragdoll = Cat Yorkshire Terrier = Dog Birman = Cat Cocker Spaniel = Dog Saint Bernard = Dog
null
false
null
Madeirans migrated to the United States, Venezuela, Brazil, Guyana, Saint Vincent and the Grenadines, South Africa and Trinidad and Tobago.
Extract the destination countries of the Madeira diaspora from the text. Separate them with a comma.
United States, Venezuela, Brazil, Guyana, Saint Vincent and the Grenadines, South Africa, Trinidad and Tobago
null
false
null
What are some types of sorting algorithms?
1. Bubble Sort 2. Quicksort 3. Insertion Sort 4. Merge Sort 5. Heap Sort 6. Radix Sort
null
false
16
In the above experiments we initialize the structured syntax components with the pre-trained Gaussian or discrete baseline, which is shown to be a useful technique for training our deep models. We further study the results with fully random initialization. In the POS tagging experiment, we report the results in Table TABREF48. While the performance with 4 layers is comparable to the pre-trained Gaussian initialization, deeper projections (8 or 16 layers) result in a dramatic drop in performance. This suggests that the structured syntax model with very deep projections is difficult to train from scratch, and a simpler projection might be a good compromise in the random initialization setting. In contrast to the Markov prior in the POS tagging experiments, our parsing model seems to be quite sensitive to the initialization. For example, the directed accuracy of our approach on sentences of length INLINEFORM0 is below 40.0 with random initialization. This is consistent with previous work that has noted the importance of careful initialization for DMV-based models, such as the commonly used harmonic initializer BIBREF2. However, it is not straightforward to apply the harmonic initializer for DMV directly in our model without some kind of pre-training, since we do not observe gold POS. We investigate the effect of the choice of pre-trained embedding on performance when using our approach. To this end, we additionally include results using fastText embeddings BIBREF41 – which, in contrast with skip-gram embeddings, include character-level information. We set the context window size to 1 and the dimension size to 100, as in the skip-gram training, while keeping other parameters at their defaults. These results are summarized in Table TABREF50 and Table TABREF51. While fastText embeddings lead to reduced performance with our model, our approach still yields an improvement over the Gaussian baseline with the new observed embedding space.
What is the dimension size set when investigating the effect of the choice of pre-trained embedding on performance while using their approach?
100.
null
false
null
What is the state flower of California?
The state flower of California is the Golden Poppy.
1702.06378
true
null
One of the major drawbacks of SCRF models is their high computational cost. In our experiments, the CTC model is around 3–4 times faster than the SRNN model that uses the same RNN encoder. The joint model by multitask learning is slightly more expensive than the stand-alone SRNN model. To cut down the computational cost, we investigated if CTC can be used to pretrain the RNN encoder to speed up the training of the joint model. This is analogous to sequence training of HMM acoustic models, where the network is usually pretrained by the frame-level CE criterion. Figure 2 shows the convergence curves of the joint model with and without CTC pretraining, and we see pretraining indeed improves the convergence speed of the joint model.
Can SCRF be used to pretrain the model?
No.
null
false
null
Is cricket a popular sports in USA?
Cricket is not a popular sport in the USA, although it is gaining popularity for several reasons. First, immigrants from Asia, Australia and the UK have launched local leagues for their second generation living in the USA. Second, Microsoft has recently invested in an international-level cricket ground in Redmond, Washington. The USA has also recently launched a state-level cricket league that will feature some well-known legends of cricket.
null
false
null
To achieve basic objectives and implement fundamental qualities, GAAP has four basic assumptions, four basic principles, and four basic constraints.

Assumptions
Business Entity: assumes that the business is separate from its owners or other businesses. Revenue and expense should be kept separate from personal expenses.
Going Concern: assumes that the business will be in operation indefinitely. This validates the methods of asset capitalization, depreciation, and amortization. Only when liquidation is certain is this assumption not applicable. The business will continue to exist in the unforeseeable future.
Monetary Unit principle: assumes a stable currency is going to be the unit of record. The FASB accepts the nominal value of the US Dollar as the monetary unit of record, unadjusted for inflation.
Time-period principle: implies that the economic activities of an enterprise can be divided into artificial time periods.

Principles
Historical cost principle: requires companies to account for and report assets and liabilities at acquisition cost rather than fair market value. This principle provides information that is reliable (removing the opportunity to provide subjective and potentially biased market values), but not very relevant. Thus there is a trend toward using fair values. Most debts and securities are now reported at market values.
Revenue recognition principle: holds that companies should record revenue when earned, not when received. The flow of cash has no bearing on the recognition of revenue. This is the essence of accrual-basis accounting. Conversely, however, losses must be recognized when their occurrence becomes probable, whether or not they have actually occurred. This comports with the constraint of conservatism, yet brings it into conflict with the constraint of consistency, in that reflecting revenues/gains is inconsistent with the way in which losses are reflected.
Matching principle: expenses have to be matched with revenues as long as it is reasonable to do so. Expenses are recognized not when the work is performed, or when a product is produced, but when the work or the product actually makes its contribution to revenue. Only if no connection with revenue can be established may cost be charged as an expense to the current period (e.g., office salaries and other administrative expenses). This principle allows greater evaluation of actual profitability and performance (it shows how much was spent to earn revenue). Depreciation and Cost of Goods Sold are good examples of the application of this principle.
Full disclosure principle: the amount and kinds of information disclosed should be decided based on a trade-off analysis, as a larger amount of information costs more to prepare and use. Information disclosed should be enough to make a judgment while keeping costs reasonable. Information is presented in the main body of financial statements, in the notes, or as supplementary information.

Constraints
Objectivity principle: the company financial statements provided by the accountants should be based on objective evidence.
Materiality principle: the significance of an item should be considered when it is reported. An item is considered significant when it would affect the decision of a reasonable individual.
Consistency principle: the company uses the same accounting principles and methods from period to period.
Conservatism principle: when choosing between two solutions, the one with the less favorable outcome should be chosen (see convention of conservatism).
Cost constraint: the benefits of reporting financial information should justify and be greater than the costs imposed on supplying it.
Summarize the four basic principles of GAAP accounting
The four basic principles of GAAP accounting are: 1. Historical Cost Principle: Accounting and reporting assets and liabilities acquisition costs instead of their fair market value 2. Revenue Recognition Principle: Record revenue when earned instead of received 3. Matching Principle: Expenses must be matched with revenues 4. Full Disclosure Principle: The amount of information disclosed must be based on a trade-off analysis between cost and transparency
null
false
null
What are 5 really good golf ball brands for amateur players
Vice, Bridgestone, Taylormade, Srixon, Callaway
null
false
141
In this paper, we propose a novel deep learning model for Thai sentence segmentation. This study makes three main contributions. The first contribution is to integrate a local representation based on n-gram embedding into our deep model. This approach helps to capture word groups near sentence boundaries, allowing the model to identify boundaries more accurately. Second, we integrate a distant representation obtained from self-attention modules to capture sentence contextual information. This approach allows the model to focus on the initial words of dependent clauses (e.g., “Before”, “If”, and “Although”). The last contribution is an adaptation of CVT, which allows the model to utilize unlabeled data to produce effective local and distant representations. The experiment was conducted on two Thai datasets, Orchid and UGWC, and one English punctuation restoration dataset, IWSLT. English punctuation restoration is similar to our Thai sentence segmentation. On the Thai sentence segmentation task, our model achieves F1 scores of 92.5% and 88.9% on the Orchid and UGWC datasets, constituting relative error reductions of 7.4% and 10.5%, respectively. On the English punctuation task, the 2-class F1 score reached 82.7% when considering only two punctuation classes (making the task similar to sentence segmentation in Thai). Moreover, our model outperforms the model integrated with pretrained word vectors in terms of the overall F1 score on the IWSLT dataset. Based on our contributions, the local representation scheme has the highest impact on the Thai corpus, while the distant representation and CVT result in strong improvements on the English dataset. Moreover, our model can also be applied to elementary discourse unit (EDU) segmentation, which is used as the minimal syntactic unit for downstream tasks such as text summarization and machine translation. However, no experiments have been conducted to determine how different sentences and EDUs affect downstream tasks.
Therefore, the evaluation of downstream tasks from different sources needs to be studied.
What are the contributions of this study?
The first contribution is to integrate a local representation based on n-gram embedding into their deep model. Second, they integrate a distant representation obtained from self-attention modules to capture sentence contextual information. The last contribution is an adaptation of CVT, which allows the model to utilize unlabeled data to produce effective local and distant representations.
1911.05960
true
null
The experimental results are shown in Table TABREF35. As mentioned before, all RNNs in these models are bi-directional, because we wonder whether our bi-CRU could still give substantial improvements over a bi-GRU that can capture both history and future information. As we can see, all variants of our CRU model give substantial improvements over the traditional GRU model, with maximum gains of 2.7%, 1.0%, and 1.9% observed on the three datasets, respectively. We also found that, though we adopt a straightforward classification model, our CRU model outperforms the state-of-the-art systems by 0.6%, 0.7%, and 0.8%, respectively, which demonstrates its effectiveness. By employing a more sophisticated architecture or introducing task-specific features, we think there is still much room for further improvement, which is beyond the scope of this paper.
Do experiment results show consistent significant improvement of new approach over traditional CNN and RNN models?
Yes.
null
false
null
Identify which instrument is string or percussion: Hydraulophone, Mandriola
Mandriola is string, Hydraulophone is percussion.
1911.02711
false
null
We evaluate our proposed model on the SNAP (Stanford Network Analysis Project) Amazon review datasets BIBREF8, which contain not only reviews and ratings, but also golden summaries. In scenarios where there is no user-written summary for a review, we use a pointer-generator network BIBREF9 to generate abstractive summaries. Empirical results show that our model significantly outperforms all strong baselines, including joint modeling, separate encoder and joint encoder methods. In addition, our model achieves new state-of-the-art performance, attaining 2.1% (with generated summary) and 4.8% (with golden summary) absolute improvements over the previous best method on the SNAP Amazon review benchmark. We empirically compare different methods using the Amazon SNAP Review Dataset BIBREF20, which is a part of the Stanford Network Analysis Project. The raw dataset consists of around 34 million Amazon reviews in different domains, such as books, games, sports and movies. Each review mainly contains a product ID, a piece of user information, a plain text review, a review summary and an overall sentiment rating which ranges from 1 to 5. The statistics of our adopted dataset are shown in Table TABREF20. For fair comparison with previous work, we adopt the same partitions used by previous work BIBREF6, BIBREF7; that is, for each domain, the first 1000 samples are taken as the development set, the following 1000 samples as the test set, and the rest as the training set.
Which review dataset do they use?
The answers are shown as follows: * SNAP (Stanford Network Analysis Project)
null
false
104
We use the same baseline as used by lang2011unsupervised which has been shown to be difficult to outperform. This baseline assigns a semantic role to a constituent based on its syntactic function, i.e. the dependency relation to its head. If there is a total of INLINEFORM0 clusters, INLINEFORM1 most frequent syntactic functions get a cluster each, and the rest are assigned to the INLINEFORM2 th cluster. However, we get an F1 score of 85.1% for argument identification on CoNLL 2009 EN data as opposed to 80.7% reported by Titov and Klementiev (2012b).
How much does the author's F1 score improve compared to Titov and Klementiev's?
0.044.
null
false
null
Who are the main songwriters for the Grateful Dead?
The main songwriters for the Grateful Dead were Jerry Garcia, Bob Weir, Phil Lesh, Robert Hunter, and John Perry Barlow. Typically, Jerry Garcia and Robert Hunter worked on songs together, and Bob Weir and John Perry Barlow worked together. A lot of song credits are credited as Garcia/Hunter or Weir/Barlow.
null
false
6
Our motivation for using templates for data synthesis is that seq2seq synthesis models (as discussed in related work) tend to generate irrelevant and repeated words BIBREF17, while templates can produce more coherent and concise output. Also, extracting templates can be done either manually or automatically typically by training a few parameters or even doing no training, then external information in the form of keywords or snippets can be populated into the templates with the help of more sophisticated models. Accordingly, using templates can be very tempting for domains with limited resources such as ours. Model Structure. The model consists of 4 modules: 1. Template extraction: To convert human summaries into templates, we remove keywords in the summary to leave only non-keywords. We use Rapid Automatic Keyword Extraction (RAKE) BIBREF18 to identify keywords. 2. Template clustering: Upon converting human summaries into templates, we cluster them into $N$ clusters with the goal of using any template from the same cluster interchangeably. A template is first converted into embeddings using a pretrained BERT model BIBREF19, where template embedding is constructed by average pooling word embeddings. Templates are then clustered using k-medoid. 3. Summary rewriting: An encoder-attention-decoder with pointer network is trained to perform the rewriting task. The model is trained to inject keywords into a template and perform rewriting into a coherent paragraph. The produced rewrites are considered as candidate summaries. 4. Summary selection: After producing candidate summaries, we need to pick the best ones. We argue that the best candidates are those that are coherent and also convey the same meaning as the original human summary. We thus use a hybrid metric to score candidates, where the metric is a weighted sum of two scores and is calculated using Equations 1, 2, and 3. 
Eq.1 measures coherency using a language model (LM), Eq.2 measures how close a candidate is to a human summary using ROUGE scores, while Eq.3 picks the highest scored $N$ candidates as the final synthetic set. CS and HS are a candidate and human summary. $P(w)$ is the probability of word $w$ using a language model. $\alpha , \beta $ are weighting parameters. In this work we use $\alpha =\beta =1$ for all experiments. $R_{i}(CS,HS)$ is ROUGE-i score between CS and HS for i=1, 2, and $l$. Model Training. Before using the synthesis model, some of the constructing modules (rewriting module, scoring LM) need training. To train the rewriting model, we use another dataset consisting of a set of samples, where each sample can be a text snippet (sentence, paragraph, etc.). For each sample, keywords are extracted using RAKE, then removed. The keywords plus the sample with no keywords are then passed to the rewriting model. The training objective of this model is to reconstruct the original sample, which can be seen as trying to inject extracted keywords back into a template. Model Usage. To use the synthesis model to generate new samples, the set of human summaries are fed to the model, passing through the sub-modules in the following order: 1. Human summaries first pass through the template extraction module, converting each summary $s_i$ into template $t_i$ and the corresponding keywords $kw_i$. 2. Templates are then passed to the clustering module, producing a set of clusters. Each cluster $C$ contains a number of similar templates. 3. For each template $t_i$ and corresponding keywords $kw_i$ from step 1, find the cluster $C_i$ that contains the template $t_i$, then pass the set of templates within that clusters $\lbrace t_j\rbrace \forall {j},$ if $t_j \in C_i$ alongside the keywords $kw_i$ to the summary rewriting module. This will produce a set of candidate summaries. 4. 
The summary selection module scores and selects the highest $N$ candidates as the synthetic summaries.
Why do the authors use templates for data synthesis?
Because templates can produce more coherent and concise output, and extracting templates can be done either manually or automatically, typically by training a few parameters or even doing no training; external information in the form of keywords or snippets can then be populated into the templates with the help of more sophisticated models.
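The candidate scoring described in the evidence above is a weighted sum of a language-model fluency term and ROUGE overlap with the human summary (with alpha = beta = 1), after which the top-N candidates are kept. The sketch below is a simplified illustration, not the paper's exact Eq. 1-3: it assumes a unigram language model and uses only a ROUGE-1-style unigram F1 in place of the full ROUGE-1/2/L sum.

```python
import math
from collections import Counter

def lm_score(candidate, unigram_probs, eps=1e-6):
    """Mean log-probability of the candidate's words under a unigram LM
    (a simplified stand-in for the paper's coherency term, Eq. 1)."""
    words = candidate.split()
    return sum(math.log(unigram_probs.get(w, eps)) for w in words) / len(words)

def rouge_1_f(candidate, reference):
    """Unigram-overlap F1 (a simplified stand-in for the ROUGE terms, Eq. 2)."""
    c, r = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((c & r).values())
    if overlap == 0:
        return 0.0
    prec, rec = overlap / sum(c.values()), overlap / sum(r.values())
    return 2 * prec * rec / (prec + rec)

def select_candidates(candidates, human_summary, unigram_probs, n,
                      alpha=1.0, beta=1.0):
    """Score each candidate with the weighted hybrid metric and keep the
    top-n as the synthetic summaries (Eq. 3)."""
    scored = [(alpha * lm_score(c, unigram_probs)
               + beta * rouge_1_f(c, human_summary), c)
              for c in candidates]
    return [c for _, c in sorted(scored, reverse=True)[:n]]
```

A fluent candidate that also overlaps the human summary outscores a repetitive one, which is the intended effect of combining the two terms.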
null
false
null
Who is Jon Snow's biological parents?
While the answer has yet to be officially acknowledged in the "A Song of Ice and Fire" series, the "Game of Thrones" television show, which is based on the book series, revealed that Jon Snow is the son of the late Rhaegar Targaryen and Lyanna Stark.
null
false
null
The Man in the Saddle (German: Der Mann im Sattel) is a 1945 German film directed by and starring Harry Piel.
Given this reference paragraph, who directed "The Man in the Saddle"?
Harry Piel
null
false
null
Classify the names of the Presidents based on their country. Droupadi Murmu, Ram Nath Kovind, A. P. J. Abdul Kalam, Ranil Wickremesinghe
India - Droupadi Murmu, Ram Nath Kovind, A. P. J. Abdul Kalam; Sri Lanka - Ranil Wickremesinghe
1608.04917
true
null
On Twitter we can see results that are consistent with the RCV results for the left-to-center political spectrum. The exception, which clearly stands out, is the right-wing groups ENL and EFDD that seem to be the most cohesive ones. This is the direct opposite of what was observed in the RCV data. We speculate that this phenomenon can be attributed to the fact that European right-wing groups, on a European but also on a national level, rely to a large degree on social media to spread their narratives critical of European integration. We observed the same phenomenon recently during the Brexit campaign BIBREF38 . Along our interpretation the Brexit was “won” to some extent due to these social media activities, which are practically non-existent among the pro-EU political groups. The fact that ENL and EFDD are the least cohesive groups in the European Parliament can be attributed to their political focus. It seems more important for the group to agree on its anti-EU stance and to call for independence and sovereignty, and much less important to agree on other issues put forward in the parliament.
Do the authors mention any possible confounds in their study?
Yes.
null
false
null
The Almoravid dynasty (Arabic: المرابطون, romanized: Al-Murābiṭūn, lit. 'those from the ribats') was an imperial Berber Muslim dynasty centered in the territory of present-day Morocco. It established an empire in the 11th century that stretched over the western Maghreb and Al-Andalus, starting in the 1050s and lasting until its fall to the Almohads in 1147. The Almoravid capital was Marrakesh, a city founded by the Almoravid leader Abu Bakr ibn Umar c. 1070. The dynasty emerged from a coalition of the Lamtuna, Gudala, and Massufa, nomadic Berber tribes living in what is now Mauritania and the Western Sahara, traversing the territory between the Draa, the Niger, and the Senegal rivers.
Who are the almoravids ?
The Almoravids were an imperial Berber Muslim dynasty. They ruled in Morocco from the 1050s until 1147, and Marrakesh was their capital. They emerged from a coalition of the Lamtuna, Massufa, and Gudala, nomadic Berber tribes.
null
false
null
How can I make every color of paint?
To make paint colors you need the three primary colors: red, yellow, and blue, as well as black and white to adjust shade (black mixed with a color) and tint (white mixed with a color). The primary colors red and yellow mix to make orange, red and blue mix to make purple, and yellow and blue mix to make green. Other common colors such as pink can be obtained by mixing black or white into these base colors. For example, pink can be created by mixing a lot of white with a small amount of red.
null
false
null
Which Dickens novel is considered an autobiography?
David Copperfield
null
false
null
What is an emote?
An emote is an in-game response of a playable character (usually seen in third person). This is usually a dance move or pop culture reference. Oftentimes, emotes are a paid engagement via a micro-transaction.
null
false
null
Why can we only see one side of the Moon?
On Earth we only see one side of the moon because the Moon and the Earth are tidally locked and the Moon is in a state of synchronous rotation. This means that the Moon's rotational period, the time it takes to make one full rotation, is the same as its orbital period, the time it takes to orbit the Earth. So as the moon orbits the Earth it is also rotating the exact amount to keep one side facing Earth at all times.
null
false
null
Which of the following songs are by the Grateful Dead and which are by The Beatles: Don't Let Me Down, Sugar Magnolia, Hey Jude, Bertha, Throwing Stones, Let It Be, Truckin', Friend of the Devil, Uncle John's Band, and Here Comes the Sun.
The Grateful Dead songs on this list are Sugar Magnolia, Throwing Stones, Truckin', Friend of the Devil, and Uncle John's Band. The Beatles songs on this list are Don't Let Me Down, Hey Jude, Let It Be, and Here Comes the Sun.
null
false
null
Izuku Midoriya is a young man who dreams of becoming a Hero despite being bullied by his violent childhood friend Katsuki Bakugo for lacking a Quirk. After having a few encounters with his idol, All Might, who is Japan's greatest hero, Izuku is chosen by All Might to inherit his Quirk "One For All" (which gives him the ability to endlessly stockpile his raw power) and become his successor, as All Might was severely injured by his arch-nemesis All For One (whose Quirk allows him to steal other Quirks for his use and pass them over to others at his will) and does not have much time left to live. Izuku begins attending the U.A. High School for heroes in training alongside Bakugo and his friends from Class 1-A while keeping his connection with All Might a secret. During the course of the school year, Izuku and his friends work hard to improve themselves and have a few encounters with the League of Villains led by All For One's apprentice Tomura Shigaraki, who desires to kill All Might as part of their plan to take over the world. During one of these encounters, All Might and All For One have one last fight, which ends with All For One defeated and imprisoned, and All Might, having exhausted the last of One For All's power in himself, forced to retire.
In the following initial summary of the plot of the series My Hero Academia, what is the secret that Izuku must keep and from whom?
Izuku must keep secret the fact that All Might chose him to inherit All Might's Quirk "One For All"; he keeps this secret from Bakugo and his friends in Class 1-A.
1908.11425
false
null
We use the method of BIBREF5 to train neural sequence-to-sequence Spanish-English ST models. As in that study, before training ST, we pre-train the models using English ASR data from the Switchboard Telephone speech corpus BIBREF7, which consists of around 300 hours of English speech and transcripts. This was reported to substantially improve translation quality when the training set for ST was only tens of hours. Obtaining gold topic labels for our data would require substantial manual annotation, so we instead use the human translations from the 1K (train20h) training set utterances to train the NMF topic model with scikit-learn BIBREF14, and then use this model to infer topics on the evaluation set. These silver topics act as an oracle: they tell us what a topic model would infer if it had perfect translations. NMF and model hyperparameters are described in Appendix SECREF7.
What is the architecture of the model?
The answers are shown as follows: * BIBREF5 to train neural sequence-to-sequence * NMF topic model with scikit-learn BIBREF14
null
false
null
Roger Federer (German: [ˈrɔdʒər ˈfeːdərər]; born 8 August 1981) is a Swiss former professional tennis player. He was ranked world No. 1 by the Association of Tennis Professionals (ATP) for 310 weeks, including a record 237 consecutive weeks, and finished as the year-end No. 1 five times. He won 103 singles titles on the ATP Tour, the second most of all time, including 20 major men's singles titles, a record eight men's singles Wimbledon titles, an Open Era joint-record five men's singles US Open titles, and a joint-record six year-end championships. In his home country, he is regarded as "the greatest and most successful" Swiss sportsperson in history. A Wimbledon junior champion in 1998 and former ball boy, Federer won his first major singles title at Wimbledon in 2003 at age 21. Between 2003 and 2009, Federer played in 21 out of 28 major singles finals. He won three of the four majors and the ATP Finals in 2004, 2006, and 2007 as well as five consecutive titles at both Wimbledon and the US Open. He completed the career Grand Slam at the 2009 French Open after three consecutive runner-up finishes to Nadal, his main rival until 2010. At age 27, he surpassed Pete Sampras' record of 14 major men's singles titles at Wimbledon in 2009. Federer and Stan Wawrinka led the Switzerland Davis Cup team to their first title in 2014, following their Olympic doubles gold victory at the 2008 Beijing Olympics. Federer also won a silver medal in singles at the 2012 London Olympics, finishing runner-up to Andy Murray. After a half-year hiatus in late 2016 to recover from knee surgery, Federer returned to tennis, winning three more majors over the next two years, including the 2017 Australian Open over Rafael Nadal and an eighth singles title at the 2017 Wimbledon Championships. At the 2018 Australian Open, Federer became the first man to win 20 major singles titles and shortly the oldest ATP world No. 1 at age 36. 
In September 2022, he retired from professional tennis following the Laver Cup. A versatile all-court player, Federer's perceived effortlessness has made him highly popular among tennis fans. Originally lacking self-control as a junior, he transformed his on-court demeanor to become well-liked for his graciousness, winning the Stefan Edberg Sportsmanship Award 13 times. He has won the Laureus World Sportsman of the Year award a record five times. Outside of competing, he played an instrumental role in the creation of the Laver Cup team competition. He is also an active philanthropist. He established the Roger Federer Foundation, which targets impoverished children in southern Africa, and has raised funds in part through the Match for Africa exhibition series. He is routinely one of the top ten highest-paid athletes in any sport and ranked first among all athletes with $100 million in endorsement income in 2020.
Tell me about Roger Federer
Roger Federer, born 8 August 1981, is a Swiss former professional tennis player. He was ranked world No. 1 by the Association of Tennis Professionals (ATP) for 310 weeks, including a record 237 consecutive weeks, and finished as the year-end No. 1 five times. He won 103 singles titles on the ATP Tour, the second most of all time, including 20 major men's singles titles, a record eight men's singles Wimbledon titles, an Open Era joint-record five men's singles US Open titles, and a joint-record six year-end championships. In his home country, he is regarded as "the greatest and most successful" Swiss sportsperson in history. A Wimbledon junior champion in 1998 and former ball boy, Federer won his first major singles title at Wimbledon in 2003 at age 21. Between 2003 and 2009, Federer played in 21 out of 28 major singles finals. He won three of the four majors and the ATP Finals in 2004, 2006, and 2007 as well as five consecutive titles at both Wimbledon and the US Open. He completed the career Grand Slam at the 2009 French Open after three consecutive runner-up finishes to Nadal, his main rival until 2010. At age 27, he surpassed Pete Sampras' record of 14 major men's singles titles at Wimbledon in 2009. Federer and Stan Wawrinka led the Switzerland Davis Cup team to their first title in 2014, following their Olympic doubles gold victory at the 2008 Beijing Olympics. Federer also won a silver medal in singles at the 2012 London Olympics, finishing runner-up to Andy Murray. After a half-year hiatus in late 2016 to recover from knee surgery, Federer returned to tennis, winning three more majors over the next two years, including the 2017 Australian Open over Rafael Nadal and an eighth singles title at the 2017 Wimbledon Championships. At the 2018 Australian Open, Federer became the first man to win 20 major singles titles and shortly the oldest ATP world No. 1 at age 36. In September 2022, he retired from professional tennis following the Laver Cup. 
A versatile all-court player, Federer's perceived effortlessness has made him highly popular among tennis fans. Originally lacking self-control as a junior, he transformed his on-court demeanor to become well-liked for his graciousness, winning the Stefan Edberg Sportsmanship Award 13 times. He has won the Laureus World Sportsman of the Year award a record five times. Outside of competing, he played an instrumental role in the creation of the Laver Cup team competition. He is also an active philanthropist. He established the Roger Federer Foundation, which targets impoverished children in southern Africa, and has raised funds in part through the Match for Africa exhibition series. He is routinely one of the top ten highest-paid athletes in any sport and ranked first among all athletes with $100 million in endorsement income in 2020.
null
false
null
Who is Marlon Brando and when he was born?
Marlon Brando was born April 3, 1924 in Omaha, Nebraska, U.S. He was an iconic American movie actor. One of Brando's most popular movies is The Godfather; his performance in this movie earned him his second Academy Award for Best Actor.
null
false
114
We adopted forward/reverse perplexity BIBREF33 and Self-BLEU BIBREF34 to evaluate the quality of generated texts. Forward perplexity (PPL-F) indicates the perplexity on the generated data provided by a language model trained on real data to measure the fluency of generated samples. Reverse perplexity (PPL-R) switches the roles of generated data and real data to reflect the discrepancy between the generated distribution and the data distribution. Self-BLEU (S-BLEU) regards each sentence in the generated collection as hypothesis and the others as reference to obtain BLEU scores, which evaluates the diversity of generated results. Results are shown in Table TABREF33 . LeakGAN performs best on forward perplexity because it can generate more fluent samples. As for reverse perplexity, our model ARAML beats other baselines, showing that our model can fit the data distribution better. Other GANs, particularly LeakGAN, obtain high reverse perplexity due to mode collapse BIBREF12 , thus they only capture limited fluent expressions, resulting in large discrepancy between the generated distribution and data distribution. ARAML also outperforms the baselines in terms of Self-BLEU, indicating that our model doesn't fall into mode collapse with the help of the MLE training objective and has the ability to generate more diverse sentences. We also provide standard deviation of each metric in Table TABREF33 , reflecting the stability of each model's performance. Our model ARAML nearly achieves the smallest standard deviation in all the metrics, indicating that our framework outperforms policy gradient in the stability of adversarial training.
Does ARAML outperform the baselines in terms of Self-BLEU?
Yes, it does.
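Self-BLEU, as described in the evidence above, treats each generated sentence as the hypothesis and all the others as references, then averages the scores; a lower value means a more diverse collection. The sketch below is a simplification for illustration: it uses only clipped unigram precision, whereas real BLEU combines higher-order n-grams with a brevity penalty.

```python
from collections import Counter

def unigram_bleu(hypothesis, references):
    """Clipped unigram precision of a hypothesis against a set of references
    (a 1-gram-only, brevity-penalty-free stand-in for BLEU)."""
    hyp = Counter(hypothesis.split())
    # Clip each hypothesis count by the maximum count seen in any reference.
    max_ref = Counter()
    for ref in references:
        for w, c in Counter(ref.split()).items():
            max_ref[w] = max(max_ref[w], c)
    clipped = sum(min(c, max_ref[w]) for w, c in hyp.items())
    return clipped / sum(hyp.values())

def self_bleu(sentences):
    """Average each sentence's BLEU against all the others; higher Self-BLEU
    means the generated collection is more repetitive (less diverse)."""
    scores = [unigram_bleu(s, sentences[:i] + sentences[i + 1:])
              for i, s in enumerate(sentences)]
    return sum(scores) / len(scores)
```

A set of identical sentences scores 1.0, while a set with no shared words scores 0.0, matching the intuition that ARAML's lower Self-BLEU reflects greater diversity.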
null
false
null
Let's say you have one million dollars, how are you going to invest that?
With the current financial climate, I would split it 30% bonds, 30% stocks, 20% cash, and 20% commodities. That's how I would invest until I see something break due to persistently high interest rates.
null
false
null
What is the difference between a piano and a guitar?
A piano is a usually large percussive instrument where the player sits facing the instrument and uses their fingers to press keys that cause hammers to strike strings of varying tone. There are pedals on a piano that allow the strings to sustain their vibrations or dampen them to create a quieter playing effect. A guitar is a handheld instrument where the player uses one hand to pick or strum the strings and the other hand to press down on the strings in various positions in order to change the tone of each string.
null
false
161
In the present work, we propose FriendsBERT and ChatBERT for the multi-utterance emotion recognition task on EmotionLines dataset. The proposed models are adapted from BERT BIBREF5 with three main improvement during the model training procedure, which are the causal utterance modeling mechanism, specific model pre-training, and adapt weighted loss. The causal utterance modeling takes the advantages of the sentence-level context information during model inference. The specific model pre-training helps to against the bias in different text domain. The weighted loss avoids our model to only predict on large size sample. The effectiveness and generalizability of the proposed methods are demonstrated from the experiments. In future work, we consider to include the conditional probabilistic constraint $P ({\rm Emo}_{B} | \hat{\rm Emo}_{A})$. Model should predict the emotion based on a certain understanding about context emotions. This might be more reasonable for guiding model than just predicting emotion of ${\rm Sentence}_B$ directly. In addition, due to the limitation of BERT input format, ambiguous number of input sentences is now becoming an important design requirement for our future work. Also, personality embedding development will be another future work of the emotion recognition. The personality embedding will be considered as sentence embedding injected into word embedding, and it seems this additional information can contribute some improvement potentially. In the present work, we propose FriendsBERT and ChatBERT for the multi-utterance emotion recognition task on EmotionLines dataset. The proposed models are adapted from BERT BIBREF5 with three main improvement during the model training procedure, which are the causal utterance modeling mechanism, specific model pre-training, and adapt weighted loss. The causal utterance modeling takes the advantages of the sentence-level context information during model inference. 
The specific model pre-training helps to against the bias in different text domain. The weighted loss avoids our model to only predict on large size sample. The effectiveness and generalizability of the proposed methods are demonstrated from the experiments. In future work, we consider to include the conditional probabilistic constraint $P ({\rm Emo}_{B} | \hat{\rm Emo}_{A})$. Model should predict the emotion based on a certain understanding about context emotions. This might be more reasonable for guiding model than just predicting emotion of ${\rm Sentence}_B$ directly. In addition, due to the limitation of BERT input format, ambiguous number of input sentences is now becoming an important design requirement for our future work. Also, personality embedding development will be another future work of the emotion recognition. The personality embedding will be considered as sentence embedding injected into word embedding, and it seems this additional information can contribute some improvement potentially.
What are the author's plans for his future work?
They consider including the conditional probabilistic constraint P(EmoB|EmoA), so that the model predicts emotion based on an understanding of context emotions. Handling an ambiguous number of input sentences, given the limitation of the BERT input format, is another important design requirement for future work. The third direction is personality embedding development.
null
false
null
What are three meals that a small child is likely to eat for lunch?
Most small children will enjoy chicken nuggets, macaroni and cheese, or a peanut butter and jelly sandwich as the main course. You can include fruits, such as grapes, berries, apples, or oranges, as well as vegetables, such as carrots, celery, or broccoli, to make it a meal.
1707.07554
false
null
In order to verify the reliability of our technique in coverage expansion for infrequent words we did a set of experiments on the Rare Word similarity dataset BIBREF6 . The dataset comprises 2034 pairs of rare words, such as ulcerate-change and nurturance-care, judged by 10 raters on a [0,10] scale. Table TABREF15 shows the results on the dataset for three pre-trained word embeddings (cf. § SECREF2 ), in their initial form as well as when enriched with additional words from WordNet.
How are rare words defined?
The answers are shown as follows: * judged by 10 raters on a [0,10] scale
null
false
null
SH 115 begins at an intersection with Interstate 20 and Farm to Market Road 1927 in Pyote. The route travels north along the eastern edge of town and through Far West Texas oil fields before reaching Wink. The route then turns northeast, next reaching Kermit. The route travels through downtown Kermit, passing the Winkler County Courthouse. The route continues to the northeast, crossing oil fields of Winkler and Andrews Counties, before reaching Andrews. The route travels through Andrews, intersecting U.S. Route 385. The route continues northeast, crossing through the intersection of Andrews, Gaines, Martin, and Dawson Counties before reaching its terminus at SH 349 at the unincorporated community of Patricia, Texas.
Extract a list of all of the communties and counties mentioned in this article about SH 115.
Pyote, Wink, Kermit, Winkler, Andrews, Gaines, Martin, Dawson, Patricia
null
false
null
What is the best way to evaluate the performance of my marketing spend?
The primary purpose for evaluating the performance of marketing spend tends to be toward informing decisions around how to reallocate that marketing spend in future periods, in order to improve business performance overall. In this regard, we can look at this problem as a decision support problem space. Historically, marketers have used a range of approaches ranging from marketing mix modeling, to marketing attribution, to controlled experiments, and more. None of these approaches are perfect, but a combination of these approaches combined with a rigorous decision feedback loop (e.g. in the spirit of the Observe, Orient, Decide, Act loop) can help marketers improve their marketing performance management, and hence their marketing performance, over time.
1908.07218
false
null
We trained word embeddings using either GloVe BIBREF11 or SGNS BIBREF12 on a small or a large corpus. The small corpus consists of the traditional Chinese part of Chinese Gigaword BIBREF13 and ASBC 4.0 BIBREF9 . The large corpus additionally includes the Chinese part of Wikipedia.
What types of word representations are they evaluating?
GloVe; SGNS
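Word-similarity benchmarks like the Rare Word dataset above are conventionally scored by correlating the embeddings' cosine similarities with the averaged human ratings, usually via Spearman rank correlation. The evidence does not state the exact scoring, so the sketch below (tie-free Spearman, toy 2-d vectors) is an assumption for illustration only.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense word vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u))
                  * math.sqrt(sum(b * b for b in v)))

def spearman(xs, ys):
    """Spearman rank correlation, assuming no ties:
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

def evaluate(embeddings, pairs_with_ratings):
    """Correlate embedding cosine similarity with human [0,10] ratings
    over a list of ((word1, word2), rating) pairs."""
    sims, gold = [], []
    for (w1, w2), rating in pairs_with_ratings:
        sims.append(cosine(embeddings[w1], embeddings[w2]))
        gold.append(rating)
    return spearman(sims, gold)
```

An enriched embedding that covers more of the rare words, and orders pair similarities the way the raters did, yields a higher correlation.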
null
false
144
In this section, we briefly describe a baseline and evaluation scripts that we release, with detailed documentation, along with the corpus. In order to automatically compare generated concept maps with reference maps, we propose three metrics. As a concept map is fully defined by the set of its propositions, we can compute precision, recall and F1-scores between the two proposition sets of the generated and reference maps.
What metrics do they use to automatically compare generated concept maps with reference maps?
Precision, recall and F1-scores.
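Since a concept map is fully defined by its set of propositions, the comparison in the evidence above reduces to set overlap. A minimal sketch, assuming propositions are compared by exact match (the released scripts may well use softer matching; that detail is not in the excerpt):

```python
def proposition_prf(generated, reference):
    """Precision, recall, and F1 between the proposition sets of a
    generated and a reference concept map, using exact-match overlap."""
    gen, ref = set(generated), set(reference)
    overlap = len(gen & ref)
    precision = overlap / len(gen) if gen else 0.0
    recall = overlap / len(ref) if ref else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

Propositions can be represented as (concept, relation, concept) triples, so two maps are compared purely on which triples they share.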
1911.06191
false
null
The core idea of dual learning is to leverage the duality between the primal task (mapping from domain $\mathcal {X}$ to domain $\mathcal {Y}$) and dual task (mapping from domain $\mathcal {Y}$ to $\mathcal {X}$ ) to boost the performances of both tasks. MADL BIBREF0 extends the dual learning BIBREF1, BIBREF2 framework by introducing multiple primal and dual models. It was integrated into our submitted systems for German$\leftrightarrow $English and German$\leftrightarrow $French translations. The core idea of dual learning is to leverage the duality between the primal task (mapping from domain $\mathcal {X}$ to domain $\mathcal {Y}$) and dual task (mapping from domain $\mathcal {Y}$ to $\mathcal {X}$ ) to boost the performances of both tasks. MADL BIBREF0 extends the dual learning BIBREF1, BIBREF2 framework by introducing multiple primal and dual models.
How does muli-agent dual learning work?
The answers are shown as follows: * MADL BIBREF0 extends the dual learning BIBREF1, BIBREF2 framework by introducing multiple primal and dual models.
null
false
null
What is the difference between interface and abstract class in Java?
Abstract classes can have methods with implementations, whereas an interface provides absolute abstraction and can't have any method implementations. Note that from Java 8 onwards, it's possible to create default and static methods in an interface that contain method implementations.
null
false
241
The proliferation of opinions expressed in online reviews, blogs, internet forums, and social media has created a pressing need for automated systems which enable customers, companies, or service providers to make informed decisions without having to absorb large amounts of opinionated text. Opinion summarization is the task of automatically generating summaries for a set of opinions about a specific target BIBREF0. Figure FIGREF1 shows various reviews about the movie “Coach Carter” and example summaries generated by humans and automatic systems. The vast majority of previous work BIBREF1 views opinion summarization as the final stage of a three-step process involving: (1) aspect extraction (i.e., finding features pertaining to the target of interest, such as battery life or sound quality); (2) sentiment prediction (i.e., determining the sentiment of the extracted aspects); and (3) summary generation (i.e., presenting the identified opinions to the user). Textual summaries are created following mostly extractive methods which select representative segments (usually sentences) from the source text BIBREF2, BIBREF3, BIBREF4, BIBREF5. Despite being less popular, abstractive approaches seem more appropriate for the task at hand as they attempt to generate summaries which are maximally informative and minimally redundant without simply rearranging passages from the original opinions BIBREF6, BIBREF7, BIBREF8, BIBREF9. General-purpose summarization approaches have recently shown promising results with end-to-end models which are data-driven and take advantage of the success of sequence-to-sequence neural network architectures. Most approaches BIBREF10, BIBREF11, BIBREF12, BIBREF13 encode documents and then decode the learned representations into an abstractive summary, often by attending to the source input BIBREF14 and copying words from it BIBREF15. 
Under this modeling paradigm, it is no longer necessary to identify aspects and their sentiment for the opinion summarization task, as these are learned indirectly from training data (i.e., sets of opinions and their corresponding summaries). These models are usually tested on domains where the input is either one document or a small set of documents. However, the number of opinions tends to be very large (150 for the example in Figure FIGREF1). It is therefore practically unfeasible to train a model in an end-to-end fashion, given the memory limitations of modern hardware. As a result, current approaches BIBREF16, BIBREF17, BIBREF18, BIBREF19 sacrifice end-to-end elegance in favor of a two-stage framework which we call Extract-Abstract: an extractive model first selects a subset of opinions and an abstractive model then generates the summary while conditioning on the extracted subset (see Figure FIGREF5). The extractive pass unfortunately has two drawbacks. Firstly, on account of having access to a subset of opinions, the summaries can be less informative and inaccurate, as shown in Figure FIGREF1. And secondly, user preferences cannot be easily taken into account (e.g., the reader may wish to obtain a summary focusing on the acting or plot of a movie as opposed to a general-purpose summary) since more specialized information might have been removed. In this paper, we propose Condense-Abstract, an alternative two-stage framework which uses all input documents when generating the summary (see Figure FIGREF5). We view the opinion summarization problem as an instance of multi-source transduction BIBREF20; we first represent the input documents as multiple encodings, aiming to condense their meaning and distill information relating to sentiment and various aspects of the target being reviewed. These condensed representations are then aggregated using a multi-source fusion module based on which an opinion summary is generated using an abstractive model. 
We also introduce a zero-shot customization technique allowing users to control important aspects of the generated summary at test time. Our approach enables controllable generation while leveraging the full spectrum of opinions available for a specific target. We perform experiments on a dataset consisting of movie reviews and opinion summaries elicited from the Rotten Tomatoes website (BIBREF16; see Figure FIGREF1). Our framework outperforms state-of-the-art models by a large margin using automatic metrics and in a judgment elicitation study. We also verify that our zero-shot customization technique can effectively generate need-specific summaries.
What framework do they propose?
Condense-Abstract, an alternative two-stage framework.
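The Condense step produces one encoding per input review, and a fusion module then aggregates them before generation. A minimal sketch of that condense-and-fuse interface, using illustrative bag-of-words encodings and mean-pooling fusion (not the paper's learned encoders or its actual fusion module):

```python
# Hypothetical sketch: condense N reviews into fixed-size encodings,
# then fuse them by mean pooling, as a stand-in for the Condense-Abstract
# pipeline's first stage. Real systems use learned neural encoders.

from collections import Counter

def condense(review, vocab):
    """Encode one review as a fixed-size count vector over a shared vocab."""
    counts = Counter(review.lower().split())
    return [counts[w] for w in vocab]

def fuse(encodings):
    """Mean-pool the per-review encodings into a single condensed vector."""
    n = len(encodings)
    return [sum(col) / n for col in zip(*encodings)]

reviews = ["great acting great plot", "plot was slow", "acting was great"]
vocab = ["great", "acting", "plot", "slow"]   # illustrative vocabulary
encodings = [condense(r, vocab) for r in reviews]
fused = fuse(encodings)   # one vector summarizing all inputs
```

The key point the sketch illustrates is that every input review contributes to the fused representation, unlike the Extract-Abstract pipeline, where discarded reviews cannot influence the summary at all.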
1909.13375
false
null
MTMSN BIBREF4 is the first, and only model so far, that specifically tried to tackle the multi-span questions of DROP. Their approach consisted of two parts. The first was to train a dedicated categorical variable to predict the number of spans to extract. The second was to generalize the single-span head method of extracting a span, by utilizing the non-maximum suppression (NMS) algorithm BIBREF7 to find the most probable set of non-overlapping spans. The number of spans to extract was determined by the aforementioned categorical variable. MTMSN BIBREF4 is the first, and only model so far, that specifically tried to tackle the multi-span questions of DROP. Their approach consisted of two parts. The first was to train a dedicated categorical variable to predict the number of spans to extract. The second was to generalize the single-span head method of extracting a span, by utilizing the non-maximum suppression (NMS) algorithm BIBREF7 to find the most probable set of non-overlapping spans. The number of spans to extract was determined by the aforementioned categorical variable
What approach did previous models use for multi-span questions?
Only MTMSN specifically tried to tackle the multi-span questions. Their approach consisted of two parts: first, training a dedicated categorical variable to predict the number of spans to extract; and second, generalizing the single-span head method of extracting a span.
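The non-maximum suppression step they generalize can be sketched as plain greedy 1-D span NMS (a generic illustration, not MTMSN's exact implementation; the spans and scores below are made up):

```python
def overlaps(a, b):
    """True if two (start, end) spans overlap (inclusive token indices)."""
    return a[0] <= b[1] and b[0] <= a[1]

def span_nms(spans, k):
    """Greedy NMS: keep the k highest-scoring non-overlapping spans.
    spans: list of ((start, end), score); k: number of spans to extract,
    which in MTMSN comes from the predicted span-count variable."""
    kept = []
    for span, score in sorted(spans, key=lambda x: -x[1]):
        if len(kept) == k:
            break
        if all(not overlaps(span, s) for s in kept):
            kept.append(span)
    return sorted(kept)

# Illustrative candidates: (1, 3) is suppressed because it overlaps (0, 2).
candidates = [((0, 2), 0.9), ((1, 3), 0.8), ((5, 6), 0.7), ((8, 9), 0.4)]
picked = span_nms(candidates, k=2)
```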
null
false
null
What is it like being the last person on earth?
You would think the world would be yours to explore and enjoy. In actuality, it's incredibly lonely. The quiet is almost deafening. It's also much harder to get around, as gas stations no longer work now that there is no electricity. I never thought I would so desperately miss waiting in line and going to a crowded restaurant.
null
false
null
Give me a list of the best comedy podcast to listen to
Smartless, Conan O'Brien Needs A Friend, Tim Dillon Show, The Toast, Girls Gotta Eat
null
false
null
William Eldred Toye CM (born June 19, 1926) is a Canadian editor, writer and literary critic. Biography Toye grew up in Toronto, Ontario. He graduated from the University of Toronto in 1948 where he majored in journalism. He obtained a job with the Canadian branch of Oxford University Press which at the time was nothing more than a desk in a warehouse. He worked at the job for the next 43 years and in 1969 became the Editorial Director. He retired in 1991 and continued with them on a freelance basis after that. Known for his astute editing abilities, he was one of the first people to usher in an expanded and flourishing Canadian publishing business. He was one of the founding editors of the Tamarack Review in 1956 which ended publication in 1982. The early work of many notable Canadian authors appeared in its pages. Toye has authored or edited several books including the Oxford Companion to Canadian Literature, the Oxford Anthology of Canadian Literature and A Book of Canada. He has also written several children's books. In 1993 he was made a member of the Order of Canada which said in its award that Toye "has been a highly regarded editor and publisher, well-liked by writers for his patience, fine judgement, tact and unerring taste. An award-winning children's author, he was also one of the founders of The Tamarack Review, where he was influential for many years in the development and encouragement of new Canadian talent." Works A Picture History of Canada (1956) by Clarke Hutton [Contributor, also Ivon Owen] The St. Lawrence (1959) A Book of Canada (1962) The Mountain Goats of Temlaham (1969) with Elizabeth Cleaver How Summer Came to Canada (1969) with Elizabeth Cleaver Cartier discovers the St. Lawrence (1970) Supplement to the Oxford companion to Canadian history and literature (1973) Toronto (1975) by John de Visser [Contributor] Simon and the Golden Sword (1976) by Frank Newfeld [Contributor, also Kevin W. 
Macdonald] The Loon's Necklace (1977) with Elizabeth Cleaver The fire stealer (1979) with Elizabeth Cleaver The Oxford Anthology of Canadian literature (1981) with Robert Weaver The Oxford Companion to Canadian Literature (1983) with Eugene Benson, 2nd edition, 1997 City light: a Portrait of Toronto (1983) Letters of Marshall Mcluhan, 1911-1980 (1987) with Corinne McLuhan & Mattie Molinaro The concise Oxford companion to Canadian literature (2001) William Toye on Canadian literature (2005)
Please give me a bulleted list of things William Eldred Toye CM worked on
- A Picture History of Canada (1956) by Clarke Hutton [Contributor, also Ivon Owen]
- The St. Lawrence (1959)
- A Book of Canada (1962)
- The Mountain Goats of Temlaham (1969) with Elizabeth Cleaver
- How Summer Came to Canada (1969) with Elizabeth Cleaver
- Cartier Discovers the St. Lawrence (1970)
- Supplement to the Oxford Companion to Canadian History and Literature (1973)
- Toronto (1975) by John de Visser [Contributor]
- Simon and the Golden Sword (1976) by Frank Newfeld [Contributor, also Kevin W. Macdonald]
- The Loon's Necklace (1977) with Elizabeth Cleaver
- The Fire Stealer (1979) with Elizabeth Cleaver
- The Oxford Anthology of Canadian Literature (1981) with Robert Weaver
- The Oxford Companion to Canadian Literature (1983) with Eugene Benson; 2nd edition, 1997
- City Light: A Portrait of Toronto (1983)
- Letters of Marshall McLuhan, 1911-1980 (1987) with Corinne McLuhan & Mattie Molinaro
- The Concise Oxford Companion to Canadian Literature (2001)
- William Toye on Canadian Literature (2005)
null
false
null
Identify which animal species is alive or extinct: Great White Shark, Patagotitan
Patagotitan is extinct, Great White Shark is alive.
null
false
null
When protecting your company from a security breach, categorize each of the following as "Enhancing Security" or "Diminishing Security": Multi-factor Authentication, VPN, Unlocked Computers, Shared Passwords, Anti-Malware Solution
Multi-factor Authentication: Enhancing Security
VPN: Enhancing Security
Unlocked Computers: Diminishing Security
Shared Passwords: Diminishing Security
Anti-Malware Solution: Enhancing Security
null
false
null
Van Strij was born and died in Dordrecht. According to the RKD he was the son of Leendert and the brother of Jacob. Both he and his brother were pupils of his father, and after taking lessons from Joris Ponse, Abraham joined his father's workshop. He was a member of the Dordrecht Guild of St. Luke and later its director. His son Abraham II also became a painter, and besides his son, he and his brother took on pupils after taking over their father's workshop. Their pupils were Pieter Rudolph Kleijn, Johannes van Lexmond, Jacob de Meijer, Johannes Rutten, Johannes Schoenmakers, Johannes Christiaan Schotel, and Gillis Smak Gregoor.
Extract the pupils of Abraham van Strij from the text. Separate them with a comma.
Pieter Rudolph Kleijn, Johannes van Lexmond, Jacob de Meijer, Johannes Rutten, Johannes Schoenmakers, Johannes Christiaan Schotel, Gillis Smak Gregoor.
null
false
null
What is Redis?
Redis is an in-memory data structure store, used as a distributed, in-memory key-value database, cache and message broker, with optional durability.
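Its core abstraction is a key-value store whose entries may carry an expiry time. A toy in-memory sketch of that SET/GET/TTL interface (illustrative only; Redis itself is a networked C server with many more data structures):

```python
import time

class ToyKV:
    """A tiny dict-backed sketch of Redis-style SET/GET with optional TTL."""

    def __init__(self):
        self._data = {}   # key -> (value, expires_at or None)

    def set(self, key, value, ttl=None):
        """Store a value; ttl (seconds) makes the key expire automatically."""
        expires = time.monotonic() + ttl if ttl is not None else None
        self._data[key] = (value, expires)

    def get(self, key):
        """Return the value, or None if missing or expired."""
        item = self._data.get(key)
        if item is None:
            return None
        value, expires = item
        if expires is not None and time.monotonic() >= expires:
            del self._data[key]   # lazy expiry on access
            return None
        return value

kv = ToyKV()
kv.set("greeting", "hello")
kv.set("session", "abc123", ttl=0.01)   # expires after 10 ms
```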
null
false
null
What is a good approach to grocery shopping?
Grocery shopping can be a difficult task because grocery stores are often crowded. A good approach to grocery shopping is to go during hours when not a lot of people are grocery shopping. An example would be during a lunch break instead of going at night when most people are off of work or going during the work week instead of going on the weekend. Avoid Sundays at all costs! Sundays are the busiest day for grocery shopping. Once you've selected your day and before you've gone to the store, make a list. A list will make you more efficient once you get to the grocery store. Organize your list by aisles. An example of this is organizing your list by produce, meats, household supplies, snacks, and drinks. This way you can go aisle-to-aisle skipping aisles that don't have what you are shopping for.
null
false
111
We have demonstrated that using word vectors that capture only semantic and syntactic characteristics may be improved by taking into account their sentimental aspects as well. Our approaches are cross-lingual and cross-domain. They can be applied to other domains and other languages than Turkish and English with minor changes. Our study is one of the few ones that perform sentiment analysis in Turkish and leverages sentimental characteristics of words in generating word vectors and outperforms all the others. Any of the approaches we propose can be used independently of the others. Our approaches without using sentiment labels can be applied to other classification tasks, such as topic classification and concept mining. The experiments show that even unsupervised approaches, as in the corpus-based approach, can outperform supervised approaches in classification tasks. Combining some approaches, which can compensate for what others lack, can help us build better vectors. Our word vectors are created by conventional machine learning algorithms; however, they, as in the corpus-based model, produce state-of-the-art results. Although we preferred to use a classical machine learning algorithm, which is SVM, over a neural network classifier to predict the labels of reviews, we achieved accuracies of over 90 per cent for the Turkish movie corpus and about 88 per cent for the English Twitter dataset. We performed only binary sentiment classification in this study as most of the studies in the literature do. We will extend our system in future by using neutral reviews as well. We also plan to employ Turkish WordNet to enhance the generalisability of our embeddings as another future work. We will extend our system in future by using neutral reviews as well. We also plan to employ Turkish WordNet to enhance the generalisability of our embeddings as another future work.
Are there any work plans to improve the approach for the future?
They will extend their system in future by using neutral reviews as well. They also plan to employ Turkish WordNet to enhance the generalisability of their embeddings as another future work.
null
false
null
Bloomberg L.P. is a privately held financial, software, data, and media company headquartered in Midtown Manhattan, New York City. It was co-founded by Michael Bloomberg in 1981, with Thomas Secunda, Duncan MacMillan, Charles Zegar, and a 12% ownership investment by Bank of America through their brokerage subsidiary Merrill Lynch.
From the passage provided, extract the founders of Bloomberg L.P. Separate them with a comma.
Michael Bloomberg, Thomas Secunda, Duncan MacMillan, Charles Zegar
1703.08885
false
null
The learning model for retrieval is trained by an oracle constructed using distant supervision. Using the answer labels in the training set, we can find appropriate articles that include the information requested in the question. For example, for x_to_movie question type, the answer movie articles are the correct articles to be retrieved. On the other hand, for questions in movie_to_x type, the movie in the question should be retrieved. Having collected the labels, we train a retrieval model for classifying a question and article pair as relevant or not relevant. Figure 5 gives an overview of the model, which uses a Word Level Attention (WLA) mechanism. First, the question and article are embedded into vector sequences, using the same method as the comprehension model. We do not use anonymization here, to retain simplicity. Otherwise, the anonymization procedure would have to be repeated several times for a potentially large collection of documents. These vector sequences are next fed to a Bi-GRU, to produce the outputs $v$ (for the question) and $H_c$ (for the document) similar to the previous section. The learning model for retrieval is trained by an oracle constructed using distant supervision. Using the answer labels in the training set, we can find appropriate articles that include the information requested in the question. First, the question and article are embedded into vector sequences, using the same method as the comprehension model. We do not use anonymization here, to retain simplicity. Otherwise, the anonymization procedure would have to be repeated several times for a potentially large collection of documents. These vector sequences are next fed to a Bi-GRU, to produce the outputs $v$ (for the question) and $H_c$ (for the document) similar to the previous section.
How can a neural model be used for a retrieval if the input is the entire Wikipedia?
The answers are shown as follows: * Using the answer labels in the training set, we can find appropriate articles that include the information requested in the question.
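The relevance classifier's interface — score each question/article pair, then rank — can be caricatured with a much simpler scorer such as token overlap (the paper's actual model uses embeddings, a Bi-GRU, and word-level attention; everything below is an illustrative stand-in):

```python
def relevance_score(question, article):
    """Toy question/article relevance: Jaccard overlap of token sets."""
    q = set(question.lower().split())
    a = set(article.lower().split())
    return len(q & a) / len(q | a) if q | a else 0.0

def retrieve(question, articles, top_k=1):
    """Rank candidate articles by relevance and return the top_k."""
    ranked = sorted(articles, key=lambda d: -relevance_score(question, d))
    return ranked[:top_k]

docs = ["the movie stars tom hanks", "a recipe for apple pie"]
best = retrieve("which movie stars tom hanks", docs)
```

Distant supervision fits this interface naturally: the answer labels identify which article should rank first, giving positive and negative pairs for training the real (learned) scorer.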
null
false
231
Analysis of controversy in Wikipedia, online news and social media has attracted considerable attention in recent years. Exploiting the collaborative structure of Wikipedia, estimators of the level of controversy in a Wikipedia article were developed based on the edit-history of the article BIBREF0, BIBREF3. Along these lines, BIBREF4 detect controversy based on mutual reverts, bi-polarity in the collaboration network, and mutually-reinforced scores for editors and articles. Similarly, BIBREF1 classify whether a Wikipedia page is controversial through the combined evaluation of the topically neighboring set of pages. Content analysis of controversial Wikipedia articles has been used to evaluate the level of controversy of other documents (e.g., web pages) by mapping them to related Wikipedia articles BIBREF5. BIBREF6 further build a language model, which enhances predictions made by existing classifiers, by inferring word probabilities from Wikipedia articles prominent in Wikipedia controversy features (mainly signals in edit history as discussed above) and from articles retrieved by manually selected query terms, believed to indicate controversy. BIBREF7 detect controversy in news items by inspecting terms with excessive frequency in contexts containing sentiment words, and BIBREF8 study controversy in user comments of news articles using lexicons. Finally, BIBREF9 suggest that controversy is not a universal but rather a community-related concept, and, therefore, should be studied in context. Here we measure a concept's controversiality from the explicit sentence-level context in which it is mentioned. In this, our approach is reminiscent of BIBREF10, who suggest a similar approach to detect abstract concepts. Choi et al. (2010) detect controversy in news items by inspecting terms with excessive frequency in contexts containing sentiment words, and Sree et al. (2015) study controversy in user comments of news articles using lexicons.
How do Choi et al. detect controversy in news items?
By inspecting terms with excessive frequency in contexts containing sentiment words.
null
false
null
Identify which instrument is string or percussion: Jawbone, Grand Stick
Grand Stick is string, Jawbone is percussion.
null
false
null
Taylor Harry Fritz (born October 28, 1997) is an American professional tennis player. He has a career-high singles ranking of world No. 5 by the Association of Tennis Professionals (ATP), achieved on February 27, 2023, and a doubles ranking of world No. 104, achieved on July 26, 2021. Fritz has won five ATP Tour singles titles, including a Masters 1000 title at the 2022 Indian Wells Masters. His best result in a Grand Slam tournament was reaching the quarterfinals of the 2022 Wimbledon Championships. He is currently the No. 1 American player. Fritz reached his maiden ATP final in only his third career event, the 2016 Memphis Open. Only one other American, John Isner, has reached an ATP final in fewer career events. He won a junior major singles title at the 2015 US Open, and was the runner-up in junior singles at the 2015 French Open.
Given a reference text about Taylor Fritz, tell me what country he's from and his best finish at a Grand Slam tournament?
Taylor Fritz is from the United States and his best finish in a Grand Slam tournament was reaching the quarterfinals at Wimbledon in 2022.
null
false
null
Quest of the Ancients is an independent role-playing game published in the late 1980s by Unicorn Game Publications. It is mainly a Dungeons & Dragons clone, in that it was character class- and level-based, used a Dungeon & Dragons style Vancian Magic system, and used a skill system similar to the Advanced Dungeons & Dragons thief skill system. Most notably, it used a thirty sided die (or d30) for combat resolution. Written by Vince Garcia of Fresno, California, the game system implements a large number of character classes.
What device is used to resolve combat outcomes in Quest of the Ancients?
A thirty sided die (or d30) is used for combat resolution.
null
false
null
West Ham have been winners of the FA Cup three times (1964, 1975 and 1980) and runners-up twice (1923 and 2006). The club have reached two major European finals, winning the European Cup Winners' Cup in 1965 and finishing runners-up in the same competition in 1976. West Ham also won the Intertoto Cup in 1999. They are one of eight clubs never to have fallen below the second tier of English football, spending 63 of 95 league seasons in the top flight, up to and including the 2020–21 season. The club's highest league position to date came in 1985–86, when they achieved third place in the then First Division.
Which trophies have West Ham won?
West Ham won the FA Cup in 1964, 1975 and 1980. They won the European Cup Winners' Cup in 1965 and the Intertoto Cup in 1999.
null
false
null
The Chhatrapati Shivaji Maharaj International Airport (formerly Sahar International Airport) is the main aviation hub in the city and the second busiest airport in India in terms of passenger traffic. It handled 36.6 million (3.66 crore) passengers and 694,300 tonnes of cargo during FY 2014–2015. An upgrade plan was initiated in 2006, targeted at increasing the capacity of the airport to handle up to 40 million (4 crore) passengers annually and the new terminal T2 was opened in February 2014. The proposed Navi Mumbai International airport to be built in the Kopra-Panvel area has been sanctioned by the Indian Government and will help relieve the increasing traffic burden on the existing airport. The Juhu Aerodrome was India's first airport, and now hosts the Bombay Flying Club and a heliport operated by state-owned Pawan Hans.
From the passage provided, extract information about the new proposed airport in Mumbai
The proposed Navi Mumbai International Airport, to be built in the Kopra-Panvel area, has been sanctioned by the Indian Government and will help relieve the increasing traffic burden on the existing airport.
null
false
null
What is the capital of Arizona?
Phoenix is the capital of Arizona.
null
false
null
What is a sleeping bag?
A sleeping bag is a warm blanket that can be zipped up to enclose the sleeper. Sleeping bags are typically filled with either down or synthetic material (down is lighter but stays wet longer) and is often used for staying at night while camping.
null
false
null
Cricket is a bat-and-ball game played between two teams of eleven players on a field at the centre of which is a 22-yard (20-metre) pitch with a wicket at each end, each comprising two bails balanced on three stumps. The batting side scores runs by striking the ball bowled at one of the wickets with the bat and then running between the wickets, while the bowling and fielding side tries to prevent this (by preventing the ball from leaving the field, and getting the ball to either wicket) and dismiss each batter (so they are "out"). Means of dismissal include being bowled, when the ball hits the stumps and dislodges the bails, and by the fielding side either catching the ball after it is hit by the bat, but before it hits the ground, or hitting a wicket with the ball before a batter can cross the crease in front of the wicket. When ten batters have been dismissed, the innings ends and the teams swap roles. The game is adjudicated by two umpires, aided by a third umpire and match referee in international matches. They communicate with two off-field scorers who record the match's statistical information.
Kindly provide a bullet list of key aspects of Cricket sport from the given passage.
1. Cricket is a game played with bat and ball between 2 teams of 11 players each.
2. The game is played on a 22-yard pitch which has a wicket (2 bails on 3 stumps) at both ends.
3. One team bats while the other bowls and fields.
4. The batting side scores runs by striking the ball and then running between the wickets.
5. The bowling side tries to prevent the batting side from scoring by keeping the ball from leaving the field and by getting the ball to either wicket so that a batter can be dismissed.
6. Dismissal can happen when the ball hits the stumps and dislodges the bails, when a fielder on the bowling side catches the ball after it hits the bat but before it hits the ground, or by hitting the wicket with the ball before the batter can cross the crease in front of the wicket.
7. When 10 batters are dismissed, the innings ends and the roles of the teams are reversed.
8. The game is supervised by 2 umpires, aided by a third umpire and a match referee in international matches.
9. The umpires communicate with two off-field scorers who record the match's statistical information.
null
false
335
The data for this study was gathered from the ted.com website on November 15, 2017. We removed the talks published six months before the crawling date to make sure each talk has enough ratings for a robust analysis. More specifically, we filtered any talk that— The data for this study was gathered from the ted.com website on November 15, 2017.
What was the data for this study gathered from?
The data for this study was gathered from the ted.com website on November 15, 2017.
null
false
null
The name "Busan" is the Revised Romanization of the city's Korean name since the late 15th century. It officially replaced the earlier McCune-Reischauer romanization Pusan in 2000. During the Japanese period it was spelled "Fuzan". The name 釜山 (now written 부산 using the Korean alphabet) is Sino-Korean for "Cauldron Mountain", believed to be a former name of Mt Hwangryeong (황령산, 荒嶺山, Hwangryeong-san) west of the city center. The area's ancient state Mt Geochil (거칠산국, 居柒山國, Geochilsan-guk, "Rough-Mountain Land") is similarly thought to refer to the same mountain, which towers over the town's harbor on the Suyeong. (The later Silla district of Geochilsan-gun was renamed Dongnae in 757.)
Where does the name Busan (city in Korea) come from?
"Busan" is the romanization of the city's Korean name - 부산. Previously, the name was romanized as "Pusan" until it was officially replaced in 2000. The meaning of the name in Sino-Korean is "Cauldron Mountain", believed to be the former name of a nearby mountain now known as Mt Hwangryeong (황령산).
null
false
null
What is the largest pollutant?
Carbon dioxide (CO2) - a greenhouse gas emitted by burning fossil fuels, deforestation, and other human activities that contribute to climate change.
Particulate matter - tiny particles of pollutants such as dust, smoke, and soot that can be inhaled and cause respiratory problems.
Nitrogen oxides (NOx) - gases emitted by vehicle exhausts and industrial processes that contribute to air pollution and can also react with other pollutants to form ozone.
Sulfur dioxide (SO2) - a gas emitted by burning fossil fuels and industrial processes that can cause respiratory problems and acid rain.
Methane (CH4) - a potent greenhouse gas emitted by agriculture, waste management, and natural gas production.
Chlorofluorocarbons (CFCs) - chemicals used in refrigeration, air conditioning, and industrial processes that contribute to the depletion of the ozone layer.
1901.03866
false
null
1) HAS-QA outperforms traditional RC baselines with a large gap, such as GA, BiDAF and AQA, listed in the first part. For example, in QuasarT, it improves 16.8% in EM score and 20.4% in F1 score. As the RC task is just a special case of the OpenQA task, some experiments on the standard SQuAD dataset (dev set) BIBREF9 show that HAS-QA yields EM/F1: 0.719/0.798, which is comparable with the best released single model, Reinforced Mnemonic Reader BIBREF25, on the leaderboard (dev set): EM/F1: 0.721/0.816. Our performance is slightly worse because Reinforced Mnemonic Reader directly uses the accurate answer span, while we use multiple distantly supervised answer spans. That may introduce noise in the setting of SQuAD, since only one span is accurate. 2) HAS-QA outperforms recent OpenQA baselines, such as DrQA, R^3 and Shared-Norm, listed in the second part. For example, in QuasarT, it improves 4.6% in EM score and 3.5% in F1 score. Table 2: Experimental results on OpenQA datasets QuasarT, TriviaQA and SearchQA. EM: Exact Match. HAS-QA outperforms traditional RC baselines with a large gap, such as GA, BiDAF and AQA, listed in the first part. For example, in QuasarT, it improves 16.8% in EM score and 20.4% in F1 score. As the RC task is just a special case of the OpenQA task, some experiments on the standard SQuAD dataset (dev set) BIBREF9 show that HAS-QA yields EM/F1: 0.719/0.798, which is comparable with the best released single model, Reinforced Mnemonic Reader BIBREF25, on the leaderboard (dev set): EM/F1: 0.721/0.816. HAS-QA outperforms recent OpenQA baselines, such as DrQA, R^3 and Shared-Norm, listed in the second part. For example, in QuasarT, it improves 4.6% in EM score and 3.5% in F1 score. Table 2: Experimental results on OpenQA datasets QuasarT, TriviaQA and SearchQA. EM: Exact Match.
How much does HAS-QA improve over baselines?
The answers are shown as follows: * For example, in QuasarT, it improves 16.8% in EM score and 20.4% in F1 score. * For example, in QuasarT, it improves 4.6% in EM score and 3.5% in F1 score.
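The EM and F1 numbers quoted here are the standard extractive-QA metrics; a simplified version of how they are computed (omitting the full answer normalization — article and punctuation stripping — that official evaluation scripts apply):

```python
from collections import Counter

def exact_match(pred, gold):
    """1 if the prediction equals the gold answer (case-insensitive), else 0."""
    return int(pred.strip().lower() == gold.strip().lower())

def f1_score(pred, gold):
    """Token-level F1 between a predicted and a gold answer string."""
    p, g = pred.lower().split(), gold.lower().split()
    common = Counter(p) & Counter(g)        # multiset intersection of tokens
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(p)
    recall = overlap / len(g)
    return 2 * precision * recall / (precision + recall)

em = exact_match("the eiffel tower", "The Eiffel Tower")
f1 = f1_score("eiffel tower in paris", "the eiffel tower")
```

A percentage-point "improvement in EM" as reported above is simply the difference between two systems' averages of these per-question scores.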
null
false
null
What is connectomics?
Connectomics is the production and study of connectomes: comprehensive maps of connections within an organism's nervous system. More generally, it can be thought of as the study of neuronal wiring diagrams with a focus on how structural connectivity, individual synapses, cellular morphology, and cellular ultrastructure contribute to the make up of a network. The nervous system is a network made of billions of connections and these connections are responsible for our thoughts, emotions, actions, memories, function and dysfunction. Therefore, the study of connectomics aims to advance our understanding of mental health and cognition by understanding how cells in the nervous system are connected and communicate. Because these structures are extremely complex, methods within this field use a high-throughput application of functional and structural neural imaging, most commonly magnetic resonance imaging (MRI), electron microscopy, and histological techniques in order to increase the speed, efficiency, and resolution of these nervous system maps. To date, tens of large scale datasets have been collected spanning the nervous system including the various areas of cortex, cerebellum, the retina, the peripheral nervous system and neuromuscular junctions. Generally speaking, there are two types of connectomes; macroscale and microscale. Macroscale connectomics refers to using functional and structural MRI data to map out large fiber tracts and functional gray matter areas within the brain in terms of blood flow (functional) and water diffusivity (structural). Microscale connectomics is the mapping of small organisms' complete connectome using microscopy and histology. That is, all connections that exist in their central nervous system.
null
false
null
How to raise a kitten?
When it comes to raising a kitten, it's like raising a child: it needs extra care. The first few months are a crucial time in shaping your kitten's emotional and physical well-being. In the first 8 weeks, there are 4 things you should pay attention to:
1) Always keep kittens warm. Because kittens under four weeks of age do not have the ability to thermoregulate, you must help them maintain body warmth. A warmed Snuggle Safe disk at the opening of the cage or crate is a good idea.
2) Provide kittens with adequate nutrition. Kittens should gain about ½ ounce (14 grams) per day or 4 ounces (113 grams) per week. When feeding, the formula should be warm, around 100°F (38°C), but not too hot.
3) Keep kittens clean. If kittens are orphaned and do not have a mom to regularly groom them, it is important to help groom them and keep them clean. It is okay to wash a kitten with warm water, but focus only on the areas needing cleaning.
4) Provide socialisation with people and with other kittens. It is good to get them used to contact with people when they're 3 weeks old. They also learn from their littermates and their mom. Play is the best way to help them develop physically and socially.
null
false
null
How often do Germans vote?
Federal elections in Germany are held every four years. Germans also vote in the European election, which is conducted every five years. Furthermore, state elections are typically held every five years, but the dates are staggered between different states. Finally, referendums might occur irregularly. The voter turnout varies; an individual voter might not participate in an election.
null
false
null
Give me six examples of common productivity myths.
- Waking up early will make you more productive - Everyone has "the same" 24 hours each day in which to be productive - Being busy means being productive - Multi-tasking and task shifting will allow you to be more productive - If you're happy with your work, you will be more productive - Hard work inevitably leads to productivity
null
false
null
What should I think about when buying a car (summarization)
There are many aspects to consider when choosing a car. Your budget and your needs of the vehicle (seats, luggage space, ability to drive on rough terrain) are probably the main two. These factors probably then influence how you prioritise between age of the vehicle and the mileage you're willing for it to have covered already, the type of car (convertible, estate, coupe, 4x4/SUV etc.) and the brands you consider desirable. Buying a car is typically somebody's second-largest expense, so proceed with caution; if you're planning on buying a used car, you should thoroughly research its history via online 'vehicle history check' reports and conduct the relevant checks to make sure that the person selling the car is entitled to do so.
null
false
104
We propose a Bayesian model of semantic role induction (SRI) that uses crosslingual latent variables to capture role alignments in parallel corpora. The crosslingual latent variables capture correlations between roles in different languages, and regularize the parameter estimates of the monolingual models. Because this is a joint Bayesian model of multilingual SRI, we can apply the same model to a variety of training scenarios just by changing the inference procedure appropriately. We evaluate monolingual SRI with a large unlabeled dataset, bilingual SRI with a parallel corpus, bilingual SRI with annotations available for the source language, and monolingual SRI with a small labeled dataset. Increasing the amount of monolingual unlabeled data significantly improves SRI in German but not in English. Adding word alignments in parallel sentences results in small, non significant improvements, even if there is some labeled data available in the source language. This difficulty in showing the usefulness of parallel corpora for SRI may be due to the current assumptions about role alignments, which mean that only a small percentage of roles are aligned. Further analyses reveals that annotating small amounts of data can easily outperform the performance gains obtained by adding large unlabeled dataset as well as adding parallel corpora. Future work includes training on different language pairs, on more than two languages, and with more inclusive models of role alignment. We propose a Bayesian model of semantic role induction (SRI) that uses crosslingual latent variables to capture role alignments in parallel corpora.
What does the proposed model use to capture role alignments in parallel corpora?
Crosslingual latent variables.
null
false
458
Table 4: Computational cost of our models at test time to tag s new structures. In Odeen r=24,794, b=32, s=1,176. Notice how CRNs offer a good balance between computational efficiency and performance, this trade-off is regulated by a single parameter, the number of beams. Table 5: Computational cost of our models at test time to produce the textual rule in output
Inferring whether an instance x satisfies a concept P is surprisingly involved, as it requires generating a (presumably large) number of candidate descriptions for P and then checking, for each description, whether x satisfies it. Presumably this scales poorly with language complexity. Is this efficient at all?
Please note that this operation is fully parallelizable, so several conjectures can be tested in the same GPU batch. Tables 3 and 4 in appendix C display the computational cost and the execution time in seconds, which in Odeen remains in the same order of magnitude as the traditional end-to-end approach with t=300 generated conjectures. Moreover, CRNs expose a parameter t at test time which controls the trade-off between computational cost and performance. As shown in the inset figure in the “Adjustable thinking time” paragraph in section 5, very cheap models with t=50 or even 10 still maintain a significant performance gap over the traditional empiricist models.
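The test-time procedure described above can be sketched in plain Python. All names and the conjecture format here are illustrative assumptions; the actual CRN implementation batches these checks on a GPU.

```python
# Hypothetical sketch: generate t candidate descriptions (conjectures) for a
# concept and check, for each one, whether the instance satisfies it. The
# parameter t controls the compute/performance trade-off.

def satisfies(instance, conjecture):
    """A conjecture here is a conjunction of (attribute, value) constraints."""
    return all(instance.get(attr) == val for attr, val in conjecture)

def score_concept(instance, conjectures, t):
    """Test only the first t conjectures: smaller t means cheaper inference."""
    tested = conjectures[:t]
    hits = sum(satisfies(instance, c) for c in tested)
    return hits / max(len(tested), 1)

x = {"color": "red", "shape": "cube", "size": "small"}
conjs = [
    [("color", "red")],
    [("color", "red"), ("shape", "cube")],
    [("shape", "sphere")],
]
print(score_concept(x, conjs, t=3))  # 2 of 3 conjectures satisfied -> 0.666...
```

Since each `satisfies` call is independent, the loop over conjectures maps directly onto a batch dimension, which is what makes the GPU parallelization described above possible.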
null
false
null
What is mass spectrometry?
Mass spectrometry (MS) is an analytical technique that is used to measure the mass-to-charge ratio of ions. The results are presented as a mass spectrum, a plot of intensity as a function of the mass-to-charge ratio. Mass spectrometry is used in many different fields and is applied to pure samples as well as complex mixtures. A mass spectrum is a type of plot of the ion signal as a function of the mass-to-charge ratio. These spectra are used to determine the elemental or isotopic signature of a sample, the masses of particles and of molecules, and to elucidate the chemical identity or structure of molecules and other chemical compounds. In a typical MS procedure, a sample, which may be solid, liquid, or gaseous, is ionized, for example by bombarding it with a beam of electrons. This may cause some of the sample's molecules to break up into positively charged fragments or simply become positively charged without fragmenting. These ions (fragments) are then separated according to their mass-to-charge ratio, for example by accelerating them and subjecting them to an electric or magnetic field: ions of the same mass-to-charge ratio will undergo the same amount of deflection. The ions are detected by a mechanism capable of detecting charged particles, such as an electron multiplier. Results are displayed as spectra of the signal intensity of detected ions as a function of the mass-to-charge ratio. The atoms or molecules in the sample can be identified by correlating known masses (e.g. an entire molecule) to the identified masses or through a characteristic fragmentation pattern.
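As a small illustration of the last step, a mass spectrum can be treated as a list of (m/z, intensity) pairs. The sketch below, with made-up peak values, finds the base peak (the most intense ion) and normalizes intensities to it, the conventional 0-100 relative-abundance scale.

```python
# Illustrative only: hypothetical (m/z, intensity) pairs, not real data.
spectrum = [(15.0, 30.0), (29.0, 45.0), (43.0, 100.0), (58.0, 12.0)]

def base_peak(spectrum):
    """Return the (m/z, intensity) pair with the highest intensity."""
    return max(spectrum, key=lambda peak: peak[1])

def relative_abundance(spectrum):
    """Normalize intensities to the base peak (conventional 0-100 scale)."""
    _, top = base_peak(spectrum)
    return [(mz, 100.0 * i / top) for mz, i in spectrum]

print(base_peak(spectrum))  # (43.0, 100.0)
```

Matching the m/z values of such peaks against known masses or fragmentation patterns is how the identification step described above proceeds.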
null
false
null
What are some dishes that avocados can be used in?
Avocados are a delicious creamy fruit that can be used in a variety of dishes and in cooking. Avocados can be sliced up and added into salads, or they can be mashed up and spread on toast to make avocado toast. Avocados are also the main ingredient in guacamole. They can also be diced into small cubes and added into poke. Some people also blend avocados and put them into smoothies, or use it in desserts as a healthier alternative ingredient.
null
false
186
Story-telling is on the frontier of current text generation technology: stories must remain thematically consistent across the complete document, requiring modeling very long range dependencies; stories require creativity; and stories need a high level plot, necessitating planning ahead rather than word-by-word generation BIBREF0 . We tackle the challenges of story-telling with a hierarchical model, which first generates a sentence called the prompt describing the topic for the story, and then conditions on this prompt when generating the story. Conditioning on the prompt or premise makes it easier to generate consistent stories because they provide grounding for the overall plot. It also reduces the tendency of standard sequence models to drift off topic. We find that standard sequence-to-sequence (seq2seq) models BIBREF1 applied to hierarchical story generation are prone to degenerating into language models that pay little attention to the writing prompt (a problem that has been noted in other domains, such as dialogue response generation BIBREF2 ). This failure is due to the complex and underspecified dependencies between the prompt and the story, which are much harder to model than the closer dependencies required for language modeling (for example, consider the subtle relationship between the first sentence and prompt in Figure FIGREF1 ). To improve the relevance of the generated story to its prompt, we introduce a fusion mechanism BIBREF3 where our model is trained on top of a pre-trained seq2seq model. To improve over the pre-trained model, the second model must focus on the link between the prompt and the story. For the first time, we show that fusion mechanisms can help seq2seq models build dependencies between their input and output. Another major challenge in story generation is the inefficiency of modeling long documents with standard recurrent architectures—stories contain 734 words on average in our dataset.
We improve efficiency using a convolutional architecture, allowing whole stories to be encoded in parallel. Existing convolutional architectures only encode a bounded amount of context BIBREF4 , so we introduce a novel gated self-attention mechanism that allows the model to condition on its previous outputs at different time-scales. To train our models, we gathered a large dataset of 303,358 human generated stories paired with writing prompts from an online forum. Evaluating free form text is challenging, so we also introduce new evaluation metrics which isolate different aspects of story generation. Experiments show that our fusion and self-attention mechanisms improve over existing techniques on both automated and human evaluation measures. Our new dataset and neural architectures allow for models which can creatively generate longer, more consistent and more fluent passages of text. Human judges prefer our hierarchical model's stories twice as often as those of a non-hierarchical baseline. To train our models, we gathered a large dataset of 303,358 human generated stories paired with writing prompts from an online forum.
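A minimal sketch of the fusion idea described above, assuming a simple elementwise gate over the two models' hidden states (the paper's exact fusion mechanism follows BIBREF3 and may differ):

```python
# Hedged sketch: a newly trained model's hidden state is combined with a
# frozen pretrained model's hidden state through a gate, so the second model
# can focus on what the first one missed (the prompt-story link). The gate
# logits here are illustrative constants, not learned parameters.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fuse(h_trained, h_pretrained, gate_logits):
    """Elementwise gated combination of the two models' hidden states."""
    return [ht + sigmoid(g) * hp
            for ht, hp, g in zip(h_trained, h_pretrained, gate_logits)]

h_new = [0.5, -1.0]
h_pre = [2.0, 2.0]
print(fuse(h_new, h_pre, [10.0, -10.0]))  # gate near 1 keeps, near 0 suppresses
```

The key property is that the gate lets the second model pass through the pretrained model's language-modeling competence where it suffices, and override it where prompt-conditioning matters.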
How many human-generated stories does the gathered dataset consist of?
303358.
1911.02711
false
null
Table TABREF34 and Table TABREF35 show the final results. Our model outperforms all the baseline models and the top-performing models with both generated summaries and golden summaries, for all three datasets. In the scenario where golden summaries are used, BiLSTM+self-attention performs the best among all the baselines, which shows that attention is a useful way to integrate summary and review information. Hard attention receives more supervision information than soft attention, via supervision signals from extractive summaries. However, it underperforms the soft attention model, which indicates that the most salient words for sentiment classification may not strictly overlap with extractive summaries. This justifies the importance of user-written or automatically generated summaries. FLOAT SELECTED: Table 4: Experimental results. Predicted indicates the use of system-predicted summaries. Star (*) indicates that hard attention model is trained with golden summaries but does not require golden summaries during inference. FLOAT SELECTED: Table 5: Experimental results. Golden indicates the use of user-written (golden) summaries. Note that joint modeling methods, such as HSSC (Ma et al., 2018) and SAHSSC (Wang and Ren, 2018), cannot make use of golden summaries during inference time, so their results are excluded in this table. Experiments ::: Datasets We empirically compare different methods using the Amazon SNAP Review Dataset BIBREF20, which is a part of the Stanford Network Analysis Project. The raw dataset consists of around 34 million Amazon reviews in different domains, such as books, games, sports and movies. Each review mainly contains a product ID, a piece of user information, a plain text review, a review summary and an overall sentiment rating which ranges from 1 to 5. The statistics of our adopted dataset are shown in Table TABREF20.
For fair comparison with previous work, we adopt the same partitions used by previous work BIBREF6, BIBREF7, which is, for each domain, the first 1000 samples are taken as the development set, the following 1000 samples as the test set, and the rest as the training set. Table TABREF34 and Table TABREF35 show the final results. Our model outperforms all the baseline models and the top-performing models with both generated summaries and golden summaries, for all three datasets. FLOAT SELECTED: Table 4: Experimental results. Predicted indicates the use of system-predicted summaries. Star (*) indicates that hard attention model is trained with golden summaries but does not require golden summaries during inference. FLOAT SELECTED: Table 5: Experimental results. Golden indicates the use of user-written (golden) summaries. Note that joint modeling methods, such as HSSC (Ma et al., 2018) and SAHSSC (Wang and Ren, 2018), cannot make use of golden summaries during inference time, so their results are excluded in this table. Experiments ::: Datasets We empirically compare different methods using the Amazon SNAP Review Dataset BIBREF20, which is a part of the Stanford Network Analysis Project. The raw dataset consists of around 34 million Amazon reviews in different domains, such as books, games, sports and movies.
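The partition described above is simple enough to sketch directly (a sketch, assuming the samples for one domain are already collected in order):

```python
# Per-domain split: first 1000 samples are development, the next 1000 are
# test, and the remainder is training data.
def split_domain(samples):
    return samples[:1000], samples[1000:2000], samples[2000:]

dev, test, train = split_domain(list(range(5000)))
print(len(dev), len(test), len(train))  # 1000 1000 3000
```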
What is the performance difference of using a generated summary vs. a user-written one?
2.7 accuracy points
null
false
null
What are some good kids activities on Lake Norman, NC?
Lake Norman offers activities for a range of different ages/interests. If you're looking to spend time near the water's edge, you can visit one of the many town and state parks that border the lake: Jetton Park - Cornelius, Lake Norman State Park - Troutman, Ramsey Creek Park - Cornelius are a few options. You can also hire a boat for the day and enjoy fishing, water sports, or cruising. There are several small islands on the lake that are free for public use, where you can find small beaches to pull up and picnic/play in the sand. The water temperature of the lake during the summer months is generally warm, and children should enjoy swimming and splashing in the water.
1809.04686
false
null
For the MT task, we use the WMT 2014 En $\leftrightarrow $ Fr parallel corpus. The dataset contains 36 million En $\rightarrow $ Fr sentence pairs. We swapped the source and target sentences to obtain parallel data for the Fr $\rightarrow $ En translation task. We use these two datasets (72 million sentence pairs) to train a single multilingual NMT model to learn both these translation directions simultaneously. We generated a shared sub-word vocabulary BIBREF37 , BIBREF38 of 32K units from all source and target training data. We use this sub-word vocabulary for all of our experiments below. For the MT task, we use the WMT 2014 En $\leftrightarrow $ Fr parallel corpus. The dataset contains 36 million En $\rightarrow $ Fr sentence pairs. We swapped the source and target sentences to obtain parallel data for the Fr $\rightarrow $ En translation task. We use these two datasets (72 million sentence pairs) to train a single multilingual NMT model to learn both these translation directions simultaneously.
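A sketch of that data preparation step, assuming a target-language-tag convention that is common in multilingual NMT (the excerpt does not state how translation directions are marked, so the `<2fr>`/`<2en>` tags are an assumption):

```python
# Swap the En->Fr pairs to obtain Fr->En data, then pool both directions so
# a single multilingual model learns them simultaneously.
def build_bidirectional(pairs_en_fr):
    forward = [("<2fr> " + en, fr) for en, fr in pairs_en_fr]
    backward = [("<2en> " + fr, en) for en, fr in pairs_en_fr]
    return forward + backward

data = build_bidirectional([("hello", "bonjour")])
print(len(data))  # 2: one training example per direction
```

Pooling 36 million pairs this way yields the 72 million sentence pairs mentioned above.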
What data were they used to train the multilingual encoder?
WMT 2014 En-Fr parallel corpus
null
false
null
How large a cluster can Spark scale to?
Many organizations run Spark on clusters of thousands of nodes. The largest cluster we know of has 8,000 nodes. In terms of data size, Spark has been shown to work well up to petabytes. It has been used to sort 100 TB of data 3X faster than Hadoop MapReduce on 1/10th of the machines, winning the 2014 Daytona GraySort Benchmark, as well as to sort 1 PB. Several production workloads use Spark to do ETL and data analysis on PBs of data.
null
false
null
Which spices should every cook have on hand?
Salt and Pepper are essential spices. Cumin, Chilli, Turmeric, Garlic and Cinnamon are also popular.
null
false
null
There are many different types, styles and genres of dance. Categorize these dances based on their style: Mambo, Kizomba, Robot, Jazz dance, Lindy hop, Rumba, west coast swing, Boogaloo, blues
Happy to help with that. Here is the list of these dances grouped by their style:
Latin dances: Mambo, Rumba
African-American dances: Kizomba, Jazz dance
Disco dances: Robot, Boogaloo
Swing dances: Lindy hop, west coast swing, blues
null
false
null
Who are the longest running cast members on Saturday Night Live?
The longest-running cast member on Saturday Night Live is Kenan Thompson, who has starred in 20 seasons. Next, Darrell Hammond starred in 14 seasons, and Seth Meyers appeared in 13 seasons.
null
false
158
In this question answering task, a reading passage, a query and several answer choices are given. P denotes the passage, Q denotes the query and C denotes one of the multiple choices. The target of the model is to choose a correct answer A from the multiple choices based on the information in P and Q. Fig. FIGREF1 is the pipeline overview of QACNN. First, we use an embedding layer to transform P, Q, and C into word embeddings. Then the compare layer generates a passage-query similarity map INLINEFORM0 and a passage-choice similarity map INLINEFORM1. The following part is the main component of QACNN. It consists of a two-stage CNN architecture. The first stage projects word-level features to the sentence level, and the second stage projects sentence-level features to the passage level. Moreover, we apply a query-based attention mechanism at each stage on the basis of the INLINEFORM2 feature at the word level and sentence level respectively. After the QACNN layer, we obtain a feature for each answer choice. Finally, a prediction layer collects the output information from every choice feature and returns the most probable answer. After the word embedding step, we want to acquire similarity maps which tell us the location relationship between the passage and query, and between the passage and choices.
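A small illustration of such a similarity map: given word embeddings for the passage and the query, the map holds one similarity score per (passage word, query word) pair. Cosine similarity is one common choice here; the paper's exact formulation may differ.

```python
# Sketch: a |P| x |Q| map of word-pair similarities between passage and query
# embeddings, as described in the text. All vectors below are toy values.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def similarity_map(passage_emb, query_emb):
    """One row per passage word, one column per query word."""
    return [[cosine(p, q) for q in query_emb] for p in passage_emb]

P = [[1.0, 0.0], [0.0, 1.0]]   # two passage-word embeddings
Q = [[1.0, 0.0]]               # one query-word embedding
print(similarity_map(P, Q))    # [[1.0], [0.0]]
```

The same construction with choice embeddings in place of the query yields the passage-choice map.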
What do they want to acquire after the word embedding step?
Similarity map which tells them location relationship between passage and query, passage and choices.
2003.12738
false
null
An attention-based sequence-to-sequence model with the emoji vector as additional input as described in MojiTalk BIBREF16. An RNN-based conditional variational autoencoder for dialogue response generation BIBREF16, which uses a multivariate Gaussian latent variable to model the response and concatenates it with the last hidden state of the encoder as the initial state of the decoder. KL annealing, early stopping strategy and bag-of-word auxiliary loss are applied during the training. We use the implementation released by BIBREF16. An attention-based sequence-to-sequence model with the emoji vector as additional input as described in MojiTalk BIBREF16. CVAE. An RNN-based conditional variational autoencoder for dialogue response generation BIBREF16, which uses a multivariate Gaussian latent variable to model the response and concatenates it with the last hidden state of the encoder as the initial state of the decoder. KL annealing, early stopping strategy and bag-of-word auxiliary loss are applied during the training. We use the implementation released by BIBREF16.
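A minimal sketch of the CVAE step described above (assumed, not the released implementation): sample the multivariate Gaussian latent variable with the reparameterization trick and concatenate it with the encoder's last hidden state to form the decoder's initial state.

```python
# Hedged sketch of the CVAE baseline's latent step; all values are toy.
import math
import random

def sample_latent(mu, log_var, rng):
    """z = mu + sigma * eps, with eps ~ N(0, 1) (reparameterization trick)."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def decoder_init_state(encoder_last_h, mu, log_var, rng):
    z = sample_latent(mu, log_var, rng)
    return encoder_last_h + z   # list concatenation: [h; z]

rng = random.Random(0)
state = decoder_init_state([0.1, 0.2], mu=[0.0, 0.0], log_var=[0.0, 0.0], rng=rng)
print(len(state))  # encoder dims + latent dims = 4
```

During training, the KL annealing mentioned above gradually increases the weight of the KL term between this Gaussian posterior and the prior.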
What baselines other than standard transformers are used in experiments?
The answers are shown as follows: * attention-based sequence-to-sequence model * CVAE
null
false
null
Classify each of the following as water-soluble vitamins and fat-soluble vitamins and which food come under them?
Before classifying each of the following as water-soluble or fat-soluble, here is a brief overview of the two groups.

Water-soluble vitamins dissolve in water and are easily absorbed into the bloodstream. They are not stored in the body for long periods of time, and any excess amounts are excreted in the urine, which means that a regular intake is needed to maintain optimal levels. Water-soluble vitamins include:
Vitamin C: an antioxidant that helps protect cells from damage and supports immune function. It also plays a role in collagen synthesis and helps the body absorb iron from plant-based foods.
B vitamins: there are eight B vitamins, including thiamin (B1), riboflavin (B2), niacin (B3), pantothenic acid (B5), pyridoxine (B6), biotin (B7), folate (B9), and cobalamin (B12). B vitamins play a crucial role in energy metabolism, as well as the production of red blood cells, brain function, and the maintenance of healthy skin, hair, and eyesight.

Fat-soluble vitamins are soluble in fat and are absorbed with dietary fat in the small intestine. These vitamins are stored in the liver and fatty tissue for long periods of time, and excess amounts can accumulate and potentially cause toxicity. The fat-soluble vitamins include:
Vitamin A: essential for vision, skin health, and immune function. It is also important for the growth and development of bones and teeth.
Vitamin D: important for bone health, as it helps the body absorb calcium. It also plays a role in immune function and may have other health benefits, such as reducing the risk of certain types of cancer and autoimmune diseases.
Vitamin E: an antioxidant that helps protect cells from damage and supports immune function. It may also have a role in reducing the risk of heart disease.
Vitamin K: important for blood clotting and bone health. It also plays a role in regulating calcium levels in the body.
Foods that provide the fat-soluble vitamins:
Vitamin A: found in liver, fish oil, eggs, milk, cheese, and dark leafy greens like spinach and kale.
Vitamin D: found in fatty fish such as salmon, mackerel, and tuna, as well as egg yolks and fortified foods like milk, orange juice, and cereals.
Vitamin E: found in nuts and seeds like almonds, sunflower seeds, and hazelnuts, as well as vegetable oils like wheat germ oil, sunflower oil, and safflower oil.
Vitamin K: found in leafy greens like spinach, kale, and collard greens, as well as cruciferous vegetables like broccoli and Brussels sprouts, and vegetable oils like soybean oil and canola oil.

Foods that provide the water-soluble vitamins:
Vitamin C: found in citrus fruits like oranges, grapefruits, and lemons, as well as kiwi, strawberries, bell peppers, broccoli, and spinach.
Thiamin (B1): found in whole grains like brown rice, whole wheat bread, and oats, as well as pork, legumes, and nuts.
Riboflavin (B2): found in dairy products like milk and yogurt, as well as eggs, meat, leafy greens, and whole grains.
Niacin (B3): found in meat, poultry, fish, whole grains, and legumes.
Pantothenic acid (B5): found in meat, poultry, fish, whole grains, and legumes.
Pyridoxine (B6): found in meat, poultry, fish, bananas, whole grains, and legumes.
Biotin (B7): found in egg yolks, liver, nuts, and seeds.
Folate (B9): found in leafy greens, legumes, citrus fruits, and fortified grains and cereals.
Cobalamin (B12): found in animal products like meat, poultry, fish, eggs, and dairy.
null
false
null
Besides a putter, what other golf clubs would suffice for a putting stroke?
Any long iron (1 through 4) or a fairway wood with a flat face would suffice.
null
false
null
The platypus (Ornithorhynchus anatinus), sometimes referred to as the duck-billed platypus, is a semiaquatic, egg-laying mammal endemic to eastern Australia, including Tasmania. The platypus is the sole living representative or monotypic taxon of its family (Ornithorhynchidae) and genus (Ornithorhynchus), though a number of related species appear in the fossil record.
What is an egg laying mammal?
Duck-billed platypus is an egg-laying mammal found in eastern Australia. It is the sole living representative in its genus Ornithorhynchus.
null
false
119
Automatic classification of sentiment has mainly focused on categorizing tweets in either two (binary sentiment analysis) or three (ternary sentiment analysis) categories BIBREF0 . In this work we study the problem of fine-grained sentiment classification where tweets are classified according to a five-point scale ranging from VeryNegative to VeryPositive. To illustrate this, Table TABREF3 presents examples of tweets associated with each of these categories. Five-point scales are widely adopted in review sites like Amazon and TripAdvisor, where a user's sentiment is ordered with respect to its intensity. From a sentiment analysis perspective, this defines a classification problem with five categories. In particular, Sebastiani et al. BIBREF1 defined such classification problems whose categories are explicitly ordered to be ordinal classification problems. To account for the ordering of the categories, learners are penalized according to how far from the true class their predictions are. Although considering different scales, the various settings of sentiment classification are related. First, one may use the same feature extraction and engineering approaches to represent the text spans such as word membership in lexicons, morpho-syntactic statistics like punctuation or elongated word counts BIBREF2 , BIBREF3 . Second, one would expect that knowledge from one task can be transferred to the others and this would benefit the performance. Knowing that a tweet is “Positive” in the ternary setting narrows the classification decision between the VeryPositive and Positive categories in the fine-grained setting. From a research perspective this raises the question of whether and how one may benefit when tackling such related tasks and how one can transfer knowledge from one task to another during the training phase. Our focus in this work is to exploit the relation between the sentiment classification settings and demonstrate the benefits stemming from combining them.
To this end, we propose to formulate the different classification problems as a multitask learning problem and jointly learn them. Multitask learning BIBREF4 has shown great potential in various domains and its benefits have been empirically validated BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 using different types of data and learning approaches. An important benefit of multitask learning is that it provides an elegant way to access resources developed for similar tasks. By jointly learning correlated tasks, the amount of usable data increases. For instance, while for ternary classification one can label data using distant supervision with emoticons BIBREF9 , there is no straightforward way to do so for the fine-grained problem. However, the latter can benefit indirectly, if the ternary and fine-grained tasks are learned jointly. The research question that the paper attempts to answer is the following: Can Twitter sentiment classification problems, and fine-grained sentiment classification in particular, benefit from multitask learning? To answer the question, the paper brings the following two main contributions: (i) we show how jointly learning the ternary and fine-grained sentiment classification problems in a multitask setting improves the state-of-the-art performance, and (ii) we demonstrate that recurrent neural networks outperform models previously proposed without access to huge corpora while being flexible to incorporate different sources of data. Our focus in this work is to exploit the relation between the sentiment classification settings and demonstrate the benefits stemming from combining them.
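To make the ordinal penalty described above concrete, here is a sketch using mean absolute error over the class indices, so a prediction is penalized by how far it lands from the true class on the five-point scale (an illustrative choice, not necessarily the paper's exact metric):

```python
# Ordered classes of the fine-grained setting; distance on this scale
# determines the penalty of a wrong prediction.
CLASSES = ["VeryNegative", "Negative", "Neutral", "Positive", "VeryPositive"]

def ordinal_mae(y_true, y_pred):
    """Mean absolute distance between true and predicted class indices."""
    idx = {c: i for i, c in enumerate(CLASSES)}
    errors = [abs(idx[t] - idx[p]) for t, p in zip(y_true, y_pred)]
    return sum(errors) / len(errors)

print(ordinal_mae(["Positive", "Neutral"], ["VeryPositive", "Negative"]))  # 1.0
```

Under such a measure, confusing Positive with VeryPositive costs far less than confusing Positive with VeryNegative, which is exactly the ordering-awareness the ordinal formulation requires.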
What is the author's focus in this task?
To exploit the relation between the sentiment classification settings and demonstrate the benefits stemming from combining them.
null
false
117
In this section, we identify various linguistic activities on Quora and propose quantifications of the language usage patterns in this Q&A site. In particular, we show that there exist significant differences in the linguistic structure of the open and the answered questions. Note that most of the measures that we define are simple, intuitive and can be easily obtained automatically from the data (without manual intervention). Therefore the framework is practical, inexpensive and highly scalable. The content of a question text is important to attract people and make them engage more with it. The linguistic structure (i.e., the usage of POS tags, the use of Out-of-Vocabulary words, character usage etc.) one adopts is a key factor for the answerability of questions. We shall discuss the linguistic structure that often represents the writing style of a question asker. In fig 1 (a), we observe that askers of open questions generally use a greater number of words compared to answered questions. To understand the nature of words (standard English words or chat-like words frequently used in social media) used in the text, we compare the words with the GNU Aspell dictionary to see whether they are present in the dictionary or not. We observe that both open questions and answered questions follow a similar distribution (see fig 1 (b)). Part-of-Speech (POS) tags are indicators of grammatical aspects of texts. To observe how the Part-of-Speech tags are distributed in the question texts, we define a diversity metric. We use the standard CMU POS tagger BIBREF8 for identifying the POS tags of the constituent words in the question. We define the POS tag diversity (POSDiv) of a question $q_i$ as follows: $POSDiv(q_i) = -\sum _{j \in pos_{set}}p_j\times \log (p_j)$ where $p_j$ is the probability of the $j^{th}$ POS in the set of POS tags. Fig 1 (c) shows that the answered questions have lower POS tag diversity compared to open questions.
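The POSDiv measure defined above is the Shannon entropy of the question's POS-tag distribution, and can be computed directly (the tag labels below are illustrative):

```python
# POSDiv(q) = -sum_j p_j * log(p_j), where p_j is the empirical probability
# of the j-th POS tag among the question's words.
import math
from collections import Counter

def pos_diversity(pos_tags):
    counts = Counter(pos_tags)
    total = len(pos_tags)
    return -sum((c / total) * math.log(c / total) for c in counts.values())

print(pos_diversity(["N", "N", "V", "D"]))  # entropy of {0.5, 0.25, 0.25}
```

A question whose words all share one POS tag scores 0, while questions mixing many tags score higher, matching the observation that answered questions tend to have lower POSDiv.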
Question texts undergo several edits so that their readability and the engagement toward them are enhanced. It is interesting to identify how far such edits can make the question different from its original version. To capture this phenomenon, we have adopted ROUGE-LCS recall BIBREF9 from the domain of text summarization. The higher the recall value, the smaller the changes to the question text. From fig 1 (d), we observe that open questions tend to have higher recall compared to the answered ones, which suggests that they have not gone through much text editing, thus allowing almost no scope for readability enhancement. Psycholinguistic analysis: The way an individual talks or writes gives us clues to his/her linguistic, emotional, and cognitive states. A question asker's linguistic, emotional, and cognitive states are also revealed through the language he/she uses in the question text. In order to capture such psycholinguistic aspects of the asker, we use Linguistic Inquiry and Word Count (LIWC) BIBREF10, which analyzes various emotional, cognitive, and structural components present in individuals' written texts. LIWC takes a text document as input and outputs a score for the input for each of the LIWC categories such as linguistic (part-of-speech of the words, function words etc.) and psychological categories (social, anger, positive emotion, negative emotion, sadness etc.) based on the writing style and psychometric properties of the document. In table 1 , we perform a comparative analysis of the asker's psycholinguistic state while asking an open question and an answered question. Askers of open questions use more function words, impersonal pronouns, and articles on average, whereas askers of answered questions use more personal pronouns, conjunctions and adverbs to describe their questions. Essentially, open questions lack content words compared to answered questions which, in turn, affects the readability of the question.
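The ROUGE-LCS recall adopted above can be sketched at the word level: it is the length of the longest common subsequence between the original (reference) question and its edited version, divided by the length of the reference. A minimal version:

```python
# Word-level ROUGE-LCS recall: LCS length between reference and candidate
# token sequences, normalized by the reference length.
def lcs_len(a, b):
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def rouge_lcs_recall(reference, candidate):
    ref, cand = reference.split(), candidate.split()
    return lcs_len(ref, cand) / len(ref)

print(rouge_lcs_recall("how do birds fly", "how do birds fly so high"))  # 1.0
```

A recall of 1.0 means every reference word survives in order in the edited text, i.e. the edits changed little, matching the interpretation given above.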
As far as the psychological aspects are concerned, answered question askers tend to use more social, family, and human-related words on average compared to open question askers. The open question askers express more positive emotions, whereas the answered question askers tend to express more negative emotions in their texts. Also, answered question askers are more emotionally involved, and their questions reveal higher usage of anger, sadness, and anxiety related words compared to those of open questions. Open questions, on the other hand, contain more sexual, body, and health related words, which might be a reason why they do not attract answers. In table 2 , we show a collection of examples of open questions to illustrate that many of the above quantities based on the linguistic activities described in this section naturally correspond to the factors that human judges consider responsible for a question remaining unanswered. This is one of the prime reasons why these quantities qualify as appropriate indicators of answerability. Askers of open questions use more function words, impersonal pronouns, and articles on average, whereas askers of answered questions use more personal pronouns, conjunctions and adverbs to describe their questions. Essentially, open questions lack content words compared to answered questions which, in turn, affects the readability of the question. As far as the psychological aspects are concerned, answered question askers tend to use more social, family, and human-related words on average compared to open question askers. The open question askers express more positive emotions, whereas the answered question askers tend to express more negative emotions in their texts. Also, answered question askers are more emotionally involved, and their questions reveal higher usage of anger, sadness, and anxiety related words compared to those of open questions.
Open questions, on the other hand, contain more sexual, body, and health related words, which might be a reason why they do not attract answers.
What is the linguistic difference between open questions and answered questions?
They differ in word usage, POS-tag diversity, amount of editing, and emotional content; open questions lack content words, which affects their readability and attractiveness.
null
false
null
Why do we have seasons, and what are they?
As the earth spins on its axis, producing night and day, it also moves about the sun in an elliptical (elongated circle) orbit that requires about 365 1/4 days to complete. The earth's spin axis is tilted with respect to its orbital plane. This is what causes the seasons. There are four primary seasons: Spring, Summer, Autumn & Winter. The start dates of the seasons themselves are different depending on what hemisphere you're located in. Spring & Summer are the warmer seasons. Autumn is typically mild, and Winter is the coldest season of the year.
null
false
null
What is SAP?
SAP is a multinational software vendor headquartered in Walldorf, Germany. SAP stands for "Systemanalyse und Programmentwicklung", which is German for "System Analysis and Program Development". SAP was founded in 1972 by Dietmar Hopp, Klaus Tschira, Hans-Werner Hector, Hasso Plattner, and Claus Wellenreuther in Germany. Today SAP has 112,000 employees and generated an annual revenue of over 30 billion euros in 2022.