| paper_id (stringlengths, 10–10, ⌀) | yes_no (bool, 2 classes) | paper_index (int64, 0–519, ⌀) | evidence (stringlengths, 0–37.7k) | question (stringlengths, 4–11.7k) | answer (stringlengths, 1–26k) |
|---|---|---|---|---|---|
null | false
| null |
Which of these are languages in India and which are not Indian languages: English, Hindi, Ukrainian, French, Spanish, Kannada, Tulu, Mandarin, Tamil, Taiwanese, Korean, Japanese, Assamese, Sanskrit, Urdu, Telugu, Russian, German, Italian, Arabic, Dutch, Welsh, Gujarati, Somali, Pahari, Punjabi
|
Hindi, Kannada, Tulu, Tamil, Assamese, Sanskrit, Telugu, Gujarati, Pahari, and Punjabi are some of the Indian languages in the list.
|
|
1904.02815
| true
| null |
We compare the performance of our model (Table 2) with traditional Bag of Words (BoW), TF-IDF, and n-gram features based classifiers. We also compare against averaged Skip-Gram BIBREF29, Doc2Vec BIBREF30, CNN BIBREF23, Hierarchical Attention (HN-ATT) BIBREF24, and hierarchical network (HN) models. HN is similar to our model HN-SA but without any self-attention.
Analysis: As is evident from the experiments on both versions of SWBD, our model (HN-SA) outperforms traditional feature-based topic-spotting models and deep-learning-based document classification models. It is interesting to see that the simple BoW and n-gram baselines are quite competitive and outperform some of the deep-learning-based document classification models. A similar observation has also been reported by BIBREF31 for the task of sentiment analysis. The task of topic spotting is arguably more challenging than document classification. In the topic-spotting task, the number of output classes (66/42 classes) is much larger than in document classification (5/6 classes), which is done mainly on texts from customer reviews. Dialogues in SWBD have on average 200 utterances and are much longer texts than customer reviews. Additionally, the number of dialogues available for training the model is significantly smaller than the number of customer reviews. We further investigated the performance on SWBD2 by examining the confusion matrix of the model. Figure 2 shows the heatmap of the normalized confusion matrix of the model on SWBD2. For most of the classes the classifier is able to predict accurately. However, the model gets confused between classes which are semantically close (w.r.t. terms used) to each other; for example, it confuses pragmatically similar topics such as HOBBIES vs. GARDENING, MOVIES vs. TV PROGRAMS, and RIGHT TO PRIVACY vs. DRUG TESTING.
We compare the performance of our model (Table 2) with traditional Bag of Words (BoW), TF-IDF, and n-gram features based classifiers. We also compare against averaged Skip-Gram BIBREF29, Doc2Vec BIBREF30, CNN BIBREF23, Hierarchical Attention (HN-ATT) BIBREF24, and hierarchical network (HN) models. HN is similar to our model HN-SA but without any self-attention.
We further investigated the performance on SWBD2 by examining the confusion matrix of the model. Figure 2 shows the heatmap of the normalized confusion matrix of the model on SWBD2.
|
Do the authors do manual evaluation?
|
No.
|
null | false
| 5
|
Grapheme-to-phoneme conversion (g2p) is necessary for text-to-speech and automatic speech recognition systems. Most g2p systems are monolingual: they require language-specific data or handcrafting of rules. Such systems are difficult to extend to low-resource languages, for which data and handcrafted rules are not available. As an alternative, we present a neural sequence-to-sequence approach to g2p which is trained on spelling–pronunciation pairs in hundreds of languages. The system shares a single encoder and decoder across all languages, allowing it to utilize the intrinsic similarities between different writing systems. We show an 11% improvement in phoneme error rate over an approach based on adapting high-resource monolingual g2p models to low-resource languages. Our model is also much more compact relative to previous approaches.
Most g2p systems are monolingual: they require language-specific data or handcrafting of rules. Such systems are difficult to extend to low-resource languages, for which data and handcrafted rules are not available. As an alternative, we present a neural sequence-to-sequence approach to g2p which is trained on spelling–pronunciation pairs in hundreds of languages.
|
What aspects of the g2p task is the model specially developed for?
|
For low resource languages.
|
null | false
| null |
Write a polite e-mail from my cat to the neighbor's cat, asking it to please stay out of his special area of the lawn.
|
Dear Mittens,
How are you? It's been a lovely summer so far, and I can see that you are enjoying the great outdoors like I am. Our backyard has some lovely sunny spots to warm up in. You are most welcome to visit whenever you like. However, may I make one request? The southwest corner of the lawn beyond the oak tree is my special warming spot. I regularly use it and would be grateful if you could instead use another area if you visit, like the concrete slab immediately west.
Gratefully,
Grumpers
|
|
null | false
| null |
The Human Development Index (HDI) is a statistic composite index of life expectancy, education (mean years of schooling completed and expected years of schooling upon entering the education system), and per capita income indicators, which is used to rank countries into four tiers of human development. A country scores a higher level of HDI when the lifespan is higher, the education level is higher, and the gross national income GNI (PPP) per capita is higher. It was developed by Pakistani economist Mahbub ul Haq and was further used to measure a country's development by the United Nations Development Programme (UNDP)'s Human Development Report Office.
The 2010 Human Development Report introduced an Inequality-adjusted Human Development Index (IHDI). While the simple HDI remains useful, it stated that "the IHDI is the actual level of human development (accounting for inequality), while the HDI can be viewed as an index of 'potential' human development (or the maximum level of HDI) that could be achieved if there were no inequality."
The index is based on the human development approach, developed by Mahbub ul Haq, anchored in Amartya Sen's work on human capabilities, and often framed in terms of whether people are able to "be" and "do" desirable things in life. Examples include – being: well fed, sheltered, and healthy; doing: work, education, voting, participating in community life. The freedom of choice is central – someone choosing to be hungry (e.g. when fasting for religious reasons) is quite different from someone who is hungry because they cannot afford to buy food, or because the country is in a famine.
The index does not take into account several factors, such as the net wealth per capita or the relative quality of goods in a country. This situation tends to lower the ranking of some of the most developed countries, such as the G7 members and others.
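The composite nature of the HDI can be made concrete with a small sketch. Since 2010 the HDI has been computed as the geometric mean of three normalized dimension indices (health, education, income); the goalpost values used below for normalization are illustrative assumptions, not official UNDP figures.

```python
# Illustrative sketch of an HDI-style composite index: the geometric mean of
# three dimension indices, each normalized onto [0, 1] between goalposts.
# The goalpost numbers here are assumptions for illustration only.

def normalize(value, lo, hi):
    """Scale a raw indicator onto [0, 1] between chosen goalposts."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def hdi(health_idx, education_idx, income_idx):
    """Geometric mean of the three dimension indices."""
    return (health_idx * education_idx * income_idx) ** (1.0 / 3.0)

health = normalize(75.0, 20.0, 85.0)   # life expectancy in years
education = 0.80                        # assumed combined schooling index
income = normalize(11.0, 6.5, 11.7)     # assumed log of GNI per capita (PPP)

print(round(hdi(health, education, income), 3))
```

Because the geometric mean is used, a very low score in any one dimension drags the composite down more than an arithmetic mean would, which reflects the "potential vs. actual" distinction the IHDI discussion makes.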
|
Please give a bulleted list of what incorporates the Human Development Index
|
Human Development Index (HDI) is a statistic composite index of the following metrics for a country:
1. life expectancy of the population
2. education (years of schooling completed and expected years of schooling upon entering the education system)
3. per capita income indicators
|
null | false
| null |
To some extent, European and the US traditions diverged after World War II. Among the most influential composers in Europe were Pierre Boulez, Luigi Nono, and Karlheinz Stockhausen. The first and last were both pupils of Olivier Messiaen. An important aesthetic philosophy as well as a group of compositional techniques at this time was serialism (also called "through-ordered music", "'total' music" or "total tone ordering"), which took as its starting point the compositions of Arnold Schoenberg and Anton Webern (but was opposed to traditional twelve-tone music), and was also closely related to Le Corbusier's idea of the modulor. However, some more traditionally based composers such as Dmitri Shostakovich and Benjamin Britten maintained a tonal style of composition despite the prominent serialist movement.
In America, composers like Milton Babbitt, John Cage, Elliott Carter, Henry Cowell, Philip Glass, Steve Reich, George Rochberg, and Roger Sessions, formed their own ideas. Some of these composers (Cage, Cowell, Glass, Reich) represented a new methodology of experimental music, which began to question fundamental notions of music such as notation, performance, duration, and repetition, while others (Babbitt, Rochberg, Sessions) fashioned their own extensions of the twelve-tone serialism of Schoenberg.
|
Extract the unique names of composers from the text. Separate them with a comma and a space.
|
Pierre Boulez, Luigi Nono, Karlheinz Stockhausen, Olivier Messiaen, Arnold Schoenberg, Anton Webern, Le Corbusier, Dmitri Shostakovich, Benjamin Britten, Milton Babbitt, John Cage, Elliott Carter, Henry Cowell, Philip Glass, Steve Reich, George Rochberg, Roger Sessions
|
null | false
| 220
|
Several recent papers have described approaches for learning deep communicating policies (DCPs): decentralized representations of behavior that enable multiple agents to communicate via a differentiable channel that can be formulated as a recurrent neural network. DCPs have been shown to solve a variety of coordination problems, including reference games BIBREF0 , logic puzzles BIBREF1 , and simple control BIBREF2 . Appealingly, the agents' communication protocol can be learned via direct backpropagation through the communication channel, avoiding many of the challenging inference problems associated with learning in classical decentralized decision processes BIBREF3 .
But analysis of the strategies induced by DCPs has remained a challenge. As an example, fig:teaser depicts a driving game in which two cars, which are unable to see each other, must both cross an intersection without colliding. In order to ensure success, it is clear that the cars must communicate with each other. But a number of successful communication strategies are possible—for example, they might report their exact $(x, y)$ coordinates at every timestep, or they might simply announce whenever they are entering and leaving the intersection. If these messages were communicated in natural language, it would be straightforward to determine which strategy was being employed. However, DCP agents instead communicate with an automatically induced protocol of unstructured, real-valued recurrent state vectors—an artificial language we might call “neuralese,” which superficially bears little resemblance to natural language, and thus frustrates attempts at direct interpretation.
We propose to understand neuralese messages by translating them. In this work, we present a simple technique for inducing a dictionary that maps between neuralese message vectors and short natural language strings, given only examples of DCP agents interacting with other agents, and humans interacting with other humans. Natural language already provides a rich set of tools for describing beliefs, observations, and plans—our thesis is that these tools provide a useful complement to the visualization and ablation techniques used in previous work on understanding complex models BIBREF4 , BIBREF5 .
While structurally quite similar to the task of machine translation between pairs of human languages, interpretation of neuralese poses a number of novel challenges. First, there is no natural source of parallel data: there are no bilingual “speakers” of both neuralese and natural language. Second, there may not be a direct correspondence between the strategy employed by humans and DCP agents: even if it were constrained to communicate using natural language, an automated agent might choose to produce a different message from humans in a given state. We tackle both of these challenges by appealing to the grounding of messages in gameplay. Our approach is based on one of the core insights in natural language semantics: messages (whether in neuralese or natural language) have similar meanings when they induce similar beliefs about the state of the world.
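The belief-matching insight can be sketched concretely. The toy below is our illustration, not the paper's exact criterion: the three-state space, the candidate strings, and the use of Jensen–Shannon divergence as the statistical distance are all assumptions.

```python
import math

# Toy sketch of belief-based message matching: a neuralese message is
# "translated" to the natural-language string whose induced belief
# distribution over speaker states is closest (here, in JS divergence).

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js(p, q):
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical belief over 3 speaker states induced by a neuralese vector.
neuralese_belief = [0.7, 0.2, 0.1]
candidates = {
    "I am entering the intersection": [0.65, 0.25, 0.10],
    "I am leaving the intersection":  [0.10, 0.20, 0.70],
}

translation = min(candidates, key=lambda s: js(neuralese_belief, candidates[s]))
print(translation)
```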
Based on this intuition, we introduce a translation criterion that matches neuralese messages with natural language strings by minimizing statistical distance in a common representation space of distributions over speaker states. We explore several related questions:
Our translation model and analysis are general, and in fact apply equally to human–computer and human–human translation problems grounded in gameplay. In this paper, we focus our experiments specifically on the problem of interpreting communication in deep policies, and apply our approach to the driving game in fig:teaser and two reference games of the kind shown in fig:bird-examples. We find that this approach outperforms a more conventional machine translation criterion both when attempting to interoperate with neuralese speakers and when predicting their state.
We find that this approach outperforms a more conventional machine translation criterion both when attempting to interoperate with neuralese speakers and when predicting their state.
|
When do the authors find that this approach outperforms a more conventional machine translation criterion?
|
Both when attempting to interoperate with neuralese speakers and when predicting their state.
|
null | false
| null |
What is your favorite fast-casual restaurant and why?
|
Chipotle is my favorite place to grab a quick bite to eat. The food is fresh and delicious. Plus, they offer a simple menu and have high-quality ingredients. It is also very affordable. The Barbacoa bowl tastes amazing and is my favorite thing to order.
|
|
null | false
| null |
During which event, Korea was separated into 2 nations?
|
World War II
|
|
null | false
| null |
Entropy is a scientific concept, as well as a measurable physical property, that is most commonly associated with a state of disorder, randomness, or uncertainty. The term and the concept are used in diverse fields, from classical thermodynamics, where it was first recognized, to the microscopic description of nature in statistical physics, and to the principles of information theory. It has found far-ranging applications in chemistry and physics, in biological systems and their relation to life, in cosmology, economics, sociology, weather science, climate change, and information systems including the transmission of information in telecommunication.
|
what is entropy
|
Entropy is a scientific concept, as well as a measurable physical property, that is most commonly associated with a state of disorder, randomness, or uncertainty. The term and the concept are used in diverse fields, from classical thermodynamics, where it was first recognized, to the microscopic description of nature in statistical physics, and to the principles of information theory. It has found far-ranging applications in chemistry and physics, in biological systems and their relation to life, in cosmology, economics, sociology, weather science, climate change, and information systems including the transmission of information in telecommunication.
The thermodynamic concept was referred to by Scottish scientist and engineer William Rankine in 1850 with the names thermodynamic function and heat-potential. In 1865, German physicist Rudolf Clausius, one of the leading founders of the field of thermodynamics, defined it as the quotient of an infinitesimal amount of heat to the instantaneous temperature. He initially described it as transformation-content, in German Verwandlungsinhalt, and later coined the term entropy from a Greek word for transformation. Referring to microscopic constitution and structure, in 1862, Clausius interpreted the concept as meaning disgregation.
A consequence of entropy is that certain processes are irreversible or impossible, aside from the requirement of not violating the conservation of energy, the latter being expressed in the first law of thermodynamics. Entropy is central to the second law of thermodynamics, which states that the entropy of isolated systems left to spontaneous evolution cannot decrease with time, as they always arrive at a state of thermodynamic equilibrium, where the entropy is highest.
Austrian physicist Ludwig Boltzmann explained entropy as the measure of the number of possible microscopic arrangements or states of individual atoms and molecules of a system that comply with the macroscopic condition of the system. He thereby introduced the concept of statistical disorder and probability distributions into a new field of thermodynamics, called statistical mechanics, and found the link between the microscopic interactions, which fluctuate about an average configuration, to the macroscopically observable behavior, in form of a simple logarithmic law, with a proportionality constant, the Boltzmann constant, that has become one of the defining universal constants for the modern International System of Units (SI).
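The "simple logarithmic law" with the Boltzmann constant as proportionality factor is conventionally written as the Boltzmann entropy formula:

```latex
S = k_B \ln W
```

where $W$ is the number of microscopic arrangements (microstates) consistent with the macroscopic condition of the system and $k_B$ is the Boltzmann constant.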
In 1948, Bell Labs scientist Claude Shannon developed similar statistical concepts of measuring microscopic uncertainty and multiplicity to the problem of random losses of information in telecommunication signals. Upon John von Neumann's suggestion, Shannon named this entity of missing information in analogous manner to its use in statistical mechanics as entropy, and gave birth to the field of information theory. This description has been identified as a universal definition of the concept of entropy.
|
null | false
| 254
|
Every day textual information is being added/updated on Wikipedia, as well as other social media platforms like Facebook, Twitter, etc. These platforms receive a huge amount of unverified textual data from all its users such as News Channels, Bloggers, Journalists, Field-Experts which ought to be verified before other users start consuming it. This information boom has increased the demand of information verification also known as Fact Checking. Apart from the encyclopedia and other platforms, domains like scientific publications and e-commerce also require information verification for reliability purposes. Generally, Wikipedia authors, bloggers, journalists and scientists provide references to support their claims. Providing referenced text against the claims makes the fact checking task a little easier as the verification system no longer needs to search for the relevant documents.
Wikipedia manages to verify all this new information with a number of human reviewers. Manual review processes introduce delays in publishing and do not scale well. To address this issue, researchers have launched relevant challenges, such as the Fake News Challenge (BIBREF0) and the Fact Extraction and VERification (FEVER) challenge (BIBREF1), along with the corresponding datasets. Moreover, Thorne and Vlachos (BIBREF2) released a survey on the current models for automated fact-checking. FEVER is the largest dataset and contains around 185k claims from a corpus of 5.4M Wikipedia articles. The claims are labeled as “SUPPORTS”, “REFUTES”, or “NOT ENOUGH INFO”, based on the evidence set.
In this paper, we propose an unsupervised question-answering based approach for solving the fact-checking problem. The approach is inspired by the memory-based reading comprehension that humans perform at an early age: just as children first read and learn the syllabus content so that they can answer questions in an exam, our model learns a language model and linguistic features, in an unsupervised fashion, from the provided Wikipedia pages.
To transform the FEVER dataset into the above-mentioned task, we first generate the questions from the claims. In literature, there are majorly two types of Question Generation systems: Rule-based and Neural Question Generation (NQG) model based. Ali et al. (BIBREF3) proposed a rule-based pipeline to automate the question generation using POS (Part-of-speech) tagging and Named Entity Recognition (NER) tagging from the sentences. Recently, many NQG models have been introduced to generate questions in natural language. Serban et al. (BIBREF4) achieved better performance for question generation utilizing (passage, question, answer) triplets as training data and an encoder-decoder based architecture as their learning model.
Du et al. (BIBREF5) introduced a sequence-to-sequence model with an attention mechanism, outperforming rule-based question generation systems. Although the models proposed in (BIBREF6; BIBREF7) are effective, they require a passage to generate plausible questions, which is not readily available in the FEVER dataset. To resolve these issues and to keep the system simple but effective, we chose to generate questions similar to a Cloze task, i.e., a masked language modeling task. Such a task makes the problem more tractable, as the masked entities are already known (i.e., named entities), and tight, as there is only one correct answer for a given question. Later, when the answers are generated, the question generation process makes it very easy to identify the correct answers.
We use BERT's (Bidirectional Encoder Representations from Transformers) (BIBREF8) masked language model, which is pre-trained on Wikipedia articles, for predicting the masked entities. Currently, neither the claim verification process nor the question generation process mandates explicit reasoning. For the same reason, it is difficult to assign the “REFUTES” or “NOT ENOUGH INFO” labels. To resolve this issue, we classify the unsupported claims as “MANUAL_REVIEW” instead of labeling them as “NOT ENOUGH INFO” or “REFUTES”.
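The cloze-style generation step can be sketched as follows. This is a hedged illustration, not the authors' pipeline: a real system would use an NER tagger to pick entities, whereas here capitalized mid-sentence tokens crudely stand in as entity candidates, and `[MASK]` follows BERT's masked-LM input convention.

```python
import re

# Toy cloze-question generation: mask each named-entity candidate in a claim
# so a masked LM (e.g. BERT) can be asked to fill it back in. The answer to
# each generated question is known by construction.

def make_cloze(claim):
    tokens = claim.split()
    questions = []
    for i, tok in enumerate(tokens[1:], start=1):  # skip sentence-initial word
        if re.match(r"^[A-Z][a-z]+", tok):         # crude entity heuristic
            masked = tokens[:i] + ["[MASK]"] + tokens[i + 1:]
            answer = tok.strip(".,")
            questions.append((" ".join(masked), answer))
    return questions

qs = make_cloze("The Eiffel Tower is located in Paris.")
for q, a in qs:
    print(q, "->", a)
```

Because the masked entity is recorded alongside each question, checking whether the language model's prediction supports the claim reduces to a string comparison.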
In the literature, the shared task has been tackled using pipeline-based supervised models (BIBREF9; BIBREF10; BIBREF11). To our knowledge, only BIBREF10 has provided the confusion matrix for each of the labels for their supervised system. For the same reason, we are only providing the comparison of the label accuracy on the “SUPPORTS” label in the results section.
In this paper, we propose an unsupervised question-answering based approach for solving the fact-checking problem.
|
What is their approach based on?
|
Unsupervised question-answering.
|
null | false
| null |
What are some ways to get promoted at your job?
|
To get promoted at work you have to first be succeeding at your current role, make sure you are on time, have a good attitude, and are completing the tasks assigned to you. To get promoted you should take on more responsibilities- start a side project, help your teammates with challenges, spearhead a new initiative, etc.
|
|
null | false
| null |
Name some of the countries who have won the cricket world cup
|
Cricket is a sport that originated in England. The countries that have won the cricket world cup so far are India, the West Indies, Australia, Sri Lanka, Pakistan, and England.
|
|
null | false
| null |
Nirvana was an American rock band formed in Aberdeen, Washington, in 1987. Founded by lead singer and guitarist Kurt Cobain and bassist Krist Novoselic, the band went through a succession of drummers, most notably Chad Channing, and then recruited Dave Grohl in 1990. Nirvana's success popularized alternative rock, and they were often referenced as the figurehead band of Generation X. Their music maintains a popular following and continues to influence modern rock culture.
In the late 1980s, Nirvana established itself as part of the Seattle grunge scene, releasing its first album, Bleach, for the independent record label Sub Pop in 1989. They developed a sound that relied on dynamic contrasts, often between quiet verses and loud, heavy choruses. After signing to major label DGC Records in 1991, Nirvana found unexpected mainstream success with "Smells Like Teen Spirit", the first single from their landmark second album Nevermind (1991). A cultural phenomenon of the 1990s, Nevermind was certified Diamond by the RIAA and is credited for ending the dominance of hair metal.
Characterized by their punk aesthetic, Nirvana's fusion of pop melodies with noise, combined with their themes of abjection and social alienation, brought them global popularity. Following extensive tours and the 1992 compilation album Incesticide and EP Hormoaning, the band released their highly anticipated third studio album, In Utero (1993). The album topped both the US and UK album charts, and was acclaimed by critics. Nirvana disbanded following Cobain's suicide in April 1994. Further releases have been overseen by Novoselic, Grohl, and Cobain's widow, Courtney Love. The live album MTV Unplugged in New York (1994) won Best Alternative Music Performance at the 1996 Grammy Awards.
Nirvana is one of the best-selling bands of all time, having sold more than 75 million records worldwide. During their three years as a mainstream act, Nirvana received an American Music Award, Brit Award and Grammy Award, as well as seven MTV Video Music Awards and two NME Awards. They achieved five number-one hits on the Billboard Alternative Songs chart and four number-one albums on the Billboard 200. In 2004, Rolling Stone named Nirvana among the 100 greatest artists of all time. They were inducted into the Rock and Roll Hall of Fame in their first year of eligibility in 2014.
|
Extract a list of album names from this paragraph on the history of the rock band Nirvana, as a comma separated list
|
Bleach, Nevermind, Incesticide, In Utero, MTV Unplugged in New York
|
null | false
| null |
A column-oriented DBMS or columnar DBMS is a database management system (DBMS) that stores data tables by column rather than by row. Benefits include more efficient access to data when only querying a subset of columns (by eliminating the need to read columns that are not relevant), and more options for data compression. However, they are typically less efficient for inserting new data.
Practical use of a column store versus a row store differs little in the relational DBMS world. Both columnar and row databases can use traditional database query languages like SQL to load data and perform queries. Both row and columnar databases can become the backbone of a system serving data for common extract, transform, load (ETL) and data visualization tools.
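The layout difference can be illustrated with a toy example (not modeled on any particular DBMS): the same table stored row-wise and column-wise, where a query over a single column only needs to touch that column's contiguous values in the columnar layout.

```python
# Toy illustration of row vs. column layout for the same table. Scanning one
# column in the columnar layout never reads the other columns' data.

rows = [
    {"id": 1, "name": "alice", "age": 34},
    {"id": 2, "name": "bob",   "age": 29},
    {"id": 3, "name": "carol", "age": 41},
]

# Column-oriented layout: one contiguous list per column.
columns = {
    "id":   [r["id"] for r in rows],
    "name": [r["name"] for r in rows],
    "age":  [r["age"] for r in rows],
}

# Query: average age. Only the "age" list is read; "id" and "name" are skipped.
avg_age = sum(columns["age"]) / len(columns["age"])
print(avg_age)
```

The same contiguity is also why columnar data compresses well: each list holds values of a single type, often with low cardinality.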
|
Given this reference text, what are column stores used for?
|
A column store database stores data like any other database management system, but the values of each column are stored sequentially, which improves the performance of scans that touch only a subset of the individual columns.
|
1707.03904
| false
| null |
Each dataset consists of a collection of records with one QA problem per record. For each record, we include some question text, a context document relevant to the question, a set of candidate solutions, and the correct solution. In this section, we describe how each of these fields was generated for each Quasar variant.
The context document for each record consists of a list of ranked and scored pseudodocuments relevant to the question.
Several baselines rely on the retrieved context to extract the answer to a question. For these, we refer to the fraction of instances for which the correct answer is present in the context as Search Accuracy. The performance of the baseline among these instances is referred to as the Reading Accuracy, and the overall performance (which is a product of the two) is referred to as the Overall Accuracy. In Figure 4 we compare how these three vary as the number of context documents is varied. Naturally, the search accuracy increases as the context size increases, however at the same time reading performance decreases since the task of extracting the answer becomes harder for longer documents. Hence, simply retrieving more documents is not sufficient – finding the few most relevant ones will allow the reader to work best.
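The decomposition described above obeys a simple product relation, spelled out below as a sanity check (the numbers are invented for illustration):

```python
# Overall accuracy is the product of search accuracy (the answer is present
# in the retrieved context) and reading accuracy (the answer is correctly
# extracted when it is present). Illustrative numbers only.

search_accuracy = 0.80    # fraction of instances whose context contains the answer
reading_accuracy = 0.60   # baseline accuracy among those instances
overall_accuracy = search_accuracy * reading_accuracy
print(overall_accuracy)
```

This makes the trade-off explicit: adding context documents raises `search_accuracy` but tends to lower `reading_accuracy`, so overall accuracy can fall even as retrieval improves.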
Each dataset consists of a collection of records with one QA problem per record. For each record, we include some question text, a context document relevant to the question, a set of candidate solutions, and the correct solution.
The context document for each record consists of a list of ranked and scored pseudodocuments relevant to the question.
Several baselines rely on the retrieved context to extract the answer to a question. For these, we refer to the fraction of instances for which the correct answer is present in the context as Search Accuracy.
Naturally, the search accuracy increases as the context size increases, however at the same time reading performance decreases since the task of extracting the answer becomes harder for longer documents.
|
Which retrieval system was used for baselines?
|
The dataset comes with a ranked set of relevant documents. Hence the baselines do not use a retrieval system.
|
null | false
| 16
|
For the neural projector, we employ rectified networks as the coupling function INLINEFORM0, following BIBREF16. We use a rectified network with an input layer, one hidden layer, and linear output units; the number of hidden units is set equal to the number of input units. The number of coupling layers is varied as 4, 8, and 16 for both tasks. We optimize the marginal data likelihood directly using Adam BIBREF22. For both tasks in the fully unsupervised setting, we do not tune the hyper-parameters using supervised data.
We use a rectified network with an input layer, one hidden layer, and linear output units; the number of hidden units is set equal to the number of input units. The number of coupling layers is varied as 4, 8, and 16 for both tasks.
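One coupling layer consistent with this description can be sketched as follows. This is an assumption-laden illustration: we assume a NICE-style additive coupling (half the input passes through unchanged and parameterizes a shift of the other half via a one-hidden-layer ReLU network); the split, widths, and random initialization are for illustration only.

```python
import numpy as np

# Sketch of one additive coupling layer with a rectified (ReLU) coupling
# network: input layer, one hidden layer, linear outputs. The transform is
# exactly invertible, which is what makes stacks of 4/8/16 such layers a flow.

rng = np.random.default_rng(0)
d = 4                                # input dimensionality; each half has d // 2 units
W1 = rng.normal(size=(d // 2, d))    # first-half input -> hidden (width d)
W2 = rng.normal(size=(d, d // 2))    # hidden -> linear shift for second half

def coupling_forward(x):
    x1, x2 = x[: d // 2], x[d // 2 :]
    shift = np.maximum(x1 @ W1, 0.0) @ W2   # rectified coupling network m(x1)
    return np.concatenate([x1, x2 + shift])

def coupling_inverse(y):
    y1, y2 = y[: d // 2], y[d // 2 :]
    shift = np.maximum(y1 @ W1, 0.0) @ W2   # same network, subtracted
    return np.concatenate([y1, y2 - shift])

x = rng.normal(size=d)
assert np.allclose(coupling_inverse(coupling_forward(x)), x)
```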
|
What are the numbers of coupling layers?
|
4, 8, 16.
|
null | false
| null |
What is the nickname for West Ham United?
|
West Ham United, a UK Premier League soccer team based in East London, are known as both the Irons and the Hammers. Both names refer to the origins of the club as the team of the Thames Ironworks. The club's badge includes a pair of crossed hammers. Whilst many know West Ham as the Hammers, most fans tend to refer to them as the Irons, particularly in the chant "Come on You Irons".
|
|
null | false
| null |
Tell me whether the following politicians are associated with California or Florida: Kamala Harris, Nancy Pelosi, Ron DeSantis, Marco Rubio, Charlie Crist, Dianne Feinstein, Gavin Newsom
|
Kamala Harris: California
Nancy Pelosi: California
Ron DeSantis: Florida
Marco Rubio: Florida
Charlie Crist: Florida
Dianne Feinstein: California
Gavin Newsom: California
|
|
null | false
| null |
Pukeiti Pukeiti (c. 1948 - 18 October 2012) was a Cook Islands politician and Member of the Cook Islands Parliament. He was a member of the Cook Islands Party. He represented the electorate of Tamarua.
Pukeiti was elected to Parliament in the 2009 Tamarua by-election following the death of MP Mii Parima. He was re-elected at the 2010 election.
|
How old was Pukeiti when he died?
|
64 years old. He was born in 1948 and died in 2012, so 2012 - 1948 = 64.
|
null | false
| null |
List the ten episode titles from the fourth season of "Game of Thrones" in broadcast order.
|
Two Swords, The Lion and the Rose, Breaker of Chains, Oathkeeper, First of His Name, The Laws of Gods and Men, Mockingbird, The Mountain and the Viper, The Watchers on the Wall, The Children
|
|
null | false
| 187
|
Datasets: We use two recent benchmark datasets WN18RR BIBREF17 and FB15k-237 BIBREF18 . These two datasets are created to avoid reversible relation problems, thus the prediction task becomes more realistic and hence more challenging BIBREF18 . Table TABREF7 presents the statistics of WN18RR and FB15k-237.
Evaluation protocol: Following BIBREF3, for each valid test triple INLINEFORM0, we replace either INLINEFORM1 or INLINEFORM2 by each of all other entities to create a set of corrupted triples. We use the “Filtered” setting protocol BIBREF3, i.e., not taking any corrupted triples that appear in the KG into account. We rank the valid test triple and corrupted triples in descending order of their scores. We employ the evaluation metrics: mean rank (MR), mean reciprocal rank (MRR), and Hits@10 (i.e., the proportion of the valid test triples ranking in the top 10 predictions). Lower MR, higher MRR, or higher Hits@10 indicates better performance. Final scores on the test set are reported for the model obtaining the highest Hits@10 on the validation set.
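Given the filtered rank of each valid test triple, the three reported metrics are simple aggregates, sketched below (the ranks are invented for illustration; this is not the authors' evaluation code):

```python
# Compute mean rank (MR), mean reciprocal rank (MRR) and Hits@10 from the
# filtered ranks of valid test triples. Illustrative ranks only.

ranks = [1, 3, 12, 2, 50, 7]

mr = sum(ranks) / len(ranks)
mrr = sum(1.0 / r for r in ranks) / len(ranks)
hits_at_10 = sum(1 for r in ranks if r <= 10) / len(ranks)

print(mr, mrr, hits_at_10)
```

Note how MR is dominated by the worst-ranked triples (the 50 here), while MRR is dominated by the best-ranked ones; this is why the two can disagree about which model is better.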
Training protocol: We use the common Bernoulli strategy BIBREF20 , BIBREF21 when sampling invalid triples. For WN18RR, BIBREF22 found strong evidence to support the necessity of a WordNet-related semantic setup, in which they averaged pre-trained word embeddings for word surface forms within the WordNet to create synset embeddings, and then used these synset embeddings to initialize entity embeddings for training their TransE association model. We follow this evidence in using the pre-trained 100-dimensional Glove word embeddings BIBREF23 to train a TransE model on WN18RR.
We employ the TransE and ConvKB implementations provided by BIBREF24 and BIBREF15 . For ConvKB, we use a new process of training up to 100 epochs and monitor the Hits@10 score after every 10 training epochs to choose optimal hyper-parameters with the Adam initial learning rate in INLINEFORM0 and the number of filters INLINEFORM1 in INLINEFORM2 . We obtain the highest Hits@10 scores on the validation set when using N= 400 and the initial learning rate INLINEFORM3 on WN18RR; and N= 100 and the initial learning rate INLINEFORM4 on FB15k-237.
Like in ConvKB, we use the same pre-trained entity and relation embeddings produced by TransE to initialize entity and relation embeddings in our CapsE for both WN18RR and FB15k-237 ( INLINEFORM0 ). We set the batch size to 128, the number of neurons within the capsule in the second capsule layer to 10 ( INLINEFORM1 ), and the number of iterations in the routing algorithm INLINEFORM2 in INLINEFORM3 . We run CapsE up to 50 epochs and monitor the Hits@10 score after every 10 training epochs to choose optimal hyper-parameters. The highest Hits@10 scores for our CapsE on the validation set are obtained when using INLINEFORM4 , INLINEFORM5 and the initial learning rate at INLINEFORM6 on WN18RR; and INLINEFORM7 , INLINEFORM8 and the initial learning rate at INLINEFORM9 on FB15k-237.
Dataset: We use the SEARCH17 dataset BIBREF12 of query logs of 106 users collected by a large-scale web search engine. A log entry consists of a user identifier, a query, the top-10 ranked documents returned by the search engine, and the clicked documents along with the user's dwell time. BIBREF12 constructed short-term (session-based) user profiles and used the profiles to personalize the returned results. They then employed the SAT criteria BIBREF26 to identify whether a returned document is relevant from the query logs as either a clicked document with a dwell time of at least 30 seconds or the last clicked document in a search session (i.e., a SAT click). After that, they assigned a INLINEFORM0 label to a returned document if it is a SAT click and also assigned INLINEFORM1 labels to the remaining top-10 documents. The rank position of the INLINEFORM2 labeled documents is used as the ground truth to evaluate the search performance before and after re-ranking.
The dataset was uniformly split into the training, validation and test sets. This split is for the purpose of using historical data in the training set to predict new data in the test set BIBREF12 . The training, validation and test sets consist of 5,658, 1,184 and 1,210 relevant (i.e., valid) triples; and 40,239, 7,882 and 8,540 irrelevant (i.e., invalid) triples, respectively.
Evaluation protocol: Our CapsE is used to re-rank the original list of documents returned by a search engine as follows: (i) We train our model and employ the trained model to calculate the score for each INLINEFORM0 triple. (ii) We then sort the scores in the descending order to obtain a new ranked list. To evaluate the performance of our proposed model, we use two standard evaluation metrics: mean reciprocal rank (MRR) and Hits@1. For each metric, the higher value indicates better ranking performance.
We compare CapsE with the following baselines using the same experimental setup: (1) SE: The original rank is returned by the search engine. (2) CI BIBREF27 : This baseline uses a personalized navigation method based on previously clicking returned documents. (3) SP BIBREF9 , BIBREF11 : A search personalization method makes use of the session-based user profiles. (4) Following BIBREF12 , we use TransE as a strong baseline model for the search personalization task. Previous work shows that the well-known embedding model TransE, despite its simplicity, obtains very competitive results for the knowledge graph completion BIBREF28 , BIBREF29 , BIBREF14 , BIBREF30 , BIBREF15 . (5) The CNN-based model ConvKB is the most closely related model to our CapsE.
Embedding initialization: We follow BIBREF12 to initialize user profile, query and document embeddings for the baselines TransE and ConvKB, and our CapsE.
We train a LDA topic model BIBREF31 with 200 topics only on the relevant documents (i.e., SAT clicks) extracted from the query logs. We then use the trained LDA model to infer the probability distribution over topics for every returned document. We use the topic proportion vector of each document as its document embedding (i.e. INLINEFORM0 ). In particular, the INLINEFORM1 element ( INLINEFORM2 ) of the vector embedding for document INLINEFORM3 is: INLINEFORM4 where INLINEFORM5 is the probability of the topic INLINEFORM6 given the document INLINEFORM7 .
We also represent each query by a probability distribution vector over topics. Let INLINEFORM0 be the set of top INLINEFORM1 ranked documents returned for a query INLINEFORM2 (here, INLINEFORM3 ). The INLINEFORM4 element of the vector embedding for query INLINEFORM5 is defined as in BIBREF12 : INLINEFORM6 , where INLINEFORM7 is the exponential decay function of INLINEFORM8 which is the rank of INLINEFORM9 in INLINEFORM10 . And INLINEFORM11 is the decay hyper-parameter ( INLINEFORM12 ). Following BIBREF12 , we use INLINEFORM13 . Note that if we learn query and document embeddings during training, the models will overfit to the data and will not work for new queries and documents. Thus, after the initialization process, we fix (i.e., not updating) query and document embeddings during training for TransE, ConvKB and CapsE.
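Because the INLINEFORM placeholders hide both the exact formula and the decay value, the query-embedding construction can only be sketched under assumptions. Below, a query's topic vector is a normalized, exponentially decayed mixture of its top-ranked documents' LDA topic vectors, with a hypothetical decay factor of 0.8:

```python
def query_embedding(doc_topic_vectors, delta=0.8):
    """Sketch: build a query's topic vector from the LDA topic vectors of
    its top-ranked documents, weighting rank r by delta**r (exponential
    decay) and renormalizing. delta=0.8 and the renormalization are
    assumptions; the paper's exact formula is elided in the source text."""
    dim = len(doc_topic_vectors[0])
    q = [0.0] * dim
    total = 0.0
    for rank, d in enumerate(doc_topic_vectors, start=1):
        w = delta ** rank  # more weight for higher-ranked documents
        total += w
        for i in range(dim):
            q[i] += w * d[i]
    return [x / total for x in q]
```

Because each document vector is a probability distribution over topics and the weights are renormalized, the resulting query vector is itself a distribution over topics.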
In addition, as mentioned by BIBREF9 , a more recently clicked document expresses more about the user's current search interest. Hence, we make use of the user clicked documents in the training set with the temporal weighting scheme proposed by BIBREF11 to initialize user profile embeddings for the three embedding models.
Hyper-parameter tuning: For our CapsE model, we set batch size to 128, and also the number of neurons within the capsule in the second capsule layer to 10 ( INLINEFORM0 ). The number of iterations in the routing algorithm is set to 1 ( INLINEFORM1 ). To train the model, we use the Adam optimizer with the initial learning rate INLINEFORM2 INLINEFORM3 INLINEFORM4 INLINEFORM5 INLINEFORM6 INLINEFORM7 . We also use ReLU as the activation function INLINEFORM8 . We select the number of filters INLINEFORM9 . We run the model up to 200 epochs and perform a grid search to choose optimal hyper-parameters on the validation set. We monitor the MRR score after each training epoch and obtain the highest MRR score on the validation set when using INLINEFORM10 and the initial learning rate at INLINEFORM11 .
We employ the TransE and ConvKB implementations provided by BIBREF24 and BIBREF15 and then follow their training protocols to tune hyper-parameters for TransE and ConvKB, respectively. We also monitor the MRR score after each training epoch and attain the highest MRR score on the validation set when using margin = 5, INLINEFORM0 -norm and SGD learning rate at INLINEFORM1 for TransE; and INLINEFORM2 and the Adam initial learning rate at INLINEFORM3 for ConvKB.
We use the SEARCH17 dataset (Vu et al., 2017) of query logs of 106 users collected by a large-scale web search engine.
|
What is the SEARCH17 dataset?
|
It is a dataset of query logs from 106 users, collected by a large-scale web search engine.
|
null | false
| null |
The ANT catalog (or TAO catalog) is a classified product catalog by the U.S. National Security Agency (NSA) of which the version written in 2008–2009 was published by German news magazine Der Spiegel in December 2013. Forty-nine catalog pages with pictures, diagrams and descriptions of espionage devices and spying software were published. The items are available to the Tailored Access Operations unit and are mostly targeted at products from US companies such as Apple, Cisco and Dell. The source is believed to be someone different than Edward Snowden, who is largely responsible for the global surveillance disclosures since 2013. Companies whose products could be compromised have denied any collaboration with the NSA in developing these capabilities. In 2014, a project was started to implement the capabilities from the ANT catalog as open-source hardware and software.
Background
The Tailored Access Operations unit has existed since the late 90s. Its mission is to collect intelligence on foreign targets of the United States by hacking into computers and telecommunication networks.
In 2012, Edward Snowden organized a CryptoParty together with Runa Sandvik, a former colleague of Jacob Appelbaum at The Tor Project. In June 2013, Snowden took internal NSA documents which he shared with Glenn Greenwald and Laura Poitras, resulting in the global surveillance disclosures. It has been speculated for years before that capabilities like those in the ANT catalog existed.
Publication
Jacob Appelbaum co-authored the English publication in Der Spiegel with Christian Stöcker and Judith Horchert, which was publicized on 29 December 2013. The related English publication on the same day about the TAO by Der Spiegel was also authored by the same people, along with Laura Poitras, Marcel Rosenbach, Jörg Schindler and Holger Stark. On December 30, Appelbaum gave a lecture about "the militarization of the Internet" at the 30th Chaos Communication Congress in Hamburg, Germany. At the end of his talk, he encouraged NSA employees to leak more documents.
Apple denied the allegations that it collaborated on the development of DROPOUTJEEP in a statement to journalist Arik Hesseldahl from All Things Digital (part of the Wall Street Journal's Digital Network). The Verge questioned how the program developed in later years, since the document was composed in the early period of the iPhone and smartphones in general. Dell denied collaborating with any government in general, including the US government. John Stewart, senior vice president and chief security officer of Cisco stated that they were "deeply concerned and will continue to pursue all avenues to determine if we need to address any new issues." Juniper stated that they were working actively to address any possible exploit paths. Huawei stated they would take appropriate audits to determine if any compromise had taken place and would communicate if that had taken place. NSA declined to comment on the publication by Der Spiegel.
Source
The source who leaked the ANT catalog to the press is unknown as of 2023.
Author James Bamford, who is specialized in the United States intelligence agencies, noted in a commentary article published by Reuters that Appelbaum has not identified the source who leaked the ANT catalog to him, which led people to mistakenly assume it was Edward Snowden. Bamford got unrestricted access to the documents cache from Edward Snowden and could not find any references to the ANT catalog using automated search tools, thereby concluding that the documents were not leaked by him. Security expert Bruce Schneier has stated on his blog that he also believes the ANT catalog did not come from Snowden, but from a second leaker. Officials at the NSA did not believe that the web crawler used by Snowden touched the ANT catalog and started looking for other people who could have leaked the catalog.
Content
The published catalog pages were written between 2008 and 2009. The price of the items ranged from free up to $250,000.
Capabilities in the ANT catalog
Page Code name Description Unit price in US$
NSA CANDYGRAM.jpg CANDYGRAM Tripwire device that emulates a GSM cellphone tower. 40,000
NSA COTTONMOUTH-I.jpg COTTONMOUTH-I Family of modified USB and Ethernet connectors that can be used to install Trojan horse software and work as wireless bridges, providing covert remote access to the target machine. COTTONMOUTH-I is a USB plug that uses TRINITY as digital core and HOWLERMONKEY as RF transceiver. 20,300
NSA COTTONMOUTH-II.jpg COTTONMOUTH-II Can be deployed in a USB socket (rather than a plug), but requires further integration in the target machine to turn into a deployed system. 4,000
NSA COTTONMOUTH-III.jpg COTTONMOUTH-III Stacked Ethernet and USB plug 24,960
NSA CROSSBEAM.jpg CROSSBEAM GSM communications module capable of collecting and compressing voice data 4,000
NSA CTX4000.jpg CTX4000 Continuous wave radar device that can "illuminate" a target system for recovery of "off net" information. N/A
NSA CYCLONE Hx9.jpg CYCLONE-HX9 GSM Base Station Router as a Network-In-a-Box 70,000
NSA DEITYBOUNCE.jpg DEITYBOUNCE Technology that installs a backdoor software implant on Dell PowerEdge servers via the motherboard BIOS and RAID controller(s). 0
NSA DROPOUTJEEP.jpg DROPOUTJEEP "A software implant for the Apple iPhone that utilizes modular mission applications to provide specific SIGINT functionality. This functionality includes the ability to remotely push/pull files from the device. SMS retrieval, contact list retrieval, voicemail, geolocation, hot mic, camera capture, cell tower location, etc. Command, control and data exfiltration can occur over SMS messaging or a GPRS data connection. All communications with the implant will be covert and encrypted." 0
NSA EBSR.jpg EBSR Tri-band active GSM base station with internal 802.11/GPS/handset capability 40,000
NSA ENTOURAGE.jpg ENTOURAGE Direction finding application for GSM, UMTS, CDMA2000 and FRS signals 70,000
NSA FEEDTROUGH.jpg FEEDTROUGH Software that can penetrate Juniper Networks firewalls allowing other NSA-deployed software to be installed on mainframe computers. N/A
NSA FIREWALK.jpg FIREWALK Device that looks identical to a standard RJ45 socket that allows data to be injected, or monitored and transmitted via radio technology, using the HOWLERMONKEY RF transceiver. It can for instance create a VPN to the target computer. 10,740
NSA GENESIS.jpg GENESIS GSM handset with added software-defined radio features to record the radio frequency spectrum 15,000
NSA GODSURGE.jpg GODSURGE Software implant for a JTAG bus device named FLUXBABBITT which is added to Dell PowerEdge servers during interdiction. GODSURGE installs an implant upon system boot-up using the FLUXBABBITT JTAG interface to the Xeon series CPU. 500
NSA GINSU.jpg GINSU Technology that uses a PCI bus device in a computer, and can reinstall itself upon system boot-up. 0
NSA GOPHERSET.jpg GOPHERSET GSM software that uses a phone's SIM card's API (SIM Toolkit or STK) to control the phone through remotely sent commands. 0
NSA GOURMETTROUGH.jpg GOURMETTROUGH User-configurable persistence implant for certain Juniper Networks firewalls. 0
NSA HALLUXWATER.jpg HALLUXWATER Back door exploit for Huawei Eudemon firewalls. N/A
NSA HEADWATER.jpg HEADWATER Persistent backdoor technology that can install spyware using a quantum insert capable of infecting spyware at a packet level on Huawei routers. N/A
NSA HOWLERMONKEY.jpg HOWLERMONKEY An RF transceiver that makes it possible (in conjunction with digital processors and various implanting methods) to extract data from systems or allow them to be controlled remotely. 750
NSA IRATEMONK.jpg IRATEMONK Technology that can infiltrate the firmware of hard drives manufactured by Maxtor, Samsung, Seagate, and Western Digital. 0
NSA IRONCHEF.jpg IRONCHEF Technology that can "infect" networks by installing itself in a computer I/O BIOS. IRONCHEF includes also "Straitbizarre" and "Unitedrake" which have been linked to the spy software REGIN. 0
NSA JUNIORMINT.jpg JUNIORMINT Implant based on an ARM9 core and an FPGA. N/A
NSA JETPLOW.jpg JETPLOW Firmware that can be implanted to create a permanent backdoor in a Cisco PIX series and ASA firewalls. 0
NSA LOUDAUTO.jpg LOUDAUTO Audio-based RF retro-reflector listening device. 30
NSA MAESTRO-II.jpg MAESTRO-II Multi-chip module approximately the size of a dime that serves as the hardware core of several other products. The module contains a 66 MHz ARM7 processor, 4 MB of flash, 8 MB of RAM, and a FPGA with 500,000 gates. It replaces the previous generation modules which were based on the HC12 microcontroller. 3,000
NSA MONKEYCALENDAR.jpg MONKEYCALENDAR Software that transmits a mobile phone's location by hidden text message. 0
NSA NEBULA.jpg NEBULA Multi-protocol network-in-a-box system. 250,000
NSA NIGHTSTAND.jpg NIGHTSTAND Portable system that installs Microsoft Windows exploits from a distance of up to eight miles over a wireless connection. N/A
NSA NIGHTWATCH.jpg NIGHTWATCH Portable computer used to reconstruct and display video data from VAGRANT signals; used in conjunction with a radar source like the CTX4000 to illuminate the target in order to receive data from it. N/A
NSA PICASSO.jpg PICASSO Software that can collect mobile phone location data, call metadata, access the phone's microphone to eavesdrop on nearby conversations. 2,000
NSA PHOTOANGLO.jpg PHOTOANGLO A joint NSA/GCHQ project to develop a radar system to replace CTX4000. 40,000
NSA RAGEMASTER.jpg RAGEMASTER A concealed device that taps the video signal from a target's computer's VGA signal output so the NSA can see what is on a targeted desktop monitor. It is powered by a remote radar and responds by modulating the VGA red signal (which is also sent out most DVI ports) into the RF signal it re-radiates; this method of transmission is codenamed VAGRANT. RAGEMASTER is usually installed/concealed in the ferrite choke of the target cable. The original documents are dated 2008-07-24. Several receiver/demodulating devices are available, e.g. NIGHTWATCH. 30
NSA SCHOOLMONTANA.jpg SCHOOLMONTANA Software that makes DNT implants persistent on JUNOS-based (FreeBSD-variant) J-series routers/firewalls. N/A
NSA SIERRAMONTANA.jpg SIERRAMONTANA Software that makes DNT implants persistent on JUNOS-based M-series routers/firewalls. N/A
NSA STUCCOMONTANA.jpg STUCCOMONTANA Software that makes DNT implants persistent on JUNOS-based T-series routers/firewalls. N/A
NSA SOMBERKNAVE.jpg SOMBERKNAVE Software that can be implanted on a Windows XP system allowing it to be remotely controlled from NSA headquarters. 50,000
NSA SOUFFLETROUGH.jpg SOUFFLETROUGH BIOS injection software that can compromise Juniper Networks SSG300 and SSG500 series firewalls. 0
NSA SPARROW II.jpg SPARROW II A small computer intended to be used for WLAN collection, including from UAVs. Hardware: IBM Power PC 405GPR processor, 64 MB SDRAM, 16 MB of built-in flash, 4 mini PCI slots, CompactFlash slot, and 802.11 B/G hardware. Running Linux 2.4 and the BLINDDATE software suite. Unit price (2008): $6K. 6,000
NSA SURLYSPAWN.jpg SURLYSPAWN Keystroke monitor technology that can be used on remote computers that are not internet connected. 30
NSA SWAP.jpg SWAP Technology that can reflash the BIOS of multiprocessor systems that run FreeBSD, Linux, Solaris, or Windows. 0
NSA TAWDRYYARD.jpg TAWDRYYARD Radio frequency retroreflector to provide location information. 30
NSA TOTECHASER.jpg TOTECHASER Windows CE implant for extracting call logs, contact lists and other information. N/A
NSA TOTEGHOSTLY.jpg TOTEGHOSTLY Software that can be implanted on a Windows mobile phone allowing full remote control. 0
NSA TRINITY.jpg TRINITY Multi-chip module using a 180 MHz ARM9 processor, 4 MB of flash, 96 MB of SDRAM, and a FPGA with 1 million gates. Smaller than a penny. 6,250
NSA TYPHON HX.jpg TYPHON HX Network-in-a-box for a GSM network with signaling and call control. N/A
NSA WATERWITCH.jpg WATERWITCH A portable "finishing tool" that allows the operator to find the precise location of a nearby mobile phone. N/A
NSA WISTFULTOLL.jpg WISTFULTOLL Plugin for collecting information from targets using Windows Management Instrumentation 0
Follow-up developments
Security expert Matt Suiche noted that the software exploits leaked by the Shadow Brokers could be seen as genuine because it matched with names from the ANT catalog. John Bumgarner has stated to IEEE Spectrum that US government suspicion of Huawei is based on its own ability to add backdoors as shown in the ANT catalog.
NSA Playset
The NSA Playset is an open-source project inspired by the NSA ANT catalog to create more accessible and easy-to-use tools for security researchers. Most of the surveillance tools can be recreated with off-the-shelf or open-source hardware and software. Thus far, the NSA Playset consists of fourteen items, for which the code and instructions can be found online on the project's homepage. After the initial leak, Michael Ossman, the founder of Great Scott Gadgets, gave a shout out to other security researchers to start working on the tools mentioned in the catalog and to recreate them. The name NSA Playset came originally from Dean Pierce, who is also a contributor (TWILIGHTVEGETABLE(GSM)) to the NSA Playset. Anyone is invited to join and contribute their own device. The requisites for an addition to the NSA Playset are a similar or already existing NSA ANT project, ease of use and a silly name (based on the original tool's name if possible). The silly name requisite is a rule that Michael Ossman himself came up with and an example is given on the project's website: "For example, if your project is similar to FOXACID, maybe you could call it COYOTEMETH." The ease of use part stems also from the NSA Playset's motto: "If a 10 year old can't do it, it doesn't count!"
Name Description
TWILIGHTVEGETABLE a boot image for GSM communication monitoring.
LEVITICUS a hand held GSM frequency analyzer disguised as a Motorola phone; named after GENESIS.
DRIZZLECHAIR a hard drive with all the needed tools to crack A5/1 including the rainbow tables.
PORCUPINEMASQUERADE a passive Wi-Fi reconnaissance drone.
KEYSWEEPER a keylogger in form of a USB wall charger, that wirelessly and passively sniffs, decrypts, logs and reports back (over GSM).
SLOTSCREAMER a PCI hardware implant, which can access memory and IO.
ADAPTERNOODLE a USB exploitation device.
CHUKWAGON uses a pin on a computer's VGA port to attack via the I²C bus accessing the computer's operating system.
TURNIPSCHOOL a hardware implant concealed in a USB cable which provides short range radio frequency communication capability to software running on the host computer.
BLINKERCOUGH a hardware implant that is embedded in a VGA cable which allows data exfiltration.
SAVIORBURST a hardware implant exploiting the JTAG interface for software application persistence; named after GODSURGE. FLUXBABBIT is replaced by SOLDERPEEK.
CACTUSTUTU Portable system that enables wireless installation of Microsoft Windows exploits; covers NIGHTSTAND.
TINYALAMO software that targets BLE (Bluetooth Low Energy) and allows keystroke surveillance (keylogger) and injection.
CONGAFLOCK Radio frequency retroreflector intended for experimentation. Intended use would be the implantation into a cable and data exfiltration based on radio reflectivity of the device.(FLAMENCOFLOCK (PS/2), TANGOFLOCK (USB), SALSAFLOCK (VGA) are retroreflectors with specific interfaces to test data exfiltration.)
See also
Cyberwarfare in the United States
Equation Group
MiniPanzer and MegaPanzer
Stuxnet
WARRIOR PRIDE
Explanatory notes
Whether ANT stands for Advanced Network Technology or Access Network Technology is not known.
The article from Der Spiegel notes that it is a "50-page document" and that "nearly 50 pages" are published. The gallery contains 49 pages. Der Spiegel also noted that the document is likely far from complete.
If the price is listed in bulk, a calculation is made to get the unit price
For two months
Including installation costs
When ordering 25 units, the price per item is US$1000
Up to 4,000
Varies from platform to platform
Data Network Technologies, a division of the Tailored Access Operations
100 units for 625,000
|
Given this article about the NSA's ANT Catalog, which hacking tools can be used to monitor a target's keystrokes?
|
SURLYSPAWN, KEYSWEEPER, and TINYALAMO are monitoring technologies that can be used to monitor a target's keystrokes.
SURLYSPAWN is a keystroke monitor technology that can be used on remote computers that are not internet connected.
KEYSWEEPER is a keylogger in form of a USB wall charger, that wirelessly and passively sniffs, decrypts, logs and reports back (over GSM).
TINYALAMO is software that targets BLE (Bluetooth Low Energy) and allows keystroke surveillance (keylogger) and injection.
|
1909.08103
| false
| null |
We conducted our experiments on the CSJ BIBREF25, which is one of the most widely used evaluation sets for Japanese speech recognition. The CSJ consists of more than 600 hrs of Japanese recordings.
While most of the content is lecture recordings by a single speaker, CSJ also contains 11.5 hrs of 54 dialogue recordings (average 12.8 min per recording) with two speakers, which were the main target of ASR and speaker diarization in this study. During the dialogue recordings, two speakers sat in two adjacent soundproof chambers divided by a glass window. They could talk with each other over a voice connection through a headset for each speaker. Therefore, speech was recorded separately for each speaker, and we generated mixed monaural recordings by mixing the corresponding speech of the two speakers. When mixing two recordings, we did not apply any normalization of speech volume. Due to this recording procedure, we were able to use non-overlapped speech to evaluate the oracle WERs.
We conducted our experiments on the CSJ BIBREF25, which is one of the most widely used evaluation sets for Japanese speech recognition. The CSJ consists of more than 600 hrs of Japanese recordings.
While most of the content is lecture recordings by a single speaker, CSJ also contains 11.5 hrs of 54 dialogue recordings (average 12.8 min per recording) with two speakers, which were the main target of ASR and speaker diarization in this study.
|
How long are dialogue recordings used for evaluation?
|
The answers are shown as follows:
* average 12.8 min per recording
|
null | false
| null |
Ernest Joseph "Trey" Anastasio III (born September 30, 1964) is an American guitarist, singer, and songwriter, best known as the lead guitarist of the rock band Phish, which he co-founded in 1983. He is credited by name as composer of 152 Phish original songs, 141 of them as a solo credit, in addition to 41 credits attributed to the band as a whole. In addition to his work with Phish, Anastasio has released 11 solo albums, and been part of several side projects including the Trey Anastasio Band, Oysterhead, Ghosts of the Forest, Phil Lesh and Friends, and Surrender to the Air.
|
Given this reference text, how old was Trey Anastasio when he co-founded Phish?
|
Based on this passage, Trey was most likely 18 years old. He was born at the end of September 1964, so he turned 19 only in the last quarter of 1983; there is roughly a 25% chance that he was 19 when he co-founded Phish.
|
null | false
| null |
Sandworms are colossal, worm-like creatures that live on the desert planet Arrakis. The sandworms' larvae produce a drug called melange (known colloquially as "the spice"), the most essential and valuable commodity in the universe because it makes safe and accurate interstellar travel possible. Melange deposits are found in the sand seas of Arrakis, where the sandworms live and hunt, and harvesting the spice from the sand is a dangerous activity because sandworms are aggressive and territorial. Harvesting vehicles must be airlifted in and out of the sand sea in order to evade sandworm attacks. The struggle over the production and supply of melange is a central theme of the Dune saga.
|
In Frank Herbert's Dune novel, why is the spice valuable?
|
The spice is valuable because it is a scarce resource that is crucial to interstellar travel. The spice is scarce because it can be found only on the planet Arrakis, and its extraction is difficult due to the presence of sandworms.
|
null | false
| null |
Why can helicopters not fly as fast as airplanes?
|
In order to fly, aircraft need to produce lift. In airplanes, lift is generated by the air that's flowing over the wings. The faster the airplane flies, the more lift is generated. In helicopters, on the other hand, lift is generated by the spinning rotor blades. If the helicopter is in motion, the blades will produce more lift on the side where they are moving in the helicopter's direction of travel (advancing) than on the side where they are moving in the opposite direction (retreating). This is called dissymmetry of lift. The faster the helicopter flies, the bigger this dissymmetry will get. The helicopter's maximum speed is equal to the speed at which the retreating blade would begin to stall.
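The dissymmetry can be made concrete with a rough calculation; the rotor tip speed and forward speed below are hypothetical values chosen only to illustrate the effect:

```python
def blade_airspeeds(tip_speed, forward_speed):
    """Airspeed seen by the blade tip on each side of the rotor disc:
    the helicopter's forward speed adds to the tip speed on the
    advancing side and subtracts from it on the retreating side."""
    return tip_speed + forward_speed, tip_speed - forward_speed

# Hypothetical: 210 m/s rotor tip speed, 70 m/s forward flight
advancing, retreating = blade_airspeeds(210.0, 70.0)
# advancing = 280.0 m/s, retreating = 140.0 m/s
```

The faster the helicopter flies, the larger this gap becomes, until the retreating blade can no longer generate enough lift and begins to stall.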
|
|
null | false
| null |
Which NFL team has won the most Super Bowls?
|
The Pittsburgh Steelers and the New England Patriots have both won six Super Bowls. The Dallas Cowboys and the San Francisco 49ers have both won five Super Bowls, and the New York Giants and the Green Bay Packers have both won four Super Bowls. Over a third of the teams in the NFL have never won a Super Bowl, including the Cleveland Browns.
|
|
null | false
| null |
why my laptop won't start?
|
There could be several reasons for this:
1- The battery has run out of charge
2- The OS is not bootable
3- A hardware error is preventing it from starting
|
|
null | false
| null |
Imagine you are a mom. Write a talk track for convince your young son, who does not want to stop playing, to leave for school.
|
Hi dear. It is now time for mom to take you to school. I know you want to keep playing, but play time has ended and it is now time to get ready for school. First we need to put on your school clothes. Then we will grab your things and go into the car. If you listen to mom, I will let you bring a toy with you for the ride.
|
|
null | false
| null |
Who created the first piano?
|
The first piano was created by Bartolomeo Cristofori, who was employed by the Medici court at the time.
|
|
null | false
| null |
Give me a list of Greek foods that are absolutely amazing
|
1. spanakopita
2. tyropita
3. gemista
4. lamb souvlaki
5. chicken souvlaki
6. tzatziki
7. taramasalata
8. dolmathes
9. calamari
10. greek salad
|
|
null | false
| null |
On the face of it, Arizona, a state known for its Grand Canyon and barren deserts, might seem an odd choice for an industry that demands huge amounts of water. But water, it turns out, is one of the least concerns. Semiconductor manufacturing needs a big ecosystem to get the best cost position: not just one company or one fab, but a big cluster of 10 to 15 of them, to achieve world-class performance and build a big enough ecosystem around you. This includes everything from equipment suppliers, which are needed to update and perform maintenance on chipmaking machines, to companies providing chemicals and other materials needed in making chips. That Arizona already has such a complete semiconductor supply chain is thanks in no small part to Intel. Back in 1980, Intel started manufacturing microcontroller chips at its facility in Chandler, Arizona, which is now Intel's largest manufacturing site in the world. In addition to Intel, leading semiconductor makers including NXP, ON Semiconductor, Qualcomm, Microchip, Broadcom and Benchmark Electronics have also built facilities in the state. Apart from its cluster of semiconductor companies, geopolitics has also worked in Arizona's favor. Arizona was also a more politically savvy state for fresh investment than strongly Democratic Washington. Arizona Gov. Doug Ducey, a Republican, was a close ally of Trump throughout both of his presidential campaigns.
|
Why did TSMC choose Arizona for its chip manufacturing
|
TSMC chose Arizona for its chip-making facility because Arizona already has an ecosystem of equipment suppliers and fabrication plants, and having that ecosystem makes manufacturing cost-effective. Arizona is also home to other chip companies such as Intel, NXP, ON Semiconductor, Qualcomm, Microchip, Broadcom and Benchmark Electronics. Geopolitics also worked in Arizona's favor: as a Republican-led state whose governor was a close ally of Trump, it was seen as more likely to offer incentives for chipmakers to establish themselves there.
|
null | false
| null |
What do you think about the existence of alien lifeform?
|
Although there is no direct evidence to support the existence of aliens, we human beings still put effort into finding them. Given human nature, I can see three outcomes. One is that we eventually find alien life. Another is that we continue searching until the end of humanity. The last is that we somehow prove we are the only life form in the universe. Yet any of these results is only the final step. What would we do if we did find aliens? What would we do if we exhausted all methods and still had no results? What if there are things simply not perceivable to human beings? I guess what matters is not the final result, but the progress and effort we put into it. Given that the universe is so huge, what else would you want to search for?
|
|
null | false
| 164
|
Many scholars in the humanities and the social sciences work with sources that are not available in digital form, and indeed may never be digitized. Others work with both analogue and digitized materials, and the increasing digitization of archives has opened opportunities to study these archives in new ways. We can go to the canonical archive or open up something that nobody has studied before. For example, we might focus on major historical moments (French Revolution, post-Milosevic Serbia) or critical epochs (Britain entering the Victorian era, the transition from Latin to proto-Romance). Or, we could look for records of how people conducted science, wrote and consumed literature, and worked out their philosophies.
A growing number of researchers work with born-digital sources or data. Born-digital data, e.g., from social media, generally do not involve direct elicitation from participants and therefore enable unobtrusive measurements BIBREF5 , BIBREF6 . In contrast, methods like surveys sometimes elicit altered responses from participants, who might adapt their responses to what they think is expected. Moreover, born-digital data is often massive, enabling large-scale studies of language and behavior in a variety of social contexts.
Still, many scholars in the social sciences and humanities work with multiple data sources. The variety of sources typically used means that more than one data collection method is often required. For example, a project examining coverage of a UK General Election could draw data from traditional media, web archives, Twitter and Facebook, campaign manifestos, etc., and might combine textual analysis of these materials with surveys, laboratory experiments, or field observations offline. In contrast, many computational studies based on born-digital data have focused on one specific source, such as Twitter.
The use of born-digital data raises ethical concerns. Although early studies often treated privacy as a binary construct, many now acknowledge its complexity BIBREF7 . Conversations on private matters can be posted online, visible for all, but social norms regarding what should be considered public information may differ from the data's explicit visibility settings. Often no informed consent has been obtained, raising concerns and challenges regarding publishing content and potentially harmful secondary uses BIBREF8 , BIBREF4 .
Recently, concerns about potential harms stemming from secondary uses have led a number of digital service providers to restrict access to born-digital data. Facebook and Twitter, for example, have reduced or eliminated public access to their application programming interfaces (APIs) and expressed hesitation about allowing academic researchers to use data from their platforms to examine certain sensitive or controversial topics. Despite the seeming abundance of born-digital data, we therefore cannot take its availability for granted.
Working with data that someone else has acquired presents additional problems related to provenance and contextualisation. It may not always be possible to determine the criteria applied during the creation process. For example, why were certain newspapers digitized but not others, and what does this say about the collection? Similar questions arise with the use of born-digital data. For instance, when using the Internet Archive’s Wayback Machine to gather data from archived web pages, we need to consider what pages were captured, which are likely missing, and why.
We must often repurpose born-digital data (e.g., Twitter was not designed to measure public opinion), but data biases may lead to spurious results and limit justification for generalization. In particular, data collected via black box APIs designed for commercial, not research, purposes are likely to introduce biases into the inferences we draw, and the closed nature of these APIs means we rarely know what biases are introduced, let alone how severely they might impact our research BIBREF10 . These, however, are not new problems. Historians, for example, have always understood that their sources were produced within particular contexts and for particular purposes, which are not always apparent to us.
Non-representative data can still be useful for making comparisons within a sample. In the introductory example on hate speech BIBREF0 , the Reddit forums do not present a comprehensive or balanced picture of hate speech: the writing is almost exclusively in English, the targets of hate speech are mainly restricted (e.g., to black people, or women), and the population of writers is shaped by Reddit's demographics, which skew towards young white men. These biases limit the generalizability of the findings, which cannot be extrapolated to other languages, other types of hate speech, and other demographic groups. However, because the findings are based on measurements on the same sort of hate speech and the same population of writers, as long as the collected data are representative of this specific population, these biases do not pose an intractable validity problem if claims are properly restricted.
The size of many newly available datasets is one of their most appealing characteristics. Bigger datasets often make statistics more robust. The size needed for a computational text analysis depends on the research goal: When it involves studying rare events, bigger datasets are needed. However, larger is not always better. Some very large archives are “secretly” collections of multiple and distinct processes that no in-field scholar would consider related. For example, Google Books is frequently used to study cultural patterns, but the over-representation of scientific articles in Google books can be problematic BIBREF11 . Even very large born-digital datasets usually cover limited timespans compared to, e.g., the Gutenberg archive of British novels.
This stage of the research also raises important questions about fairness. Are marginalized groups, for example, represented in the tweets we have collected? If not, what types of biases might result from analyses relying on those tweets?
Local experts and “informants” can help navigate the data. They can help understand the role an archive plays in the time and place. They might tell us: Is this the central archive, or a peripheral one? What makes it unusual? Or they might tell us how certain underrepresented communities use a social media platform and advise us on strategies for ensuring our data collection includes their perspectives.
However, when it is practically infeasible to navigate the data in this way—for instance, when we cannot determine what is missing from Twitter's Streaming API or what webpages are left out of the Internet Archive—we should be open about the limitations of our analyses, acknowledging the flaws in our data and drawing cautious and reasonable conclusions from them. In all cases, we should report the choices we have made when creating or re-using any dataset.
The size needed for a computational text analysis depends on the research goal: When it involves studying rare events, bigger datasets are needed.
|
What factor determine the size needed for a computational text analysis?
|
The research goal.
|
null | false
| null |
5 Steps showcasing YubiKey
|
1. Multi-protocol security key secures modern and legacy systems
2. The YubiKey supports one-time passcodes (OTP)
3. The YubiKey enables smart card authentication
4. Enable modern authentication with FIDO U2F
5. Experience passwordless authentication with FIDO2
|
|
null | false
| null |
Classify each of the following as either round or square shaped: a planet, a ball, a slice of bread, a chess board.
|
Planets are round.
Balls are round.
A slice of bread is square shaped.
A chess board is square shaped.
|
|
null | false
| null |
Give me five ways in which I can make my homebrew beer more interesting
|
There are infinite ways to create interesting flavours, whether personal favourites or attempts to replicate classic beers and beer styles. A number of factors make up a beer's flavour profile, and part of the enjoyment of brewing your own beer at home is that you can play around with each of these to change the flavour and character of your homebrew. Here are five ways in which you can change your own homebrew recipe to add interesting flavours; alternatively, you can alter an existing homebrew recipe shared by somebody else or by a brewery.
1. Use different hops
There is a huge variety of hops available to homebrewers, and each of these can introduce its own particular flavour and aroma to your beer, ranging from pineapple and apricots, to pine trees and woods, to grapefruit pith and lemon zest. By using a different hop in your recipe, either from the same family or from an entirely different family of hops, you can easily make interesting and often radical changes to the flavour profile of your beer. However, depending on when the hops are added in the brewing process, you can also increase or decrease the level of bitterness of your beer, since different hop varieties have different levels of alpha acids.
2. Use a different yeast
Besides fermenting the beer, yeast leaves behind lots of interesting flavours and can give your homebrew its own subtle character. For example, English yeast strains can add malty and sweet flavours, California yeast tends to create a clean and crisp beer, whilst saison yeast adds bright and fruity flavours to your beer.
3. Use different grains
Changing the amount of base malts can increase or decrease the alcohol in your finished beer. Try swapping one base malt for another, like American pale malts for a British variety, and look for the subtle differences between them. You can also play around with the specialty grains, like using a slightly darker roast.
4. Add fruit, spices or herbs
Fruits, spices, herbs, and other ingredients are a great way to add an entirely personal character to your homebrew. Anything from fresh strawberries and basil to vanilla beans and cardamom can really complement the other malty, hoppy flavours in a beer.
5. Alter your water profile
Different beer styles will typically be best achieved with a suitable water profile, with different levels of minerals to achieve the desired flavour profile. By adjusting the amounts of specific minerals found in your brewing water, beer can be made to taste fuller bodied, sweeter, drier, more bitter, or even sour.
|
|
null | false
| 349
|
In this section, we describe the procedure for building our sportball Vietnamese dataset, called UIT-ViIC.
Our human resources for dataset construction involve five writers, whose ages are from 22-25. Being native Vietnamese residents, they are fluent in Vietnamese. All five UIT-ViIC creators first research and are trained about sports knowledge as well as the specialized vocabulary before starting to work.
During the annotation process, there are inconsistencies and disagreements between humans' understanding and the way they see images. According to Micah Hodosh et al. BIBREF5, most image captions on the Internet nowadays tend to introduce information that cannot be obtained from the image itself, such as people's names, location names, time, etc. Therefore, to successfully compose the meaningful descriptive captions we expect, there should be strict guidelines.
Inspired from MS-COCO annotation rules BIBREF16, we first sketched UIT-ViIC's guidelines for our captions:
Each caption must contain at least ten Vietnamese words.
Only describe visible activities and objects included in the image.
Exclude name of places, streets (Chinatown, New York, etc.) and number (apartment numbers, specific time on TV, etc.)
Familiar English words such as laptop, TV, tennis, etc. are allowed.
Each caption must be a single sentence with continuous tense.
Personal opinion and emotion must be excluded while annotating.
Annotators can describe the activities and objects from different perspectives.
Visible “thing” objects are the only ones to be described.
Ambiguous “stuff” objects which do not have obvious “border” are ignored.
In the case of 10 to 15 objects which are in the same category or species, annotators do not need to include all of them in the caption.
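As a rough illustration of how rules like these could be checked automatically, the sketch below validates the minimum-length rule with a simple whitespace-token count. This is only an approximation (Vietnamese multi-word expressions make word counting subtler than splitting on spaces), and the function name is ours, not from the paper:

```python
def meets_length_rule(caption, min_words=10):
    """Rule 1 (approximate): the caption must contain at least ten words.

    Whitespace tokens are used as a proxy for Vietnamese words, which is
    only a rough approximation for multi-word expressions.
    """
    return len(caption.split()) >= min_words
```

A real pipeline would likely combine several such per-rule checks and flag failing captions for the annotators to revise.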
In comparison with the MS-COCO BIBREF16 data collection guidelines in terms of annotation, UIT-ViIC's guidelines have similar rules (1, 2, 8, 9, 10). We extend MS-COCO's guidelines with five new rules of our own and modify some of the original ones.
In both datasets, we would like to control sentence length and focus on describing important subjects only, in order to make sure that essential information is included in the captions. The MS-COCO threshold for sentence length is 8, and we raise the number to 10 for our dataset. One reason for this change is that an object in an image is usually expressed in many Vietnamese words. For example, a “baseball player” in English can be translated into “vận động viên bóng chày” or “cầu thủ bóng chày”, which already accounts for a significant portion of the Vietnamese sentence's length. In addition, captions must be single sentences in the continuous tense, as we expect our model's output to capture what we are seeing in the image in a concise way.
On the other hand, proper names of places, streets, etc. must not be mentioned in this dataset, in order to avoid confusion and incorrect identification of names for the same scenery in the output. Besides, annotators' personal opinions must be excluded for more meaningful captions. Vietnamese words for several English ones such as tennis, pizza, TV, etc. do not exist, so annotators may use such familiar words when composing captions. For some images, the subjects are ambiguous and not descriptive, which makes them difficult for annotators to describe in words. That is why annotators can describe images from more than one perspective.
In comparison with the MS-COCO data collection guidelines in terms of annotation, UIT-ViIC's guidelines have similar rules (1, 2, 8, 9, 10).
|
Do UIT-ViIC’s guidelines have similar rules in comparison with MS-COCO?
|
Yes.
|
null | false
| null |
What are the words of House Hightower?
|
"We Light the Way"
|
|
null | false
| null |
What is the correlation between interest rates and bond prices?
|
As interest rates rise, bond prices fall. This is because newly issued bonds pay more attractive rates, which makes existing bonds with lower coupons worth less by comparison.
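The inverse relationship follows from the standard present-value formula for a bond: discounting the same fixed coupon and principal payments at a higher market rate yields a lower price. The sketch below illustrates this with arbitrary example numbers (a 10-year, 5% annual-coupon bond):

```python
def bond_price(face, coupon_rate, ytm, years):
    """Price a plain-vanilla annual-coupon bond by discounting its cash flows.

    face: principal repaid at maturity
    coupon_rate: annual coupon as a fraction of face value
    ytm: market yield to maturity, used as the discount rate
    years: whole years until maturity
    """
    coupon = face * coupon_rate
    # Present value of each annual coupon payment...
    pv_coupons = sum(coupon / (1 + ytm) ** t for t in range(1, years + 1))
    # ...plus the present value of the face amount repaid at maturity.
    pv_face = face / (1 + ytm) ** years
    return pv_coupons + pv_face

# A 5% coupon bond priced at a 5% market yield trades at par (its face value);
# raise the market yield to 6% and the very same bond is worth less.
par = bond_price(1000, 0.05, 0.05, 10)
discounted = bond_price(1000, 0.05, 0.06, 10)
```

Here `discounted < par`, reproducing the inverse relationship described above.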
|
|
null | false
| 77
|
The corpus consists of 53 documents, which contain an average of 156.1 sentences per document, each with 19.55 tokens on average. The corpus comprises 8,275 sentences and 167,739 words in total. However, as mentioned above, only case presentation sections, headings and abstracts are annotated. The numbers of annotated entities are summarized in Table TABREF24.
Findings are the most frequently annotated type of entity. This makes sense given that findings paint a clinical picture of the patient's condition. The number of tokens per entity ranges from one token for all types to 5 tokens for cases (average length 3.1), nine tokens for conditions (average length 2.0), 16 tokens for factors (average length 2.5), 25 tokens for findings (average length 2.6) and 18 tokens for modifiers (average length 1.4) (cf. Table TABREF24). Examples of rather long entities are given in Table TABREF25.
Entities can appear in a discontinuous way. We model this as a relation between two spans which we call “discontinuous” (cf. Figure FIGREF26). Especially findings often appear as discontinuous entities, we found 543 discontinuous finding relations. The numbers for conditions and factors are lower with seven and two, respectively. Entities can also be nested within one another. This happens either when the span of one annotation is completely embedded in the span of another annotation (fully-nested; cf. Figure FIGREF12), or when there is a partial overlapping between the spans of two different entities (partially-nested; cf. Figure FIGREF12). There is a high number of inter-sentential relations in the corpus (cf. Table TABREF27). This can be explained by the fact that the case entity occurs early in each document; furthermore, it is related to finding and factor annotations that are distributed across different sentences.
The most frequently annotated relation in our corpus is the has-relation between a case entity and the findings related to that case. This correlates with the high number of finding entities. The relations contained in our corpus are summarized in Table TABREF27.
However, as mentioned above, only case presentation sections, headings and abstracts are annotated.
|
Are the documents fully annotated?
|
No, only case presentation sections, headings and abstracts are annotated.
|
null | false
| 171
|
We first gather abusive utterances from 600K conversations with US-based customers. We search for relevant utterances by simple keyword spotting and find that about 5% of the corpus includes abuse, with mostly sexually explicit utterances. Previous research reports even higher levels of abuse between 11% BIBREF2 and 30% BIBREF6. Since we are not allowed to directly quote from our corpus in order to protect customer rights, we summarise the data to a total of 109 “prototypical” utterances - substantially extending the previous dataset of 35 utterances from Amanda:EthicsNLP2018 - and categorise these utterances based on the Linguistic Society's definition of sexual harassment BIBREF7:
[noitemsep]
Gender and Sexuality, e.g. “Are you gay?”, “How do you have sex?”
Sexualised Comments, e.g. “I love watching porn.”, “I'm horny.”
Sexualised Insults, e.g. “Stupid bitch.”, “Whore”
Sexual Requests and Demands, e.g. “Will you have sex with me?”, “Talk dirty to me.”
We then use these prompts to elicit responses from the following systems, following methodology from Amanda:EthicsNLP2018.
[leftmargin=5mm, noitemsep]
4 Commercial: Amazon Alexa, Apple Siri, Google Home, Microsoft's Cortana.
4 Non-commercial rule-based: E.L.I.Z.A. BIBREF8, Parry BIBREF9, A.L.I.C.E. BIBREF10, Alley BIBREF11.
4 Data-driven approaches:
Cleverbot BIBREF12;
NeuralConvo BIBREF13, a re-implementation of BIBREF14;
an implementation of BIBREF15's Information Retrieval approach;
a vanilla Seq2Seq model trained on clean Reddit data BIBREF1.
Negative Baselines: We also compile responses by adult chatbots: Sophia69 BIBREF16, Laurel Sweet BIBREF17, Captain Howdy BIBREF18, Annabelle Lee BIBREF19, Dr Love BIBREF20.
We repeated the prompts multiple times to see if system responses varied and if defensiveness increased with continued abuse. If this was the case, we included all responses in the study. Following this methodology, we collected a total of 2441 system replies in July-August 2018 - 3.5 times more data than Amanda:EthicsNLP2018 - which 2 expert annotators manually annotated according to the categories in Table TABREF14 ($\kappa =0.66$).
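The reported inter-annotator agreement ($\kappa = 0.66$) is Cohen's kappa, which corrects raw agreement for the agreement expected by chance from each annotator's label distribution. As a hedged sketch (the example labels in the usage note are made up for illustration, not drawn from the actual annotations), it can be computed as:

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labelling the same items.

    Assumes the annotators are not already in perfect chance agreement
    (i.e. expected agreement p_e < 1), so the denominator is nonzero.
    """
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators label identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: sum over labels of the product of each annotator's
    # marginal label frequencies.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(counts_a[l] * counts_b[l] for l in counts_a.keys() | counts_b.keys()) / n ** 2
    return (p_o - p_e) / (1 - p_e)
```

For instance, `cohen_kappa([1, 1, 0, 0], [1, 0, 0, 0])` gives 0.5: the annotators agree on 3 of 4 items, but half of that agreement is expected by chance.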
Following this methodology, we collected a total of 2441 system replies in July-August 2018 - 3.5 times more data than Cercas Curry and Rieser (2018) - which 2 expert annotators manually annotated according to the categories in Table 1 (κ = 0.66).
|
How many system replies did the authors collect?
|
2441.
|
1912.01252
| false
| null |
Axelrod's causal mapping method comprises a set of conventions to graphically represent networks of causes and effects (the nodes in a network) as well as the qualitative aspects of this relation (the network’s directed edges, notably assertions of whether the causal linkage is positive or negative). These causes and effects are to be extracted from relevant sources by means of a series of heuristics and an encoding scheme (it should be noted that for this task Axelrod had human readers in mind). The graphs resulting from these efforts provide a structural overview of the relations among causal assertions (and thus beliefs):
Axelrod's causal mapping method comprises a set of conventions to graphically represent networks of causes and effects (the nodes in a network) as well as the qualitative aspects of this relation (the network’s directed edges, notably assertions of whether the causal linkage is positive or negative).
|
What are the causal mapping methods employed?
|
The answers are shown as follows:
* Axelrod's causal mapping method
|
null | false
| null |
Bald Head Island, historically Smith Island, is a village located on the east side of the Cape Fear River in Brunswick County, North Carolina, United States. Compared to the nearby city of Wilmington to the north, the village of Bald Head Island is small and somewhat remote. It is accessible by ferry from the nearby town of Southport and by four-wheel drive vehicle along the beach strand from Fort Fisher to the north. Only government officials are allowed to drive the beach strand route. There are few cars on the island; instead, residents drive modified electric golf carts. Bald Head Island is nationally recognized for its sea turtle nesting activity.
The population of the village was 158 at the 2010 census. The village is part of the Wilmington metropolitan area.
The village is revered for its picturesque nature and is a popular location for vacationing. It was the primary filming location for the film Weekend at Bernie's in 1989 as well as a major location in The Butcher's Wife in 1991. "Old Baldy", its out-of-service lighthouse built in 1817, is the oldest still standing in the state.
|
Based on the passage provided, who is allowed to drive the beach strand route that connects Bald Head Island and Fort Fisher?
|
Only government officials are allowed to drive the beach strand route that connects Fort Fisher and Bald Head Island.
|
null | false
| null |
How old is the largest living thing?
|
2500 years
|
|
null | false
| null |
What is Swift?
|
Swift is a powerful and intuitive programming language for iOS, iPadOS, macOS, tvOS, and watchOS.
|
|
null | false
| null |
How many months have 28 days?
|
All of them! But February is the only month to end on the 28th day (except for leap years every four years, when it ends on the 29th day).
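This can be checked directly with Python's standard `calendar` module; the sketch below uses a non-leap year (2023) as an illustrative example:

```python
import calendar

# Number of days in each month of a non-leap year.
lengths = [calendar.monthrange(2023, month)[1] for month in range(1, 13)]

# Every month reaches a 28th day...
all_have_28 = all(days >= 28 for days in lengths)
# ...but only February (index 1) actually ends on it.
only_february_ends_on_28 = lengths.count(28) == 1 and lengths[1] == 28
```

Both flags come out true, matching the riddle's answer.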
|
|
null | false
| null |
What causes a recession?
|
A recession is caused when a chain of events, like a line of dominoes, picks up momentum and does not stop until the economy shrinks. Each event is connected to something that happened before and something that will happen in the future. If the price of a hamburger goes up, you might stop buying hamburgers. This would impact a restaurant, and that would impact a server. There are many interconnected chains like this throughout the economy.
|
|
null | false
| null |
What is the best way to handle depression
|
Antidepressants and other drugs can be used to manage depression, but there are also other, more natural ways to help manage it. One such way is simple exercise, because exercise releases endorphins and other feel-good hormones that help with the low mood that comes with depression. It is always best to consult with a doctor to evaluate whether exercise would be a good way for you to manage it, because in some cases it may not be an appropriate regimen.
|
|
null | false
| null |
Tell me whether each of these countries is an island or not: Iceland, New Zealand, Chad, Nepal, Cuba, Madagascar
|
Iceland: Island
New Zealand: Island
Chad: Not an Island
Nepal: Not an Island
Cuba: Island
Madagascar: Island
|
|
null | false
| null |
In 1958, Egypt and Syria formed a sovereign union known as the United Arab Republic. The union was short-lived, ending in 1961 when Syria seceded, thus ending the union. During most of its existence, the United Arab Republic was also in a loose confederation with North Yemen (or the Mutawakkilite Kingdom of Yemen), known as the United Arab States. In 1959, the All-Palestine Government of the Gaza Strip, an Egyptian client state, was absorbed into the United Arab Republic under the pretext of Arab union, and was never restored. The Arab Socialist Union, a new nasserist state-party was founded in 1962.
|
What was the united Arab republic?
|
The United Arab Republic was a sovereign union formed in 1958 by Egypt and Syria. In 1959 the All-Palestine Government of the Gaza Strip was absorbed into the republic, but the union lasted only until 1961, when Syria seceded.
|
null | false
| null |
Who opens episodes of The FBI Files with an introductory narration?
|
James Kallstrom, former head of the FBI New York office.
|
|
null | false
| null |
How many colours does a rainbow have?
|
A rainbow typically comprises 7 colors - red, orange, yellow, green, blue, indigo and violet, which can be remembered using acronyms like "ROY G BIV" or simple sentences like "Richard Of York Gave Battle In Vain", where the starting letter of each word corresponds to the starting letter of the colors of the rainbow, in order.
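The mnemonic construction can be shown in a few lines of Python (a small illustrative sketch, not part of the original answer):

```python
# The seven rainbow colors, in order.
colors = ["red", "orange", "yellow", "green", "blue", "indigo", "violet"]

# Take the first letter of each color, in order, to build the mnemonic.
acronym = "".join(color[0].upper() for color in colors)  # "ROYGBIV"
```

The same first-letter rule underlies sentence mnemonics like "Richard Of York Gave Battle In Vain".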
|
|
null | false
| null |
Rio de Janeiro is one of the most visited cities in the Southern Hemisphere and is known for its natural settings, carnival, samba, bossa nova, and balneario beaches such as Barra da Tijuca, Copacabana, Ipanema, and Leblon. In addition to the beaches, some of the most famous landmarks include the giant statue of Christ the Redeemer atop Corcovado mountain, named one of the New Seven Wonders of the World; Sugarloaf Mountain with its cable car; the Sambódromo (Sambadrome), a permanent grandstand-lined parade avenue which is used during Carnival; and Maracanã Stadium, one of the world's largest football stadiums. Rio de Janeiro was the host of the 2016 Summer Olympics and the 2016 Summer Paralympics, making the city the first South American and Portuguese-speaking city to ever host the events, and the third time the Olympics were held in a Southern Hemisphere city. The Maracanã Stadium held the finals of the 1950 and 2014 FIFA World Cups, the 2013 FIFA Confederations Cup, and the XV Pan American Games.
|
What is Rio de Janeiro famous for?
|
Rio de Janeiro is famous for its beaches, rainforest, samba, football stadium (Maracana), carnival and bossa nova music.
|
null | false
| 353
|
Along the actor dimension, Figure 6 shows which full actor codings appear most often in the dataset. As one would expect, state actors account for most of the events, with the only outlier being IMGMOSISI, the actor code for the Islamic State in Iraq and the Levant. This pattern also holds for just the entity codings, which could be either a state code or a few other important codings such as IGOs.
It is possible to break the actor codes down further to examine role codes, which account for more specific functions that a specific actor performs within a given country such as military or business. Figure 8 shows that the most common role code is government actors (GOV). Following the GOV role are military (MIL) and rebel (REB) codes.
In order to better understand how the dataset is performing it is helpful to pull out a specific case and examine a similar set of attributes as seen in the previous section. One of the major, ongoing events in the international arena during the time currently covered by the Phoenix dataset is the conflict in Syria. Given this, I extract any events that contain the Syria country code, SYR, as the SourceActorEntity or TargetActorEntity. Figure 9 shows the plot of the daily aggregated event counts. In this plot it is possible to see actions such as the beginning of United State intervention against ISIL, along with other significant events within the country. As with any event data, it is important to note that the event counts shown do not represent the on-the-ground truth of events in Syria, but instead reflect the media coverage of said events. Thus, some of the peaks and troughs are the result of media coverage instead of any actual shift in reality. In order to provide more context to the time series, Figure 10 shows the breakout of the QuadClass variable for this data subset. The dominant event types are the low-level events described in the previous section, but the “Material Conflict” class is higher than in the broader dataset. This is, of course, as expected given the ongoing conflict within Syria.
Figures 11 and 12 show the top actor and entity codes for the Syria subset. Various Syrian actors appear most often, with other Middle East countries also accounting for a fairly high portion of events. Also seen within this group of top actors is ISIL and the United States. Additionally, Russia appears high in the rankings of actors within Syria, capturing the recent activity by Russian forces in support of the Assad regime.
Overall, the Syrian subset of the data suggests that the Phoenix dataset is doing an adequate job of picking up events of interest in a conflict zone. The actor and event distributions follow what one would expect for the Syrian conflict. Additionally, there are no obvious errors in which actors make up the top participants in the subset. This examination provides confidence that the dataset is “working” in terms of face validity.
It is possible to break the actor codes down further to examine role codes, which account for more specific functions that a specific actor performs within a given country such as military or business.
|
What can be accounted for by breaking the actor codes down further to examine role codes?
|
More specific functions that a specific actor performs within a given country such as military or business.
|
null | false
| null |
Indermaur works as a marketing specialist for her husband's technical trading company and is the chief executive officer of Comate, a marketing agency.
|
Does Indermaur work as CTO, CEO, or CFO?
|
CEO
|
null | false
| null |
Should we continue to use Daylight savings?
|
While it is hard for our bodies to adjust to and oftentimes inconvenient, daylight savings time should be maintained in the USA as it saves energy, money, and lives.
Daylight savings, though sometimes disruptive, helps to limit the energy wasted by Americans during the summer months. Source A reveals that, beginning in the USA in 1918, daylight saving time was originally a way to reduce energy usage during World War I. While it was repealed shortly thereafter, daylight savings was again suggested by President Roosevelt during World War II. Though inconvenient, Americans did take notice of the benefits: saving money on energy. The continued testing of daylight savings’ effects on Americans lasted through the 20th century, and it was written into law in 2005. Despite skepticism from Americans over the potential drawbacks of daylight saving, the reaped benefits of saved energy and money mutually benefited the American government, economy, and citizens, all of which still benefit today.
The ability to save unnecessary energy now benefits the American economy in the long run. In Source B, Longley cites a 1970s study by the U.S. Department of Transportation, which revealed that while daylight savings is in effect, America's electricity consumption is reduced by about one percent. Reducing one percent of America's energy consumption would save Americans money. Also in Source B is the proposal for a provision to the Energy Policy Act of 2005, which would extend daylight savings an extra month and would allow oil drilling in Alaska, both of which would boost the American economy. The extension of daylight savings would overall save more money than it would spend.
Despite the beneficial economic aspects of daylight saving, many dislike the time change because it affects their body schedules and daily routines. Source E claims that when the clock is moved forward or back, the body’s internal clock cannot adjust so quickly, leading to restlessness, sleep disruption, and shorter sleeping periods. The time change has also been found to link with increased heart attacks, suicides, and accidents. While the body’s internal clock may take a longer time to adjust to daylight savings time, it doesn’t outweigh the economic benefits the USA gains as a whole. Source E cites a study of 55,000 people whose sleep schedules were tracked after the time change, claiming that the subjects tended to sleep on standard time rather than daylight time, not taking into account the potential variables of age, weekend events or activities, or even the body’s natural reaction to wake up at a certain time. The study also reveals no harmful effects of the difference between sleeping on standard versus daylight savings time.
While Source E argued that daylight savings time had more dangerous effects on people than remaining on standard time, daylight savings time can actually benefit people’s safety. In Source C, a cited study by the U.S. Department of Transportation found that during daylight saving time, there was a severe reduction in crime by ten to thirteen percent. By changing the clocks and altering the amount of sunlight during the day, people were generally less likely to become involved in any street crime taking place. The Department of Transportation also found that daylight savings time helped to increase traffic safety, as the clocks were altered to match the periods of daylight, causing fewer accidents. Daylight savings overall led to safer daily activities, while still saving money and energy.
Daylight savings has been long disputed over: should we inconvenience ourselves just to save a little energy? Despite many people’s dislike of the changing times, daylight savings is generally beneficial to America, as it saves energy, reduces energy costs and bills for Americans, and creates a safer environment for people to live in. Daylight savings time should be continued as it ultimately benefits the American public as a whole.
|
|
null | false
| null |
Is Linux Unix?
|
Officially an operating system is not allowed to be called a Unix until it passes the Open Group's certification tests, and supports the necessary API's. Nobody has yet stepped forward to pay the large fees that certification involves, so we're not allowed to call it Unix. Certification really doesn't mean very much anyway. Very few of the commercial operating systems have passed the Open Group tests.
Unofficially, Linux is very similar to the operating systems known as Unix, and for many purposes they are equivalent. The Linux kernel behaves and performs similarly to the famous Unix operating system from AT&T Bell Labs, which is why Linux is often called a "Unix-like" operating system. For more information, see http://www.unix-systems.org/what_is_unix.html.
|
|
1802.05574
| false
| null |
We evaluate two OIE systems (i.e. extractors). The first, OpenIE 4 BIBREF5 , descends from two popular OIE systems OLLIE BIBREF10 and Reverb BIBREF10 . We view this as a baseline system. The second was MinIE BIBREF7 , which is reported as performing better than OLLIE, ClauseIE BIBREF9 and Stanford OIE BIBREF9 . MinIE focuses on the notion of minimization - producing compact extractions from sentences. In our experience using OIE on scientific text, we have found that these systems often produce overly specific extractions that do not provide the redundancy useful for downstream tasks. Hence, we thought this was a useful package to explore.
The first, OpenIE 4 BIBREF5, descends from two popular OIE systems OLLIE BIBREF10 and Reverb BIBREF10.
The second was MinIE BIBREF7 , which is reported as performing better than OLLIE, ClauseIE BIBREF9 and Stanford OIE BIBREF9 .
|
Which OpenIE systems were used?
|
OpenIE 4 and MinIE
|
null | false
| 251
|
To simulate a diverse multi-domain setting we use the dataset proposed in BIBREF8, as it was recently adopted for domain adaptation research in NMT BIBREF28, BIBREF29, BIBREF30, BIBREF31. The dataset includes parallel text in German and English from five diverse domains (Medical, Law, Koran, IT, Subtitles; as discussed in Section SECREF2), available via OPUS BIBREF32, BIBREF33.
In a preliminary analysis of the data we found that in both the original train/dev/test split by BIBREF8 and in the more recent split by BIBREF29 there was overlap between the training data and the dev/test data. Fixing these issues is important, as it may affect the conclusions one draws from experiments with this dataset. For example, as overlapping development sets favor memorization of the training set, one may choose checkpoints and report results on over-fitting models. This is especially relevant with neural sequence-to-sequence models, as they are highly susceptible to memorization BIBREF34 and hallucination BIBREF35, as confirmed by BIBREF29.
To create a better experimental setting to test generalization within and across domains, we create a new data split where we ensure that no such overlap between the training, development and test sets occur. We started from the split of BIBREF29 as it included newer versions of some of the datasets. Furthermore, we did not allow more than one translation of a given source or target sentence, as such cases were very frequent in the dataset and usually stand for duplicate sentence pairs (See Table TABREF24). For example, applying this filtering reduced the size of the Koran corpus from 533,128 sentence pairs to only 17,982. Finally, following BIBREF29 we cap the subtitles corpus to 500,000 sentence pairs as it is much larger than the rest. We make the new split publicly available and hope it will enable better future experimentation on this important subject.
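The duplicate-translation filtering described above (at most one translation per source or target sentence) can be sketched in a few lines. The function name and the `(source, target)` pair format are hypothetical, not the authors' released code:

```python
def dedup_parallel_corpus(pairs):
    """Keep at most one translation per source and per target sentence.

    A pair is dropped if either its source or its target sentence has
    already been seen, since such cases usually stand for duplicate
    sentence pairs.
    """
    seen_src, seen_tgt = set(), set()
    kept = []
    for src, tgt in pairs:
        if src in seen_src or tgt in seen_tgt:
            continue  # repeated source or target -> likely duplicate pair
        seen_src.add(src)
        seen_tgt.add(tgt)
        kept.append((src, tgt))
    return kept
```

Applied to the Koran corpus this kind of filter is what shrinks 533,128 pairs down to 17,982.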
To simulate a diverse multi-domain setting we use the dataset proposed in Koehn and Knowles (2017), as it was recently adopted for domain adaptation research in NMT.
|
What is used to simulate a diverse setting?
|
They use the dataset proposed in Koehn and Knowles.
|
null | false
| null |
How do you make an iced matcha latte?
|
You will need 1-2 teaspoons of matcha powder, milk of your choice, 1 cup of hot water, and ice. In a cup or bowl, add your matcha powder, pour the hot water over it, and whisk until the powder is well incorporated, which should create a paste-like consistency. Then take a glass, pour in your ice and milk, add your matcha paste on top, and stir.
|
|
null | false
| null |
The concept of the electric field was introduced by Michael Faraday. An electric field is created by a charged body in the space that surrounds it, and results in a force exerted on any other charges placed within the field. The electric field acts between two charges in a similar manner to the way that the gravitational field acts between two masses, and like it, extends towards infinity and shows an inverse square relationship with distance. However, there is an important difference. Gravity always acts in attraction, drawing two masses together, while the electric field can result in either attraction or repulsion. Since large bodies such as planets generally carry no net charge, the electric field at a distance is usually zero. Thus gravity is the dominant force at distance in the universe, despite being much weaker.
An electric field generally varies in space and its strength at any one point is defined as the force (per unit charge) that would be felt by a stationary, negligible charge if placed at that point. The conceptual charge, termed a 'test charge', must be vanishingly small to prevent its own electric field disturbing the main field and must also be stationary to prevent the effect of magnetic fields. As the electric field is defined in terms of force, and force is a vector, having both magnitude and direction, so it follows that an electric field is a vector field.
The study of electric fields created by stationary charges is called electrostatics. The field may be visualised by a set of imaginary lines whose direction at any point is the same as that of the field. This concept was introduced by Faraday, whose term 'lines of force' still sometimes sees use. The field lines are the paths that a point positive charge would seek to make as it was forced to move within the field; they are however an imaginary concept with no physical existence, and the field permeates all the intervening space between the lines. Field lines emanating from stationary charges have several key properties: first, that they originate at positive charges and terminate at negative charges; second, that they must enter any good conductor at right angles, and third, that they may never cross nor close in on themselves.
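The passage's verbal description of the field maps onto two standard textbook formulas; these are the conventional forms, not quoted from the source:

```latex
% Field strength: force per unit charge on a vanishingly small test charge
\vec{E} = \frac{\vec{F}}{q_{\text{test}}}
% Inverse-square dependence for a point charge Q at distance r
E = \frac{1}{4\pi\varepsilon_0}\,\frac{|Q|}{r^{2}}
```

The second formula makes the analogy with gravity explicit: it has the same $1/r^{2}$ form as Newton's law, but $Q$ may be positive or negative, giving either attraction or repulsion.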
|
What is electrostatics the study of?
|
The study of electric fields created by stationary charges is called electrostatics.
|
null | false
| 204
|
Recent years have witnessed increased performance in many computational linguistics tasks such as syntactic and semantic parsing BIBREF0 , BIBREF1 , emotion classification BIBREF2 , and sentiment analysis BIBREF3 , BIBREF4 , BIBREF5 , especially concerning the applicability of such tools to noisy online data. Moreover, the field has made substantial progress in developing multilingual models and extending semantic annotation resources to languages beyond English BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 .
Concurrently, it has been argued for mental health research that it would constitute a `valuable critical step' BIBREF10 to analyse first-hand accounts by individuals with lived experience of severe mental health issues in blog posts, tweets, and discussion forums. Several severe mental health difficulties, e.g., bipolar disorder (BD) and schizophrenia are considered as chronic and clinical recovery, defined as being relapse and symptom free for a sustained period of time BIBREF11 , is considered difficult to achieve BIBREF12 , BIBREF13 , BIBREF14 . Moreover, clinically recovered individuals often do not regain full social and educational/vocational functioning BIBREF15 , BIBREF16 . Therefore, research originating from initiatives by people with lived experience of mental health issues has been advocating emphasis on the individual's goals in recovery BIBREF17 , BIBREF18 . This movement gave rise to the concept of personal recovery BIBREF19 , BIBREF20 , loosely defined as a `way of living a satisfying, hopeful, and contributing life even with limitations caused by illness' BIBREF18 . The aspects of personal recovery have been conceptualised in various ways BIBREF21 , BIBREF22 , BIBREF23 . According to the frequently used CHIME model BIBREF24 , its main components are Connectedness, Hope and optimism, Identity, Meaning and purpose, and Empowerment. Here, we focus on BD, which is characterised by recurring episodes of depressed and elated (hypomanic or manic) mood BIBREF25 , BIBREF12 . Bipolar spectrum disorders were estimated to affect approximately 2% of the UK population BIBREF13 with rates ranging from 0.1%-4.4% across 11 other European, American and Asian countries BIBREF26 . Moreover, BD is associated with a high risk of suicide BIBREF27 , making its prevention and treatment important tasks for society. 
BD-specific personal recovery research is motivated by mainly two facts: First, the pole of positive/elevated mood and ongoing mood instability constitute core features of BD and pose special challenges compared to other mental health issues, such as unipolar depression BIBREF25 . Second, unlike for some other severe mental health difficulties, return to normal functioning is achievable given appropriate treatment BIBREF28 , BIBREF16 , BIBREF29 .
A substantial body of qualitative and quantitative research has shown the importance of personal recovery for individuals diagnosed with BD BIBREF22 , BIBREF25 , BIBREF30 , BIBREF31 , BIBREF23 . Qualitative evidence mainly comes from (semi-)structured interviews and focus groups and has been criticised for small numbers of participants BIBREF10 , lacking complementary quantitative evidence from larger samples BIBREF32 . Some quantitative evidence stems from the standardised bipolar recovery questionnaire BIBREF30 and a randomised control trial for recovery-focused cognitive-behavioural therapy BIBREF31 . Critically, previous research has taken place only in structured settings. What is more, the recovery concept emerged from research primarily conducted in English-speaking countries, mainly involving researchers and participants of Western ethnicity. This might have led to a lack of non-Western notions of wellbeing in the concept, such as those found in indigenous peoples BIBREF32 , limiting its the applicability to a general population. Indeed, the variation in BD prevalence rates from 0.1% in India to 4.4% in the US is striking. It has been shown that culture is an important factor in the diagnosis of BD BIBREF33 , as well as on the causes attributed to mental health difficulties in general and treatments considered appropriate BIBREF34 , BIBREF35 . While approaches to mental health classification from texts have long ignored the cultural dimension BIBREF36 , first studies show that online language of individuals affected by depression or related mental health difficulties differs significantly across cultures BIBREF37 , BIBREF36 .
Hence, it seems timely to take into account the wealth of accounts of mental health difficulties and recovery stories from individuals of diverse ethnic and cultural backgrounds that are available in a multitude of languages on the internet. Corpus and computational linguistic methods are explicitly designed for processing large amounts of linguistic data BIBREF38 , BIBREF39 , BIBREF40 , BIBREF41 , and as discussed above, recent advances have made it feasible to apply them to noisy user-generated texts from diverse domains, including mental health BIBREF42 , BIBREF43 . Computer-aided analysis of public social media data enables us to address several shortcomings in the scientific underpinning of personal recovery in BD by overcoming the small sample sizes of lab-collected data and including accounts from a more heterogeneous population.
In sum, our research questions are as follows: (1) How is personal recovery discussed online by individuals meeting criteria for BD? (2) What new insights do we get about personal recovery and factors that facilitate or hinder it? We will investigate these questions in two parts, looking at English-language data by westerners and at multilingual data by individuals of diverse ethnicities.
In sum, our research questions are as follows: (1) How is personal recovery discussed online by individuals meeting criteria for BD? (2) What new insights do we get about personal recovery and factors that facilitate or hinder it?
|
What are the authors' research questions?
|
(1) How is personal recovery discussed online by individuals meeting criteria for BD? (2) What new insights do the authors get about personal recovery and factors that facilitate or hinder it?
|
1910.12618
| false
| null |
The main contribution of our paper is to suggest the use of a certain type of textual documents, namely daily weather report, to build forecasters of the daily national electricity load, average temperature and wind speed for both France and the United-Kingdom (UK). Consequently this work represents a significant break with traditional methods, and we do not intend to best state-of-the-art approaches. Textual information is naturally more fuzzy than numerical one, and as such the same accuracy is not expected from the presented approaches. With a single text, we were already able to predict the electricity consumption with a relative error of less than 5% for both data sets. Furthermore, the quality of our predictions of temperature and wind speed is satisfying enough to replace missing or unavailable data in traditional models. Two different approaches are considered to represent the text numerically, as well as multiple forecasting algorithms. Our empirical results are consistent across encoding, methods and language, thus proving the intrinsic value weather reports have for the prediction of the aforementioned time series. Moreover, a major distinction between previous works is our interpretation of the models. We quantify the impact of a word on the forecast and analyze the geometric properties of the word embedding we trained ourselves. Note that although multiple time series are discussed in our paper, the main focus of this paper remains electricity consumption. As such, emphasis is put on the predictive results on the load demand time series.
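A relative-error figure like the quoted "less than 5%" is typically a mean absolute percentage error; the exact metric the authors used is an assumption here. A minimal sketch:

```python
def mean_absolute_percentage_error(actual, forecast):
    """Average of |actual - forecast| / |actual| over all time steps.

    One common way to report a relative forecasting error; whether the
    paper uses exactly this definition is an assumption.
    """
    assert len(actual) == len(forecast) and all(a != 0 for a in actual)
    return sum(abs(a - f) / abs(a)
               for a, f in zip(actual, forecast)) / len(actual)
```

A model meeting the paper's bar would return a value below 0.05 on the load-demand series.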
With a single text, we were already able to predict the electricity consumption with a relative error of less than 5% for both data sets.
|
How accurate is model trained on text exclusively?
|
Relative error is less than 5%
|
null | false
| null |
Which state in the US has the highest GDP?
|
California - The three U.S. states with the highest GDPs were California ($3.36 Trillion), Texas ($2.1 Trillion), and New York ($2.0 Trillion).
|
|
null | false
| null |
How do I make an Old Fashioned?
|
A popular recipe for an Old Fashioned is to mix 2 ounces of bourbon with a teaspoon of simple syrup and a few dashes of Angostura bitters. Garnish with an orange peel. However, there are many variations including using maple syrup & other varieties of bitters.
|
|
null | false
| null |
The first lightweight events were added to the World Championships in 1974 for men and 1985 for women. Lightweight rowing was added to the Olympics in 1996 but this came under threat in 2002 when the Programme Commission of the IOC recommended that, outside combat sports and weightlifting, there should not be weight category events. The executive board overturned this recommendation and lightweight rowing continues at the Olympics, though only in the men's and women's double sculls.
|
Summarise why lightweight rowing event was at risk in Olympics
|
The Programme Commission of the IOC was of the opinion that only combat sports and weightlifting should have weight categories; no other sport should be allowed to hold weight-category events.
|
null | false
| null |
Persephone (Περσεφόνη, Persephónē)
Queen of the Underworld, wife of Hades and daughter of Demeter and Zeus. Her symbols include the pomegranate, grain, torches, wheat and the asphodelus. After her abduction by Hades, she was forced to split the year between the world of the dead with her husband and the world of the living with her mother. She was worshipped in conjunction with Demeter, especially in the Eleusinian Mysteries. In ancient art she is usually depicted as a young woman, usually in the scene of her abduction.
|
From the passage identify the symbols of Persephone. Display the results in a comma separated format.
|
pomegranate, grain, torches, wheat, the asphodelus
|
null | false
| null |
Name five water Pokemon.
|
Five water Pokemon are Squirtle, Poliwhirl, Seel, Horsea, and Staryu
|
|
null | false
| null |
What are some menu items I can buy at Taco Bell?
|
Taco Bell serves items such as bean burritos, crunchy tacos, soft tacos, nachos, quesadillas, quesoritos, Crunchwrap Supremes, Mountain Dew Baja Blasts, and Doritos Tacos Locos.
|
|
null | false
| null |
Olivella rehderi is a species of small sea snail, marine gastropod mollusk in the subfamily Olivellinae, in the family Olividae, the olives. Species in the genus Olivella are commonly called dwarf olives.
|
What is a common nickname to the Olivella rehderi?
|
Like other species in the genus Olivella, Olivella rehderi is commonly called a dwarf olive.
|
null | false
| null |
A chief data officer (CDO) is a corporate officer responsible for enterprise-wide governance and utilization of information as an asset, via data processing, analysis, data mining, information trading and other means. CDOs usually report to the chief executive officer (CEO), although depending on the area of expertise this can vary. The CDO is a member of the executive management team and manager of enterprise-wide data processing and data mining.
The chief data officer title shares its abbreviation with the chief digital officer, but the two are not the same job. The chief data officer has a significant measure of business responsibility for determining what kinds of information the enterprise will choose to capture, retain and exploit and for what purposes. However, the similar-sounding chief digital officer or chief digital information officer often does not bear that business responsibility, but rather is responsible for the information systems through which data is stored and processed. A chief data officer's purpose is to connect the technological results to the needed business results. Various other roles entail having an understanding of the business value. It means using data to derive business outcome. It can be achieved by knowing the team members and activities performed, the stakeholder values and understanding customer needs. Some responsibilities include the governance, advising & monitoring enterprise data. In terms of operations it means enabling data usability along with efficiency and availability. They have to innovate which means driving the business towards digital transformation innovation, cost reduction, and revenue generation. Their role is also to provide supporting analytics with reports on products, customers, operations, and markets. They need to protect the data and eliminate data territorialism while also promoting data ethics.
|
Can you summarize the responsibilities of a Chief Data Officer?
|
Below are the key responsibilities of a Chief Data Officer:
* Making sure the business can achieve its results through effective use of data processed within the enterprise.
* Establishing enterprise-wide governance over data collection, data processing, data analysis, and the mining and sharing of data across teams and outside the enterprise.
* Guiding teams within the enterprise on best practices, compliance standards, and company responsibility in collecting and using data.
* Establishing monitoring standards for how data is accessed and by whom, and maintaining a catalog of all data assets.
|
null | false
| null |
How many items is in a baker's dozen?
|
A baker's dozen typically contains 13 items vs. 12. The term baker's dozen has roots in medieval England, when bakers would create an extra item to avoid falling short, and facing punishment. The term has carried on to today, where if you ask for a baker's dozen, you'll get 13 of the item you are ordering!
|
|
1910.00458
| false
| null |
We first evaluate our method on the DREAM dataset. The results are summarized in Table TABREF16. In the table, we first report the accuracy of the SOTA models in the leaderboard. We then report the performance of our re-implementation of fine-tuned models as another set of strong baselines, among which the RoBERTa-Large model has already surpassed the previous SOTA. For these baselines, the top-level classifier is a two-layer FCNN for BERT-based models and a one-layer FCNN for the RoBERTa-Large model. Lastly, we report model performances that use all our proposed method, MMM (MAN classifier + speaker normalization + two stage learning strategies). As direct comparisons, we also list the accuracy increment between MMM and the baseline with the same sentence encoder marked by the parentheses, from which we can see that the performance augmentation is over 9% for BERT-Base and BERT-Large. Although the RoBERTa-Large baseline has already outperformed the BERT-Large baseline by around 18%, MMM gives us another $\sim $4% improvement, pushing the accuracy closer to the human performance. Overall, MMM has achieved a new SOTA, i.e., test accuracy of 88.9%, which exceeds the previous best by 16.9%.
Overall, MMM has achieved a new SOTA, i.e., test accuracy of 88.9%, which exceeds the previous best by 16.9%.
|
How big are improvements of MMM over state of the art?
|
The answers are shown as follows:
* test accuracy of 88.9%, which exceeds the previous best by 16.9%
|
null | false
| null |
Is it safe to drink water out of a creek?
|
No, untreated surface water is typically not safe to drink, as it could contain bacteria or parasites that cause disease.
|
|
null | false
| null |
A medium-sized hunting dog breed called the Korean Jindo Dog originates from Jindo. It is known in Korea for its fierce loyalty, attachment to home and hunting abilities. Since 1936, the dog is considered as a national cultural legacy and has been protected during the war times. There is a Jindo Dog Research and Testing Center on the island.
The tide-related sea level variations result in a land pass 2.9 km long and 10–40 meters wide opening for approximately an hour between the main Jindo island and the small Modo island to the south of Jindo. The event occurs about twice a year, around April–June. It had long been celebrated in a local festival called "Jindo's Sea Way", but was largely unknown to the world until 1975, when the French ambassador Pierre Randi described the phenomenon in a French newspaper. Nowadays, nearly half a million foreign and local tourists attend the event annually. It is accompanied by local festivals which include Ganggangsuwollae (Korean traditional circle dance), Ssitkim-gut (a shaman ritual, consoling the souls of the dead), Deul Norae (traditional farmers songs), Manga (burial ceremony songs), Jindo dog show, Buknori (drum performance) and fireworks.
The island has three art galleries, Sojun, Namjin and Sochi, containing collections of painters who worked here over the past centuries.
|
What are some of the ways that people celebrate the "Jindo's Sea Way" festival in South Korea?
|
Almost half a million people travel to Jindo annually to celebrate the "Jindo's Sea Way" festival each year. Celebrations include traditional circle dancing, farmer songs, fireworks, and a Jindo dog show.
|
null | false
| null |
Almost all of the music in World of Warcraft is orchestral, with a wide variety of classical and folk instruments in harmony, often playing widely transformative, non-repetitive melodies. While the player is in game, music is usually playing from a set of tracks composed to match the mood of the type of environment your character is in such as "mountain", "forest", "plains", "jungle", et cetera; while some individual zones and cities such as Elwynn Forest and Thunder Bluff are given their own set of tracks. Some environments play different music depending on the time of day you are in them.
|
What style of music is played in the World of Warcraft game?
|
Most of the music in World of Warcraft is orchestral, with a variety of classical and folk instruments in harmony.
|
null | false
| null |
A battery storage power station is a type of energy storage power station that uses a group of batteries to store electrical energy. Battery storage is the fastest responding dispatchable source of power on electric grids, and it is used to stabilise those grids, as battery storage can transition from standby to full power in under a second to deal with grid contingencies.
At full rated power, battery storage power stations are generally designed to output for up to a few hours. Battery storage can be used for short-term peak power and ancillary services, such as providing operating reserve and frequency control to minimize the chance of power outages. They are often installed at, or close to, other active or disused power stations and may share the same grid connection to reduce costs. Since battery storage plants require no deliveries of fuel, are compact compared to generating stations and have no chimneys or large cooling systems, they can be rapidly installed and placed if necessary within urban areas, close to customer load.
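The "up to a few hours at full rated power" relationship is just energy capacity divided by rated power; the figures below are illustrative, not taken from the passage:

```python
def discharge_hours(energy_mwh, rated_power_mw):
    """Hours a battery station can sustain its full rated output:
    stored energy (MWh) divided by rated power (MW)."""
    return energy_mwh / rated_power_mw
```

For example, a hypothetical 100 MW station with 400 MWh of storage could hold full output for about four hours.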
|
Given these paragraphs about battery storage power stations, for how long a battery storage power station can output?
|
At full rated power, battery storage power stations are generally designed to output for up to a few hours.
|
null | false
| null |
Explain the use of screens in basketball
|
In basketball, when a team has the ball, one player may set a screen by blocking a defender with his body while keeping his hands in, which lets the ball handler pass, shoot, or drive without a foul being called. If the player setting the screen moves while screening, the referee will call a moving screen, an offensive foul that gives the ball to the opposing team. Contact from a legal screen is not a foul on the screener unless he initiates it with his hands or feet.
|
|
null | false
| null |
Identify which instrument is string or percussion: Thavil, Seni rebab
|
Seni rebab is string, Thavil is percussion.
|
|
null | false
| 59
|
Conventional automatic speech recognition (ASR) systems typically consist of several independently learned components: an acoustic model to predict context-dependent sub-phoneme states (senones) from audio, a graph structure to map senones to phonemes, and a pronunciation model to map phonemes to words. Hybrid systems combine hidden Markov models to model state dependencies with neural networks to predict states BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . Newer approaches such as end-to-end (E2E) systems reduce the overall complexity of the final system.
Our research builds on prior work that has explored using time-delay neural networks (TDNN), other forms of convolutional neural networks, and Connectionist Temporal Classification (CTC) loss BIBREF4 , BIBREF5 , BIBREF6 . We took inspiration from wav2letter BIBREF6 , which uses 1D-convolution layers. Liptchinsky et al. BIBREF7 improved wav2letter by increasing the model depth to 19 convolutional layers and adding Gated Linear Units (GLU) BIBREF8 , weight normalization BIBREF9 and dropout.
By building a deeper and larger capacity network, we aim to demonstrate that we can match or outperform non end-to-end models on the LibriSpeech and 2000hr Fisher+Switchboard tasks. Like wav2letter, our architecture, Jasper, uses a stack of 1D-convolution layers, but with ReLU and batch normalization BIBREF10 . We find that ReLU and batch normalization outperform other activation and normalization schemes that we tested for convolutional ASR. As a result, Jasper's architecture contains only 1D convolution, batch normalization, ReLU, and dropout layers – operators highly optimized for training and inference on GPUs.
It is possible to increase the capacity of the Jasper model by stacking these operations. Our largest version uses 54 convolutional layers (333M parameters), while our small model uses 34 (201M parameters). We use residual connections to enable this level of depth. We investigate a number of residual options and propose a new residual connection topology we call Dense Residual (DR).
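A toy, pure-Python sketch of the operator stack described above (1D convolution followed by ReLU). Batch normalization, dropout, and the residual connections are omitted; this illustrates the building block rather than reproducing the actual Jasper implementation:

```python
def conv1d(x, kernel, stride=1):
    """Valid 1-D convolution (cross-correlation, as in deep learning)
    over a single channel, returning one output per kernel position."""
    k = len(kernel)
    return [sum(kernel[j] * x[i + j] for j in range(k))
            for i in range(0, len(x) - k + 1, stride)]

def relu(x):
    """Element-wise rectified linear unit."""
    return [max(0.0, v) for v in x]

def jasper_subblock(x, kernel):
    """One toy sub-block: conv -> ReLU (batch norm and dropout elided)."""
    return relu(conv1d(x, kernel))
```

Stacking dozens of such sub-blocks, with batch norm and dropout between them and residual connections across them, is what yields the 34- and 54-layer configurations.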
Integrating our best acoustic model with a Transformer-XL BIBREF11 language model allows us to obtain new state-of-the-art (SOTA) results on LibriSpeech BIBREF12 test-clean of 2.95% WER and SOTA results among end-to-end models on LibriSpeech test-other. We show competitive results on Wall Street Journal (WSJ), and 2000hr Fisher+Switchboard (F+S). Using only greedy decoding without a language model we achieve 3.86% WER on LibriSpeech test-clean.
This paper makes the following contributions:
Conventional automatic speech recognition (ASR) systems typically consist of several independently learned components: an acoustic model to predict context-dependent sub-phoneme states (senones) from audio, a graph structure to map senones to phonemes, and a pronunciation model to map phonemes to words.
|
What components do conventional automatic speech recognition (ASR) systems consist of?
|
An acoustic model, a graph structure and a pronunciation model.
|
null | false
| 108
|
Clinical researchers teamed with junior medical residents in collaboration with more senior intensive care physicians to carry out text annotation over the period of one year BIBREF13. Operators were grouped to facilitate the annotation of notes in duplicate, allowing for cases of disagreement between operators. The operators within each team were instructed to work independently on note annotation. Clinical texts were annotated in batches which were time-stamped on their day of creation; when both operators in a team completed annotation of a batch, a new batch was created and transferred to them.
Two groups (group 1: co-authors ETM & JTW; group 2: co-authors JW & JF) of two operator pairs of one clinical researcher and one resident physician (who had previously taken the MCAT®) first annotated nursing notes and then discharge summaries. Everyone was first trained on the high-context phenotypes to look for as well as their definitions by going through a number of notes in a group. A total of 13 phenotypes were considered for annotation, and the label “unsure” was used to indicate that an operator would like to seek assistance determining the presence of a phenotype from a more senior physician. Annotations for phenotypes required explicit text in the note indicating the phenotype, but as a result of the complexity of certain phenotypes there was no specific dictionary of terms, or order in which the terms appeared, required for a phenotype to be considered present.
A total of 13 phenotypes were considered for annotation, and the label “unsure” was used to indicate that an operator would like to seek assistance determining the presence of a phenotype from a more senior physician.
|
How many phenotypes were considered for annotation?
|
13 phenotypes.
|
null | false
| null |
Pearl Mill Village Historic District is a national historic district located at Durham, Durham County, North Carolina. The district encompasses 26 contributing residential buildings built by owners of Pearl Cotton Mills. The mill village dwellings are either two-story duplex type built about 1905 or a one-story bungalow constructed about 1924.
|
Given this paragraph about the Pearl Mill Village Historic District, tell me where it is located.
|
The Pearl Mill Village Historic District is in Durham, North Carolina.
|
1803.02839
| false
| null |
First, we propose a class of recurrent-like neural networks for NLP tasks that satisfy the differential equation DISPLAYFORM0
where DISPLAYFORM0
and where INLINEFORM0 and INLINEFORM1 are learned functions. INLINEFORM2 corresponds to traditional RNNs, with INLINEFORM3 . For INLINEFORM4 , this takes the form of RNN cells with either nested internal memories or dependencies that extend temporally beyond the immediately previous hidden state. In particular, using INLINEFORM5 for sentence generation is the topic of a manuscript presently in preparation.
First, we propose a class of recurrent-like neural networks for NLP tasks that satisfy the differential equation DISPLAYFORM0
where DISPLAYFORM0
and where INLINEFORM0 and INLINEFORM1 are learned functions. INLINEFORM2 corresponds to traditional RNNs, with INLINEFORM3 . For INLINEFORM4 , this takes the form of RNN cells with either nested internal memories or dependencies that extend temporally beyond the immediately previous hidden state.
|
What novel class of recurrent-like networks is proposed?
|
A network, whose learned functions satisfy a certain equation. The network contains RNN cells with either nested internal memories or dependencies that extend temporally beyond the immediately previous hidden state.
|
1808.05902
| false
| null |
For all the experiments, the hyper-parameters INLINEFORM0 , INLINEFORM1 and INLINEFORM2 were set using a simple grid search over the collection INLINEFORM3 . The same approach was used to optimize the hyper-parameters of all the baselines. For the svi algorithm, different mini-batch sizes and forgetting rates INLINEFORM4 were tested. For the 20-Newsgroup dataset, the best results were obtained with a mini-batch size of 500 and INLINEFORM5 . The INLINEFORM6 was kept at 1. The results are shown in Fig. FIGREF87 for different numbers of topics, where we can see that the proposed model outperforms all the baselines, with the svi version performing best.
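The grid search described above can be sketched generically; the parameter names, candidate grids, and scoring function below are illustrative assumptions rather than the paper's actual values:

```python
import itertools

# A minimal exhaustive grid search over hyper-parameters.
# The parameter names, grids, and scoring function used in the example
# are illustrative assumptions, not the paper's actual settings.

def grid_search(score_fn, grid):
    """grid: dict mapping parameter name -> list of candidate values.
    Returns (best_params, best_score) maximizing score_fn(params)."""
    best_params, best_score = None, float("-inf")
    names = sorted(grid)
    for values in itertools.product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

In practice each candidate configuration would be scored by validation-set performance; here a toy quadratic stands in for that evaluation.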
In order to assess the computational advantages of the stochastic variational inference (svi) over the batch algorithm, the log marginal likelihood (or log evidence) was plotted against the number of iterations. Fig. FIGREF88 shows this comparison. Not surprisingly, the svi version converges much faster to higher values of the log marginal likelihood when compared to the batch version, which reflects the efficiency of the svi algorithm.
The results are shown in Fig. FIGREF87 for different numbers of topics, where we can see that the proposed model outperforms all the baselines, with the svi version performing best.
In order to assess the computational advantages of the stochastic variational inference (svi) over the batch algorithm, the log marginal likelihood (or log evidence) was plotted against the number of iterations. Fig. FIGREF88 shows this comparison. Not surprisingly, the svi version converges much faster to higher values of the log marginal likelihood when compared to the batch version, which reflects the efficiency of the svi algorithm.
|
what are the advantages of the proposed model?
|
The answers are shown as follows:
* The proposed model outperforms all the baselines, with the svi version performing best.
* the svi version converges much faster to higher values of the log marginal likelihood when compared to the batch version, which reflects the efficiency of the svi algorithm.
|
null | false
| 73
|
To evaluate the proposed approach for multilingual sentiment analysis task, we conducted experiments. This section first presents the corpora used in this study followed by experimental results.
Throughout our experiments, we use SAS Deep Learning Toolkit. For machine translation, Google translation API is used.
To evaluate the proposed approach for multilingual sentiment analysis task, we conducted experiments. This section first presents the corpora used in this study followed by experimental results.
|
What is the EXPERIMENTS part about?
|
This section first presents the corpora used in this study followed by experimental results.
|
null | false
| 258
|
In the first experiment we perform an attribution of individual scenes of H8 using the Support Vector Machine as a classifier and the frequencies of 500 most frequent rhythmic types and the frequencies of 500 most frequent words as a feature set. As training samples, individual scenes of plays written by Shakespeare, Fletcher, and Massinger are used that come roughly from the period when H8 was supposedly written, namely:
Shakespeare: The Tragedy of Coriolanus (5 scenes), The Tragedy of Cymbeline (27 scenes), The Winter’s Tale (12 scenes), The Tempest (9 scenes)
Fletcher: Valentinian (21 scenes), Monsieur Thomas (28 scenes), The Woman’s Prize (23 scenes), Bonduca (18 scenes)
Massinger: The Duke of Milan (10 scenes), The Unnatural Combat (11 scenes), The Renegado (25 scenes)
Altogether there are thus 53 training samples for Shakespeare, 90 training samples for Fletcher and 46 training samples for Massinger. In order to estimate the accuracy of the model, cross-validation is performed in the following way:
To avoid the risk of overfitting which may be caused by testing the model on the scenes from the same play as it was trained on, we do not perform a standard k-fold cross validation. Instead, we classify scenes of each play by a model trained on the rest, i.e. 5 scenes of Shakespeare’s Coriolanus are classified by a model trained on the scenes from the remaining 3 plays by Shakespeare, 4 plays by Fletcher and 5 plays by Massinger, 27 scenes of Cymbeline are classified in the same way and so on.
Since the training data are imbalanced (which may bias the results), we level the number of training samples per author by random selection.
To obtain more representative results, the entire process is repeated 30 times (with a new random selection in each iteration) thus resulting in 30 classifications of each scene.
For the sake of comparison of the attribution power of both feature subsets, cross-validations are performed not only of the combined models (500 words $\cup $ 500 rhythmic types), but also of the words-based models (500 words) and versification-based models (500 rhythmic types) alone.
As shown in Table TABREF14, the versification-based models yield very high accuracy in recognizing Shakespeare and Fletcher (0.97 to 1, with the exception of Valentinian), yet slightly lower accuracy in recognizing Massinger (0.81 to 0.88). The accuracy of the words-based models remains very high across all three authors (0.95 to 1); in three cases it is nevertheless outperformed by the combined model. We thus may conclude that combined models provide a reliable discriminator between Shakespeare’s, Fletcher’s and Massinger’s styles.
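The evaluation protocol above (hold out one play at a time to avoid overfitting, balance the classes by random downsampling) can be sketched as follows; the scene representation and the random seed are illustrative assumptions, and the classifier itself is omitted:

```python
import random
from collections import defaultdict

# Leave-one-play-out splitting with class balancing, as described above.
# Each scene is a (play, author, features) triple; the representation is
# an illustrative assumption, and no actual classifier is included.

def balance(samples, rng):
    """Downsample each author's training scenes to the smallest class size."""
    by_author = defaultdict(list)
    for s in samples:
        by_author[s[1]].append(s)
    n = min(len(v) for v in by_author.values())
    balanced = []
    for v in by_author.values():
        balanced.extend(rng.sample(v, n))
    return balanced

def leave_one_play_out(scenes):
    """Yield (train, test) splits, holding out all scenes of one play at a time."""
    plays = sorted({s[0] for s in scenes})
    for play in plays:
        test = [s for s in scenes if s[0] == play]
        train = [s for s in scenes if s[0] != play]
        yield train, test
```

Repeating the random downsampling (30 times in the paper) and aggregating the per-scene votes then gives the majority attributions reported above.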
Table TABREF19 gives the results of the classifiers when applied to the individual scenes of H8 on the basis of which we may conclude:
It is very unlikely that Massinger took part in the text of H8. Out of 17 scenes only 2 are attributed to Massinger by any of the models (2.1, 4.2), and in both cases by a mere minority of votes.
The probability that the text of H8 is a result of collaboration between Shakespeare and Fletcher is very high: with 7 scenes all the 30 models agree upon Shakespeare’s authorship, with 5 scenes all the 30 models agree upon Fletcher’s authorship.
Our results correspond to the Spedding’s attribution to a high extent. With the exception of two scenes, the majority of models always predict the same author to which it is attributed by Spedding. The two exceptions are the second scene of act 3, where Spedding supposed mixed authorship, and the first scene of act 4, which was originally attributed to Fletcher.
In the first experiment we perform an attribution of individual scenes of H8 using the Support Vector Machine as a classifier and the frequencies of 500 most frequent rhythmic types and the frequencies of 500 most frequent words as a feature set.
|
What is used as a feature set?
|
The frequencies of 500 most frequent rhythmic types and the frequencies of 500 most frequent words.
|
null | false
| null |
Help me plan a trip to Nova Scotia
|
Day 1:
- Start the day with a walking tour of Old Town (2 miles / 3.2 km)
- Visit the National Museum (1 mile / 1.6 km)
- Attend a performance at the National Theater (1 mile / 1.6 km)
Day 2:
- Take a day trip to a nearby national park for hiking and wildlife viewing (60 miles / 96.5 km)
- Return to the city for dinner and drinks in the evening
Day 3:
- Visit the iconic city palace and gardens (2.5 miles / 4 km)
- Explore local markets and street food vendors (1 mile / 1.6 km)
Day 4:
- Take a scenic drive to a nearby coastal city (80 miles / 128.7 km)
- Spend the day at the beach and exploring the city's sights and cuisine
Day 5:
- Boat tour of the city's waterways (7 miles / 11.2 km)
- Visit a traditional fishing village and have a seafood lunch (15 miles / 24.1 km)
Day 6:
- Take a day trip to a nearby hill station for hiking and nature walks (40 miles / 64.4 km)
- Stop at a tea plantation for a tour and tasting (20 miles / 32.2 km)
Day 7:
- Visit a nearby UNESCO World Heritage site (50 miles / 80.5 km)
- Enjoy a cultural performance and local meal in the evening
Day 8:
- Take a leisurely bike ride along a scenic route (10 miles / 16.1 km)
- Visit a local winery for a tasting and tour (5 miles / 8 km)
Day 9:
- Visit a local art museum (2 miles / 3.2 km)
- Take a cooking class and learn to make some local dishes
Day 10:
- Spend the day exploring the city at your own pace, shopping for souvenirs, and enjoying local cuisine before departing in the evening.
|
|
null | false
| null |
What are some ways that I can reduce the latency of my API?
|
To reduce the latency of your API, you could consider caching the results of expensive database operations, reducing the number of network round trips each request requires, or scaling out the number of server instances behind a load balancer.
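Of these, caching is often the quickest win. A minimal sketch of a time-bounded (TTL) cache wrapping an expensive lookup; the TTL value and the wrapped function are illustrative, not tied to any particular framework:

```python
import time
import functools

# A small TTL cache decorator: results of a slow single-argument lookup
# are reused until they expire. The TTL and the wrapped function are
# illustrative; production code would also bound the cache size.

def ttl_cache(ttl_seconds):
    def decorator(fn):
        store = {}  # key -> (expires_at, value)
        @functools.wraps(fn)
        def wrapper(key):
            now = time.monotonic()
            hit = store.get(key)
            if hit is not None and hit[0] > now:
                return hit[1]  # fresh cached value; skips the slow path
            value = fn(key)
            store[key] = (now + ttl_seconds, value)
            return value
        return wrapper
    return decorator
```

A second request for the same key within the TTL window returns the cached value without touching the database at all, which is where the latency saving comes from.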
|
|
null | false
| 354
|
Recent work [1] suggests that recurrent “neural network” models of several types perform better than sequential models in acquiring and processing hierarchical structure. Indeed, recurrent networks have achieved state-of-the-art results in a number of natural language processing tasks, including named-entity recognition [2], language modeling [3], sentiment analysis [4], natural language generation [5], and beyond.
The hierarchical structure associated with natural languages is often modeled as some variant of context-free languages, whose languages may be defined over an alphabet INLINEFORM0 . These context-free languages are exactly those that can be recognized by pushdown automata (PDAs). Thus it is natural to ask whether these modern natural language processing tools, including simple recurrent neural networks (RNNs) and other, more advanced recurrent architectures, can learn to recognize these languages.
The computational power of RNNs has been studied extensively using empirical testing. Much of this research [8], [9] focused on the ability of RNNs to recognize simple context-free languages such as INLINEFORM0 and INLINEFORM1 , or context-sensitive languages such as INLINEFORM2 . Related works [10], [11], [12] focus instead on Dyck languages of balanced parentheses, which motivates some of our methods. Gated architectures such as the Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM) obtain high accuracies on each of these tasks. While simpler RNNs have also been tested, one difficulty is that the standard hyperbolic tangent activation function makes counting difficult. On the other hand, RNNs with ReLU activations were found to perform better, but suffer from what is known as the “exploding gradient problem” and thus are more difficult to train [8].
Instead of focusing on a single task, many researchers have studied the broader theoretical computational power of recurrent models, where weights are not trained but rather initialized to recognize a desired language. A celebrated result [6] shows that a simple recurrent architecture with 1058 hidden nodes and a saturated-linear activation INLINEFORM0 is a universal Turing Machine, with: INLINEFORM1
However, their architecture encodes the whole input in its internal state and the relevant computation is only performed after reading a terminal token. This differs from more common RNN variants that consume tokenized inputs at each time step. Furthermore, the authors admit that were the saturated-linear activation to be replaced with the similar and more common sigmoid or hyperbolic tangent activation functions, their methodology would fail.
More recent work [7] suggests that single-layer RNNs with rectified linear unit (ReLU) activations and softmax outputs can also be simulated as universal Turing Machines, but this approach again suffers from the assumption that the entire input is read before computation occurs.
Motivated by these earlier theoretical results, in this report we seek to show results about the computational power of recurrent architectures actually used in practice - namely, those that read tokens one at a time and that use standard rather than specially chosen activation functions. In particular we will prove that, allowing infinite precision, RNNs with just one hidden layer and ReLU activation are at least as powerful as PDAs, and that GRUs are at least as powerful as deterministic finite automata (DFAs). Furthermore, we show that using infinite edge weights and a non-standard output function, GRUs are also at least as powerful as PDAs.
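The counting behavior behind such results can be illustrated on the Dyck language of balanced parentheses mentioned above, using two ReLU units with hand-set (not learned) weights. This is a toy sketch, not the constructions proved in the paper:

```python
# A hand-wired two-unit ReLU recurrence recognizing Dyck-1 (balanced
# parentheses). Unit h counts unmatched '('; unit d is a sticky flag
# that fires whenever a ')' arrives while the counter is already empty.
# The weights are chosen by hand for illustration, not learned.

def relu(x):
    return max(0.0, x)

def accepts_dyck1(s):
    h, d = 0.0, 0.0
    for ch in s:
        xa = 1.0 if ch == "(" else 0.0
        xb = 1.0 if ch == ")" else 0.0
        d = relu(d + relu(xb - h))  # violation flag, uses previous h
        h = relu(h + xa - xb)       # unmatched-'(' counter
    return h == 0.0 and d == 0.0    # accept: counter empty, no violation
```

Without the second unit, the ReLU clamp at zero would silently forgive a `)` seen on an empty counter, so strings like `())` would be wrongly accepted; the sticky flag preserves that information.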
In particular we will prove that, allowing infinite precision, RNNs with just one hidden layer and ReLU activation are at least as powerful as PDAs, and that GRUs are at least as powerful as deterministic finite automata (DFAs).
|
They prove that in what case RNNs with just one hidden layer and ReLU activation are at least as powerful as PDAs?
|
Allowing infinite precision.
|
null | false
| null |
The nene (Branta sandvicensis), also known as the nēnē or the Hawaiian goose, is a species of bird endemic to the Hawaiian Islands. The nene is exclusively found in the wild on the islands of Oahu, Maui, Kauaʻi, Molokai, and Hawaiʻi. In 1957, it was designated as the official state bird of the state of Hawaiʻi.
|
What is the official bird of the State of Hawaii?
|
The nene, also know as the Hawaiian goose is the official bird of the state of Hawaii.
|
null | false
| 127
|
We construct three datasets based on IMDB reviews and Yelp reviews. The IMDB dataset is binarised and split into a training and test set, each with 25K reviews (2K reviews from the training set are reserved for development). We filter out any review that has more than 400 tokens, producing the final dataset (imdb400). For Yelp, we binarise the ratings, and create 2 datasets, where we keep only reviews with $\le $ 50 tokens (yelp50) and $\le $200 tokens (yelp200). We randomly partition both datasets into train/dev/test sets (90/5/5 for yelp50; 99/0.5/0.5 for yelp200). For all datasets, we use spaCy for tokenisation. We train and tune target classifiers (see Section SECREF8) using the training and development sets; and evaluate their performance on the original examples in the test sets as well as the adversarial examples generated by attacking methods for the test sets. Note that AutoEncoder also involves a training process, for which we train and tune AutoEncoder using the training and development sets in yelp50, yelp200 and imdb400. Statistics of the three datasets are presented in Table TABREF22. These datasets present a variation in the text lengths (e.g. the average number of words for yelp50, yelp200 and imdb400 is 34, 82 and 195 words respectively), training data size (e.g. the number of training examples for target classifiers for imdb400, yelp50 and yelp200 are 18K, 407K and 2M, respectively) and input domain (e.g. restaurant vs. movie reviews).
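The dataset construction above (binarise the ratings, filter reviews by token length) can be sketched as follows; the whitespace tokeniser and the neutral-rating cutoff are simplifying assumptions (the paper uses spaCy for tokenisation):

```python
# Binarise star ratings and filter reviews by token length, sketching
# the yelp50 / yelp200 construction described above. The whitespace
# tokeniser and the 3-star neutral cutoff are simplifying assumptions.

def build_subset(reviews, max_tokens):
    """reviews: iterable of (text, stars). Keeps reviews with at most
    max_tokens tokens and maps stars to a binary sentiment label."""
    subset = []
    for text, stars in reviews:
        tokens = text.split()          # stand-in for spaCy tokenisation
        if len(tokens) > max_tokens:
            continue                   # too long for this subset
        if stars == 3:
            continue                   # drop neutral ratings (assumption)
        label = 1 if stars > 3 else 0  # positive vs. negative
        subset.append((text, label))
    return subset
```

Running this with `max_tokens=50` versus `max_tokens=200` over the same raw reviews would produce length-restricted subsets analogous to yelp50 and yelp200.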
We construct three datasets based on the Yelp reviews and the sentence-level Rotten Tomato (RT) movie reviews.
|
Where is the data of their datasets from?
|
Yelp reviews and the sentence-level Rotten Tomato (RT) movie reviews.
|
null | false
| null |
Write a short poem about spring
|
Spring has sprung and I am here
To take in all the joyful cheer
To bask in sunshine and feel the breeze
And watch the butterflies and the bees
The skies are blue, the days are long
The flowers bloom, birds sing their song
I'll spend many hours here on my deck
OW! Something bit me, what the heck?
The bug spray doesn't work, it lied!
That's it, I'm going back inside.
|
|
null | false
| 130
|
The baseline experiments confirm that M-Bert can achieve a reasonable accuracy on x-stance.
To put the supervised score into context we list scores that variants of Bert have achieved on other stance detection datasets in Table TABREF46. It seems that the supervised part of x-stance has a similar difficulty as the SemEval-2016 BIBREF0 or MPCHI BIBREF22 datasets on which Bert has previously been evaluated.
On the other hand, in the cross-lingual and cross-target settings, the mean score drops by 6–8 percentage points compared to the supervised setting; while zero-shot transfer is possible to a degree, it can still be improved.
The additional experiments (Table TABREF39) validate the results and show that the sequence-pair classification approach to stance detection is justified.
It is interesting to see what errors the M-Bert model makes. Table TABREF47 presents instances where it predicts the wrong label with a high confidence. These examples indicate that many comments express their stance only on a very implicit level, and thus hint at a potential weakness of the dataset. Because on the voting advice platform the label is explicitly shown to readers in addition to the comments, the comments do not need to express the stance explicitly. Manual annotation could eliminate very implicit samples in a future version of the dataset. However, the sheer size and breadth of the dataset could not realistically be achieved with manual annotation, and, in our view, largely compensates for the implicitness of the texts.
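The sequence-pair classification setup validated above packs the question (target) and the comment into one input sequence. A sketch of the input construction only, with the model call omitted; the special-token names follow BERT's convention and the whitespace tokeniser is a toy stand-in for a real subword tokeniser:

```python
# Building a BERT-style sequence-pair input for stance detection:
# the question (target) and the comment are packed into one sequence
# separated by special tokens. Token names follow BERT's convention;
# the whitespace tokeniser is a toy stand-in for a subword tokeniser.

def make_pair_input(question, comment, max_len=32):
    tokens = ["[CLS]"] + question.split() + ["[SEP]"] + comment.split() + ["[SEP]"]
    tokens = tokens[:max_len]
    # segment ids: 0 for the question side, 1 for the comment side
    sep = tokens.index("[SEP]")
    segments = [0] * (sep + 1) + [1] * (len(tokens) - sep - 1)
    return tokens, segments
```

A real pipeline would then map the tokens to ids and feed tokens plus segment ids to the model, whose `[CLS]` representation is classified into favor/against.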
Our experiments show that M-BERT achieves a reasonable accuracy on X-stance, outperforming majority class baselines and a fastText classifier.
|
What has been shown by the experiments in the paper?
|
M-BERT achieves a reasonable accuracy on X-stance, outperforming majority class baselines and a fastText classifier.
|
null | false
| null |
In what year was the Oakland Ski Club founded?
|
OSC was founded in 1941.
|
|
null | false
| 154
|
Question answering (QA) has been a blooming research field for the last decade. Selection-based QA refers to a family of tasks that find answer contexts from large data given questions in natural language. Three tasks have been proposed for selection-based QA. Given a document, answer extraction BIBREF0 , BIBREF1 finds answer phrases whereas answer selection BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 and answer triggering BIBREF6 , BIBREF7 find answer sentences instead, although the presence of the answer context is not assumed within the provided document for answer triggering but it is for the other two tasks. Recently, various QA tasks that are not selection-based have been proposed BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 ; however, selection-based QA remains important because of its practical value to real applications (e.g., IBM Watson, MIT START).
Several datasets have been released for selection-based QA. wang:07a created the QASent dataset consisting of 277 questions, which has been widely used for benchmarking the answer selection task. feng:15a presented InsuranceQA comprising 16K+ questions on insurance contexts. yang:15a introduced WikiQA for answer selection and triggering. jurczyk:16 created SelQA for large real-scale answer triggering. rajpurkar2016squad presented SQuAD for answer extraction and selection as well as for reading comprehension. Finally, morales-EtAl:2016:EMNLP2016 provided InfoboxQA for answer selection.
These corpora make it possible to evaluate the robustness of statistical question answering learning. Although all of these corpora target selection-based QA, they are designed for different purposes, so it is important to understand their nature in order to make better use of them. In this paper, we make both intrinsic and extrinsic analyses of four latest corpora based on Wikipedia: WikiQA, SelQA, SQuAD, and InfoboxQA. We first give a thorough intrinsic analysis regarding contextual similarities, question types, and answer categories (Section SECREF2 ). We then map questions in all corpora to the current version of English Wikipedia and benchmark another selection-based QA task, answer retrieval (Section SECREF3 ). Finally, we present an extrinsic analysis through a set of experiments cross-testing these corpora using a convolutional neural network architecture (Section SECREF4 ).
In this paper, we make both intrinsic and extrinsic analyses of four latest corpora based on Wikipedia, WIKIQA, SELQA, SQUAD, and INFOBOXQA.
|
What corpora do the authors make both intrinsic and extrinsic analyses?
|
Four latest corpora based on Wikipedia, WIKIQA, SELQA, SQUAD, and INFOBOXQA.
|
null | false
| null |
Why is the 1980 gold medal match in hockey called the 'Miracle on Ice'?
|
The US Olympic hockey team was a heavy underdog against the Soviets in 1980, and their victory over the Soviets was a surprise to nearly everyone. As the game concluded, Al Michaels, the play-by-play announcer for the game, exclaimed "Do you believe in miracles? Yes!', and because many people around the world watched the game on television, the game became known as the 'Miracle on Ice'.
|
|
null | false
| null |
Who are the most famous athletes in Seattle sports history?
|
Seattle sports are relatively young, with the city only gaining franchises in the NBA, NFL and MLB in the 1970s. The most famous athletes include Ken Griffey Jr and Ichiro Suzuki (MLB), Russell Wilson, Marshawn Lynch and Steve Largent (NFL), and Gary Payton, Shawn Kemp and Kevin Durant (NBA).
|
|
null | false
| null |
When do avalanches occur?
|
Avalanches are primarily caused by unstable snow conditions. Although a mountainside covered in snow might appear as a uniform surface, it is actually made up of many layers of snow from different storms, each of which might have very different compositions. For example, a "dry" snow from a very cold storm might land directly on top of a sun-baked layer which turned to ice. The point at which these layers meet is called an interface, and is where slipping or sliding of one layer on top of another is most likely to occur. This occurrence is what can trigger an avalanche. Snow conditions on a slope can also evolve over time with wind and sun and other weather patterns, so even a once-safe slope might increase in risk under certain conditions. People recreating in the outdoors in the winter can benefit from learning about these situations from avalanche professionals.
|