| paper_id (string, 10 chars, nullable) | yes_no (bool, 2 classes) | paper_index (int64, 0–519, nullable) | evidence (string, 0–37.7k chars) | question (string, 4–11.7k chars) | answer (string, 1–26k chars) |
|---|---|---|---|---|---|
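The column layout above can be summarized as a typed row structure. A minimal sketch, assuming the field names from the header; the exact types are inferred from the viewer summary (nullable columns as `Optional`), not taken from any official schema:

```python
from typing import Optional, TypedDict

class QARow(TypedDict):
    # Field names follow the table header; types are assumptions
    # based on the dataset-viewer summary above.
    paper_id: Optional[str]     # 10-char id when present, else null
    yes_no: bool                # 2 classes
    paper_index: Optional[int]  # 0-519 when present, else null
    evidence: str               # supporting passage(s)
    question: str
    answer: str

# Example row mirroring the first record in the table below.
row: QARow = {
    "paper_id": None,
    "yes_no": False,
    "paper_index": None,
    "evidence": "Twitter was created by Jack Dorsey, Noah Glass, ...",
    "question": "Given a reference text about twitter, how many accounts were estimated to be fake?",
    "answer": "In 2020, about 48 million accounts were estimated to be fake.",
}
```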
null | false
| null |
Twitter was created by Jack Dorsey, Noah Glass, Biz Stone, and Evan Williams in March 2006 and launched in July of that year. Twitter, Inc. is based in San Francisco, California and has more than 25 offices around the world. By 2012, more than 100 million users tweeted 340 million tweets a day, and the service handled an average of 1.6 billion search queries per day. In 2013, it was one of the ten most-visited websites and has been described as "the SMS of the Internet". By the start of 2019, Twitter had more than 330 million monthly active users. In practice, the vast majority of tweets are tweeted by a minority of users. In 2020, it was estimated that approximately 48 million accounts (15% of all accounts) were fake.
|
Given a reference text about twitter, how many accounts were estimated to be fake?
|
In 2020, about 48 million accounts were estimated to be fake.
|
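The fake-account figure in the evidence implies a rough total account count. A quick back-of-the-envelope check (simple arithmetic on the passage's own numbers, nothing added from outside):

```python
# Figures from the passage: ~48 million fake accounts = 15% of all accounts.
fake_accounts = 48_000_000
fake_fraction = 0.15

# Implied total number of registered accounts.
total_accounts = fake_accounts / fake_fraction  # 48e6 / 0.15 = 3.2e8
```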
null | false
| 381
|
Named entity typing is the task of detecting the type (e.g., person, location, or organization) of a named entity in natural language text. Entity type information has been shown to be useful in natural language tasks such as question answering BIBREF0 , knowledge-base population BIBREF1 , BIBREF2 , and co-reference resolution BIBREF3 . Motivated by its application to downstream tasks, recent work on entity typing has moved beyond standard coarse types towards finer-grained semantic types with richer ontologies BIBREF0 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . Rather than assuming an entity can be uniquely categorized into a single type, the task has been approached as a multi-label classification problem: e.g., in “... became a top seller ... Monopoly is played in 114 countries. ...” (fig:arch), “Monopoly” is considered both a game as well as a product.
The state-of-the-art approach BIBREF8 for fine-grained entity typing employs an attentive neural architecture to learn representations of the entity mention as well as its context. These representations are then combined with hand-crafted features (e.g., lexical and syntactic features), and fed into a linear classifier with a fixed threshold. While this approach outperforms previous approaches which only use sparse binary features BIBREF4 , BIBREF6 or distributed representations BIBREF9 , it has a few drawbacks: (1) the representations of left and right contexts are learnt independently, ignoring their mutual connection; (2) the attention on context is computed solely upon the context, considering no alignment to the entity; (3) document-level contexts which could be useful in classification are not exploited; and (4) hand-crafted features heavily rely on system or human annotations.
To overcome these drawbacks, we propose a neural architecture (fig:arch) which learns more context-aware representations by using a better attention mechanism and taking advantage of semantic discourse information available in both the document as well as sentence-level contexts. Further, we find that adaptive classification thresholds lead to further improvements. Experiments demonstrate that our approach, without any reliance on hand-crafted features, outperforms prior work on three benchmark datasets.
We found that a flat architecture with inter-sentence Transformer layers performs the best, achieving the state-of-the-art results on this task.
|
Which architecture do they find performs best?
|
A flat architecture with inter-sentence Transformer layers.
|
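The adaptive classification thresholds mentioned in the evidence can be sketched as follows. This is an illustrative sketch, not the paper's implementation: one threshold per type is tuned on held-out data (here, by grid search on F1) instead of using a single fixed cutoff for all types:

```python
import numpy as np

def tune_thresholds(scores, gold, grid=np.linspace(0.1, 0.9, 17)):
    """Pick one decision threshold per type that maximizes F1 on
    held-out data, rather than a single fixed threshold for all types."""
    thresholds = np.full(scores.shape[1], 0.5)
    for t in range(scores.shape[1]):
        best_f1 = -1.0
        for th in grid:
            pred = scores[:, t] >= th
            tp = np.sum(pred & (gold[:, t] == 1))
            fp = np.sum(pred & (gold[:, t] == 0))
            fn = np.sum(~pred & (gold[:, t] == 1))
            f1 = 2 * tp / max(2 * tp + fp + fn, 1)
            if f1 > best_f1:
                best_f1, thresholds[t] = f1, th
    return thresholds

# Toy scores for two types ("game", "product"); a mention may carry both
# labels, as in the Monopoly example from the evidence.
scores = np.array([[0.9, 0.8], [0.2, 0.7], [0.6, 0.1]])
gold = np.array([[1, 1], [0, 1], [1, 0]])
th = tune_thresholds(scores, gold)
pred = scores >= th  # multi-label decisions with per-type thresholds
```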
2003.06279
| false
| null |
In a more practical scenario, text networks have been used in text classification tasks BIBREF8, BIBREF9, BIBREF10. The main advantage of the model is that it does not rely on deep semantic information to obtain competitive results. Another advantage of graph-based approaches is that, when combined with other approaches, they yield competitive results BIBREF11. A simple, yet recurrent text model is the well-known word co-occurrence network. After optional textual pre-processing steps, in a co-occurrence network each different word becomes a node and edges are established via co-occurrence in a desired window. A common strategy connects only adjacent words in the so called word adjacency networks.
|
On what model architectures are previous co-occurrence networks based?
|
The answers are shown as follows:
* in a co-occurrence network each different word becomes a node and edges are established via co-occurrence in a desired window
* connects only adjacent words in the so called word adjacency networks
|
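The co-occurrence construction described in the evidence is straightforward to sketch. A minimal illustration, assuming plain whitespace tokenization; window size and any pre-processing are choices left open by the passage:

```python
def cooccurrence_edges(tokens, window=1):
    """Nodes are distinct words; an edge links two words that co-occur
    within `window` positions of each other.
    window=1 yields a word adjacency network (adjacent words only)."""
    edges = set()
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            if w != tokens[j]:
                edges.add(frozenset((w, tokens[j])))
    return edges

tokens = "the quick brown fox jumps over the lazy dog".split()
adj = cooccurrence_edges(tokens, window=1)
# "the" is linked to both "quick" and "lazy" via adjacency
```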
null | false
| null |
The arch is among many in the Devils Garden area in the north of the park. Landscape Arch was named by Frank Beckwith who explored the area in the winter of 1933–1934 as the leader of an Arches National Monument scientific expedition. The arch can be reached by a 0.8 mi (1.3 km) graded gravel trail.
|
Who named the Landscape Arch?
|
Landscape Arch was named by Frank Beckwith, who found the arch while exploring the area in the winter of 1933–1934.
|
1701.03214
| false
| null |
For NMT, we used the KyotoNMT system BIBREF16. The NMT training settings are the same as those of the best systems that participated in WAT 2016. The sizes of the source and target vocabularies, the source and target side embeddings, the hidden states, the attention mechanism hidden states, and the deep softmax output with a 2-maxout layer were set to 32,000, 620, 1000, 1000, and 500, respectively. We used 2-layer LSTMs for both the source and target sides. ADAM was used as the learning algorithm, with a dropout rate of 20% for the inter-layer dropout, and L2 regularization with a weight decay coefficient of 1e-6. The mini-batch size was 64, and sentences longer than 80 tokens were discarded. We stopped training early once the BLEU score on the development set converged. For testing, we self-ensembled three sets of parameters: those with the best development loss, those with the best development BLEU, and the final parameters. The beam size was set to 100.
|
What kinds of neural networks did they use in this paper?
|
The answers are shown as follows:
* LSTMs
|
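The training settings listed in the evidence can be collected into a single configuration sketch. The values below are taken directly from the passage; the dict layout itself is illustrative and is not KyotoNMT's actual configuration format:

```python
# Hyperparameters as reported in the passage (WAT 2016 best-system settings).
nmt_config = {
    "vocab_size": 32_000,           # source and target vocabularies
    "embedding_dim": 620,           # source and target side embeddings
    "hidden_size": 1000,            # LSTM hidden states
    "attention_hidden_size": 1000,  # attention mechanism hidden states
    "softmax_output_size": 500,     # deep softmax with a 2-maxout layer
    "encoder_layers": 2,            # 2-layer LSTMs on both sides
    "decoder_layers": 2,
    "optimizer": "adam",
    "inter_layer_dropout": 0.20,
    "l2_weight_decay": 1e-6,
    "batch_size": 64,
    "max_sentence_length": 80,      # longer sentences discarded
    "beam_size": 100,
}
```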
null | false
| 133
|
The task of document quality assessment is to automatically assess a document according to some predefined inventory of quality labels. This can take many forms, including essay scoring (quality = language quality, coherence, and relevance to a topic), job application filtering (quality = suitability for role + visual/presentational quality of the application), or answer selection in community question answering (quality = actionability + relevance of the answer to the question). In the case of this paper, we focus on document quality assessment in two contexts: Wikipedia document quality classification, and whether a paper submitted to a conference was accepted or not.
Automatic quality assessment has obvious benefits in terms of time savings and tractability in contexts where the volume of documents is large. In the case of dynamic documents (possibly with multiple authors), such as in the case of Wikipedia, it is particularly pertinent, as any edit potentially has implications for the quality label of that document (and around 10 English Wikipedia documents are edited per second). Furthermore, when the quality assessment task is decentralized (as in the case of Wikipedia and academic paper assessment), quality criteria are often applied inconsistently by different people, where an automatic document quality assessment system could potentially reduce inconsistencies and enable immediate author feedback.
Current studies on document quality assessment mainly focus on textual features. For example, BIBREF0 examine features such as the article length and the number of headings to predict the quality class of a Wikipedia article. In contrast to these studies, in this paper, we propose to combine text features with visual features, based on a visual rendering of the document. Figure 1 illustrates our intuition, relative to Wikipedia articles. Without being able to read the text, we can tell that the article in Figure 1 has higher quality than Figure 1 , as it has a detailed infobox, extensive references, and a variety of images. Based on this intuition, we aim to answer the following question: can we achieve better accuracy on document quality assessment by complementing textual features with visual features?
Our visual model is based on fine-tuning an Inception V3 model BIBREF1 over visual renderings of documents, while our textual model is based on a hierarchical biLSTM. We further combine the two into a joint model. We perform experiments on two datasets: a Wikipedia dataset novel to this paper, and an arXiv dataset provided by BIBREF2 split into three sub-parts based on subject category. Experimental results on the visual renderings of documents show that implicit quality indicators, such as images and visual layout, can be captured by an image classifier, at a level comparable to a text classifier. When we combine the two models, we achieve state-of-the-art results over 3/4 of our datasets.
This paper makes the following contributions:
All code and data associated with this research will be released on publication.
We perform experiments on two datasets: a Wikipedia dataset novel to this paper, and an arXiv dataset provided by Kang et al. (2018) split into three sub-parts based on subject category.
|
What datasets do the authors perform experiments on?
|
A Wikipedia dataset and an arXiv dataset.
|
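The joint model described in the evidence (textual biLSTM features combined with visual Inception V3 features) can be sketched as a simple late-fusion classifier. A hedged sketch with toy feature vectors: the real components are fine-tuned networks, and the feature dimensions here are assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def joint_score(text_feat, visual_feat, w, b=0.0):
    """Late fusion: concatenate textual and visual feature vectors,
    then a linear layer + sigmoid produces a quality score."""
    fused = np.concatenate([text_feat, visual_feat])
    return 1.0 / (1.0 + np.exp(-(w @ fused + b)))

text_feat = rng.normal(size=128)    # stand-in for hierarchical biLSTM output
visual_feat = rng.normal(size=256)  # stand-in for Inception V3 pooled features
w = rng.normal(size=384) * 0.01     # toy fusion-classifier weights
score = joint_score(text_feat, visual_feat, w)
```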
null | false
| 59
|
Conventional automatic speech recognition (ASR) systems typically consist of several independently learned components: an acoustic model to predict context-dependent sub-phoneme states (senones) from audio, a graph structure to map senones to phonemes, and a pronunciation model to map phonemes to words. Hybrid systems combine hidden Markov models to model state dependencies with neural networks to predict states BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . Newer approaches such as end-to-end (E2E) systems reduce the overall complexity of the final system.
Our research builds on prior work that has explored using time-delay neural networks (TDNN), other forms of convolutional neural networks, and Connectionist Temporal Classification (CTC) loss BIBREF4 , BIBREF5 , BIBREF6 . We took inspiration from wav2letter BIBREF6 , which uses 1D-convolution layers. Liptchinsky et al. BIBREF7 improved wav2letter by increasing the model depth to 19 convolutional layers and adding Gated Linear Units (GLU) BIBREF8 , weight normalization BIBREF9 and dropout.
By building a deeper and larger capacity network, we aim to demonstrate that we can match or outperform non end-to-end models on the LibriSpeech and 2000hr Fisher+Switchboard tasks. Like wav2letter, our architecture, Jasper, uses a stack of 1D-convolution layers, but with ReLU and batch normalization BIBREF10 . We find that ReLU and batch normalization outperform other activation and normalization schemes that we tested for convolutional ASR. As a result, Jasper's architecture contains only 1D convolution, batch normalization, ReLU, and dropout layers – operators highly optimized for training and inference on GPUs.
It is possible to increase the capacity of the Jasper model by stacking these operations. Our largest version uses 54 convolutional layers (333M parameters), while our small model uses 34 (201M parameters). We use residual connections to enable this level of depth. We investigate a number of residual options and propose a new residual connection topology we call Dense Residual (DR).
Integrating our best acoustic model with a Transformer-XL BIBREF11 language model allows us to obtain new state-of-the-art (SOTA) results on LibriSpeech BIBREF12 test-clean of 2.95% WER and SOTA results among end-to-end models on LibriSpeech test-other. We show competitive results on Wall Street Journal (WSJ), and 2000hr Fisher+Switchboard (F+S). Using only greedy decoding without a language model we achieve 3.86% WER on LibriSpeech test-clean.
This paper makes the following contributions:
All Jasper models have four additional convolutional blocks: one pre-processing and three post-processing.
|
Do Jasper models contain an additional pre-processing block?
|
Yes.
|
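A Jasper sub-block as described in the evidence (1D convolution, batch normalization, ReLU, dropout) can be sketched in numpy. An illustrative single-channel sketch, not the actual multi-channel GPU implementation; dropout is omitted as it is inactive at inference:

```python
import numpy as np

def conv1d(x, kernel):
    """'Same' 1D convolution over a single channel (zero-padded)."""
    k = len(kernel)
    xp = np.pad(x, k // 2)
    return np.array([xp[i:i + k] @ kernel for i in range(len(x))])

def jasper_subblock(x, kernel):
    """Conv -> batch norm -> ReLU (dropout omitted at inference)."""
    y = conv1d(x, kernel)
    y = (y - y.mean()) / (y.std() + 1e-5)  # batch norm (gamma=1, beta=0)
    return np.maximum(y, 0.0)              # ReLU

def jasper_block(x, kernels):
    """Stack sub-blocks; a residual connection adds the input back in.
    (The paper's Dense Residual topology would also add outputs of
    earlier blocks; this sketch shows a single residual connection.)"""
    y = x
    for kern in kernels:
        y = jasper_subblock(y, kern)
    return y + x

x = np.sin(np.linspace(0.0, 3.0, 32))
out = jasper_block(x, [np.array([0.25, 0.5, 0.25])] * 2)
```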
null | false
| null |
The Cable Television Standards Council of Canada was an independent organization established by the Canadian cable television industry to administer Standards, Codes and Guidelines that ensure high standards of customer service. In February 2002, Vidéotron withdrew from the CTSC. Later that year, effective December 1, Shaw also withdrew. "The CTSC dealt with complaints with respect to cable service, such as concerns about quality of service and billing until 12 April 2006, when this organisation ceased operation. Since then, complaints filed with the Commission about quality of service and billing are forwarded to the appropriate licensee [that is a cable company] for resolution."
|
When did Videotron withdraw from the Cable Television Standards Council (CTSC)?
|
Videotron withdrew from the Cable Television Standards Council in February 2002.
|
1906.05474
| false
| null |
FLOAT SELECTED: Table 1: BLUE tasks
|
which tasks are used in BLUE benchmark?
|
The answers are shown as follows:
* Inference task
The aim of the inference task is to predict whether the premise sentence entails or contradicts the hypothesis sentence
* Document multilabel classification
The multilabel classification task predicts multiple labels from the texts.
* Relation extraction
The aim of the relation extraction task is to predict relations and their types between the two entities mentioned in the sentences.
* Named entity recognition
The aim of the named entity recognition task is to predict mention spans given in the text
* Sentence similarity
The sentence similarity task is to predict similarity scores based on sentence pairs
|
null | false
| null |
What are 3 popular movies with Adam Sandler?
|
The following are 3 popular movies with Adam Sandler:
1. Big Daddy
2. Billy Madison
3. The Waterboy
|
|
null | false
| 512
|
The fully-connected layers consist of three layers, and the numbers of units are 512, 256, and 1, respectively
|
How many fully connected layers with how many neurons were used?
|
Three fully-connected layers were used, with 512, 256, and 1 units, respectively.
|
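The fully-connected stack from the evidence (512 → 256 → 1 units) can be sketched as a forward pass. A minimal numpy sketch: the activations and the input feature size are assumptions, since the passage does not specify them:

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [512, 256, 1]  # units per fully-connected layer, per the passage

def forward(x, weights, biases):
    """ReLU on hidden layers; the final single unit is left linear."""
    for i, (w, b) in enumerate(zip(weights, biases)):
        x = x @ w + b
        if i < len(weights) - 1:
            x = np.maximum(x, 0.0)
    return x

in_dim = 1024  # assumed input feature size (not given in the passage)
dims = [in_dim] + layer_sizes
weights = [rng.normal(size=(dims[i], dims[i + 1])) * 0.01 for i in range(3)]
biases = [np.zeros(d) for d in layer_sizes]
out = forward(rng.normal(size=in_dim), weights, biases)
```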
null | false
| null |
The Berlin Wall (German: Berliner Mauer, pronounced [bɛʁˌliːnɐ ˈmaʊ̯ɐ]) was a guarded concrete barrier that encircled West Berlin of the Federal Republic of Germany (FRG) from 1961 to 1989, separating it from East Berlin and the German Democratic Republic (GDR). Construction of the Berlin Wall was commenced by the government of the GDR on 13 August 1961. It included guard towers placed along large concrete walls, accompanied by a wide area (later known as the "death strip") that contained anti-vehicle trenches, beds of nails and other defenses. The primary intention for the Wall's construction was to prevent East German citizens from fleeing to the West.
|
What was the primary reason for constructing the Berlin Wall?
|
The Berlin Wall was primarily constructed to prevent East German citizens from fleeing to the West.
|
null | false
| null |
What is the YIMBY movement?
|
YIMBY stands for Yes In My Back Yard. The YIMBY movement is a grassroots response to the phenomenon of NIMBYism, which stands for Not In My Back Yard, and is the name for concentrated local opposition to societally-beneficial resources. For example, NIMBY neighbors might oppose the construction of a new hospital in their neighborhood on the grounds that it will increase local traffic. The whole city will benefit from the new hospital, but the immediate neighbors will oppose it on these grounds regardless of where in the city it is constructed.
The YIMBY movement in the United States is primarily concerned with housing. YIMBY organizations have been formed across the country to support measures that will increase the supply of housing, such as repealing bans on the construction of apartment buildings.
|
|
1804.03396
| true
| null |
For future work, we plan to handle triples with multiple entities as the second entity, which is excluded from the problem scope of this paper. Besides, processing longer documents and improving the quality of our benchmark are also challenging problems, as we mentioned previously. We hope this work can provide new thoughts for the area of information extraction.
The input of QA4IE is a document $D$ with an existing knowledge base $K$ and the output is a set of relation triples $R = \lbrace e_i, r_{ij}, e_j\rbrace $ in $D$ where $e_i$ and $e_j$ are two individual entities and $r_{ij}$ is their relation. We ignore the adverbials and only consider the entity pairs and their relations as in standard RE settings. Note that we process the entire document as a whole instead of processing individual sentences separately as in previous systems. As shown in Figure 1 , our QA4IE framework consists of four key steps:
|
Can this approach model n-ary relations?
|
No.
|
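The output format defined in the evidence, a set of relation triples $R = \lbrace e_i, r_{ij}, e_j\rbrace$ over a whole document, can be sketched as a simple data structure. Illustrative only; the entity values below are made up:

```python
from typing import NamedTuple

class Triple(NamedTuple):
    head: str      # entity e_i
    relation: str  # relation r_ij
    tail: str      # entity e_j -- a single entity, since triples with
                   # multiple second entities are outside the paper's scope

# Hypothetical document-level extraction result (entities made up).
R = {
    Triple("Jack Dorsey", "co-founded", "Twitter"),
    Triple("Twitter", "headquartered_in", "San Francisco"),
}
```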
null | false
| 399
|
Machine translation (MT) has achieved huge advances in the past few years BIBREF1, BIBREF2, BIBREF3, BIBREF4. However, the need for a large amount of manual parallel data obstructs its performance under low-resource conditions. Building an effective model on low resource data or even in an unsupervised way is always an interesting and challenging research topic BIBREF5, BIBREF6, BIBREF7. Recently, unsupervised MT BIBREF8, BIBREF9, BIBREF0, BIBREF10, BIBREF11, which can immensely reduce the reliance on parallel corpora, has been gaining more and more interest.
Training cross-lingual word embeddings BIBREF0, BIBREF12 is always the first step of the unsupervised MT models which produce a word-level shared embedding space for both the source and target, but the lexical coverage can be an intractable problem. To tackle this issue, BIBREF13 provided a subword-level solution to overcome the out-of-vocabulary (OOV) problem.
In this work, the systems we implement for the German-Czech language pair are built based on the previously proposed unsupervised MT systems, with some adaptations made to accommodate the morphologically rich characteristics of German and Czech BIBREF14. Both word-level and subword-level neural machine translation (NMT) models are applied in this task and further tuned by pseudo-parallel data generated from a phrase-based statistical machine translation (PBSMT) model, which is trained following the steps proposed in BIBREF10 without using any parallel data. We propose to train BPE embeddings for German and Czech separately and align those trained embeddings into a shared space with MUSE BIBREF0 to reduce the combinatorial explosion of word forms for both languages. To ensure the fluency and consistency of translations, an additional Czech language model is trained to select the translation candidates generated through beam search by rescoring them. Besides the above, a series of post-processing steps are applied to improve the quality of final translations. Our contribution is two-fold:
We propose a method to combine word and subword (BPE) pre-trained input representations aligned using MUSE BIBREF0 as an NMT training initialization on a morphologically-rich language pair such as German and Czech.
We study the effectiveness of language model rescoring to choose the best sentences and unknown word replacement (UWR) procedure to reduce the drawback of OOV words.
This paper is organized as follows: in Section SECREF2, we describe our approach to the unsupervised translation from German to Czech. Section SECREF3 reports the training details and the results for each steps of our approach. More related work is provided in Section SECREF4. Finally, we conclude our work in Section SECREF5.
|
What effectiveness of language model rescoring do they study?
|
To choose the best sentences and unknown word replacement (UWR) procedure to reduce the drawback of OOV words.
|
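The language-model rescoring step described in the evidence (choosing among beam-search candidates with a target-side LM) can be sketched as follows. An illustrative sketch: the toy bigram scorer stands in for the trained Czech language model, and all probabilities are made up:

```python
def lm_score(sentence, bigram_logprobs, default=-5.0):
    """Length-normalized log-probability under a (toy) bigram LM."""
    tokens = ["<s>"] + sentence.split()
    total = sum(bigram_logprobs.get((a, b), default)
                for a, b in zip(tokens, tokens[1:]))
    return total / max(len(tokens) - 1, 1)

def rescore(candidates, bigram_logprobs):
    """Pick the most fluent candidate among beam-search outputs."""
    return max(candidates, key=lambda c: lm_score(c, bigram_logprobs))

# A toy LM stand-in that prefers the grammatical word order.
lm = {("<s>", "the"): -0.5, ("the", "cat"): -1.0, ("cat", "sleeps"): -1.2}
best = rescore(["cat the sleeps", "the cat sleeps"], lm)
```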
null | false
| null |
Give me the list of top 10 movies of all time, by IMDB rating. Format: Title (Release Date) Rating
|
These are the top 10 movies based on their IMDB rating:
1. The Shawshank Redemption (1994) 9.2
2. The Godfather (1972) 9.2
3. The Dark Knight (2008) 9.0
4. The Godfather Part II (1974) 9.0
5. 12 Angry Men (1957) 9.0
6. Schindler's List (1993) 8.9
7. The Lord of the Rings: The Return of the King (2003) 8.9
8. Pulp Fiction (1994) 8.8
9. The Lord of the Rings: The Fellowship of the Ring (2001) 8.8
10. The Good, the Bad and the Ugly (1966) 8.8
|
|
null | false
| null |
Oxygen Not Included is a survival simulation video game developed and published by Klei Entertainment. After a period in early access on Steam beginning in February 2017, the game was officially released on July 30, 2019.
Oxygen Not Included is a simulation survival game. At the start of a new game, three colonists (referred to as duplicants) find themselves in an asteroid with isolated pockets of breathable atmosphere, with no memory of how they got there. The player is tasked with managing and taking care of these duplicants as they try to survive and create a sustainable makeshift space colony. The player must monitor the duplicants' hunger, waste, and oxygen levels to keep them alive. Each game's world is procedurally generated. The world is then subdivided into various regions or "biomes" that contain different and often biome specific materials and critters. While initial areas have a breathable atmosphere, subsequent areas are in a vacuum or lack oxygen, requiring proper preparation by the duplicants before they explore these areas. The world also contains several hazards such as diseases and extreme temperatures. The game simulates the diffusion of gases and equalization of atmospheres when a new natural chamber is opened, which can cause oxygen levels to drop in existing chambers, as well as the draining of liquids by gravity.
To help establish the colony, the player directs the duplicants to perform certain tasks, such as mining for resources, growing food, crafting equipment, researching new technologies, and maintaining their own health through nourishment, rest, and hygiene. The player does not control the duplicants directly, and instead provides prioritized instructions, from which the duplicants will then follow to the best of their abilities. For example, the player can order a conduit of wire to be built; which will have the duplicants collect the materials to make the wire, clear away any materials around the conduit's path, and then construct the wire. If the duplicants cannot access a source of copper for the wire, the task will remain uncompleted as the duplicants go to complete other tasks they can do. Duplicants have stats that determine how effective they are at certain tasks, and will prioritize tasks that they are best at. These duplicants' skills can be improved over time and practice.
|
What is Oxygen Not Included?
|
Oxygen Not Included is a space-colony simulation survival game produced and published by Klei Entertainment, the same studio that developed Don’t Starve. At the start of a new game, three colonists (referred to as duplicants) find themselves in an asteroid with isolated pockets of breathable atmosphere, with no memory of how they got there. The player is tasked with managing and taking care of these duplicants as they try to survive and create a sustainable makeshift space colony. The player must monitor the duplicants' hunger, waste, and oxygen levels to keep them alive. To help establish the colony, the player directs the duplicants to perform certain tasks, such as mining for resources, growing food, crafting equipment, researching new technologies, and maintaining their own health through nourishment, rest, and hygiene.
|
null | false
| null |
Can you write a Haiku about using AI?
|
Computers are smart,
I wonder who's in control,
Beware of Skynet
|
|
null | false
| null |
What is the structure of Public Forum debate, as used in US debate competitions?
|
A public forum debate takes place between two teams of two individuals each. For the assigned topic, one team argues for the affirmative, or pro side, and the other argues for the negative, or con side.
The debate is organized in a sequence of speeches according to the following outline:
• Constructives (4 minutes): The first speaker of each team gives a pre-prepared case that makes arguments for their side.
• 1st Crossfire (3 minutes): The crossfire, or cross-examination, is a time for members of each team to take turns asking each other questions about their arguments. The first crossfire takes place between the first speakers of each team.
• Rebuttals (4 minutes): The second speaker of each team presents a rebuttal, in which they respond to the points made in their opponents' constructive speech. The second speaking team's rebuttal will sometimes defend their own case along with attacking their opponents'.
• 2nd Crossfire (3 minutes): This crossfire is similar to the first crossfire, except it is between the second speakers of each team.
• Summaries (3 minutes): The first speaker of each team gives a speech that aims to break down the main issues of the round.
• Grand Crossfire (3 minutes): A crossfire where both members of both teams ask each other questions.
• Final Focuses (2 minutes): The second speaker of each team gives a speech that aims to emphasize the reasons the judge should vote for them.
Additionally, each team is given 3 minutes of preparation time throughout the round, which they can choose to take before any of their speeches. It is important to note that the time duration of each stage may vary across different leagues.
After all speeches are given, the judges, or judge panel, will take some time to deliberate before casting their ballot for one of the two teams. There are no ties.
|
|
null | false
| null |
Which country has the most inhabitants?
|
| # | Country (or dependency) | Population (2020) |
|---|---|---|
| 1 | China | 1,439,323,776 |
| 2 | India | 1,380,004,385 |
| 3 | United States | 331,002,651 |
| 4 | Indonesia | 273,523,615 |
|
|
null | false
| null |
Identify which car manufacturer is Japanese or American: Dodge, Subaru
|
Dodge is American, Subaru is Japanese
|
|
null | false
| null |
Are there more wheels or doors in New York City?
|
This has been a long-term debate. I think there are more wheels than doors in New York City. Here is why. Many people who argue there are more doors bring up office doors, car doors (cars also have 4 wheels), cabinets, refrigerators, etc. However, I think what sets the number of wheels apart from doors is office chairs. Each office chair has four wheels and zero doors. Now multiply the number of workers in New York by four and that will get you the number of additional wheels - which I think offsets the number of doors that exist in the city.
|
|
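The office-chair argument above is easy to turn into a back-of-the-envelope calculation. All figures below are assumed purely for illustration and are not sourced statistics:

```python
# Assumed illustrative figures -- not real statistics.
office_workers = 4_000_000       # hypothetical count of NYC office workers
wheels_per_office_chair = 4

# The argument: each office chair contributes four wheels and zero doors.
extra_wheels_from_chairs = office_workers * wheels_per_office_chair
```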
null | false
| null |
Give me a bulleted list of reasons why Lionel Messi is the best soccer player of all time.
|
The following are reasons why Lionel Messi is the best soccer player:
1. He has won 7 Ballon d'Or awards
2. He has won 4 Champions League titles
3. He won the World Cup
4. He won the Copa America
5. He has over 800 goals and over 300 assists in his career
|
|
1910.05752
| false
| null |
We utilize the VATEX dataset for video captioning, which contains over 41,250 videos and 825,000 captions in both English and Chinese. Among the captions, there are over 206,000 English-Chinese parallel translation pairs. It covers 600 human activities and a variety of video content. Each video is paired with 10 English and 10 Chinese diverse captions. We follow the official split with 25,991 videos for training, 3,000 videos for validation and 6,000 public test videos for final testing.
|
How big is the dataset used?
|
The answers are shown as follows:
* over 41,250 videos and 825,000 captions in both English and Chinese.
* over 206,000 English-Chinese parallel translation pairs
|
null | false
| null |
Give me a list of the Avengers
|
These are some of the Avengers that have appeared in the Marvel Comics:
- Iron Man
- Captain America
- Hulk
- Thor
- Spiderman
- Black Panther
- Black Widow
- Hawkeye
- Ant Man
- Doctor Strange
|
|
null | false
| 16
|
We perform qualitative analysis to understand how the latent embeddings help induce syntactic structures. First we filter out low-frequency words and punctuation in WSJ, and visualize the remaining words (10k) with t-SNE BIBREF42 under different embeddings. We assign each word its most likely gold POS tag in WSJ and color the words according to these gold POS tags.
For our Markov-structured model, we have displayed the embedding space in Figure SECREF5 , where the gold POS clusters are well-formed. Further, we present five example target words and their five nearest neighbors in terms of cosine similarity. As shown in Table TABREF53 , the skip-gram embedding captures both semantic and syntactic aspects to some degree, yet our embeddings are able to focus especially on the syntactic aspects of words, in an unsupervised fashion without using any extra morphological information.
In Figure FIGREF54 we depict the learned latent embeddings with the DMV-structured syntax model. Unlike the Markov structure, the DMV structure maps a large subset of singular and plural nouns to the same overlapping region. However, two clusters of singular and plural nouns are actually separated. We inspect the two clusters and the overlapping region in Figure FIGREF54; it turns out that the nouns in the separated clusters are words that can appear as subjects and, therefore, for which verb agreement is important to model. In contrast, the nouns in the overlapping region are typically objects. This demonstrates that the latent embeddings are focusing on aspects of language that are specifically important for modeling dependency, without ever having seen examples of dependency parses. Some previous work has deliberately created embeddings to capture different notions of similarity BIBREF43 , BIBREF44 ; while they use extra morphology or dependency annotations to guide the embedding learning, our approach provides a potential alternative to create new embeddings that are guided by a structured syntax model, using only unlabeled text corpora.
Some previous work has deliberately created embeddings to capture different notions of similarity (Levy and Goldberg, 2014; Cotterell and Schütze, 2015); whereas they use extra morphology or dependency annotations to guide the embedding learning, our approach provides a potential alternative to create new embeddings that are guided by a structured syntax model, using only unlabeled text corpora.
|
What is the difference compared to previous methods?
|
Their approach provides a potential alternative to create new embeddings that are guided by a structured syntax model, using only unlabeled text corpora.
|
null | false
| null |
Name three Italian Football Teams in Serie A.
|
Three Italian Football Teams in Serie A are AC Milan, Inter Milan, and Juventus.
|
|
null | false
| 329
|
With the growth of the internet over the last decades, users are faced with an increasing amount of information and have to find ways to summarize it. However, producing summaries in a multi-document setting is a challenging task; the language used to convey the same information can vary significantly across sentences, making it difficult for summarization models to capture. Thus, large corpora are needed to develop efficient models. There exist two types of summarization: extractive and abstractive. Extractive summarization produces summaries in two steps, namely sentence ranking, where an importance score is assigned to each sentence, and the subsequent sentence selection, where the most appropriate sentences are chosen. In abstractive summarization, summaries are generated word by word auto-regressively, using sequence-to-sequence or language models. Given the complexity of multi-document summarization and the lack of datasets, most researchers use extractive summarization and rely on hand-crafted features or additional annotated data, both of which require human expertise.
To our knowledge, wiki2018 is the only work that has proposed a large dataset for multi-document summarization. By considering Wikipedia entries as a collection of summaries on various topics given by their title (e.g., Machine Learning, Stephen King), they create a dataset of significant size, where the lead section of an article is defined as the reference summary and input documents are a mixture of pages obtained from the article's reference section and a search engine. While this approach benefits from the large number of Wikipedia articles, in many cases articles contain only a few references of the desired high quality, and most input documents end up being obtained via a search engine, which results in noisy data. Moreover, at test time no references are available, as they have to be supplied by human contributors. wiki2018 showed that in this case, generated summaries based on search-engine results alone are of poor quality and cannot be used.
In contrast, we propose a novel domain-specific dataset containing $14\,652$ samples, based on professional video game reviews obtained via Metacritic and gameplay sections from Wikipedia. By using Metacritic reviews in addition to Wikipedia articles, we benefit from a number of factors. First, the set of aspects used to assess a game is limited and consequently, reviews share redundancy. Second, because they are written by professional journalists, reviews tend to be in-depth and of high-quality. Additionally, when a video game is released, journalists have an incentive to write a complete review and publish it online as soon as possible to draw the attention of potential customers and increase the revenue of their website BIBREF0. Therefore, several reviews for the same product become quickly available and the first version of the corresponding Wikipedia page is usually made available shortly after. Lastly, reviews and Wikipedia pages are available in multiple languages, which opens up the possibility for multilingual multi-document summarization.
In contrast, we propose a novel domain-specific dataset containing $14\,652$ samples, based on professional video game reviews obtained via Metacritic and gameplay sections from Wikipedia.
|
How many samples does the proposed dataset contain?
|
14,652 samples.
|
null | false
| null |
Give me a list of the best exploration-focused video games.
|
Here are some games where the primary goal is exploration:
- Subnautica
- Outer Wilds
- Journey
- No Man's Sky
- Stray
- Astroneer
- Grow: Song of the Evertree
- Firewatch
- Call of the Sea
- ABZU
- What Remains of Edith Finch
|
|
1909.05438
| false
| null |
Following the back-translation paradigm BIBREF3 , BIBREF4 , we have a semantic parser, which maps a natural language question INLINEFORM0 to a logical form INLINEFORM1 , and a question generator, which maps INLINEFORM2 to INLINEFORM3 . The semantic parser handles the primary task, while the question generator mainly serves to generate pseudo datapoints. We start the training process by applying the rule INLINEFORM4 to a set of natural language questions INLINEFORM5 . The resulting dataset is used as training data to initialize both the semantic parser and the question generator. Afterwards, both models are improved following the back-translation protocol, in which target sequences should follow the real data distribution while source sequences may be generated with noise. This is based on the observation that in an encoder-decoder model, the decoder is more sensitive to the data distribution than the encoder. We use datapoints from both models to train the semantic parser because a logical form is structural and follows a grammar, so its distribution is similar to that of the ground truth.
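The training schedule described above can be sketched as follows. The stub model class and all names here are illustrative only (the paper uses neural encoder-decoder models, not lookup tables):

```python
class TableModel:
    """Toy stand-in for a seq2seq model: memorizes (source, target) pairs."""
    def __init__(self):
        self.table = {}

    def fit(self, pairs):
        self.table.update(pairs)

    def predict(self, source):
        return self.table.get(source, source)


def back_translation_train(parser, generator, questions, apply_rule, rounds=2):
    # Initialize both models on rule-derived (question, logical form) pairs.
    seed = [(q, apply_rule(q)) for q in questions]
    parser.fit(seed)
    generator.fit([(lf, q) for q, lf in seed])
    for _ in range(rounds):
        # Back-translation: targets come from real data, sources may be noisy
        # model outputs (the decoder is more sensitive to data distribution).
        parser.fit([(generator.predict(lf), lf) for _, lf in seed])
        generator.fit([(parser.predict(q), q) for q, _ in seed])
    return parser, generator
```

The key point the sketch captures is the asymmetry: each model is always trained toward real-distribution targets, while its inputs may come from the other model's noisy output.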
We start the training process by applying the rule INLINEFORM4 to a set of natural language questions INLINEFORM5 . The resulting dataset is considered as the training data to initialize both the semantic parser and the question generator. Afterwards, both models are improved following the back-translation protocol that target sequences should follow the real data distribution, yet source sequences can be generated with noises.
|
How is the back-translation model trained?
|
The answers are shown as follows:
* applying the rule INLINEFORM4 to a set of natural language questions INLINEFORM5
* both models are improved following the back-translation protocol that target sequences should follow the real data distribution
|
null | false
| null |
Categorize the following list into color names and non-color words: red, yellow, leafy, bankers, fuchsia, magenta, cerulean, purple, lavender, blue, orange, felt
|
Color names: red, fuchsia, magenta, cerulean, purple, lavender, blue, orange
Non-color names: leafy, bankers, felt
|
|
null | false
| null |
Kingsway a 3 km stretch was built as a ceremonial boulevard by whom?
|
Edwin Lutyens & Herbert Baker
|
|
1911.05153
| false
| null |
To evaluate model robustness, we devise a test set consisting of ‘adversarial’ examples, i.e., perturbed examples that can potentially change the base model's prediction. These could stem from paraphrasing a sentence, e.g., lexical and syntactic changes. We use two approaches described in the literature: back-translation and noisy sequence autoencoder. Note that these examples resemble black-box attacks but are not intentionally designed to fool the system; hence, we use the term 'adversarial' broadly. We use these techniques to produce many paraphrases and find a subset of utterances that, though very similar to the original test set, result in wrong predictions. We measure the model's robustness against such changes.
To evaluate model robustness, we devise a test set consisting of ‘adversarial’ examples, i.e., perturbed examples that can potentially change the base model's prediction. These could stem from paraphrasing a sentence, e.g., lexical and syntactic changes. We use two approaches described in the literature: back-translation and noisy sequence autoencoder. Note that these examples resemble black-box attacks but are not intentionally designed to fool the system; hence, we use the term 'adversarial' broadly. We use these techniques to produce many paraphrases and find a subset of utterances that, though very similar to the original test set, result in wrong predictions. We measure the model's robustness against such changes.
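A minimal version of the corruption used when training a noisy sequence autoencoder is word dropout plus a bounded local shuffle. The exact noise model in the paper may differ; this sketch uses our own names and parameters:

```python
import random

def noise_sequence(tokens, drop_prob=0.1, max_shuffle_dist=3, seed=0):
    """Word dropout followed by a bounded local shuffle of the survivors."""
    rng = random.Random(seed)
    # Drop each token independently, but never return an empty sequence.
    kept = [t for t in tokens if rng.random() > drop_prob] or tokens[:1]
    # Each surviving token may move at most max_shuffle_dist positions.
    keys = [i + rng.uniform(0, max_shuffle_dist) for i in range(len(kept))]
    return [t for _, t in sorted(zip(keys, kept))]
```

Decoding the original sequence from such corrupted inputs forces the autoencoder to learn paraphrase-like reconstructions, which can then be mined for the hard subset described above.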
|
How authors create adversarial test set to measure model robustness?
|
The answers are shown as follows:
* we devise a test set consisting of ‘adversarial’ examples, i.e, perturbed examples that can potentially change the base model's prediction.
* We use two approaches described in literature: back-translation and noisy sequence autoencoder.
|
null | false
| null |
When Aegon the Conqueror took over Westeros, which great house of the Reach became extinct as a result?
|
House Gardener became extinct during Aegon's Conquest. House Tyrell was then granted rule of The Reach.
|
|
1912.01046
| false
| null |
We downloaded 76 videos from a tutorial website about an image editing program. Each video is pre-processed to provide the transcript and time-stamp information for each sentence in the transcript. We then used Amazon Mechanical Turk to collect the question-answer pairs. One naive way of collecting the data is to prepare a question list and then, for each question, ask the workers to find the relevant parts in the video. However, this approach is infeasible and error-prone because the videos are typically long, and finding a relevant part in a long video is difficult. It might also cause us to miss questions relevant to a given video segment. Instead, we took a reversed approach. First, for each video, we manually identified the sentence spans that can serve as answers. These candidates are of various granularities and may overlap. The segments are also complete in that they encompass the beginning and end of a task. In total, we identified 408 segments from the 76 videos. Second, we asked AMT workers to provide question annotations for the videos.
We downloaded 76 videos from a tutorial website about an image editing program .
|
What is the source of the triples?
|
The answers are shown as follows:
* a tutorial website about an image editing program
|
null | false
| null |
Different types of climbing warrant particular features for harnesses. Sport climbers typically use minimalistic harnesses, some with sewn-on gear loops. Alpine climbers often choose lightweight harnesses, perhaps with detachable leg loops. Big wall climbers generally prefer padded waist belts and leg loops. There are also full body harnesses for children, whose pelvises may be too narrow to support a standard harness safely. These harnesses prevent children from falling even when inverted, and are either manufactured for children or constructed out of webbing. Some climbers use full body harnesses when there is a chance of inverting, or when carrying a heavy bag. There are also chest harnesses, which are used only in combination with a sit harness. Test results from UIAA show that chest harnesses do not put more impact on the neck than sit harnesses, giving them the same advantages as a full body harness.
Apart from these harnesses, there are also caving and canyoning harnesses, which all serve different purposes. For example, a caving harness is made of tough waterproof and unpadded material, with dual attachment points. Releasing the maillon from these attachment points loosens the harness quickly.
Canyoning harnesses are somewhat like climbing harnesses, often without the padding, but with a seat protector, making it more comfortable to rappel. These usually have a single attachment point of Dyneema.
|
Provide me a list of the different types of harnesses used in rock climbing
|
Minimalistic Harness: has gear loops that are sewn on
Lightweight Harness: has leg loops that are removable
Big Wall Harness: uses padded waist belts and leg loops
Full Body Harness: mostly for children with small pelvises, or for scenarios that risk inversion
Chest Harness: to be used with a sit harness
Caving Harness: made of ultra durable material and multiple attachment points
Canyon Harness: has a built-in seat protector for rappelling.
|
null | false
| null |
Give me a bunch of excuses about why I don't have my homework to turn in
|
Your dog ate your homework, you forgot it at home, you spilled water on it, your printer broke, or you had a test in another class and were too busy studying.
|
|
1911.03243
| false
| null |
We adopt the annotation machinery of BIBREF5 implemented using Amazon's Mechanical Turk, and annotate each predicate by 2 trained workers independently, while a third consolidates their annotations into a final set of roles and arguments. In this consolidation task, the worker validates questions, merges, splits or modifies answers for the same role according to guidelines, and removes redundant roles by picking the more naturally phrased questions. For example, in Table TABREF4 ex. 1, one worker could have chosen “47 people”, while another chose “the councillor”; in this case the consolidator would include both of those answers. In Section SECREF4, we show that this process yields better coverage. For example annotations, please refer to the appendix.
We adopt the annotation machinery of BIBREF5 implemented using Amazon's Mechanical Turk, and annotate each predicate by 2 trained workers independently, while a third consolidates their annotations into a final set of roles and arguments.
|
How was the previous dataset annotated?
|
The answers are shown as follows:
* the annotation machinery of BIBREF5
|
null | false
| 136
|
Based on the models described in Section 3, we experiment with eight variants: (a) baseline transformer model (base); (b) base with AIC (base+sum); (c) base with AIF using spatial (base+att) or object-based (base+obj) image features; (d) standard deliberation model (del); (e) deliberation models enriched with image information: del+sum, del+att and del+obj.
Based on the models described in Section 3 we experiment with eight variants: (a) baseline transformer model (base); (b) base with AIC (base+sum); (c) base with AIF using spatial (base+att) or object based (base+obj) image features; (d) standard deliberation model (del); (e) deliberation models enriched with image information: del+sum, del+att and del+obj.
|
How many variants do authors experiment with?
|
Eight variants.
|
null | false
| 427
|
Neurons in the C. elegans nervous system are largely non-spiking, so we model each neuron as a passive point neuron with a single electrical compartment. Let $v \in \mathbb{R}^N$ denote the voltages of the $N$ neurons. The voltage $v_i(t)$ of each post-synaptic neuron $i$ at time $t$ was calculated using a first-order leaky-integrator equation,
$$\tau_i \frac{dv_i(t)}{dt} = -\big(v_i(t) - v_i^{rest}\big) + o_i(t) + s_i^c(t) + s_i^e(t),$$
where $\tau_i$ is the voltage time constant, $o_i$ is the chemosensory input provided only to the sensory neurons, $s_i^c$ is the chemical synaptic input, $s_i^e$ is the electrical synaptic input, and $v_i^{rest}$ is the resting neuron voltage.
We studied two variations of the model, a current-based model and a conductance-based model, which differ in their formulations of the chemical synaptic input $s_i^c$. Since neurons in the C. elegans nervous system are largely non-spiking, we model the chemical synapses as having graded release of neurotransmitter, rather than the all-or-none quantal release seen in spiking neurons. In both models, the amount of neurotransmitter released, $W_{ji}^c\, g(v_j(t))$, is proportional to the pre-synaptic voltage $v_j$ passed through a softplus activation $g(\cdot)$, which sets a minimum voltage below which there is no synaptic release. We use $g(\cdot)$ to denote a softplus function for the rest of the paper. In the current-based model, the synaptic input $s_i^c$ to a post-synaptic neuron $i$ is directly proportional to the pre-synaptic neurotransmitter concentration:
$$s_i^c(t) = \sum_j W_{ji}^c\, g(v_j(t)),$$
where $W_{ji}^c$ represents the chemical synaptic weight between pre-synaptic neuron $j$ and post-synaptic neuron $i$. $W_{ji}^c$ can be positive or negative depending on whether the synaptic connection is excitatory or inhibitory, and is set to zero if neurons $j$ and $i$ are not connected. In the conductance-based model, we model the synaptic current entering the post-synaptic neuron with more biophysical detail as
$$s_i^c(t) = \sum_j W_{ji}^c\, g(v_j(t))\,\big(E_{ji} - v_i(t)\big).$$
Here, the pre-synaptic neurotransmitter concentration $W_{ji}^c\, g(v_j(t))$ is more accurately modeled as proportional to the conductance at the post-synaptic terminal. The post-synaptic current is then given by the product of the synaptic conductance and the difference between the synaptic reversal potential $E_{ji}$ and the post-synaptic voltage $v_i(t)$.
In contrast to the current-based synapse, whose input is independent of the post-synaptic voltage $v_i(t)$, the more biophysically accurate conductance-based synapse also depends on the post-synaptic voltage. Additionally, this model decouples the sign of a synapse from its strength. The reversal potential $E_{ji}$ dictates whether a synapse is excitatory or inhibitory: a large, positive $E_{ji}$ corresponds to an excitatory synapse that depolarizes the post-synaptic neuron, whereas an inhibitory synapse has a negative $E_{ji}$ and causes hyperpolarization. In this model, we can independently train the sign of a synapse and its non-negative strength $W_{ji}^c$, which is not easily possible with current-based synapses. In both the current-based and conductance-based models, the electrical synaptic input is given by
$$s_i^e(t) = \sum_j W_{ji}^e\,\big(v_j(t) - v_i(t)\big),$$
where $W_{ji}^e$ is restricted to be non-negative and $v_j - v_i$ is the potential difference between the pre- and post-synaptic neurons. We also restrict $W_{ji}^e = W_{ij}^e$ because the potential differences across electrical synapses are symmetric. To compare the outputs of the LVM directly to neural activity measurements, our model must generate calcium signals from the voltage traces. We model the calcium concentration $[Ca]_i$ of each neuron $i$ as a first-order leaky integrator, driven by voltage-gated calcium channels with the same nonlinear current-voltage (I-V) function $g(v_i)$:
$$\tau_{[Ca]} \frac{d[Ca]_i(t)}{dt} = -[Ca]_i(t) + g(v_i(t)),$$
where $g(\cdot)$ represents the softplus activation and $\tau_{[Ca]}$ is a time constant shared across all neurons. We then map the calcium concentration $[Ca]_i$ into the measured calcium fluorescence signal $f_i$ via an affine transform with scale $\alpha_f$ and bias $\beta_f$,
$$f_i(t) = \alpha_f\,[Ca]_i(t) + \beta_f + \sigma_f\,\epsilon_i^f(t),$$
with measurement noise represented by a noise amplitude $\sigma_f$ and a noise term $\epsilon_i^f(t) \sim \mathcal{N}(0,1)$. These equations are simulated in discrete time using Euler integration.
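The current-based variant of the voltage model can be simulated directly with Euler integration. The sketch below uses illustrative parameter values and our own function names, not code from the paper; the softplus threshold of 20 follows the authors' stated choice:

```python
import numpy as np

def softplus(v, threshold=20.0):
    # g(.): softplus with a fixed threshold, linear above it for numerical stability.
    return np.where(v > threshold, v, np.log1p(np.exp(np.minimum(v, threshold))))

def simulate_current_based(W_c, W_e, o, v_rest, tau, dt=0.01, steps=1000):
    """Euler-integrate the current-based voltage model for all N neurons."""
    v = v_rest.astype(float).copy()
    trace = np.empty((steps, len(v)))
    for t in range(steps):
        s_c = W_c.T @ softplus(v)                # chemical input: sum_j W_ji^c g(v_j)
        s_e = W_e.T @ v - W_e.sum(axis=0) * v    # electrical input: sum_j W_ji^e (v_j - v_i)
        v = v + dt * (-(v - v_rest) + o + s_c + s_e) / tau
        trace[t] = v
    return trace
```

With all synaptic weights set to zero, each voltage relaxes exponentially to its resting value plus the chemosensory drive, which is a convenient sanity check for the integration step.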
where g(·) represents softplus activation,
|
Does 'g' in equation 5 denote the softplus?
|
‘g’ represents a softplus with a fixed threshold of 20 in equation 5. The function of equation 5 is the same across neurons. All ‘g’ in our paper represents a softplus activation with a fixed threshold 20. We have included clarifications in our revision.
|
1901.02262
| true
| null |
Moreover, to be able to make use of multiple answer styles within a single system, our model introduces an artificial token corresponding to the target style at the beginning of the answer sentence ( $y_1$ ), like BIBREF14 . At test time, the user can specify the first token to control the answer styles. This modification does not require any changes to the model architecture. Note that introducing the tokens on the decoder side prevents the passage ranker and answer possibility classifier from depending on the answer style.
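The style-conditioning trick described here is purely a change to the decoder's input sequence, not to the architecture. A hypothetical sketch (the token format is ours, not the paper's):

```python
def with_style_token(answer_tokens, style):
    # Prepend an artificial token y_1 encoding the target answer style;
    # at test time the user sets this token to control the generated style.
    return ["<style:" + style + ">"] + list(answer_tokens)
```

Because the token appears only on the decoder side, the passage ranker and answer possibility classifier never observe it and therefore stay style-independent, as the paragraph notes.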
Moreover, to be able to make use of multiple answer styles within a single system, our model introduces an artificial token corresponding to the target style at the beginning of the answer sentence ( $y_1$ ), like BIBREF14 . At test time, the user can specify the first token to control the answer styles.
|
Does their model also take the expected answer style as input?
|
Yes.
|
null | false
| null |
Identify which instrument is string or percussion: Zatula, Jarana huasteca
|
Zatula is percussion, Jarana huasteca is string.
|
|
null | false
| 162
|
We evaluate MPAD against multiple state-of-the-art baseline models, including hierarchical ones, to enable fair comparison with the hierarchical MPAD variants.
doc2vec BIBREF37. Doc2vec (or paragraph vector) is an extension of word2vec that learns vectors for documents in a fully unsupervised manner. Document embeddings are then fed to a logistic regression classifier.
CNN BIBREF38. The convolutional neural network architecture, well-known in computer vision, is applied to text. There is one spatial dimension and the word embeddings are used as channels (depth dimensions).
DAN BIBREF39. The Deep Averaging Network passes the unweighted average of the embeddings of the input words through multiple dense layers and a final softmax.
Tree-LSTM BIBREF40 is a generalization of the standard LSTM architecture to constituency and dependency parse trees.
DRNN BIBREF41. Recursive neural networks are stacked and applied to parse trees.
LSTMN BIBREF42 is an extension of the LSTM model where the memory cell is replaced by a memory network which stores word representations.
C-LSTM BIBREF43 combines convolutional and recurrent neural networks. The region embeddings provided by a CNN are fed to a LSTM.
SPGK BIBREF44 also models documents as word co-occurrence networks. It computes a graph kernel that compares shortest paths extracted from the word co-occurrence networks and then uses a SVM to categorize documents.
WMD BIBREF45 is an application of the well-known Earth Mover's Distance to text. A k-nearest neighbor classifier is used.
S-WMD BIBREF46 is a supervised extension of the Word Mover's Distance.
Semantic-CNN BIBREF47. Here, a CNN is applied to semantic units obtained by clustering words in the embedding space.
LSTM-GRNN BIBREF26 is a hierarchical model where sentence embeddings are obtained with a CNN and a GRU-RNN is fed the sentence representations to obtain a document vector.
HN-ATT BIBREF27 is another hierarchical model, where the same encoder architecture (a bidirectional GRU-RNN) is used for both sentences and documents, with different parameters. A self-attention mechanism is applied to the RNN annotations at each level.
We evaluate MPAD against multiple state-of-the-art baseline models, including hierarchical ones, to enable fair comparison with the hierarchical MPAD variants. Doc2vec (Le and Mikolov 2014) is an extension of word2vec that learns vectors for documents in a fully unsupervised manner. Document embeddings are then fed to a logistic regression classifier. CNN (Kim 2014). 1D convolutional neural network where the word embeddings are used as channels (depth dimensions). DAN (Iyyer et al. 2015). The Deep Averaging Network passes the unweighted average of the embeddings of the input words through multiple dense layers and a final softmax. Tree-LSTM (Tai, Socher, and Manning 2015) is a generalization of the standard LSTM architecture to constituency and dependency parse trees. DRNN (Irsoy and Cardie 2014). Recursive neural networks are stacked and applied to parse trees. LSTMN (Cheng, Dong, and Lapata 2016) is an extension of the LSTM model where the memory cell is replaced by a memory network which stores word representations. C-LSTM (Zhou et al. 2015) combines convolutional and recurrent neural networks. The region embeddings provided by a CNN are fed to a LSTM. SPGK (Nikolentzos et al. 2017) also models documents as word co-occurrence networks. It computes a graph kernel that compares shortest paths extracted from the word co-occurrence networks and then relies on a SVM. WMD (Kusner et al. 2015) is an application of the well-known Earth Mover's Distance to text. A k-nearest neighbor classifier is used. DiSAN (Shen et al. 2018) uses directional self-attention along with multi-dimensional attention to generate document representations. LSTM-GRNN (Tang, Qin, and Liu 2015) is a hierarchical model where sentence embeddings are obtained with a CNN and a GRU-RNN is fed the sentence representations to obtain a document vector. HN-ATT (Yang et al. 2016) is another hierarchical model, where the same encoder architecture (a bidirectional GRU-RNN) is used for both sentences and documents. Self-attention is applied at each level.
|
How many baseline models are MPAD compared with
|
Eleven.
|
null | false
| 118
|
One major challenge of building a QA dataset on tweets is the sparsity of informative tweets. Many users write tweets to express their feelings or emotions about their personal lives. These tweets are generally uninformative and also very difficult to ask questions about. Given the linguistic variance of tweets, it is generally hard to directly distinguish those tweets from informative ones. In terms of this, rather than starting from Twitter API Search, we look into the archived snapshots of two major news websites (CNN, NBC), and then extract the tweet blocks that are embedded in the news articles. In order to get enough data, we first extract the URLs of all section pages (e.g. World, Politics, Money, Tech) from the snapshot of each home page and then crawl all articles with tweets from these section pages. Note that another possible way to collect informative tweets is to download the tweets that are posted by the official Twitter accounts of news media. However, these tweets are often just the summaries of news articles, which are written in formal text. As our focus is to develop a dataset for QA on informal social media text, we do not consider this approach.
After we extracted tweets from archived news articles, we observed that a portion of the tweets still have very simple semantic structures, making it very difficult to raise meaningful questions about them. An example of such a tweet is: “Wanted to share this today - @IAmSteveHarvey". This tweet is actually talking about an image attached to it. Other tweets with simple text structures may talk about an inserted link or even videos. To filter out tweets that heavily rely on attached media to convey information, we utilize a state-of-the-art semantic role labeling model trained on CoNLL-2005 BIBREF15 to analyze the predicate-argument structure of the tweets collected from news articles and keep only the tweets with more than two labeled arguments. This filtering process also automatically removes most of the short tweets. For the tweets collected from CNN, INLINEFORM0 of them were filtered via semantic role labeling. For tweets from NBC, INLINEFORM1 of the tweets were filtered.
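The argument-count filter reduces to a one-line predicate once a labeling model is available. In the sketch below, `srl_args` is a hypothetical callable standing in for the pretrained CoNLL-2005 SRL model; its name and interface are ours:

```python
def filter_informative(tweets, srl_args, min_args=3):
    """Keep tweets whose predicate-argument structure has more than two labeled arguments."""
    return [t for t in tweets if len(srl_args(t)) >= min_args]
```

Any SRL backend that returns a list of labeled arguments per tweet can be plugged in; tweets whose meaning lives mostly in attached media yield too few arguments and are dropped.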
We then use Amazon Mechanical Turk to collect question-answer pairs for the filtered tweets. For each Human Intelligence Task (HIT), we ask the worker to read three tweets and write two question-answer pairs for each tweet. To ensure quality, we require the workers to be located in major English-speaking countries (i.e., Canada, the US, and the UK) and to have an acceptance rate larger than INLINEFORM0 . Since we use tweets as context, much of the important information is contained in hashtags or even emojis. Instead of only showing the text to the workers, we use JavaScript to directly embed the whole tweet into each HIT. This gives workers the same experience as reading tweets via web browsers and helps them compose better questions.
To avoid trivial questions that can be answered by superficial text matching, as well as overly challenging questions that require background knowledge, we explicitly state the following items in the HIT instructions for question writing:
No Yes-no questions should be asked.
The question should have at least five words.
Videos, images or inserted links should not be considered.
No background knowledge should be required to answer the question.
To help the workers better follow the instructions, we also include a representative example showing both good and bad questions or answers in our instructions. Figure FIGREF14 shows the example we use to guide the workers.
As for the answers, since the context we consider is relatively shorter than that of previous datasets, we do not restrict the answers to be spans of the tweet; otherwise, the task might be simplified into a classification problem. The workers are allowed to write answers in their own words. We only require the answers to be brief and directly inferable from the tweets.
After we retrieve the QA pairs from all HITs, we conduct further post-filtering to filter out the pairs from workers that obviously do not follow instructions. We remove QA pairs with yes/no answers. Questions with less than five words are also filtered out. This process filtered INLINEFORM0 of the QA pairs. The dataset now includes 10,898 articles, 17,794 tweets, and 13,757 crowdsourced question-answer pairs. The collected QA pairs will be directly available to the public, and we will provide a script to download the original tweets and detailed documentation on how we build our dataset. Also note that since we keep the original news article and news titles for each tweet, our dataset can also be used to explore more challenging generation tasks. Table TABREF19 shows the statistics of our current collection, and the frequency of different types of questions is shown in Table TABREF21 . All QA pairs were written by 492 individual workers.
For the purposes of human performance evaluation and inter-annotator agreement checking, we launch a different set of HITs asking workers to answer questions in the test and development sets. The workers are shown the tweet blocks as well as the questions collected in the previous step. At this step, workers are allowed to label a question as “NA" if they think it is not answerable. We find that INLINEFORM0 of the questions are labeled as unanswerable by the workers (for SQuAD, the ratio is INLINEFORM1 ). Since the answers collected at this step and the previous step are written by different workers, the answers can take different text forms even when they are semantically equivalent. For example, one answer can be “Hillary Clinton” while the other is “@HillaryClinton”. As it is not straightforward to automatically calculate the overall agreement, we manually check the agreement on a subset of 200 random samples from the development set and ask an independent human moderator to verify the result. It turns out that INLINEFORM2 of the answer pairs are semantically equivalent, INLINEFORM3 of them are partially equivalent (one of them is incomplete), and INLINEFORM4 are totally inconsistent. The answers collected at this step are also used to measure human performance. We had 59 individual workers participate in this process.
In terms of this, rather than starting from Twitter API Search, we look into the archived snapshots of two major news websites (CNN, NBC), and then extract the tweet blocks that are embedded in the news articles. In order to get enough data, we first extract the URLs of all section pages (e.g. World, Politics, Money, Tech) from the snapshot of each home page and then crawl all articles with tweets from these section pages.
|
In what ways did the author collect the data?
|
They extract the tweet blocks that are embedded in the news articles on the websites of CNN and NBC.
|
null | false
| null |
Why is Mediterranean diet considered the healthiest?
|
A typical Mediterranean diet is well balanced: full of healthy fats, lean protein, and a vast variety of whole grains, vegetables, and fruits. As its main source of animal protein, it focuses on fish and poultry instead of red meat. The Mediterranean diet includes minimally processed, plant-based carbs as well as heart-healthy fats, especially olive oil. A typical dessert would be fruit, which is full of vitamins and antioxidants. As a result, people who follow the Mediterranean diet tend to see a number of health benefits, including reduced risk of diabetes, obesity, and heart disease. It also helps manage cholesterol and blood pressure. Due to the high amount of legumes, vegetables, and fruits, this diet is high in antioxidants and offers anti-inflammatory benefits. Some studies also show positive effects on brain health and cognitive function.
|
|
null | false
| null |
What are the essential skills required for a communication?
|
Effective communication is the process of exchanging ideas, thoughts, opinions, knowledge, and data so that the message is received and understood with clarity and purpose. When we communicate effectively, both the sender and the receiver feel satisfied. The most essential skill is listening during a conversation. Other skills include being humble and understanding others' opinions. Effective communication also requires you to express your points and views in the simplest way possible, so that the receiver understands them and feels satisfied.
|
|
null | false
| null |
Identify which instrument is string or percussion: Sikulu, Sharki
|
Sharki is string, Sikulu is percussion.
|
|
null | false
| null |
How is "crisp" pronounced?
|
The word "crisp" is pronounced from the back of the mouth to the front.
|
|
null | false
| null |
When going on a backcountry hike, which of the following are among the 10 essentials for a safe hike: a navigation device (such as a map or compass), a headlamp with extra batteries, a fire starter (such as matches or a lighter), extra food, extra water, a camera.
|
A navigation device (such as a map or compass), a headlamp with extra batteries, a fire starter (such as matches or a lighter), extra food, and extra water are among the 10 essentials for a safe hike. While a camera may add to your enjoyment of the hike, it is not essential.
|
|
null | false
| null |
I'm thinking about joining the military to help pay for college. What are my options, and what do you recommend?
|
The military can help pay for college in a few different ways. If you are already planning to go to a specific school, you can check to see if they have a Reserve Officer Training Corps (ROTC) unit at the school and apply for an ROTC scholarship. An ROTC scholarship can help pay for most of your college degree, and you will have the option to join an active or reserve unit after college is complete. You will likely owe about 4 years of service as an officer in return for the scholarship. Another option is to go to a military academy. Military academies provide full tuition in exchange for 5 years of service as an officer after graduation. Finally, you could enlist in the military, serve 3-4 years as an enlisted soldier, and then use the G.I. Bill to attend college afterward. The best choice for you depends on your college preferences, your financial situation, and the timeline on which you want to earn your college degree.
|
|
null | false
| null |
What country was Frida Kahlo from and what career was she famous for?
|
Frida Kahlo was from Mexico and was famous for her career as an artist, best known for her paintings.
|
|
null | false
| null |
Northamptonshire County Cricket Club is in Division Two of the County Championship; the team (also known as The Steelbacks) play their home games at the County Cricket Ground, Northampton. They finished as runners-up in the Championship on four occasions in the period before it split into two divisions.
In 2013 the club won the Friends Life t20, beating Surrey in the final. Appearing in their third final in four years, the Steelbacks beat Durham by four wickets at Edgbaston in 2016 to lift the Natwest t20 Blast trophy for the second time. The club also won the NatWest Trophy on two occasions, and the Benson & Hedges Cup once.
|
Given this paragraph about the Northamptonshire County Cricket Club, can you list down some points that can be inferred about the County Championship?
|
Based on this paragraph, it seems that the County Championship refers to some form of organized competition between cricket teams belonging to different counties. The Championship has been held at least four times, given that Northamptonshire County Cricket Club finished as runners-up in it on four occasions. It also seems that the Championship was split into two divisions at some point.
|
null | false
| 82
|
Text simplification aims to reduce the lexical and structural complexity of a text while still retaining its semantic meaning, which can help children, non-native speakers, and people with cognitive disabilities to understand text better. Methods of automatic text simplification can be generally divided into three categories: lexical simplification (LS) BIBREF0 , BIBREF1 , rule-based BIBREF2 , and machine translation (MT) BIBREF3 , BIBREF4 . LS is mainly used to simplify text by substituting infrequent and difficult words with frequent and easier words. However, there are several challenges for the LS approach: first, a great number of transformation rules are required for reasonable coverage, and they should be applied based on the specific context; second, the syntax and semantic meaning of the sentence are hard to retain. Rule-based approaches use hand-crafted rules for lexical and syntactic simplification, for example, substituting difficult words in a predefined vocabulary. However, such approaches need a lot of human involvement to manually define these rules, and it is impossible to give all possible simplification rules. The MT-based approach has attracted great attention in the last several years; it addresses text simplification as a monolingual machine translation problem, translating from 'ordinary' to 'simplified' sentences.
In recent years, neural machine translation (NMT) has emerged as a deep learning approach that achieves very impressive results BIBREF5 , BIBREF6 , BIBREF7 . Unlike a traditional phrase-based machine translation system, which operates on small components separately, an NMT system is trained end-to-end, without the need for external decoders, language models, or phrase tables. Therefore, existing NMT architectures have been used for text simplification BIBREF8 , BIBREF4 . However, most recent work using NMT is limited by training data that are scarce and expensive to build. Language models trained on simplified corpora have played a central role in statistical text simplification BIBREF9 , BIBREF10 . One main reason is that the amount of available simplified corpora typically far exceeds the amount of parallel data. The performance of models can typically be improved when trained on more data. Therefore, we expect simplified corpora to be especially helpful for NMT models.
In contrast to previous work, which uses existing NMT models, we explore a strategy to include simplified training corpora in the training process without changing the neural network architecture. We first propose to pair simplified training sentences with synthetic ordinary sentences during training, and treat this synthetic data as additional training data. We obtain synthetic ordinary sentences through back-translation, i.e., an automatic translation of the simplified sentence into the ordinary sentence BIBREF11 . Then, we mix the synthetic data into the original (simplified-ordinary) data to train the NMT model. Experimental results on two publicly available datasets show that we can improve the text simplification quality of NMT models by mixing simplified sentences into the training set, compared with an NMT model using only the original training data.
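The data-mixing step can be sketched in a few lines. The `back_translate` argument stands in for a trained backward (simplified-to-ordinary) NMT model, which is an assumption of this sketch, not the paper's code; everything else is plain bookkeeping.

```python
import random

def mix_back_translated(parallel_pairs, simplified_only, back_translate, seed=0):
    """Pair each monolingual simplified sentence with a synthetic 'ordinary'
    source produced by a backward model, then mix the synthetic pairs into
    the original (ordinary, simplified) parallel training data."""
    synthetic = [(back_translate(simple), simple) for simple in simplified_only]
    mixed = parallel_pairs + synthetic
    random.Random(seed).shuffle(mixed)  # shuffle so batches see both kinds
    return mixed
```

The resulting list can be fed to any standard NMT training loop unchanged, which is the point of the approach: no architecture modification is needed.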
Experimental results on two publicly available datasets show that we can improve the text simplification quality of NMT models by mixing simplified sentences into the training set, compared with an NMT model using only the original training data.
|
What steps are taken to improve the text simplification quality?
|
Mixing simplified sentences into the training set.
|
null | false
| 21
|
Zork1 is one of the first text-adventure games and heavily influenced later games in terms of narrative style and game structure. It is a dungeon crawler where the player must explore a vast world and collect a series of treasures. It was identified by BIBREF2 as a moonshot game and has been the subject of much work on learning agents BIBREF12, BIBREF7, BIBREF11, BIBREF8. Rewards are given to the player when they collect treasures as well as when important intermediate milestones needed to further explore the world are passed. Figure FIGREF15 and Figure FIGREF1 show us a map of the world of Zork1 and the corresponding quest structure.
The bottleneck seen at a score of around 40 is when the player first enters the cellar on the right side of the map. The cellar is dark and you need to immediately light the lamp to see anything. Attempting to explore the cellar in the dark results in you being instantly killed by a monster known as a “grue”.
We make no changes from the graph update rules used by BIBREF6. Candidate interactive objects are identified by performing part-of-speech tagging on the current observation, identifying singular and proper nouns as well as adjectives, and are then filtered by checking whether they can be examined using the command examine OBJ. Only the interactive objects not found in the inventory are linked to the node corresponding to the current room, and the inventory items are linked to the “you” node. The only other rule applied uses the navigational actions performed by the agent to infer the relative positions of rooms, e.g. ⟨kitchen, down, cellar⟩ when the agent performs go down while in the kitchen to move to the cellar.
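The candidate-object filter described above can be sketched as follows. The POS tagger and the game interface are assumptions here: `tagged_observation` is a list of (token, Penn Treebank tag) pairs as a real tagger would produce, and `can_examine` stands in for issuing examine OBJ to the game engine.

```python
def candidate_objects(tagged_observation, can_examine):
    """Keep singular nouns (NN), proper nouns (NNP), and adjectives (JJ)
    from a POS-tagged observation, then validate each candidate by
    checking whether the game lets us examine it."""
    keep_tags = {"NN", "NNP", "JJ"}
    candidates = {tok.lower() for tok, tag in tagged_observation
                  if tag in keep_tags}
    return sorted(obj for obj in candidates if can_examine(obj))
```

Surviving objects would then be linked into the knowledge graph: to the current-room node if not in the inventory, or to the "you" node otherwise.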
Hyperparameters used for our agents are given below. Patience and buffer size are used for the policy chaining method as described in Section SECREF2. Cell step size is a parameter used for Go-Explore and describes how many steps are taken when exploring in a given cell state. Base hyperparameters for KG-A2C are taken from BIBREF6 and the same parameters are used for A2C.
Cell step size is a parameter used for Go-Explore and describes how many steps are taken when exploring in a given cell state.
|
What parameter is used for Go-Explore and describes how many steps are taken when exploring in a given cell state?
|
Cell step size.
|
1906.00378
| false
| null |
We carry out experiments on multiple language pairs including German-English, French-English, and Japanese-English. The experimental results show that the proposed multi-lingual caption model not only achieves better caption performance than independent mono-lingual models for data-scarce languages, but also can induce the two types of features, linguistic and visual features, for different languages in joint spaces. Our proposed method consistently outperforms previous state-of-the-art vision-based bilingual word induction approaches on different languages. The contributions of this paper are as follows:
We carry out experiments on multiple language pairs including German-English, French-English, and Japanese-English. The experimental results show that the proposed multi-lingual caption model not only achieves better caption performance than independent mono-lingual models for data-scarce languages, but also can induce the two types of features, linguistic and visual features, for different languages in joint spaces.
|
Which languages are used in the multi-lingual caption model?
|
The answers are shown as follows:
* multiple language pairs including German-English, French-English, and Japanese-English.
|
null | false
| null |
K-pop, short for Korean popular music, is a form of popular music originating in South Korea as part of South Korean culture. It includes styles and genres from around the world, such as pop, hip hop, R&B, rock, jazz, gospel, reggae, electronic dance, folk, country, disco, and classical on top of its traditional Korean music roots. The term "K-pop" became popular in the 2000s, especially in the international context. The Korean term for domestic pop music is gayo, which is still widely used within South Korea. While "K-pop" can refer to all popular music or pop music from South Korea, it is colloquially often used in a narrower sense for any Korean music and artists associated with the entertainment and idol industry in the country, regardless of the genre.
|
What is K-Pop?
|
K-pop is an acronym for Korean pop music. It is a mainstream genre of music that originated in South Korea. K-pop is influenced by a variety of different musical genres such as rock, hip hop, R&B, and electronic music. K-pop groups usually consist of trained members who have designated roles such as main vocalist, vocalists, dancer, rapper, and visual. K-pop choreographies consist of repetitive synchronized movements which tend to go viral.
|
null | false
| null |
Give me a comma separated list of stone fruit I could buy at the market
|
Plums, Apricots, Peaches, Nectarines, Pluots
|
|
null | false
| 27
|
Task-oriented dialog systems have become ubiquitous, providing a means for billions of people to interact with computers using natural language. Moreover, the recent influx of platforms and tools such as Google's DialogFlow or Amazon's Lex for building and deploying such systems makes them even more accessible to various industries and demographics across the globe.
Tools for developing such systems start by guiding developers to collect training data for intent classification: the task of identifying which of a fixed set of actions the user wishes to take based on their query. Relatively few public datasets exist for evaluating performance on this task, and those that do exist typically cover only a very small number of intents (e.g. BIBREF0, which has 7 intents). Furthermore, such resources do not facilitate analysis of out-of-scope queries: queries that users may reasonably make, but fall outside of the scope of the system-supported intents.
Figure FIGREF1 shows example query-response exchanges between a user and a task-driven dialog system for personal finance. In the first user-system exchange, the system correctly identifies the user's intent as an in-scope balance query. In the second and third exchanges, the user queries with out-of-scope inputs. In the second exchange, the system incorrectly identifies the query as in-scope and yields an unrelated response. In the third exchange, the system correctly classifies the user's query as out-of-scope, and yields a fallback response.
Out-of-scope queries are inevitable for a task-oriented dialog system, as most users will not be fully cognizant of the system's capabilities, which are limited by the fixed number of intent classes. Correctly identifying out-of-scope cases is thus crucial in deployed systems—both to avoid performing the wrong action and also to identify potential future directions for development. However, this problem has seen little attention in analyses and evaluations of intent classification systems.
This paper fills this gap by analyzing intent classification performance with a focus on out-of-scope handling. To do so, we constructed a new dataset with 23,700 queries that are short and unstructured, in the same style made by real users of task-oriented systems. The queries cover 150 intents, plus out-of-scope queries that do not fall within any of the 150 in-scope intents.
We evaluate a range of benchmark classifiers and out-of-scope handling methods on our dataset. BERT BIBREF1 yields the best in-scope accuracy, scoring 96% or above even when we limit the training data or introduce class imbalance. However, all methods struggle with identifying out-of-scope queries. Even when a large number of out-of-scope examples are provided for training, there is a major performance gap, with the best system scoring 66% out-of-scope recall. Our results show that while current models work on known classes, they have difficulty on out-of-scope queries, particularly when data is not plentiful. This dataset will enable future work to address this key gap in the research and development of dialog systems. All data introduced in this paper can be found at https://github.com/clinc/oos-eval.
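One common baseline for out-of-scope handling, illustrative rather than the paper's exact method, is to predict the top intent only when the classifier's confidence clears a threshold and to fall back to out-of-scope otherwise; the threshold value below is an assumption.

```python
def classify_with_oos(probs, labels, threshold=0.7):
    """Return the predicted intent, or 'out_of_scope' when the top
    class probability falls below a confidence threshold."""
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < threshold:
        return "out_of_scope"
    return labels[best]
```

With a well-calibrated classifier, queries like the balance example above land confidently in-scope, while queries outside the 150 supported intents tend to produce flatter probability distributions and trigger the fallback.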
We evaluate a range of benchmark classifiers and out-of-scope handling methods on our dataset.
|
What did the authors evaluate on the database?
|
The authors evaluated a range of benchmark classifiers and out-of-scope handling methods on the dataset.
|
null | false
| null |
Titanic is a 1997 American epic romance and disaster film directed, written, produced, and co-edited by James Cameron. Incorporating both historical and fictionalized aspects, it is based on accounts of the sinking of the RMS Titanic and stars Kate Winslet and Leonardo DiCaprio as members of different social classes who fall in love aboard the ship during its ill-fated maiden voyage. The film also features Billy Zane, Kathy Bates, Frances Fisher, Gloria Stuart, Bernard Hill, Jonathan Hyde, Victor Garber, and Bill Paxton.
Cameron's inspiration for the film came from his fascination with shipwrecks; he felt a love story interspersed with the human loss would be essential to convey the emotional impact of the disaster. Production began on September 1, 1995, when Cameron shot footage of the actual Titanic wreck. The modern scenes on the research vessel were shot on board the Akademik Mstislav Keldysh, which Cameron had used as a base when filming the wreck. Scale models, computer-generated imagery, and a reconstruction of the Titanic built at Baja Studios were used to re-create the sinking. The film was co-financed by Paramount Pictures and 20th Century Fox; the former handled distribution in North America while the latter released the film internationally. It was the most expensive film ever made at the time, with a production budget of $200 million. Filming took place from July 1996 to March 1997.
Upon its release on December 19, 1997, Titanic achieved significant critical and commercial success, and then received numerous accolades. It was praised for its visual effects, performances (particularly DiCaprio, Winslet, and Stuart), production values, Cameron's direction, musical score, cinematography, story, and emotional depth. Nominated for 14 Academy Awards, it tied All About Eve (1950) for the most Oscar nominations, and won 11, including the awards for Best Picture and Best Director, tying Ben-Hur (1959) for the most Oscars won by a single film. With an initial worldwide gross of over $1.84 billion, Titanic was the first film to reach the billion-dollar mark. It remained the highest-grossing film of all time until Cameron's next film, Avatar, surpassed it in 2010. A number of re-releases since 1997 have pushed the film's worldwide total to $2.249 billion, making it the second film to gross more than $2 billion worldwide (after Avatar). It was also selected for preservation in the United States National Film Registry in 2017 for being "culturally, historically or aesthetically significant".
|
Which was the first film to reach he billion-dollar mark?
|
With an initial worldwide gross of over $1.84 billion, Titanic was the first film to reach the billion-dollar mark. Upon its release on December 19, 1997, Titanic achieved significant critical and commercial success, and then received numerous accolades. It was praised for its visual effects, performances (particularly DiCaprio, Winslet, and Stuart), production values, Cameron's direction, musical score, cinematography, story, and emotional depth. It was nominated for 14 Academy Awards, and won 11, including the awards for Best Picture and Best Director.
|
1908.07822
| false
| null |
In general, the model transforms the segments into object-pairs with the TC-CNN and passes the sentence through a bi-GRU to obtain the global representation. Then we integrate the object-pairs with the global representation and make a pair-wise inference to detect the relationship among the segments. Ablation studies show that the proposed SCRN at the segment level has the capacity for relational reasoning and improves the result significantly.
|
In general, the model transforms the segments into object-pairs with the TC-CNN and passes the sentence through a bi-GRU to obtain the global representation. Then we integrate the object-pairs with the global representation and make a pair-wise inference to detect the relationship among the segments. Ablation studies show that the proposed SCRN at the segment level has the capacity for relational reasoning and improves the result significantly.
|
How is Relation network used to infer causality at segment level?
|
The answers are shown as follows:
* we integrate object-pairs with global representation and make a pair-wise inference to detect the relationship among the segments
|
null | false
| null |
Holi ( /ˈhoʊliː/) is a popular and significant Hindu festival celebrated as the Festival of Colours, Love and Spring. It celebrates the eternal and divine love of the god Radha and Krishna. Additionally, the day also signifies the triumph of good over evil,as it commemorates the victory of Vishnu as Narasimha Narayana over Hiranyakashipu. Holi is originated and is predominantly celebrated in the Indian subcontinent but has also spread to other regions of Asia and parts of the Western world through the Indian diaspora.
|
Tell me about Holi and why so many people in the world celebrate it?
|
Holi is a festival celebrated by Hindus in India. Even though it originated in the Hindu religion, it is celebrated across India irrespective of religion and has become a notable representation of Indian culture. One reason so many people in the world celebrate it is that India has one of the largest populations in the world.
|
null | false
| null |
Why is film photography popular again?
|
Film photography saw a long decline in popularity after the invention of digital cameras, with their remarkable leaps in technological advances. Today, a decent professional digital camera significantly outperforms 35mm film cameras in resolution, burst-shooting capabilities, nearly unlimited storage, and more. In recent years, though, the "film look" has made a comeback with its lower resolution, grainy texture, and unique color rendition. On top of that, half-decent retro-looking film cameras can be acquired for much less than a capable digital camera, lowering the entry barrier for those on a budget who are after that look. An abundant pool of film stocks also adds to the fun, as each can differ drastically in low-light sensitivity, color rendition, graininess, post-processing chemistry, and, of course, price. However, many major photographic film producers have either discontinued or significantly reduced their production. This, coupled with the surge in demand, has led to skyrocketing prices on most still-available film stocks. On top of this, many popular and sought-after film cameras like the Leica M6, Nikon FM2, and Contax T2 have seen steep increases in their used-market prices. Given these price hikes on film and decades-old equipment, film photography is quickly becoming an expensive hobby, where each click of the shutter can cost more than 50 cents. How ironic, when digital storage is virtually free. The authentic "film look" carries a hefty price tag today, and the future popularity of film photography is uncertain.
|
|
null | false
| 159
|
The generators were implemented using the TensorFlow library BIBREF31 and trained with a training, validation, and testing ratio of 3:1:1. The hidden layer size and beam size were set to 80 and 10, respectively, and the generators were trained with a 70% dropout rate. We performed 5 runs with different random initializations of the network, and training was terminated using early stopping. We then chose the model that yields the highest BLEU score on the validation set, as shown in Table 2. Since the trained models can differ depending on the initialization, we also report results averaged over 5 randomly initialized networks. Note that, except for the results reported in Table 2, all results shown were averaged over 5 randomly initialized networks. We set λ to 1000 to severely discourage the reranker from selecting utterances which contain either redundant or missing slots. For each DA, we over-generated 20 candidate sentences and selected the top 5 realizations after reranking. Moreover, in order to better understand the effectiveness of our proposed methods, we: (i) performed ablation experiments to demonstrate the contribution of each proposed cell (Tables 2, 3), (ii) trained the models on the Laptop domain with a varied proportion of training data, from 10% to 100% (Figure 3), (iii) trained general models by merging all the data from the four domains together and tested them on each individual domain (Figure 4), and (iv) trained adaptation models on merged data from the restaurant and hotel domains, then fine-tuned the model on the laptop domain with varied amounts of adaptation data (Figure 5).
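The reranking step with the large λ penalty can be sketched as follows. The (utterance, model_score, realized_slots) candidate format is an illustrative assumption; only the scoring rule, model score minus λ times the number of missing and redundant slots, reflects the text above.

```python
def rerank(candidates, required_slots, lam=1000.0):
    """Rank over-generated candidates so that a large lambda (1000, as in
    the text) severely discourages missing or redundant slots.
    Each candidate is an (utterance, model_score, realized_slots) triple."""
    def score(cand):
        _, model_score, slots = cand
        missing = len(required_slots - slots)
        redundant = len(slots - required_slots)
        return model_score - lam * (missing + redundant)
    return sorted(candidates, key=score, reverse=True)
```

With λ this large, any slot-perfect candidate outranks every candidate with even one slot error, regardless of model score, which matches the stated intent of the penalty.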
The generators were implemented using the TensorFlow library (Abadi et al., 2016) and trained with training, validation and testing ratio as 3:1:1.
|
What is the training, validation, and testing ratio of the datasets?
|
3:1:1.
|
null | false
| null |
Which of the following are companies that sell sports apparel? Nike, adidas, Energizer, Logitech, Dell, Saucony
|
Nike, Adidas, Saucony
|
|
null | false
| 274
|
Internet “trolls” are users of an online community who quarrel and upset people, seeking to sow discord by posting inflammatory content. More recently, organized “troll farms” of political opinion manipulation trolls have also emerged.
Such farms usually consist of state-sponsored agents who control a set of pseudonymous user accounts and personas, the so-called “sockpuppets”, which disseminate misinformation and propaganda in order to sway opinions, destabilize society, and even influence elections BIBREF0.
The behavior of political trolls has been analyzed in different recent circumstances, such as the 2016 US Presidential Elections and the Brexit referendum in UK BIBREF0, BIBREF1. However, this kind of analysis requires painstaking and time-consuming manual labor to sift through the data and to categorize the trolls according to their actions. Our goal in the current paper is to automate this process with the help of machine learning (ML). In particular, we focus on the case of the 2016 US Presidential Elections, for which a public dataset from Twitter is available. For this case, we consider only accounts that post content in English, and we wish to divide the trolls into some of the functional categories identified by BIBREF0: left troll, right troll, and news feed.
We consider two possible scenarios. The first, prototypical ML scenario is supervised learning, where we want to learn a function from users to categories {left, right, news feed}, and the ground truth labels for the troll users are available. This scenario has been considered previously in the literature by BIBREF2. Unfortunately, a solution for such a scenario is not directly applicable to a real-world use case. Suppose a new troll farm trying to sway the upcoming European or US elections has just been discovered. While the identities of the accounts might be available, the labels to learn from would not be present. Thus, any supervised machine learning approach would fall short of being a fully automated solution to our initial problem.
A more realistic scenario assumes that labels for troll accounts are not available. In this case, we need to use some external information in order to learn a labeling function. Indeed, we leverage more persistent entities and their labels: news media. We assume a learning scenario with distant supervision where labels for news media are available. By combining these labels with a citation graph from the troll accounts to news media, we can infer the final labeling on the accounts themselves without any need for manual labeling.
One advantage of using distant supervision is that we can get insights about the behavior of a newly-discovered troll farm quickly and effortlessly. Differently from troll accounts in social media, which usually have a high churn rate, news media accounts in social media are quite stable. Therefore, the latter can be used as an anchor point to understand the behavior of trolls, for which data may not be available.
We rely on embeddings extracted from social media. In particular, we use a combination of embeddings built on the user-to-user mention graph, the user-to-hashtag mention graph, and the text of the tweets of the troll accounts. We further explore several possible approaches using label propagation for the distant supervision scenario.
As a result of our approach, we improve the classification accuracy by more than 5 percentage points for the supervised learning scenario. The distant supervision scenario has not previously been considered in the literature, and is one of the main contributions of the paper. We show that even by hiding the labels from the ML algorithm, we can recover 78.5% of the correct labels.
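The core distant-supervision idea, projecting labels from news media onto the trolls who cite them, can be sketched minimally. The dict-based data format is an illustrative assumption; the paper combines this idea with graph embeddings, BERT, and label propagation rather than a bare majority vote.

```python
from collections import Counter

def propagate_media_labels(citations, media_labels):
    """Assign each troll account the majority label (left / right /
    news feed) of the news media it cites. `citations` maps a troll id
    to the list of media ids it has linked to."""
    troll_labels = {}
    for troll, cited in citations.items():
        votes = Counter(media_labels[m] for m in cited if m in media_labels)
        if votes:  # skip trolls that cite no labeled media
            troll_labels[troll] = votes.most_common(1)[0][0]
    return troll_labels
```

Because media accounts are far more stable than troll accounts, this anchor lets a newly discovered farm be characterized without any manual labeling of the trolls themselves.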
The contributions of this paper can be summarized as follows:
We predict the political role of Internet trolls (left, news feed, right) in a realistic, unsupervised scenario, where labels for the trolls are not available, and which has not been explored in the literature before.
We propose a novel distant supervision approach for this scenario, based on graph embeddings, BERT, and label propagation, which projects the more-commonly-available labels for news media onto the trolls who cited these media.
We improve over the state of the art in the traditional, fully supervised setting, where training labels are available.
We rely on embeddings extracted from social media. In particular, we use a combination of embeddings built on the user-to-user mention graph, the user-to-hashtag mention graph, and the text of the tweets of the troll accounts. We further explore several possible approaches using label propagation for the distant supervision scenario.
|
What kind of combination of embeddings do they use?
|
A combination of embeddings built on the user-to-user mention graph, the user-to-hashtag mention graph, and the text of the tweets of the troll accounts.
|
null | false
| null |
Classify the following as either airplanes or helicopters:
- EC-135
- DC-10
- Antonov An 225
- Boeing 787
- Bell 206
- MD-500
- MD-11
- Airbus A320
- Airbus H165
- Cessna 172
|
Airplanes:
- McDonnell Douglas DC-10
- Antonov An-225
- Boeing 787
- MD-11
- Airbus A320
- Cessna 172
Helicopters:
- EC-135
- Bell 206
- MD-500
- Airbus H165
|
|
null | false
| null |
What are the top 7 national parks in the United States?
|
Top national parks in the US are as follows:
- Yosemite National Park
- Grand Canyon National Park
- Yellowstone National Park
- Glacier National Park
- Zion National Park
- Grand Teton National Park
- Bryce Canyon National Park
|
|
null | false
| null |
How many innings are played in a professional baseball game?
|
Nine innings are played in a professional baseball game.
|
|
null | false
| null |
In her teens in the 1880s, she taught school for a time in Sac County, Iowa. She studied at the Mount Carroll Seminary (later known as Shimer College), exiting in 1890. Although then as now primarily a liberal arts institution, the school also offered courses in stenography and typewriting. Deneen performed well enough in her studies to be hired as an instructor of these subjects in 1889 and 1890.
|
What did Deneen do in her teens?
|
In her teens, Deneen taught school in Sac County, Iowa.
|
null | false
| null |
What is underwriting?
|
Underwriting (UW) services are provided by some large financial institutions, such as banks, insurance companies and investment houses, whereby they guarantee payment in case of damage or financial loss and accept the financial risk for liability arising from such guarantee. An underwriting arrangement may be created in a number of situations including insurance, issues of security in a public offering, and bank lending, among others. The person or institution that agrees to sell a minimum number of securities of the company for commission is called the underwriter.
The term "underwriting" derives from the Lloyd's of London insurance market. Financial backers (or risk takers), who would accept some of the risk on a given venture (historically a sea voyage with associated risks of shipwreck) in exchange for a premium, would literally write their names under the risk information that was written on a Lloyd's slip created for this purpose.
|
|
null | false
| 2
|
There has been significant progress on Named Entity Recognition (NER) in recent years using models based on machine learning algorithms BIBREF0 , BIBREF1 , BIBREF2 . As with other Natural Language Processing (NLP) tasks, building NER systems typically requires a massive amount of labeled training data which are annotated by experts. In real applications, we often need to consider new types of entities in new domains where we do not have existing annotated data. For such new types of entities, however, it is very hard to find experts to annotate the data within short time limits and hiring experts is costly and non-scalable, both in terms of time and money.
In order to quickly obtain new training data, we can use crowdsourcing as an alternative, obtaining annotations at lower cost and in a short time. But in exchange, crowd annotations from non-experts may be of lower quality than those from experts. Building a powerful NER system on such low-quality annotated data is a major challenge. Although we can obtain high-quality annotations for each input sentence by majority voting, doing so can waste human labor, especially for ambiguous sentences that may require many annotations to reach an agreement. Thus most existing work builds models directly on crowd annotations, trying to model the differences among annotators, for example, that some annotators may be more trustworthy BIBREF3 , BIBREF4 .
Here we focus mainly on Chinese NER, which is more difficult than NER for languages such as English because of the lack of morphological cues such as capitalization and, in particular, the uncertainty of word segmentation. Chinese NE taggers trained on the news domain often perform poorly in other domains. Although we can alleviate the problem by using character-level tagging to avoid poor word segmentation performance BIBREF5 , a large gap remains when the target domain changes, especially for social media text. Thus, in order to obtain a good tagger for new domains, as well as for new entity types, we require large amounts of labeled data. Crowdsourcing is therefore a reasonable solution in these situations.
In this paper, we propose an approach to training a Chinese NER system on crowd-annotated data. Our goal is to extract additional annotator-independent features by adversarial training, alleviating the annotation noise of non-experts. The idea of adversarial training in neural networks has been used successfully in several NLP tasks, such as cross-lingual POS tagging BIBREF6 and cross-domain POS tagging BIBREF7 . Those works use it to reduce the negative influence of input divergences among domains or languages, while we use adversarial training to reduce the negative influence of different crowd annotators. To the best of our knowledge, we are the first to apply adversarial training to crowd annotation learning.
In the learning framework, we perform adversarial training between the basic NER model and an additional worker discriminator. We have a common Bi-LSTM for representing annotator-generic information and a private Bi-LSTM for representing annotator-specific information. We build another label Bi-LSTM from the crowd-annotated NE label sequence, which reflects the minds of the crowd annotators, who learn entity definitions by reading the annotation guidebook. The common and private Bi-LSTMs are used for NER, while the common and label Bi-LSTMs serve as inputs to the worker discriminator. The parameters of the common Bi-LSTM are learned by adversarial training, maximizing the worker-discriminator loss while minimizing the NER loss. The resulting features of the common Bi-LSTM are thus worker-invariant and NER-sensitive.
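The core of this adversarial update can be sketched with a toy numpy example: a shared linear encoder (standing in for the common Bi-LSTM) feeds both a task head and a worker-discriminator head, and the shared weights descend on the task loss while ascending on the discriminator loss (gradient reversal). All names, dimensions, and the linear/logistic simplification are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the paper's components: a shared ("common") linear
# encoder feeding both a task head (NER-like) and a worker discriminator.
W_common = rng.normal(size=(8, 4))   # shared feature map
w_task = rng.normal(size=4)          # task head weights
w_disc = rng.normal(size=4)          # worker-discriminator head weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def shared_grads(x, y_task, y_disc):
    """Gradients of both binary cross-entropy losses w.r.t. W_common."""
    h = x @ W_common                   # shared representation
    p_t = sigmoid(h @ w_task)
    p_d = sigmoid(h @ w_disc)
    g_task = np.outer(x, (p_t - y_task) * w_task)
    g_disc = np.outer(x, (p_d - y_disc) * w_disc)
    return g_task, g_disc

x = rng.normal(size=8)
g_task, g_disc = shared_grads(x, y_task=1.0, y_disc=0.0)

# Adversarial update on the shared weights only: descend on the task
# loss, *ascend* on the discriminator loss, so the shared features
# become task-sensitive but annotator-invariant.
lam, lr = 0.1, 0.01
W_common -= lr * (g_task - lam * g_disc)
```

In the full model, the discriminator head itself is still trained normally to predict the worker; only the shared encoder sees the reversed gradient.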
For evaluation, we create two Chinese NER datasets in two domains: dialog and e-commerce. We require the crowd annotators to label the types of entities, including person, song, brand, product, and so on. Identifying these entities is useful for chatbot and e-commerce platforms BIBREF8 . Then we conduct experiments on the newly created datasets to verify the effectiveness of the proposed adversarial neural network model. The results show that our system outperforms very strong baseline systems. In summary, we make the following contributions:
As with other Natural Language Processing (NLP) tasks, building NER systems typically requires a massive amount of labeled training data which are annotated by experts.
|
What are the requirements for building NER systems as with other Natural Language Processing (NLP) tasks?
|
As with other Natural Language Processing (NLP) tasks, building NER systems typically requires a massive amount of labeled training data which are annotated by experts.
|
null | false
| null |
Who is called the Napoleon of Iran?
|
Nadir Shah
|
|
1803.02839
| true
| null |
Second, we propose embedding schemes that explicitly embed words as elements of a Lie group. In practice, these embedding schemes would involve representing words as constrained matrices, and optimizing the elements, subject to the constraints, according to a loss function constructed from invariants of the matrices, and then applying the matrix log to obtain Lie vectors. A prototypical implementation, in which the words are assumed to be in the fundamental representation of the special orthogonal group, INLINEFORM0 , and are conditioned on losses sensitive to the relative actions of words, is the subject of another manuscript presently in preparation.
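As a toy illustration of the final "apply the matrix log to obtain Lie vectors" step — assuming the fundamental representation of SO(3), and in no way representing the manuscript's actual embedding scheme — the logarithm of a rotation matrix can be computed in closed form via the Rodrigues formula:

```python
import numpy as np

def so3_log(R):
    """Matrix logarithm of a rotation R in SO(3), returned as the
    corresponding 3-dimensional Lie-algebra vector. Purely a sketch
    of mapping group-constrained matrices to flat Lie vectors."""
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    S = theta / (2 * np.sin(theta)) * (R - R.T)   # skew-symmetric log
    return np.array([S[2, 1], S[0, 2], S[1, 0]])

# Rotation by 0.5 rad about the z-axis.
c, s = np.cos(0.5), np.sin(0.5)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
v = so3_log(R)   # recovers the axis-angle vector [0, 0, 0.5]
```

A word represented as such a matrix would thus be summarized by a small flat vector, analogous to a conventional word embedding.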
A prototypical implementation, in which the words are assumed to be in the fundamental representation of the special orthogonal group, INLINEFORM0 , and are conditioned on losses sensitive to the relative actions of words, is the subject of another manuscript presently in preparation.
|
Is there a formal proof that the RNNs form a representation of the group?
|
No.
|
null | false
| null |
5 fruits healthier than apples
|
Orange
Plums
Indian Gooseberry
Dates
Kiwi
|
|
null | false
| 381
|
Named entity typing is the task of detecting the type (e.g., person, location, or organization) of a named entity in natural language text. Entity type information has shown to be useful in natural language tasks such as question answering BIBREF0 , knowledge-base population BIBREF1 , BIBREF2 , and co-reference resolution BIBREF3 . Motivated by its application to downstream tasks, recent work on entity typing has moved beyond standard coarse types towards finer-grained semantic types with richer ontologies BIBREF0 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . Rather than assuming an entity can be uniquely categorized into a single type, the task has been approached as a multi-label classification problem: e.g., in “... became a top seller ... Monopoly is played in 114 countries. ...” (fig:arch), “Monopoly” is considered both a game as well as a product.
The state-of-the-art approach BIBREF8 for fine-grained entity typing employs an attentive neural architecture to learn representations of the entity mention as well as its context. These representations are then combined with hand-crafted features (e.g., lexical and syntactic features), and fed into a linear classifier with a fixed threshold. While this approach outperforms previous approaches which only use sparse binary features BIBREF4 , BIBREF6 or distributed representations BIBREF9 , it has a few drawbacks: (1) the representations of left and right contexts are learnt independently, ignoring their mutual connection; (2) the attention on context is computed solely upon the context, considering no alignment to the entity; (3) document-level contexts which could be useful in classification are not exploited; and (4) hand-crafted features heavily rely on system or human annotations.
To overcome these drawbacks, we propose a neural architecture (fig:arch) which learns more context-aware representations by using a better attention mechanism and taking advantage of semantic discourse information available in both the document as well as sentence-level contexts. Further, we find that adaptive classification thresholds lead to further improvements. Experiments demonstrate that our approach, without any reliance on hand-crafted features, outperforms prior work on three benchmark datasets.
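The adaptive-threshold idea can be sketched generically: instead of one fixed cutoff for every type, pick a per-type decision threshold by maximizing F1 on held-out data. The grid search below is an illustrative sketch, not necessarily the paper's exact procedure.

```python
import numpy as np

def adaptive_thresholds(probs, labels, grid=np.linspace(0.05, 0.95, 19)):
    """For each entity type (column), pick the decision threshold in
    `grid` that maximizes F1 on held-out (probs, labels)."""
    n_types = probs.shape[1]
    best = np.full(n_types, 0.5)
    for t in range(n_types):
        f1s = []
        for th in grid:
            pred = probs[:, t] >= th
            tp = np.sum(pred & (labels[:, t] == 1))
            fp = np.sum(pred & (labels[:, t] == 0))
            fn = np.sum(~pred & (labels[:, t] == 1))
            denom = 2 * tp + fp + fn
            f1s.append(2 * tp / denom if denom else 0.0)
        best[t] = grid[int(np.argmax(f1s))]
    return best

# Tiny synthetic dev set: 3 mentions, 2 types.
probs = np.array([[0.9, 0.2], [0.8, 0.7], [0.1, 0.6]])
gold = np.array([[1, 0], [1, 1], [0, 1]])
th = adaptive_thresholds(probs, gold)
```

At prediction time, each type's probability is then compared against its own tuned threshold rather than a global constant.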
To overcome these drawbacks, we propose a neural architecture (fig:arch) which learns more context-aware representations by using a better attention mechanism and taking advantage of semantic discourse information available in both the document as well as sentence-level contexts.
|
What kind of neural architecture do they propose?
|
A neural architecture (fig:arch) which learns more context-aware representations by using a better attention mechanism and taking advantage of semantic discourse information available in both the document as well as sentence-level contexts.
|
null | false
| null |
What are the benefits of audio books over physical books?
|
Audio books can be beneficial for multiple reasons. For one, they exist digitally on your phone or in the cloud, so there is no physical book to carry around. This makes it much easier for the listener to have a book with them at all times. Additionally, audio books can be consumed when someone is not able to hold a physical book, such as when they are driving or riding a bike. This allows the reader to enjoy a book during a variety of activities. These are a few reasons why some people enjoy audio books.
|
|
1808.08850
| false
| null |
Results from Table TABREF31 may give the impression that INLINEFORM0 is just a scaled INLINEFORM1 . While it is true that they show a linear correlation, INLINEFORM2 may produce a different system ranking than INLINEFORM3 given the integral multi-reference principle it follows. However, what we consider most valuable about INLINEFORM4 is the twofold inclusion of all available references it performs. First, the construction of INLINEFORM5 provides a more inclusive reference against which systems are evaluated; then, the computation of INLINEFORM6 scales the result depending on the agreement between references.
In this paper we presented WiSeBE, a semi-automatic multi-reference sentence boundary evaluation protocol based on the need for a more reliable way of evaluating the SBD task. We showed how INLINEFORM0 is an inclusive metric which not only evaluates the performance of a system against all references, but also takes into account the agreement between them. From our point of view, this inclusivity is very important given the difficulties present when working with spoken language and the disagreements that a task like SBD can provoke.
Results from Table TABREF31 may give the impression that INLINEFORM0 is just a scaled INLINEFORM1 . While it is true that they show a linear correlation, INLINEFORM2 may produce a different system ranking than INLINEFORM3 given the integral multi-reference principle it follows. However, what we consider most valuable about INLINEFORM4 is the twofold inclusion of all available references it performs. First, the construction of INLINEFORM5 provides a more inclusive reference against which systems are evaluated; then, the computation of INLINEFORM6 scales the result depending on the agreement between references.
We showed how INLINEFORM0 is an inclusive metric which not only evaluates the performance of a system against all references, but also takes into account the agreement between them.
|
What makes it a more reliable metric?
|
It takes into account the agreement between the different references.
|
null | false
| 239
|
In recent years, content delivery has changed drastically, shifting from offline methods to the Internet. It is now the primary source of information for a majority of the populace, especially for ever-changing news updates. This has also caused a shift in users' preferred sources. Previously, these preferences were static, with users sticking to a particular news source. Now, with the plethora of information easily available, users no longer differentiate between sources, opting for whatever is convenient.
Keeping up with the times, news agencies have expanded their digital presence, increasing their reach exponentially. They generate revenue by (1) advertisements on their websites, or (2) a subscription based model for articles that might interest users. Since multiple agencies offer similar content, the user has his pick. To lure in more readers and increase the number of clicks on their content, subsequently enhancing their agency's revenue, writers have begun adopting a new technique - clickbait.
Merriam-Webster defines clickbait as something (such as a headline) designed to encourage readers to click on hyperlinks based on snippets of information accompanying it, especially when those links lead to content of dubious value or interest. It is built to create, and subsequently capitalise on, the Loewenstein information gap BIBREF0 by purposefully misrepresenting or overpromising what can be expected while reading a story on the web, be it through a headline, image or related text.
We propose a two-pronged approach to detect such headlines. The first component leverages distributional semantics of the title text and models its temporal and sequential properties. The article title is represented as a concatenation of its sub-word level embeddings. The sub-word representation serves as input to a bidirectional LSTM network. The contribution of a sub-word towards the clickbait nature of the headline is calculated in a differential manner since the output of the LSTM is passed into an attention layer BIBREF1 , following which it goes through a dense layer. The second component focuses on Doc2Vec embeddings of the title and article content, performing an element-wise multiplication of the two. This is concatenated with the dense layer output from the previous component. The obtained output is then passed through multiple hidden layers which perform the final classification.
Previous work in this field that has exploited the power of embeddings has considered either word vectors, for their ability to create context-sensitive word representations, or character-level word embeddings to model the orthographic features of a word. We propose the use of sub-word level representations since it incorporates the word's morphological features. Attaching an attention mechanism to it helps us identify the surprise associated with each representation within the clickbait. One of the identifying characteristics of clickbait is that the article title differs from the text attached to it. For this reason, we define a component to capture the interaction between these attributes and augment our model.
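A minimal numpy sketch of the two components' interface: attention-weighted pooling over sub-word hidden states, plus an element-wise product of title and content document vectors, concatenated into one feature vector. The dimensions, random vectors, and simple dot-product attention scoring are assumptions for illustration; the actual model uses a trained bidirectional LSTM and Doc2Vec embeddings.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

def attend(states, w):
    """Score each timestep against a learned vector w, softmax the
    scores, and return the attention-weighted sum of the states."""
    alpha = softmax(states @ w)
    return alpha @ states, alpha

rng = np.random.default_rng(1)
H = rng.normal(size=(6, 16))        # 6 sub-word timesteps, 16-dim BiLSTM outputs
ctx, alpha = attend(H, rng.normal(size=16))

title_vec = rng.normal(size=32)     # stand-in Doc2Vec embedding of the title
content_vec = rng.normal(size=32)   # stand-in Doc2Vec embedding of the article body
interaction = title_vec * content_vec           # element-wise title/content product
features = np.concatenate([ctx, interaction])   # input to the dense classifier
```

The element-wise product captures the (mis)match between title and body that characterizes clickbait, while the attention summary captures surprise within the title itself.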
We propose a two-pronged approach to detect such headlines. The first component leverages distributional semantics of the title text and models its temporal and sequential properties. The article title is represented as a concatenation of its sub-word level embeddings. The sub-word representation serves as input to a bidirectional LSTM network. The contribution of a sub-word towards the clickbait nature of the headline is calculated in a differential manner since the output of the LSTM is passed into an attention layer , following which it goes through a dense layer. The second component focuses on Doc2Vec embeddings of the title and article content, performing an element wise multiplication of the two. This is concatenated with the dense layer output from the previous component.
|
What does the second component focus on?
|
Doc2Vec embeddings of the title and article content.
|
null | false
| null |
Where should I go hut skiing in Canada?
|
There are hundreds of public and private ski huts in Canada, with the vast majority residing in British Columbia. Many of these huts require a helicopter ride to get you to and from the hut, and offer guides and chefs on site. A good starting point for discovering hut skiing opportunities in British Columbia is to visit the Alpine Club of Canada website.
|
|
null | false
| null |
Nelson Mandela Boulevard (Persian: بلوار نلسون ماندلا) (old names: Jordan Street and Africa Boulevard), still known as Jordan, is an affluent and upper-class district in northern Tehran. Some people compare the Jordan District to the Kensington District in London, UK, as the area is a mixture of residential and commercial locales, filled with the homes and businesses of many politicians, diplomats, expatriates, and artists. Before the Iranian Revolution in 1979 it was called Jordan Street, named after the American Presbyterian missionary Samuel M. Jordan, and it used to be one of the most popular avenues in Tehran, Iran. Renamed Nelson Mandela Boulevard in recent years, it is among the most famous streets in north Tehran after Valiasr Street, which is the longest conventional street in the Middle East, spanning a south–north axis. It is also famous for being one of the liveliest streets of Tehran, experiencing regular traffic jams even at 2:00 A.M. during summer.
|
What are some interesting facts about Nelson Mandela Boulevard described in this paragraph?
|
It is a mixture of residential and commercial locales, home to upper-class residents such as politicians, diplomats, expatriates and artists. It used to be called Jordan Street before the Iranian Revolution. It is among the most famous streets of Tehran, experiencing regular traffic jams even at 2:00 AM during summer.
|
null | false
| 249
|
The goal of the current research was to investigate the automatic detection of cyberbullying-related posts on social media. Given the information overload on the web, manual monitoring for cyberbullying has become unfeasible. Automatic detection of signals of cyberbullying would enhance moderation and allow to respond quickly when necessary.
Cyberbullying research has often focused on detecting cyberbullying `attacks', hence overlooking posts written by victims and bystanders. However, these posts could just as well indicate that cyberbullying is going on. The main contribution of this paper is that it presents a system for detecting signals of cyberbullying on social media, including posts from bullies, victims and bystanders. A manually annotated cyberbullying dataset was created for two languages, which will be made available for public scientific use. Moreover, while a fair amount of research has been done on cyberbullying detection for English, we believe this is one of the first papers that focus on Dutch as well.
A set of binary classification experiments were conducted to explore the feasibility of automatic cyberbullying detection on social media. In addition, we sought to determine which information sources contribute to this task. Two classifiers were trained on English and Dutch ASKfm data and evaluated on a holdout test set of the same genre. Our experiments reveal that the current approach is a promising strategy for detecting signals of cyberbullying in social media data automatically. After feature selection and hyperparameter optimisation, the classifiers achieved an F INLINEFORM0 -score of 64.32% and 58.72% for English and Dutch, respectively. The systems thereby significantly outperformed a keyword baseline and an (unoptimised) INLINEFORM1 -gram baseline. Analysis of the results revealed that false positives often include implicit cyberbullying or offenses through irony, the challenge of which will constitute an important area for future work.
Another interesting direction for future work would be the detection of fine-grained cyberbullying-related categories such as threats, curses and expressions of racism and hate. When applied in a cascaded model, the system could find severe cases of cyberbullying with high precision. This would be particularly interesting for monitoring purposes, since it would allow to prioritise signals of bullying that are in urgent need for manual inspection and follow-up.
Finally, future work will focus on the detection of participants (or roles) typically involved in cyberbullying. This would make it possible to analyse the context of a cyberbullying incident and hence evaluate its severity. When applied as moderation support on online platforms, such a system would make it possible to provide feedback as a function of the recipient (i.e., a bully, victim, or bystander).
When applied as moderation support on online platforms, such a system would make it possible to provide feedback as a function of the recipient.
|
What would future work focus on?
|
It can provide feedback as a function of the recipient.
|
null | false
| null |
What are good foods for keto diet
|
Keto is a high fat and low carbohydrates diet. This diet helps the body to get into a state called ketosis where the body uses fat as an energy source instead of carbohydrates.
The ideal food items should have high fat content and low or no carbs and sugar. Some meal suggestions are:
- cauliflower cheese
- broccoli cheese
- Avocado chicken salad
- cauliflower rice and chicken
- Lamb curry
- lamb roast with Brussels sprouts
- beef roast
- beef steak with halloumi fries
- salads including cucumber, onion, broccoli
Keto food should not have potatoes, rice, pasta, wheat, sugar etc.
Diet and sugar free soda can be good alternatives to curb sugar cravings
|
|
null | false
| null |
What was the deadliest battle in history?
|
The Battle of Stalingrad was a brutal military campaign between Russian forces and those of Nazi Germany and the Axis powers during World War II. The battle is infamous as one of the largest, longest and bloodiest engagements in modern warfare: From August 1942 through February 1943, more than two million troops fought in close quarters – and nearly two million people were killed or injured in the fighting, including tens of thousands of Russian civilians. The giant campaign was really a showdown between Hitler and Stalin over the city that bore the name of the Soviet dictator, which ultimately turned the tide of World War II in favor of the Allied forces, and helped bring about a change of fortunes that would eventually destroy Nazi Germany and doom Hitler.
|
|
null | false
| null |
Why is my car making a funny noise?
|
There are many reasons why vehicles make abnormal noises, and it is recommended that a mechanic take a look at the car. Potential problems include a broken transmission, worn brakes, a battery that needs replacing, or an old or flat tire.
|
|
null | false
| null |
Delta Lambda Phi (ΔΛΦ) is an international social fraternity for gay, bisexual, transgender and progressive men. It offers a social environment and structure similar to other Greek-model college fraternities. The fraternity was founded on October 15, 1986, by Vernon L. Strickland III in Washington, D.C.: becoming the first national social fraternity for gay, bisexual, and straight progressive men. The full, corporate name of the fraternity is Delta Lambda Phi Social Fraternity, but it is commonly referred to as "DLP" by its members. As of 2007, DLP was one of the fastest-growing fraternities in the United States.
|
Given this reference text about Delta Lambda Phi, during which years was the DLP fraternity not the fastest growing fraternity in the US?
|
Between its founding in 1986 and 2007, DLP existed but was not described as one of the fastest-growing fraternities.
|
null | false
| null |
Paris is located in northern central France, in a north-bending arc of the river Seine whose crest includes two islands, the Île Saint-Louis and the larger Île de la Cité, which form the oldest part of the city. The river's mouth on the English Channel (La Manche) is about 233 mi (375 km) downstream from the city. The city is spread widely on both banks of the river. Overall, the city is relatively flat, and the lowest point is 35 m (115 ft) above sea level. Paris has several prominent hills, the highest of which is Montmartre at 130 m (427 ft).
|
Extract the highest point in the city of Paris from this paragraph.
|
The highest point is Montmartre, at 130 m (427 ft).
|
null | false
| null |
Tell me a terrifying fact about moray eels.
|
Moray eels have two sets of jaws: the outer jaws and the inner, pharyngeal jaws. The outer jaws grab the prey, and the pharyngeal jaws then drag it into the eel's throat.
|
|
null | false
| null |
Give me exactly 5 characteristics of award winning songs.
|
Popular songs that usually win awards have some common characteristics:
1. A catchy chorus that people can sing-a-long with
2. A simple beat that can be clapped along with or danced to
3. A powerful message in the lyrics that is relatable to a wide group of people
4. An instrument solo or prolonged period of instrumental-only music
5. A song length of around 3-5 minutes
|
|
null | false
| null |
The Lockheed Martin F-22 Raptor is an American single-seat, twin-engine, all-weather stealth tactical fighter aircraft developed for the United States Air Force (USAF). As the result of the USAF's Advanced Tactical Fighter (ATF) program, the aircraft was designed as an air superiority fighter, but also has ground attack, electronic warfare, and signals intelligence capabilities. The prime contractor, Lockheed Martin, built most of the F-22's airframe and weapons systems and conducted final assembly, while Boeing provided the wings, aft fuselage, avionics integration, and training systems.
The aircraft first flew in 1997 and was variously designated F-22 and F/A-22 before it formally entered service in December 2005 as the F-22A. Although the USAF had originally planned to buy a total of 750 ATFs, the program was cut to 187 operational aircraft in 2009 due to high costs, a lack of air-to-air missions at the time of production, and the development of the more affordable and versatile F-35.[N 1] The last F-22 was delivered in 2012.
While it had a protracted development and initial operational difficulties, the F-22 has become a critical component of the USAF's tactical airpower. The fighter's combination of stealth, aerodynamic performance, and mission systems enabled a leap in air combat capabilities and set the benchmark for its generation. The F-22 is expected to serve into the 2030s and eventually be succeeded by the USAF's Next Generation Air Dominance (NGAD) manned fighter component.
|
Which company built the F-22 Raptor?
|
The F-22 Raptor was built primarily by Lockheed Martin, but several significant components were built by Boeing including the wings, aft fuselage, avionics integration, and training systems.
|
null | false
| 364
|
Automatic dialog/conversation systems have served humans for a long time in various fields, ranging from train routing nbcitetrain to museum guiding nbcitemuseum. In the above scenarios, the dialogs are domain-specific, and a typical approach to such in-domain systems is by human engineering, for example, using manually constructed ontologies nbciteyoungsigdial, natural language templates nbcitetemplate, and even predefined dialog states nbcitestatetracking.
Recently, researchers have paid increasing attention to open-domain, chatbot-style human-computer conversation, because of its important commercial applications, and because it tackles the real challenges of natural language understanding and generation nbciteretrieval1,acl,aaai. For open-domain dialogs, rules and templates would probably fail, as we can hardly handle the great diversity of dialog topics and natural language sentences. With the increasing number of human-human conversation utterances available on the Internet, previous studies have developed data-oriented approaches in the open domain, which can be roughly categorized into two groups: retrieval systems and generative systems.
When a user issues an utterance (called a query), retrieval systems search for a most similar query in a massive database (which consists of large numbers of query-reply pairs), and respond to the user with the corresponding reply nbciteretrieval1,retrieval2. Through information retrieval, however, we cannot obtain new utterances, that is, all replies have to appear in the database. Also, the ranking of candidate replies is usually judged by surface forms (e.g., word overlaps, tf $\cdot $ idf features) and hardly addresses the real semantics of natural languages.
Generative dialog systems, on the other hand, can synthesize a new sentence as the reply by language models nbciteBoWdialog,acl,aaai. Typically, a recurrent neural network (RNN) captures the query's semantics with one or a few distributed, real-valued vectors (also known as embeddings); another RNN decodes the query embeddings to a reply. Deep neural networks allow complicated interaction by multiple non-linear transformations; RNNs are further suitable for modeling time-series data (e.g., a sequence of words), especially when enhanced with long short-term memory (LSTM) or gated recurrent units (GRUs). Despite these strengths, RNNs also have a weakness when applied to dialog systems: the generated sentence tends to be short, universal, and meaningless, for example, “I don't know” nbcitenaacl or “something” nbciteaaai. This is probably because chatbot-like dialogs are highly diversified and a query may not convey sufficient information for the reply. Even though such universal utterances may be suitable in certain dialog contexts, they make users feel bored and lose interest, and thus are not desirable in real applications.
In this paper, we are curious whether we can combine the above two streams of approaches for open-domain conversation. To this end, we propose an ensemble of retrieval and generative dialog systems. Given a user-issued query, we first obtain a candidate reply by information retrieval from a large database. The query, along with the candidate reply, is then fed to an utterance generator based on the “bi-sequence to sequence” (biseq2seq) model nbcitemultiseq2seq. Such a sequence generator takes into consideration the information contained not only in the query but also in the retrieved reply; hence, it alleviates the low-substance problem and can synthesize replies that are more meaningful. After that, we use the scorer from the retrieval system again for post-reranking. This step can filter out less relevant retrieved replies or meaningless generated ones. The higher-ranked candidate (either retrieved or generated) is returned to the user as the reply.
From the above process, we see that the retrieval and generative systems are integrated by two mechanisms: (1) The retrieved candidate is fed to the sequence generator to mitigate the “low-substance” problem; (2) The post-reranker can make better use of both the retrieved candidate and the generated utterance. In this sense, we call our overall approach an ensemble in this paper. To the best of our knowledge, we are the first to combine retrieval and generative models for open-domain conversation.
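The overall pipeline can be sketched as follows, with `retrieve`, `generate`, and `score` as stubs standing in for the paper's retrieval system, biseq2seq generator, and reranking scorer; the word-overlap scoring used here is purely illustrative.

```python
def _overlap(a, b):
    """Toy relevance score: number of shared words."""
    return len(set(a.split()) & set(b.split()))

def retrieve(query, db):
    """Return the reply of the most similar stored query (naive overlap)."""
    return max(db, key=lambda qr: _overlap(query, qr[0]))[1]

def generate(query, candidate):
    """Placeholder for the biseq2seq generator, which conditions on
    both the query and the retrieved candidate reply."""
    return f"generated reply for: {query}"

def score(query, reply):
    """Placeholder for the retrieval scorer reused for post-reranking."""
    return _overlap(query, reply)

def ensemble_reply(query, db):
    candidate = retrieve(query, db)          # step 1: retrieval
    generated = generate(query, candidate)   # step 2: generation
    # step 3: post-rerank; return whichever reply scores higher
    return max([candidate, generated], key=lambda r: score(query, r))

db = [("how are you", "i am fine thanks"), ("what time is it", "it is noon")]
reply = ensemble_reply("how are you today", db)
```

In this toy run the generated reply wins the rerank because it shares more words with the query; in the real system, the same scorer that ranks retrieval candidates performs this final selection.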
Experimental results show that our ensemble model consistently outperforms each single component in terms of several subjective and objective metrics, and that both retrieval and generative methods contribute an important portion to the overall approach. This also verifies the rationale for building model ensembles for dialog systems.
In this paper, we are curious if we can combine the above two streams of approaches for open-domain conversation. To this end, we propose an ensemble of retrieval and generative dialog systems. Given a user-issued query, we first obtain a candidate reply by information retrieval from a large database. The query, along with the candidate reply, is then fed to an utterance generator based on the “bi-sequence to sequence” (biseq2seq) model [30].
|
How do they obtain a candidate's reply?
|
By information retrieval from a large database.
|
null | false
| null |
Will commercial space travel be affordable to me in my lifetime and what would I pay for it?
|
I have always been fascinated by space and space travel. The cost of space travel is astronomical by most people's standards; however, companies are starting to work on making space travel more affordable. Depending on my financial situation at retirement, I would pay up to $100,000 to travel to space.
|
|
null | false
| null |
Octávio Tarquínio de Sousa (7 September 1889 – 22 December 1959) was a Brazilian writer and historian who wrote several works on the history of Brazil and some of its historical figures, such as emperor Pedro I, José Bonifácio and Diogo Antônio Feijó. His most notable work was the compilation História dos Fundadores do Império do Brasil published in 1957. Tarquínio died in a plane crash in 1959.
Biography
Early life and legal career
The son of Bráulio Tarquínio de Sousa Amaranto and Joana Oliveira de Sousa, Tarquínio was born in Rio de Janeiro on 7 September 1889. He finished his secondary studies and joined the Faculty of Legal and Social Sciences of Rio de Janeiro, where he graduated with a law degree in 1907, beginning his career in the Postal Office administration and later joining the Postal Service of Rio de Janeiro from 1914 to 1918.
In 1918 he became the general attorney of the Federal Court of Audits. In 1924 he was the Brazilian representative to the International Conference on Emigration and Immigration held in Rome. In 1932 he became one of the court's ministers, retiring in 1946.
Career as writer and historian
Tarquínio began his career as a writer in 1914, when he published the work Monólogo das Coisas, a collection of short stories with some of his personal memoirs, and began to actively collaborate with the journal O Estado de S. Paulo from 1916 to 1917. In 1928 he published a translation to Portuguese of Omar Khayam's Rubaiyat, also acting as a literary critic.
Together with important Brazilian intellectuals who emerged in the 1930s, such as Gilberto Freyre, Caio Prado Júnior and Sérgio Buarque de Holanda, Tarquínio elaborated studies on Brazilian history, bringing new concepts to historiography and writing biographies about important historical Brazilian figures such as Bernardo Pereira de Vasconcelos, Evaristo da Veiga, Diogo Antônio Feijó, José Bonifácio and Pedro I. The biographies were written from 1937 to 1952 and later compiled in a single work: História dos Fundadores do Império do Brasil (History of the Founders of the Empire of Brazil), in 1957, becoming his most notable work.
Death
Tarquínio died in a plane crash in Rio de Janeiro on 22 December 1959, together with his wife Lucia Miguel Pereira (b. 1904).
|
who was Octávio Tarquínio de Sousa
|
Octávio Tarquínio de Sousa (7 September 1889 – 22 December 1959) was a Brazilian writer and historian who wrote several works on the history of Brazil and some of its historical figures, such as emperor Pedro I, José Bonifácio and Diogo Antônio Feijó. His most notable work was the compilation História dos Fundadores do Império do Brasil published in 1957. In 1918 he became the general attorney of the Federal Court of Audits. In 1932 he became one of the court's ministers, retiring in 1946. Tarquínio began his career as a writer in 1914, when he published the work Monólogo das Coisas, a collection of short stories with some of his memoirs, and began to actively collaborate with the journal O Estado de S. Paulo. Together with important Brazilian intellectuals who emerged in the 1930s, such as Gilberto Freyre, Caio Prado Júnior and Sérgio Buarque de Holanda, Tarquínio elaborated studies on Brazilian history, bringing new concepts to historiography and writing biographies about important historical Brazilian figures such as Bernardo Pereira de Vasconcelos, Evaristo da Veiga, Diogo Antônio Feijó, José Bonifácio and Pedro I. The biographies were written from 1937 to 1952 and later compiled in a single work: História dos Fundadores do Império do Brasil (History of the Founders of the Empire of Brazil), in 1957, becoming his most notable work. Tarquínio died in a plane crash in Rio de Janeiro on 22 December 1959, together with his wife Lucia Miguel Pereira (b. 1904).
|
null | false
| 28
|
We study the automatic evaluation of table-to-text systems when the references diverge from the table. We propose a new metric, PARENT, which shows the highest correlation with humans across a range of settings with divergent references in WikiBio. We also perform the first empirical evaluation of information extraction based metrics BIBREF1 , and find RG-F to be effective. Lastly, we show that PARENT is comparable to the best existing metrics when references are elicited by humans on the WebNLG data.
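A unigram-level sketch can illustrate what a PARENT-style metric rewards: predicted tokens earn precision credit when supported by either the reference or the table, and recall mixes reference coverage with table coverage, so a prediction faithful to the table is not punished for ignoring divergent reference content. This is an illustrative approximation under simplified assumptions, not the official PARENT definition:

```python
# Simplified, unigram-level sketch of a PARENT-style metric.
# The weight `w` and the linear recall mix are illustrative choices,
# not the actual PARENT formulation.

def parent_like(prediction, reference, table_values, w=0.5):
    pred = prediction.split()
    ref = set(reference.split())
    tab = set(table_values.split())
    if not pred:
        return 0.0
    # precision: fraction of predicted tokens supported by reference OR table
    precision = sum(1 for t in pred if t in ref or t in tab) / len(pred)
    # recall: weighted mix of reference recall and table recall
    ref_recall = sum(1 for t in ref if t in pred) / max(len(ref), 1)
    tab_recall = sum(1 for t in tab if t in pred) / max(len(tab), 1)
    recall = w * ref_recall + (1 - w) * tab_recall
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)  # F1-style combination

# A prediction grounded in the table keeps credit even when the
# reference diverges from it.
print(parent_like("john born 1950", "john was a painter", "john 1950"))
```

The design point is the `or` in the precision term: unlike BLEU-style metrics, table-supported tokens absent from a divergent reference are not treated as errors.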
We study the automatic evaluation of table-to-text systems when the references diverge from the table. We propose a new metric, PARENT, which shows the highest correlation with humans across a range of settings with divergent references in WikiBio.
|
What new metric does the team propose for automatic evaluation of table-to-text generation?
|
PARENT.
|
null | false
| null |
Why should you stay at our rental cabin in Speedway, Indiana?
|
The Indianapolis 500 comes to Speedway, Indiana each year in May. It's the largest single-day sporting event in the world, and truly one you don't want to miss! The buzz of the cars speeding by, the crowds cheering in a way you can't experience at traditional sporting events, all under the warm sun. Staying in our cabin puts you close to the action, within walking distance of the track, so you can slip back whenever you want a quick break. The cabin has great views of the track and is close to many restaurants and shopping vendors during the event.
|
|
null | false
| null |
The guitar is a fretted musical instrument that typically has six strings. It is usually held flat against the player's body and played by strumming or plucking the strings with the dominant hand, while simultaneously pressing selected strings against frets with the fingers of the opposite hand. A plectrum or individual finger picks may also be used to strike the strings. The sound of the guitar is projected either acoustically, by means of a resonant chamber on the instrument, or amplified by an electronic pickup and an amplifier.
The guitar is classified as a chordophone – meaning the sound is produced by a vibrating string stretched between two fixed points. Historically, a guitar was constructed from wood with its strings made of catgut. Steel guitar strings were introduced near the end of the nineteenth century in the United States; nylon strings came in the 1940s. The guitar's ancestors include the gittern, the vihuela, the four-course Renaissance guitar, and the five-course baroque guitar, all of which contributed to the development of the modern six-string instrument.
There are three main types of modern guitar: the classical guitar (Spanish guitar/nylon-string guitar), the steel-string acoustic guitar or electric guitar, and the Hawaiian guitar (played across the player's lap). Traditional acoustic guitars include the flat top guitar (typically with a large sound hole) or an arch top guitar, which is sometimes called a "jazz guitar". The tone of an acoustic guitar is produced by the strings' vibration, amplified by the hollow body of the guitar, which acts as a resonating chamber. The classical Spanish guitar is often played as a solo instrument using a comprehensive fingerstyle technique where each string is plucked individually by the player's fingers, as opposed to being strummed. The term "finger-picking" can also refer to a specific tradition of folk, blues, bluegrass, and country guitar playing in the United States.
|
how many strings guitar have?
how guitar projects sound?
what are different types of guitar?
|
A typical guitar has six strings, but there are other guitars (e.g., the bass guitar) which have 4–6 strings.
The sound of the guitar is projected either acoustically, by means of a resonant chamber on the instrument, or amplified by an electronic pickup and an amplifier.
There are three main types of modern guitar: the classical guitar, the steel-string acoustic guitar or electric guitar, and the Hawaiian guitar, which is played across the player's lap.
|
null | false
| 394
|
In this paper, we describe our approach and present the results for our participation in task 1, i.e., multilingual information extraction, of the CLEF eHealth 2018 challenge BIBREF0. More precisely, this task consists in automatically coding death certificates using the International Classification of Diseases, 10th revision (ICD-10) BIBREF1.
We addressed the challenge by matching ICD-10 terminology entries to text phrases in death certificates. Matching text phrases to medical concepts automatically is important to facilitate tasks such as search, classification or organization of biomedical textual contents BIBREF2 . Many concept recognition systems already exist BIBREF2 , BIBREF3 . They use different approaches and some of them are open source. We developed a general purpose biomedical semantic annotation tool for our own needs. The algorithm was initially implemented to detect drugs in a social media corpora as part of the Drugs-Safe project BIBREF4 . We adapted the algorithm for the ICD-10 coding task. The main motivation in participating in the challenge was to evaluate and compare our system with others on a shared task.
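The core of such terminology matching can be sketched as a longest-phrase dictionary lookup over the certificate text. The three-entry dictionary below is a hypothetical stand-in for the full ICD-10 terminology, and the matching strategy (longest phrases first, with matched spans blanked out) is one simple design, not the authors' exact algorithm:

```python
# Minimal sketch of dictionary-based ICD-10 coding for a death certificate line.
# ICD10_DICT is a tiny illustrative subset; a real system loads the full
# ICD-10 terminology.

ICD10_DICT = {
    "myocardial infarction": "I21",
    "cardiac arrest": "I46",
    "pneumonia": "J18",
}

def code_line(text):
    """Return (phrase, code) pairs found in `text`, matching longer
    terminology entries before shorter ones."""
    text = text.lower()
    matches = []
    for phrase in sorted(ICD10_DICT, key=len, reverse=True):
        if phrase in text:
            matches.append((phrase, ICD10_DICT[phrase]))
            text = text.replace(phrase, " ")  # blank out so subphrases don't re-match
    return matches

print(code_line("Acute myocardial infarction leading to cardiac arrest"))
```

Matching longest entries first is the usual safeguard in dictionary annotators: it prevents a short entry from stealing part of a longer, more specific term.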
The main motivation in participating in the challenge was to evaluate and compare our system with others on a shared task.
|
What was the main motivation for participating in the challenge?
|
The main motivation in participating in the challenge was to evaluate and compare our system with others on a shared task.
|
null | false
| null |
Can you put metal in a microwave?
|
Most metals should not be put in a microwave oven.
|
|
null | false
| null |
Toledo (Latin: Toletum) is mentioned by the Roman historian Livy (ca. 59 BCE – 17 CE) as urbs parva, sed loco munita ("a small city, but fortified by location"). Roman general Marcus Fulvius Nobilior fought a battle near the city in 193 BCE against a confederation of Celtic tribes; defeating them and capturing a king called Hilermus. At that time, Toletum was a city of the Carpetani tribe, and part of the region of Carpetania. It was incorporated into the Roman Empire as a civitas stipendiaria, (a tributary city of non-citizens) and later a municipium. With this status, city officials obtained Roman citizenship for public service, and the forms of Roman law and politics were increasingly adopted. At approximately this time, a Roman circus, city walls, public baths, and a municipal water supply and storage system were constructed in Toletum.
|
Extract all of the names of people mentioned in this paragraph and list them using bullets in the format {Name}
|
• Livy
• Marcus Fulvius Nobilior
• Hilermus
|
1705.00108
| false
| null |
In our final system, after pre-training the forward and backward LMs separately, we remove the top layer softmax and concatenate the forward and backward LM embeddings to form bidirectional LM embeddings, i.e., $\mathbf{h}^{LM}_k = [\overrightarrow{\mathbf{h}}^{LM}_k; \overleftarrow{\mathbf{h}}^{LM}_k]$. Note that in our formulation, the forward and backward LMs are independent, without any shared parameters.
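The concatenation step described above can be sketched per token. The toy hidden-state lists below stand in for the outputs of two real pre-trained LMs; the only point illustrated is that the two directions are computed independently and simply joined:

```python
# Sketch of forming bidirectional LM embeddings by per-token concatenation.
# `forward_states` and `backward_states` stand in for hidden states from two
# independently pre-trained LMs (no shared parameters between them).

def concat_bilm(forward_states, backward_states):
    """Concatenate [h_fwd; h_bwd] for each token position."""
    assert len(forward_states) == len(backward_states)
    return [f + b for f, b in zip(forward_states, backward_states)]

fwd = [[0.1, 0.2], [0.3, 0.4]]   # forward LM states for a 2-token sentence
bwd = [[0.5, 0.6], [0.7, 0.8]]   # backward LM states for the same tokens
bilm = concat_bilm(fwd, bwd)
print(bilm)  # each token embedding now has dimension 2 + 2 = 4
```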
In our final system, after pre-training the forward and backward LMs separately, we remove the top layer softmax and concatenate the forward and backward LM embeddings to form bidirectional LM embeddings, i.e., $\mathbf{h}^{LM}_k = [\overrightarrow{\mathbf{h}}^{LM}_k; \overleftarrow{\mathbf{h}}^{LM}_k]$.
|
how are the bidirectional lms obtained?
|
They pre-train the forward and backward LMs separately, remove the top layer softmax, and concatenate the two to obtain the bidirectional LM embeddings.
|
null | false
| null |
The 1926–27 season was the 46th season in the history of Nelson Football Club and their sixth campaign as a professional club in the Football League. Nelson ended the season with a record of 22 wins, 7 draws and 13 defeats, helping the team to a total of 51 points. The team started the campaign well—winning six of the first nine matches—and lost only three matches during the first five months of the campaign. In the 1926–27 season, Nelson recorded several large victories, including a 7–0 win over Accrington Stanley and a 7–1 defeat of Crewe Alexandra. Nelson were consistently near the top of the league table throughout the campaign, but were never able to supplant Stoke City from the top position, and a run of six consecutive losses at the end of the season saw them finish in fifth-place in the league.
|
How many large victories did Nelson have and what were the scores?
|
2 large wins: 7–0 over Accrington Stanley and 7–1 over Crewe Alexandra
|