| paper_id (string) | yes_no (bool) | paper_index (int64) | evidence (string) | question (string) | answer (string) |
|---|---|---|---|---|---|
null | false
| null |
Is it possible to build a society that runs completely on socialism without any capitalism elements?
|
It is theoretically possible to build a society that runs completely on socialism without any capitalist elements, but based on human history so far, there are challenges that are difficult to overcome. Because a socialist society relies on a centralized organization to distribute resources rather than on a free market, the challenges include inefficient decision-making, corruption of power, etc.
|
|
null | false
| null |
What is a benefit of salsa dancing?
|
Salsa dancing is a great form of exercise. It provides a great cardio workout and helps you work on balance and coordination. In addition, it keeps you engaged mentally, making it a workout of the body and the mind.
|
|
null | false
| null |
What's the capital of Brazil?
|
The capital of Brazil is Brasília.
|
|
null | false
| 82
|
Text simplification aims to reduce the lexical and structural complexity of a text while retaining its semantic meaning, which can help children, non-native speakers, and people with cognitive disabilities to understand text better. The methods of automatic text simplification can be generally divided into three categories: lexical simplification (LS) BIBREF0, BIBREF1, rule-based BIBREF2, and machine translation (MT) BIBREF3, BIBREF4. LS is mainly used to simplify text by substituting infrequent and difficult words with frequent and easier words. However, there are several challenges for the LS approach: first, a great number of transformation rules are required for reasonable coverage and should be applied based on the specific context; second, the syntax and semantic meaning of the sentence are hard to retain. Rule-based approaches use hand-crafted rules for lexical and syntactic simplification, for example, substituting difficult words in a predefined vocabulary. However, such approaches need a lot of human involvement to manually define these rules, and it is impossible to give all possible simplification rules. The MT-based approach has attracted great attention in the last several years; it addresses text simplification as a monolingual machine translation problem, translating from 'ordinary' to 'simplified' sentences.
In recent years, neural machine translation (NMT), a newly proposed deep learning approach, has achieved very impressive results BIBREF5, BIBREF6, BIBREF7. Unlike traditional phrase-based machine translation systems, which operate on small components separately, an NMT system is trained end-to-end, without the need for external decoders, language models or phrase tables. Therefore, existing NMT architectures have been used for text simplification BIBREF8, BIBREF4. However, most recent work using NMT is limited by training data that are scarce and expensive to build. Language models trained on simplified corpora have played a central role in statistical text simplification BIBREF9, BIBREF10. One main reason is that the amount of available simplified corpora typically far exceeds the amount of parallel data. The performance of models can typically be improved when trained on more data. Therefore, we expect simplified corpora to be especially helpful for NMT models.
In contrast to previous work, which uses existing NMT models, we explore a strategy to include simplified training corpora in the training process without changing the neural network architecture. We first propose to pair simplified training sentences with synthetic ordinary sentences during training, and treat this synthetic data as additional training data. We obtain synthetic ordinary sentences through back-translation, i.e. an automatic translation of the simplified sentence into the ordinary sentence BIBREF11. Then, we mix the synthetic data into the original (simplified-ordinary) data to train the NMT model. Experimental results on two publicly available datasets show that mixing simplified sentences into the training set improves the text simplification quality of NMT models over an NMT model using only the original training data.
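To make the data-mixing idea concrete, here is a minimal sketch (not the authors' code; `reverse_model` and its `translate` method are hypothetical placeholders for any simplified-to-ordinary NMT model):

```python
# Sketch of back-translation-based augmentation for simplification NMT.
# `reverse_model.translate` is a placeholder for a simplified->ordinary model.
def build_training_data(parallel_pairs, simplified_monolingual, reverse_model):
    """parallel_pairs: list of (ordinary, simplified) sentence pairs.
    simplified_monolingual: simplified sentences with no ordinary counterpart.
    Returns the original pairs plus synthetic (back-translated) pairs."""
    synthetic_pairs = []
    for simple in simplified_monolingual:
        # Back-translate: generate a synthetic 'ordinary' source sentence.
        synthetic_ordinary = reverse_model.translate(simple)
        synthetic_pairs.append((synthetic_ordinary, simple))
    # Mix the synthetic data into the original (simplified-ordinary) data.
    return parallel_pairs + synthetic_pairs
```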
However, there are several challenges for the LS approach: first, a great number of transformation rules are required for reasonable coverage and should be applied based on the specific context; second, the syntax and semantic meaning of the sentence are hard to retain.
|
What challenges are faced by the lexical simplification approach?
|
First, a great number of transformation rules are required for reasonable coverage and should be applied based on the specific context; second, the syntax and semantic meaning of the sentence are hard to retain.
|
null | false
| null |
Creative ways to call in sick at work
|
Here are a few ways to call in sick at work:
- "Unfortunately, the salmon from last night did not sit well with me and I've been having severe stomach issues"
- "I've been experiencing flu like symptoms and would need to take the day off to check for any Covid related infections"
- "I need to visit the doctor urgently for a medical issue"
- "I have a bad migraine and will not be able to join in any meetings today, I will still be available on email and chat, if needed"
|
|
null | false
| 29
|
Though Bengali is the seventh most spoken language in terms of number of native speakers BIBREF23, there is no standard corpus of questions available BIBREF0. We have collected a total of 3500 questions from the Internet and other sources such as books of general knowledge questions, history, etc. The corpus contains the questions and the classes each question belongs to.
The set of question categories is known as the question taxonomy BIBREF0. We have used the two-layer taxonomy proposed by Xin Li and Dan Roth BIBREF24. This two-layer taxonomy is made up of two levels: coarse classes and finer classes. There are six coarse classes (Numeric, Location, Entity, Description, Human and Abbreviation) and fifty finer classes, such as city, state, mountain, distance, count, definition, group, expression, substance, creative, vehicle, etc., as shown in Table I BIBREF0. A coarse-grained description of a system denotes large components, while a fine-grained description denotes the smaller sub-components of which the larger ones are composed.
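For illustration only (the coarse-to-finer groupings below are an abridged sketch built from the class names mentioned above, not the full Table I), the two-layer taxonomy can be represented as a simple mapping:

```python
# Abridged sketch of the two-layer question taxonomy (coarse -> finer classes).
TAXONOMY = {
    "Numeric": ["distance", "count"],
    "Location": ["city", "state", "mountain"],
    "Entity": ["substance", "creative", "vehicle"],
    "Description": ["definition"],
    "Human": ["group"],
    "Abbreviation": ["expression"],
}

def coarse_class_of(finer_class):
    """Return the coarse class that a finer class belongs to, or None."""
    for coarse, finers in TAXONOMY.items():
        if finer_class in finers:
            return coarse
    return None

print(coarse_class_of("mountain"))  # Location
```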
Though Bengali is the seventh most spoken language in terms of number of native speakers [6], there is no standard corpus of questions available [14].
|
Is there any Bengali standard corpus of questions?
|
No, there isn't.
|
null | false
| 389
|
We introduce an open-domain question answering system based on a Chinese knowledge graph in this paper. We analyze the questions and find that most of the answers to the questions are within two hops. Therefore, we only solve the problem within two hops to reduce the complexity of the system. The system consists of a topic entity selection module, a relationship recognition module and an answer selection module. Firstly, we construct a scoring mechanism to select the core entity of the question; then we score the relationships in the two-hop subgraph of the topic entity; finally, we build a classifier to judge whether a question is simple or complicated, so that we can choose the final relationship and generate a SPARQL query.
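As a rough illustration of the final step (a hypothetical sketch, not the system's actual code; the entity and relation names are made up), a one-hop or two-hop SPARQL query can be generated depending on whether the question is judged simple or complicated:

```python
# Hypothetical sketch: generate a one-hop or two-hop SPARQL query over the
# knowledge graph, depending on the simple/complicated classification.
def generate_sparql(topic_entity, relations):
    if len(relations) == 1:  # simple question: one hop
        return f"SELECT ?x WHERE {{ <{topic_entity}> <{relations[0]}> ?x . }}"
    # complicated question: two hops through an intermediate node ?x
    return (f"SELECT ?y WHERE {{ <{topic_entity}> <{relations[0]}> ?x . "
            f"?x <{relations[1]}> ?y . }}")

print(generate_sparql("Yao_Ming", ["wife"]))                 # one-hop query
print(generate_sparql("Yao_Ming", ["wife", "birthplace"]))   # two-hop query
```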
Firstly, we construct a scoring mechanism to select the core entity of the question; then we score the relationships in the two-hop subgraph of the topic entity; finally, we build a classifier to judge whether a question is simple or complicated, so that we can choose the final relationship and generate a SPARQL query.
|
What is constructed to select the core entity of the question?
|
A scoring mechanism.
|
1906.11180
| false
| null |
In this study, we investigate KB literal canonicalization using a combination of RNN-based learning and semantic technologies. We first predict the semantic types of a literal by: (i) identifying candidate classes via lexical entity matching and KB queries; (ii) automatically generating positive and negative examples via KB sampling, with external semantics (e.g., from other KBs) injected for improved quality; (iii) training classifiers using relevant subject-predicate-literal triples embedded in an attentive bidirectional RNN (AttBiRNN); and (iv) using the trained classifiers and KB class hierarchy to predict candidate types. The novelty of our framework lies in its knowledge-based learning; this includes automatic candidate class extraction and sampling from the KB, triple embedding with different importance degrees suggesting different semantics, and using the predicted types to identify a potential canonical entity from the KB. We have evaluated our framework using a synthetic literal set (S-Lite) and a real literal set (R-Lite) from DBpedia BIBREF0 . The results are very promising, with significant improvements over several baselines, including the existing state-of-the-art.
We have evaluated our framework using a synthetic literal set (S-Lite) and a real literal set (R-Lite) from DBpedia BIBREF0 .
|
What KB is used in this work?
|
The answers are shown as follows:
* DBpedia
|
null | false
| null |
What should I wear in the summer?
|
In the summer people usually wear shorts, tank tops or tee shirts. Women may choose to wear skirts, skorts, or sun dresses. Flip flops and sandals are the most common footwear choice, but sneakers are also a good choice. To protect against sun damage, most people wear a hat, a visor, or a light jacket to shield their skin from sun exposure.
|
|
null | false
| null |
Sir Sean Connery (born Thomas Connery; 25 August 1930 – 31 October 2020) was a Scottish actor. He was the first actor to portray fictional British secret agent James Bond on film, starring in seven Bond films between 1962 and 1983. Originating the role in Dr. No, Connery played Bond in six of Eon Productions' entries and made his final Bond appearance in the non-Eon-produced Never Say Never Again; if non-Eon-produced Bond movies are included, Connery shares the record for the most portrayals as James Bond with Roger Moore (with 7 apiece). Following his third appearance as Bond in Goldfinger (1964), in June 1965 Time magazine observed "James Bond has developed into the biggest mass-cult hero of the decade"
|
Who is Sean Connery?
|
Scottish Actor
|
null | false
| null |
Politically, the organization sought better relations with the newly autonomous Central and Eastern European states, and diplomatic forums for regional cooperation between NATO and its neighbours were set up during this post-Cold War period, including the Partnership for Peace and the Mediterranean Dialogue initiative in 1994, the Euro-Atlantic Partnership Council in 1997, and the NATO–Russia Permanent Joint Council in 1998. At the 1999 Washington summit, Hungary, Poland, and the Czech Republic officially joined NATO, and the organization also issued new guidelines for membership with individualized "Membership Action Plans". These plans governed the addition of new alliance members: Bulgaria, Estonia, Latvia, Lithuania, Romania, Slovakia, and Slovenia in 2004, Albania and Croatia in 2009, Montenegro in 2017, and North Macedonia in 2020. The election of French President Nicolas Sarkozy in 2007 led to a major reform of France's military position, culminating with the return to full membership on 4 April 2009, which also included France rejoining the NATO Military Command Structure, while maintaining an independent nuclear deterrent.
|
Which countries became members of NATO between the years of 1999 and 2009?
|
Hungary, Poland, Czech Republic, Bulgaria, Estonia, Latvia, Lithuania, Romania, Slovakia, Slovenia, Albania, Croatia, France (rejoined)
|
1912.02481
| false
| null |
The Spearman $\rho $ correlation for fastText models on the curated small dataset (clean), C1, improves the baselines by a large margin ($\rho =0.354$ for Twi and 0.322 for Yorùbá) even with a small dataset. The improvement could be justified just by the larger vocabulary in Twi, but in the case of Yorùbá the enhancement is there with almost half of the vocabulary size. We found that adding some noisy texts (the C2 dataset) slightly improves the correlation for the Twi language but not for the Yorùbá language. The Twi language benefits from Wikipedia articles because their inclusion doubles the vocabulary and reduces the bias of the model towards religious texts. However, for Yorùbá, noisy texts often ignore diacritics or tonal marks, which increases the vocabulary size at the cost of an increase in ambiguity too. As a result, the correlation is slightly hurt. One would expect that training with more data would improve the quality of the embeddings, but the results obtained with the C3 dataset show that only high-quality data helps. The addition of JW300 boosts the vocabulary in both cases, but whereas for Twi the corpus mixes dialects and is noisy, for Yorùbá it is very clean and with full diacritics. Consequently, the best embeddings for Yorùbá are obtained when training with the C3 dataset, whereas for Twi, C2 is the best option. In both cases, the curated embeddings improve the correlation with human judgements on the similarity task by $\Delta \rho =+0.25$ or, equivalently, by an increase in $\rho $ of 170% (Twi) and 180% (Yorùbá).
One would expect that training with more data would improve the quality of the embeddings, but the results obtained with the C3 dataset show that only high-quality data helps.
|
What turn out to be more important high volume or high quality data?
|
The answers are shown as follows:
* only high-quality data helps
|
null | false
| 98
|
In general, building a corpus is carried out through four stages: (1) choosing the target of the corpus and the source of raw data; (2) building a guideline based on linguistic knowledge for annotation; (3) annotating or tagging the corpus based on the rules set in the guideline; and (4) reviewing the corpus to check for consistency issues.
A word segmentation corpus can be encoded using the B-I-O tagset, where B, I, and O denote begin of word, inside of word, and others, respectively. For example, the sentence "Megabit trên giây là đơn vị đo tốc độ truyền dẫn dữ liệu ." ("Megabit per second is a unit to measure the network traffic." in English) with the word boundary result "Megabit trên giây là đơn_vị đo tốc_độ truyền_dẫn dữ_liệu ." is encoded as "Megabit/B trên/B giây/B là/B đơn/B vị/I đo/B tốc/B độ/I truyền/B dẫn/I dữ/B liệu/I ./O".
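A minimal sketch of this encoding (an illustration, not code from the paper) is shown below:

```python
# Sketch of the B-I-O encoding described above: B = begin of word,
# I = inside of word, O = other (e.g., punctuation).
PUNCTUATION = {".", ",", "!", "?", ":", ";"}

def bio_encode(segmented_sentence):
    """Encode a word-segmented sentence (multi-syllable words joined by '_')
    as syllable/tag pairs."""
    tagged = []
    for token in segmented_sentence.split():
        if token in PUNCTUATION:
            tagged.append(token + "/O")
            continue
        syllables = token.split("_")
        tagged.append(syllables[0] + "/B")
        tagged.extend(syl + "/I" for syl in syllables[1:])
    return tagged

sentence = "Megabit trên giây là đơn_vị đo tốc_độ truyền_dẫn dữ_liệu ."
print(" ".join(bio_encode(sentence)))
# Megabit/B trên/B giây/B là/B đơn/B vị/I đo/B tốc/B độ/I truyền/B dẫn/I dữ/B liệu/I ./O
```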
Annotation guidelines are applied to ensure that the annotated corpus has fewer errors, since the annotation is performed manually. Even though there are guidelines for annotating, the available output corpora are still inconsistent. For example, for the Vietnamese Treebank corpus of the VLSP project, Nguyen et al. listed several Vietnamese word segmentation inconsistencies in the corpus based on POS information and n-gram sequences BIBREF14.
Currently, there are at least three word segmentation corpora used in Vietnamese word segmentation studies and systems. Firstly, Dinh et al. built the CADASA corpus from CADASA's books BIBREF15. Secondly, Nguyen et al. built the vnQTAG corpus from general news articles BIBREF7. More recently, Ngo et al. introduced the EVBCorpus corpus, which is collected from four sources: news articles, books, law documents, and novels. A part of EVBCorpus, EVBNews, was annotated with common NLP tags such as word segmentation, chunking, and named entities BIBREF16. All of these corpora are collected from news articles or book stories, and they are manually annotated with word boundary tags (as shown in Table I).
Currently, there are at least three word segmentation corpora used in Vietnamese word segmentation studies and systems. Firstly, Dinh et al. built the CADASA corpus from CADASA's books [16]. Secondly, Nguyen et al. built the vnQTAG corpus from general news articles [8]. More recently, Ngo et al. introduced the EVBCorpus corpus, which is collected from four sources: news articles, books, law documents, and novels. A part of EVBCorpus, EVBNews, was annotated with common NLP tags such as word segmentation, chunking, and named entities [17]. All of these corpora are collected from news articles or book stories, and they are manually annotated with word boundary tags (as shown in Table I).
|
What is the currently available word segmentation corpus in Vietnamese?
|
The CADASA corpus from CADASA's books, the vnQTAG corpus from general news articles, and the EVBCorpus from news articles, books, law documents, and novels.
|
null | false
| null |
Kamala Devi Harris (/ˈkɑːmələ ˈdeɪvi/ (listen) KAH-mə-lə DAY-vee; born October 20, 1964) is an American politician and attorney who is the 49th and current vice president of the United States. She is the first female vice president and the highest-ranking female official in U.S. history, as well as the first African-American and first Asian-American vice president. A member of the Democratic Party, she previously served as the attorney general of California from 2011 to 2017 and as a United States senator representing California from 2017 to 2021.
|
Who is Kamala Harris?
|
Kamala Devi Harris (/ˈkɑːmələ ˈdeɪvi/ (listen) KAH-mə-lə DAY-vee; born October 20, 1964) is an American politician and attorney who is the 49th and current vice president of the United States. She is the first female vice president and the highest-ranking female official in U.S. history, as well as the first African-American and first Asian-American vice president. A member of the Democratic Party, she previously served as the attorney general of California from 2011 to 2017 and as a United States senator representing California from 2017 to 2021.
Born in Oakland, California, Harris graduated from Howard University and the University of California, Hastings College of the Law. She began her career in the Alameda County District Attorney's Office, before being recruited to the San Francisco District Attorney's Office and later the City Attorney of San Francisco's office. In 2003, she was elected district attorney of San Francisco. She was elected Attorney General of California in 2010 and re-elected in 2014. Harris served as the junior United States senator from California from 2017 to 2021; she defeated Loretta Sanchez in the 2016 Senate election to become the second African-American woman and the first South Asian American to serve in the United States Senate. As a senator, she advocated for healthcare reform, federal de-scheduling of cannabis, a path to citizenship for undocumented immigrants, the DREAM Act, a ban on assault weapons, and progressive tax reform. She gained a national profile for her pointed questioning of Trump administration officials during Senate hearings, including Trump's second Supreme Court nominee Brett Kavanaugh, who was accused of sexual assault.
Harris sought the 2020 Democratic presidential nomination, but withdrew from the race prior to the primaries. Joe Biden selected her as his running mate, and their ticket went on to defeat the incumbent president, Donald Trump, and vice president, Mike Pence, in the 2020 election. Harris became vice president upon being inaugurated on January 20, 2021, alongside President Joe Biden.
|
null | false
| null |
Tell me whether these athletes play or have played in the National Football League (NFL) or the National Basketball Association (NBA): Kobe Bryant, Kawhi Leonard, LeBron James, Larry Bird, Michael Jordan
|
National Basketball Association (NBA)
|
|
null | false
| null |
Why did Captain Bligh sail to Tahiti?
|
The British Navy assigned then-Lieutenant William Bligh the task of sailing to Tahiti to collect seedlings of the breadfruit tree and transport them to British colonies in the Caribbean. The goal was to establish the breadfruit tree as a food source for the growing population in those Caribbean colonies.
|
|
null | false
| null |
Figure skating is a sport in which individuals, pairs, or groups perform on figure skates on ice. It was the first winter sport to be included in the Olympic Games, when contested at the 1908 Olympics in London. The Olympic disciplines are men's singles, women's singles, pair skating, and ice dance; the four individual disciplines are also combined into a team event, first included in the Winter Olympics in 2014. The non-Olympic disciplines include synchronized skating, Theater on Ice, and four skating. From intermediate through senior-level competition, skaters generally perform two programs (the short program and the free skate), which, depending on the discipline, may include spins, jumps, moves in the field, lifts, throw jumps, death spirals, and other elements or moves.
|
Can you please share some context on Figure skating using the passage given as a reference
|
1. Individuals, couples, or groups compete in the sport of figure skating on ice. When it was played at the 1908 Olympics in London, it became the first winter sport to be featured in the Olympic Games.
2. Men's singles, women's singles, pair skating, and ice dancing are the Olympic disciplines. In addition, the four individual disciplines are merged into a team event, which debuted in the Winter Olympics in 2014.
3. Skaters compete in two programmes—the short programme and the free skate—from intermediate to senior levels. Depending on the discipline, these programmes may contain spins, leaps, field moves, lifts, throw jumps, death spirals, and other features or moves.
4. The non-Olympic disciplines include synchronized skating, Theater on Ice, and four skating.
|
1911.06747
| false
| null |
We trained the DQN agent using an $\epsilon $-greedy policy with $\epsilon $ decreasing linearly from 1 to $0.1$ over $100,000$ steps. Additionally, we tuned a window size to include previous dialog turns as input and set $\gamma $ to $0.9$. We ran the method 30 times for $150,000$ steps, and in each run, after every 10,000 steps, we sampled $3,000$ dialog episodes with no exploration to evaluate the performance. The optimal parameters were found using Hyperopt BIBREF23 (see Appendix B). Figure FIGREF9 shows the simulation results during training. The Y-axis in the figure is the success rate of the agent (measured in terms of number of dialogs that resulted in launching a skill divided by total number of dialogs), and the X-axis is the number of learning steps. Given our choice of reward function, the increase in success rate is indicative of the agent learning to improve its policy over time. Furthermore, the RL agent outperformed the rule-based agent with average success rate of $68.00\% (\pm 2\%$) in simulation.
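As a minimal sketch of the exploration schedule and the success-rate metric described here (an illustration, not the authors' implementation):

```python
# Linear epsilon decay (1.0 -> 0.1 over 100,000 steps) and the success-rate metric.
def epsilon(step, start=1.0, end=0.1, decay_steps=100_000):
    """Linearly anneal epsilon from `start` to `end` over `decay_steps` steps."""
    if step >= decay_steps:
        return end
    return start + (end - start) * (step / decay_steps)

def success_rate(dialog_outcomes):
    """Fraction of dialogs that resulted in launching a skill."""
    return sum(dialog_outcomes) / len(dialog_outcomes)

print(epsilon(0), epsilon(50_000), epsilon(200_000))  # 1.0 0.55 0.1
print(success_rate([True, False, True, True]))        # 0.75
```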
The Y-axis in the figure is the success rate of the agent (measured in terms of number of dialogs that resulted in launching a skill divided by total number of dialogs), and the X-axis is the number of learning steps.
|
How did they measure effectiveness?
|
The answers are shown as follows:
* number of dialogs that resulted in launching a skill divided by total number of dialogs
|
null | false
| null |
Write a short story about a young aboriginal man seeking guidance on his place in the world. Have him consult a wise elder, who will share wisdom and perspective.
|
The young man stood apart from the other young men and watched them tease the girls and play around in general.
He sighed deeply. "Why am I so shy?" he asked himself. "Why can't I join in the games like the others?" The young man had asked himself these questions many times, and each time the answer was the same. He just didn't know. Was it that he felt that he was not worthy? Hardly. He was a kind and respectful person, and, he had to admit, a fine hunter. So what was wrong with him? He decided to speak to the Koradji, the wise man. Surely he would know. The Koradji was peering into his fire, deep in thought, when the young man approached, but somehow knew of his presence. "You are troubled," he said, and continued to gaze into the fire. "I seek your wisdom, Uncle," he replied, his voice hardly more than a whisper. "Wisdom comes at a price," said the Koradji, without raising his head. "But I have nothing to offer," the young man told him. "I think you have," said the Koradji, then added: "But that can wait. Now sit and tell me what is troubling you." The young man sat opposite the old man and told him everything. When he had finished, the Koradji nodded his head thoughtfully, then gazed into the young man's eyes.
"Now listen carefully and heed my words," he said. Then he continued. "This land is to you as it is to all living things.
Therefore, we have the right to breathe its air, drink its water and nourish our bodies with its bounty. We have no need to stand apart from others because we have much to contribute, and how much we contribute is in accordance with our own measure." There was a pause while he added wood to the fire. Then he continued. "Now hear these words and keep them in your mind. No one has the right to tell another that they may not do these things, because each of us are children of Mother Earth. Of course," he added quickly, "there will be those who stand above others, and those who are stronger, or of a different tribe, but it matters not, because we each have an equal right to our place in the sun, to live in accordance with our conscience." He waved a mosquito from his arm and smiled. "And, yes, even a mosquito". There was a long silence after the Koradji had finished speaking, then the young man spoke finally. "I have listened to your words Uncle, and they are very wise," he said, "but how can I pay you?"
|
|
null | false
| null |
Why is California the best state?
|
California has the best landscape in the country. We have some of the best mountains to ski on, most iconic beach spots, and richest soil for agriculture. What else could you ask for?
|
|
null | false
| null |
What is the difference between racquetball and tennis?
|
While racquetball and tennis are similar in that they both involve using a racquet to hit a ball back and forth, there are many differences between racquetball and tennis.
Racquetball is played in an enclosed room where players hit the ball back and forth against the same wall. In tennis, players also hit the ball back and forth but they hit it to each other over a net. In tennis, the ball has a fuzzy outer layer which causes the ball's trajectory to be more influenced by the spin on the ball. The rackets in tennis are larger and have higher tension.
A tennis court, measuring 78 ft long and 36 ft wide, is much larger than a racquetball court. In tennis, the court width is reduced to 27 ft for a singles match.
These are just a few of the differences but there are many more: scoring, attire, court markings, etc.
|
|
null | false
| null |
Who created Sesame Street?
|
Sesame Street was created by Jim Henson, Lloyd Morrisett, and Joan Ganz Cooney
|
|
1905.08949
| false
| null |
Past research took a reductionist approach, separately considering these two problems of “what” and “how” via content selection and question construction. Given a sentence or a paragraph as input, content selection selects a particular salient topic worthwhile to ask about and determines the question type (What, When, Who, etc.). Approaches either take a syntactic BIBREF11, BIBREF12, BIBREF13 or semantic BIBREF14, BIBREF3, BIBREF15, BIBREF16 tack, both starting by applying syntactic or semantic parsing, respectively, to obtain intermediate symbolic representations. Question construction then converts the intermediate representations to a natural language question, taking either a transformation- or template-based approach. The former BIBREF17, BIBREF18, BIBREF13 rearranges the surface form of the input sentence to produce the question; the latter BIBREF19, BIBREF20, BIBREF21 generates questions from pre-defined question templates. Unfortunately, such QG architectures are limiting, as their representation is confined to the variety of intermediate representations, transformation rules or templates.
In contrast, neural models motivate end-to-end architectures. Deep learned frameworks contrast with the reductionist approach, admitting approaches that jointly optimize for both the “what” and “how” in a unified framework. The majority of current NQG models follow the sequence-to-sequence (Seq2Seq) framework, which uses a unified representation and joint learning of content selection (via the encoder) and question construction (via the decoder). In this framework, traditional parsing-based content selection has been replaced by more flexible approaches such as attention BIBREF22 and copying mechanisms BIBREF23. Question construction has become completely data-driven, requiring far less labor compared to transformation rules and enabling better language flexibility compared to question templates.
Past research took a reductionist approach, separately considering these two problems of “what” and “how” via content selection and question construction.
In contrast, neural models motivate end-to-end architectures. Deep learned frameworks contrast with the reductionist approach, admitting approaches that jointly optimize for both the “what” and “how” in a unified framework.
|
What learning paradigms do they cover in this survey?
|
Considering "What" and "How" separately versus jointly optimizing for both.
|
null | false
| null |
If I have a dollar, two nickels and a quarter, how much money in total do I have. Please respond with words only, no symbols or numbers.
|
A dollar, two nickels and a quarter is equal to one dollar and thirty-five cents.
|
|
null | false
| 42
|
We evaluated our attention transformations on three language pairs. We focused on small datasets, as they are the most affected by coverage mistakes. We use the IWSLT 2014 corpus for De-En, the KFTT corpus for Ja-En BIBREF19 , and the WMT 2016 dataset for Ro-En. The training sets have 153,326, 329,882, and 560,767 parallel sentences, respectively. Our reason to prefer smaller datasets is that this regime is what brings more adequacy issues and demands more structural biases, hence it is a good test bed for our methods. We tokenized the data using the Moses scripts and preprocessed it with subword units BIBREF20 with a joint vocabulary and 32k merge operations. Our implementation was done on a fork of the OpenNMT-py toolkit BIBREF21 with the default parameters . We used a validation set to tune hyperparameters introduced by our model. Even though our attention implementations are CPU-based using NumPy (unlike the rest of the computation which is done on the GPU), we did not observe any noticeable slowdown using multiple devices.
As baselines, we use softmax attention, as well as two recently proposed coverage models:
We also experimented combining the strategies above with the sparsemax transformation.
As evaluation metrics, we report tokenized BLEU and METEOR (BIBREF22), as well as two new metrics, described next, that account for over- and under-translation.
We see that the PREDICTED strategy outperforms the others both in terms of BLEU and METEOR, albeit slightly.
|
Is the PREDICTED strategy performing better on BLEU?
|
Yes.
|
null | false
| null |
Was Dracula a real person?
|
Dracula is a fictitious character created by the British novelist Bram Stoker. Stoker drew inspiration from a Romanian prince called Vlad Dracul. Vlad led a Romanian province situated in the south of Transylvania for parts of the 15th century. Vlad was renowned at the time for his cruel measures to punish his enemies by impaling them. For this reason, he was also called Vlad the Impaler or, in Romanian, Vlad Țepeș.
|
|
null | false
| null |
"Virginia is for Lovers" is the tourism and travel slogan of the U.S. commonwealth of Virginia. Used since 1969, it has become a well-recognized and often imitated part of American jargon. In 2012, Advertising Age called "Virginia is for Lovers" "one of the most iconic ad campaigns in the past 50 years."
History
A team led by David N. Martin and George Woltz of Martin and Woltz Inc. of Richmond, Virginia created the slogan after winning the Virginia State Travel account in 1968. Originally, they had come up with history ads, "Virginia is for History Lovers"; beach ads, "Virginia is for Beach Lovers"; and mountain ads, "Virginia is for Mountain Lovers". This approach was eventually discarded as too limiting, and the qualifiers were dropped. Martin and Woltz Inc. eventually gained prominence and grew to become The Martin Agency. The Martin Agency says that, contrary to some claims, the slogan is not a reference to the United States Supreme Court's 1967 ruling in Loving v. Virginia, which legalized interracial marriage in the United States.
In 1969, the Virginia State Travel Service (now the Virginia Tourism Corporation) adopted the "Virginia is for Lovers" slogan and the first ad campaign using the tagline appeared in March 1969, in an issue of Modern Bride.
In 2009, "Virginia is for Lovers" was inducted into the Madison Avenue Advertising Walk of Fame, a creation of Advertising Week, the largest collection of advertising, marketing and media professionals in North America. These inductees were also included in the Advertising Icon Museum. Also in 2009, "Virginia is for Lovers" was acknowledged as one of the top ten tourism marketing campaigns of all time by Forbes.com. In 2016, the Virginia Tourism Corporation began selling apparel with a rainbow-colored heart in the logo as part of an LGBT tourism promotion campaign. The slogan began appearing on the state's license plates in 2014 and the state's welcome signs in 2015.
In popular culture
The slogan has been mentioned by a variety of artists over the years. In 2005, post-hardcore band Hawthorne Heights alluded to the phrase in the title of their single “Ohio Is for Lovers,” which would become widely regarded as an anthem of the early 2000’s emo music scene. American Idol winner Jordin Sparks recorded a song called "Virginia is for Lovers" in 2007, which was featured as a bonus track on her self-titled debut album. The slogan is mentioned in The Hold Steady song "Killer Parties", and Willie Adler, guitarist for Lamb of God, has the slogan printed on the neck of his custom guitars. The slogan is also mentioned in the Kenny Chesney song "Get Along". Virginia Beach-based rap group Clipse include the slogan in their song "Virginia," from their 2002 album Lord Willin', proclaiming: "Virginia's for lovers, but trust there's hate here / For out-of-towners, who think they're gon' move weight here."
|
Summarize the meaning of "Lovers" in the slogan "Virginia is for lovers"
|
- "Virginia is for Lovers" is a tourism and travel slogan adopted by the Virginia State Travel Service (now the Virginia Tourism Corporation) since 1969
- The "lovers" in the slogan was originally attached to qualifiers such as history lovers, beach lovers, and mountain lovers. The qualifiers were considered too limited and eventually dropped
- Contrary to claims, the word "Lovers" in the slogan bears no reference to the United States Supreme Court's case Loving v. Virginia which legalized interracial marriage in 1967
|
1907.04152
| true
| null |
Clustering was performed separately for each specialty of doctors. Figure FIGREF11 illustrates two-dimensional projections of visit embeddings coloured by clusters. The projections were created by t-SNE algorithm BIBREF25 . For some domains clusters are very clear and separated (Figure FIGREF11 ). This corresponds with the high stability of clustering measured by Rand index.
We also examined the distribution of doctors' IDs in the obtained clusters. It turned out that some clusters covered almost exactly descriptions written by one doctor. This situation took place in the specialties where clusters are separated with large margins (e.g. psychiatry, pediatrics, cardiology). Figure FIGREF13 (a) shows correspondence analysis between doctors' IDs and clusters for psychiatry clustering.
Clustering was performed separately for each specialty of doctors.
We also examined the distribution of doctors' IDs in the obtained clusters. It turned out that some clusters covered almost exactly descriptions written by one doctor. This situation took place in the specialties where clusters are separated with large margins (e.g. psychiatry, pediatrics, cardiology).
|
Do they explore similarity of texts across different doctors?
|
Yes.
|
1912.09713
| false
| null |
We use three encoder-decoder neural architectures as baselines: (1) LSTM+attention as an LSTM BIBREF19 with attention mechanism BIBREF20; (2) Transformer BIBREF21 and (3) Universal Transformer BIBREF22.
We use three encoder-decoder neural architectures as baselines: (1) LSTM+attention as an LSTM BIBREF19 with attention mechanism BIBREF20; (2) Transformer BIBREF21 and (3) Universal Transformer BIBREF22.
|
What three machine architectures are analyzed?
|
The answers are shown as follows:
* LSTM+attention
* Transformer
* Universal Transformer
|
null | false
| null |
Identify which car manufacturer is German or American: Dodge, Volkswagen
|
Dodge is American, Volkswagen is German
|
|
null | false
| null |
what are the different types of music genres
|
A music genre is a category of music. Some examples of music genres are rock, pop, hip-hop, and jazz.
|
|
null | false
| null |
What are the key elements to a companies income statement?
|
Gross Revenue, Net Revenue, Cost of Goods Sold, Gross Profit, Selling, General & Administrative (SG&A) Expenses, Marketing, Operating Expenses, and Operating Income
|
|
null | false
| null |
What is the .NET Framework?
|
.NET Framework is a managed execution environment for Windows that allows software developers to create a software application in one programming language and be assured that the app will be able to work with code written in other languages.
|
|
null | false
| null |
Falaknuma is a palace in Hyderabad, Telangana, India. It originally belonged to the Paigah family, and was later owned by the Nizam of Hyderabad. It is on a hillock and covers a 13-hectare (32-acre) area in Falaknuma, 5 kilometres (3.1 mi) from Charminar.
It was built by Nawab Sir Viqar-ul-Umra, Prime Minister of Hyderabad and the uncle & brother-in-law of the sixth Nizam. Falak-numa means "Like the Sky" or "Mirror of Sky" in Urdu.
Design
An English architect William Ward Marret designed the palace. Sir Vicar's monogram "VO" is on the furniture, walls and ceiling of the palace. It is made completely with Italian marble with stained-glass windows and covers an area of 93,970 square metres (1,011,500 sq ft).
The palace was built in the shape of a scorpion with two stings spread out as wings in the north. The middle part is occupied by the main building and the kitchen, Gol Bangla, Zenana Mehal, and harem quarters stretch to the south. The Nawab was an avid traveller, and his influences show in the architecture, which combines Italian and Tudor influences.
History
Falaknuma Palace, photographed by Lala Deen Dayal in 1900.
Sir Viqar-ul-Umra, the Prime Minister of Hyderabad, after a visit to Europe, decided to build a European styled residence for himself. The foundation stone for the construction was laid by Sir Vicar on 3 March 1884. It took nine years to complete the construction and furnish the palace. Sir Vicar moved into the Gol Bangla and Zanana Mahel of the Falaknuma Palace in December 1890 and closely monitored the finishing work at the Mardana portion.
He used the palace as his private residence until the palace was handed over to the 6th Nizam of Hyderabad around 1897–1898.
The palace was built and furnished at a cost of ₹4 million (equivalent to ₹1.6 billion or US$20 million in 2020), which necessitated borrowing money from the Bank of Bengal. In the spring of 1897, the sixth Nizam of Hyderabad, Mir Mahbub Ali Khan was invited to stay at the palace. He extended his stay to a week, then a fortnight, and then a month, which prompted Sir Viqar to offer it to him. The Nizam accepted but paid some of the value of the palace; the Paigah family maintains that around ₹2 million (equivalent to ₹510 million or US$6.4 million in 2020) was paid.
The Nizam used the palace as a guest house for the royal guests visiting the kingdom of Hyderabad. The list of royal visitors included King George V, Queen Mary, Edward VIII and Tsar Nicholas II. The palace fell into disuse after the 1950s. The last important guest was the President of India, Rajendra Prasad, in 1951.
The palace was then restored after being leased by the Taj Group of Hotels. The restoration, which began in 2000, took ten years, and was managed by Princess Esra, the first wife of Mukarram Jah. Now, the hotel is again used to host guests in Hyderabad, such as Aga Khan IV, Ivanka Trump and Narendra Modi.
|
What is Falaknuma palace?
|
Falaknuma Palace is a palace in Hyderabad, Telangana, India, which has now been turned into a luxury hotel called Taj Falaknuma. Falaknuma was built by the Paigah family and was later owned by the Nizam of Hyderabad. The palace covers a 13-hectare (32-acre) area with excellent architectural patterns and lush lawns and courtyards.
Falak-numa means "Like the Sky" or "Mirror of Sky" in Urdu. The palace has 60 rooms and 22 halls. It has considerable collections of paintings, statues, furniture, manuscripts, books, an extensive jade collection, and Venetian chandeliers. The dining hall can seat 101 guests. The chairs are made of carved rosewood with green leather upholstery.
|
null | false
| null |
The submarine had a maximum surface speed of 15.6 knots (28.9 km/h; 18.0 mph) and a submerged speed of 17.2 knots (31.9 km/h; 19.8 mph). When running on silent motors the boat could operate at a speed of 6.1 knots (11.3 km/h; 7.0 mph). When submerged, the boat could operate at 5 knots (9.3 km/h; 5.8 mph) for 340 nautical miles (630 km; 390 mi); when surfaced, she could travel 15,500 nautical miles (28,700 km; 17,800 mi) at 10 knots (19 km/h; 12 mph). U-3030 was fitted with six 53.3 cm (21.0 in) torpedo tubes in the bow and four 2 cm (0.79 in) C/30 anti-aircraft guns. She could carry twenty-three torpedoes or seventeen torpedoes and twelve mines. The complement was five officers and fifty-two men.
|
What weaponry was the U-3030 submarine equipped with?
|
The U-3030 submarine was fitted with six 53.3 cm (21.0 in) torpedo tubes in the bow and four 2 cm (0.79 in) C/30 anti-aircraft guns. She could carry twenty-three torpedoes, or seventeen torpedoes and twelve mines.
|
null | false
| null |
What is a DAW?
|
DAW is an acronym for Digital Audio Workstation. A DAW can either be a specialized digital hardware device (or multiple devices), or more commonly a software application, a computer, an audio interface and/or MIDI (musical instrument digital interface) hardware, all combined to enable digital recording, sampling, editing, mixing, and mastering of audio content. Software-based DAWs come in many forms. Most are created to emulate the workflow of a traditional analog multi-track recording environment. However, some DAWs are created to cater to the needs of electronic music creators, video content producers, and live performers. Audio production was once the domain of highly trained professionals who invested hundreds of thousands, if not millions, of dollars in expensive analog equipment to outfit large studios; today, high-quality, Grammy-award-winning audio content has been produced with modern equipment costing a few hundred dollars, connected to a laptop in a small bedroom studio in a content creator's home.
|
|
null | false
| null |
What is the name of the flag carrier airline of the Netherlands?
|
The flag carrier airline of the Netherlands is KLM.
|
|
null | false
| null |
Identify which instrument is string or percussion: Fontomfrom, Ruan
|
Ruan is string, Fontomfrom is percussion.
|
|
null | false
| null |
How do I fix the engine of my car?
|
This would depend on how complex the repair job is. But essentially you will need
1. A basic set of tools including spanners, screwdrivers and pliers.
2. An automotive manual for the model of your car. For example a Haynes manual.
3. Once you have diagnosed the problem, watch some videos on YouTube on how to fix the problem.
4. Based on the knowledge that you have gathered, attempt to fix your car
If you are unable to fix it, go to an experienced mechanic.
|
|
null | false
| null |
What trait do you think everyone should have?
|
There are many important traits and virtues that can lead to a life of meaning. Perhaps the most important of all is resilience. Resilience allows one to live with the ability to adapt, operate from a mindset of hopefulness and willingness to learn new things. Undoubtedly life will present many trials and obstacles. Resilience will allow you to move forward from difficult situations with optimism for the future and the ability to cultivate learning from mistakes.
|
|
null | false
| 147
|
We train NMT with RAT to achieve better query translations. We improve a recently proposed NMT baseline, Transformer, that achieves state-of-the-art results for sentence pairs in some languages BIBREF8 . We discuss Transformer, RAT, and our multi-task learning architecture that achieves balanced translation.
For this study, we use a state-of-the-art NMT model, Transformer (Vaswani et al., 2017), which uses positional encoding and a self-attention mechanism to achieve three benefits over existing convolutional or recurrent neural network based models: (a) reduced computational complexity of each layer, (b) parallel computation, and (c) shorter path length between long-range dependencies.
|
What are the benefits of using positional encoding and self-attention mechanism?
|
(a) reduced computational complexity of each layer, (b) parallel computation, and (c) shorter path length between long-range dependencies.
|
null | false
| 52
|
Specifically, we conducted a feature ablation study to assess the informativeness of each feature group and a feature elimination study to determine the optimal feature sets for classifying Twitter tweets. We leveraged an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13 . The dataset contains 9,473 annotations for 9,300 tweets. Each tweet is annotated as no evidence of depression (e.g., “Citizens fear an economic depression") or evidence of depression (e.g., “depressed over disappointment"). If a tweet is annotated evidence of depression, then it is further annotated with one or more depressive symptoms, for example, depressed mood (e.g., “feeling down in the dumps"), disturbed sleep (e.g., “another restless night"), or fatigue or loss of energy (e.g., “the fatigue is unbearable") BIBREF10 . For each class, every annotation (9,473 tweets) is binarized as the positive class e.g., depressed mood=1 or negative class e.g., not depressed mood=0.
We leveraged an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms.
|
Based on what model the dataset used by them was constructed?
|
A hierarchical model of depression-related symptoms.
|
null | false
| 214
|
Removing duplicates, there are 21,747 unique intervention responses in the aggregated Gab dataset and 7,641 in the aggregated Reddit dataset. Despite the large diversity of the collected responses for intervention, we find workers tend to have certain strategies for intervention.
Identify Hate Keywords: One of the most common strategies is to identify the inappropriate terms in the post and then urge the user to stop using those words. For example, “The C word and language attacking gender is unacceptable. Please refrain from future use.” This strategy is often used when the hatred in the post is mainly conveyed by specific hate keywords.
Categorize Hate Speech: This is another common strategy used by the workers. The workers classify hate speech into different categories, such as racist, sexist, homophobic, etc. This strategy is often combined with identifying hate keywords or targets of hatred. For example, “The term "fa**ot" comprises homophobic hate, and as such is not permitted here.”
Positive Tone Followed by Transitions: This is a strategy where the response consists of two parts combined with a transitional word, such as “but” and “even though”. The first part starts with affirmative terms, such as “I understand”, “You have the right to”, and “You are free to express”, showing kindness and understanding, while the second part alerts the users that their post is inappropriate. For example, “I understand your frustration, but the term you have used is offensive towards the disabled community. Please be more aware of your words.” Intuitively, compared with a response that directly warns, this strategy is likely more acceptable to users and more likely to calm down a quarrel full of hate speech.
Suggest Proper Actions: Besides warning and discouraging the users from continuing hate speech, workers also suggest the actions that the user should take. This strategy can either be combined with other strategies mentioned above or be used alone. In the latter case, a negative tone can be greatly alleviated. For example, “I think that you should do more research on how resources are allocated in this country.”
Removing duplicates, there are 21,747 unique intervention responses in the aggregated Gab dataset and 7,641 in the aggregated Reddit dataset. Despite the large diversity of the collected responses for intervention, we find workers tend to have certain strategies for intervention.
|
How many unique intervention responses are in the datasets?
|
Removing duplicates, there are 21,747 unique intervention responses in the aggregated Gab dataset and 7,641 in the aggregated Reddit dataset.
|
null | false
| null |
Which is an imperial or a metric measurement: gallon, litre
|
gallon is imperial, litre is metric
|
|
null | false
| null |
What is Sea Hear Now
|
Sea Hear Now is a music festival that's held annually in Asbury Park, New Jersey. It's held on the beach at the end of the boardwalk next to the convention center. There are three stages, two of which are in the sand, with varying acts spanning multiple genres. Sea Hear Now is known for its incorporation of local artists and surfers along with great music. Sea Hear Now is also family-friendly, which is an added bonus.
|
|
null | false
| null |
For each of these countries, tell whether it lies in the Western Hemisphere, Eastern Hemisphere, or both: Thailand, United Kingdom, Belize, Japan, Algeria
|
Thailand - Eastern
United Kingdom - Both
Belize - Western
Japan - Eastern
Algeria - Both
|
|
null | false
| null |
ways to reduce solid waste
|
Use biodegradable plates
Use reusable personal bottles
Use fewer plastic products
Use less detergent
Use more biodegradable FMCG products
|
|
null | false
| 17
|
Each document INLINEFORM0 is represented by two vectors in INLINEFORM1 and INLINEFORM2 . The first vector exists in a “monolingual space” (of dimensionality INLINEFORM3 ) and is based on a bag-of-words representation of the document. The second vector exists in a “crosslingual space” (of dimensionality INLINEFORM4 ) which is common to all languages. More details about these representations are discussed in § SECREF4 .
Our method of “domino-toppling” works by making changes (potentially sequences of changes) to previous clustering decisions for the crosslingual clusters, at each step placing a residual monolingual cluster in the crosslingual cluster that is most similar to it.
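A highly simplified sketch of the greedy placement step is given below (assumptions: clusters are represented by centroid vectors and similarity is cosine; the actual method also revisits earlier decisions, which is not shown):

```python
import numpy as np

def place_monolingual_clusters(mono_centroids, cross_centroids):
    """For each residual monolingual cluster, pick the index of the most
    similar crosslingual cluster by cosine similarity of centroids."""
    assignments = []
    for m in mono_centroids:
        sims = [np.dot(m, c) / (np.linalg.norm(m) * np.linalg.norm(c))
                for c in cross_centroids]
        assignments.append(int(np.argmax(sims)))
    return assignments

# Toy example with 2-dimensional centroids.
mono = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
cross = [np.array([0.9, 0.1]), np.array([0.1, 0.9])]
print(place_monolingual_clusters(mono, cross))  # [0, 1]
```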
|
How does the author's "domino" approach work?
|
Their method of “domino-toppling” works by making changes (potentially sequences of changes) to previous clustering decisions for the crosslingual clusters, at each step placing a residual monolingual cluster in the crosslingual cluster that is most similar to it.
|
null | false
| null |
Lean manufacturing is a production method aimed primarily at reducing times within the production system as well as response times from suppliers and to customers. It is closely related to another concept called just-in-time manufacturing (JIT manufacturing in short). Just-in-time manufacturing tries to match production to demand by only supplying goods which have been ordered and focuses on efficiency, productivity (with a commitment to continuous improvement) and reduction of "wastes" for the producer and supplier of goods. Lean manufacturing adopts the just-in-time approach and additionally focuses on reducing cycle, flow and throughput times by further eliminating activities which do not add any value for the customer. Lean manufacturing also involves people who work outside of the manufacturing process, such as in marketing and customer service.
|
What is Lean manufacturing
|
Lean is a production method.
|
1905.11901
| false
| null |
Table TABREF21 shows results for Korean INLINEFORM0 English, using the same configurations (1, 2 and 8) as for German–English. Our results confirm that the techniques we apply are successful across datasets, and result in stronger systems than previously reported on this dataset, achieving 10.37 BLEU as compared to 5.97 BLEU reported by gu-EtAl:2018:EMNLP1.
Table TABREF21 shows results for Korean INLINEFORM0 English, using the same configurations (1, 2 and 8) as for German–English. Our results confirm that the techniques we apply are successful across datasets, and result in stronger systems than previously reported on this dataset, achieving 10.37 BLEU as compared to 5.97 BLEU reported by gu-EtAl:2018:EMNLP1.
|
what are the methods they compare with in the korean-english dataset?
|
The answers are shown as follows:
* gu-EtAl:2018:EMNLP1
|
null | false
| 486
|
While AI2THOR adopts a discrete action/state space for the agent, we believe the idea behind our multimodal exploration design and waypoint-based semantic representation is applicable to generic embodied tasks. In the presence of motion noise and pose sensor noise, Active Neural SLAM shows that such inaccuracy can be solved to an acceptable level by simply learning a pose estimator.
|
There is a strong assumption that 'we know the displacement changes of the agent after each action'. Several previous works have made similar assumptions. However, those approaches are suitable only for simulation. In reality, how robust is the proposed approach against noise in the agent's pose?
|
In the presence of motion noise and pose sensor noise, Active Neural SLAM shows that these can be solved to an acceptable level by simply adding a pose estimator.
|
2002.02427
| false
| null |
Feature-based models. We used state-of-the-art features that have shown to be useful in ID: some of them are language-independent (e.g., punctuation marks, positive and negative emoticons, quotations, personal pronouns, tweet's length, named entities) while others are language-dependent relying on dedicated lexicons (e.g., negation, opinion lexicons, opposition words). Several classical machine learning classifiers were tested with several feature combinations; among them, Random Forest (RF) achieved the best result with all features. Neural model with monolingual embeddings. We used a Convolutional Neural Network (CNN) whose structure is similar to the one proposed by BIBREF29. For the embeddings, we relied on $AraVec$ BIBREF30 for Arabic, FastText BIBREF31 for French, and Word2vec Google News BIBREF32 for English. For the three languages, the size of the embeddings is 300 and the embeddings were fine-tuned during the training process. The CNN network was tuned with 20% of the training corpus using the $Hyperopt$ library.
We used state-of-the-art features that have shown to be useful in ID: some of them are language-independent (e.g., punctuation marks, positive and negative emoticons, quotations, personal pronouns, tweet's length, named entities) while others are language-dependent relying on dedicated lexicons (e.g., negation, opinion lexicons, opposition words).
|
What text-based features are used?
|
The answers are shown as follows:
* language-independent (e.g., punctuation marks, positive and negative emoticons, quotations, personal pronouns, tweet's length, named entities)
* language-dependent relying on dedicated lexicons (e.g., negation, opinion lexicons, opposition words)
|
null | false
| null |
Islam is the official religion of Egypt and Arabic is its official language. With over 100 million inhabitants, Egypt is the most populous country in North Africa, the Middle East, and the Arab world, the third-most populous in Africa (after Nigeria and Ethiopia), and the fourteenth-most populous in the world. The great majority of its people live near the banks of the Nile River, an area of about 40,000 square kilometres (15,000 sq mi), where the only arable land is found. The large regions of the Sahara desert, which constitute most of Egypt's territory, are sparsely inhabited. About 43% of Egypt's residents live across the country's urban areas, with most spread across the densely populated centres of greater Cairo, Alexandria and other major cities in the Nile Delta.
|
How many people in Egypt do not live in urban areas?
|
About 43% of Egypt's residents live across the country's urban areas, which means 57% of Egypt's population does not live in urban areas.
|
null | false
| null |
Stracchino is usually eaten on its own but also as a filling for some kinds of bread: in Recco, on the Ligurian riviera east of Genoa, focaccia col formaggio ("cheese focaccia") or focaccia di Recco is typically filled with crescenza, while in Romagna and in parts of some nearby regions (e.g. northern Marche, Umbria and eastern Tuscany) it is a common filling for the cascione which is made out of piadina, a thin flat bread.
|
Give me a list of some preparations in which stracchino cheese is used.
|
- eaten on its own
- focaccia col formaggio
- cascione, which is made out of a thin flat bread called piadina filled with stracchino cheese
|
null | false
| 139
|
It is well known that language has certain structural properties which allow natural language speakers to make “infinite use of finite means" BIBREF3. This structure allows us to generalize beyond the typical machine learning definition of generalization BIBREF4 (which considers performance on the distribution that generated the training set), permitting the understanding of any utterance sharing the same structure, regardless of probability. For example, sentences of length 100 typically do not appear in natural text or speech (our personal 'training set'), but can be understood regardless due to their structure. We refer to this notion as linguistic generalization.
Many problems in NLP are treated as sequence to sequence tasks with solutions built on seq2seq-attention based models. While these models perform very well on standard datasets and also appear to capture some linguistic structure BIBREF5 , BIBREF6 , BIBREF7 , they also can be quite brittle, typically breaking on uncharacteristic inputs BIBREF8 , BIBREF1 , indicating that the extent of linguistic generalization these models achieve is still somewhat lacking.
Due to the high capacity of these models, it is not unreasonable to expect them to learn some structure from the data. However, learning structure is not a sufficient condition to achieving linguistic generalization. If this structure is to be usable on data outside the training distribution, the model must learn the structure without additionally learning (overfitting on) patterns specific to the training data. One may hope, given the right hyperparameter configuration and regularization, that a model converges to a solution that captures the reusable structure without overfitting too much on the training set. While this solution exists in theory, in practice, it may be difficult to find.
In this work, we look at the feasibility of training and tuning seq2seq-attention models towards a solution that generalizes in this linguistic sense. In particular, we train models on a symbol replacement task with a well-defined generalizable structure. The task is simple enough that all models achieve near perfect accuracy on the standard test set, i.e., where the inputs are drawn from the same distribution as that of the training set. We then test these models for linguistic generalization by creating test sets of uncharacteristic inputs, i.e., inputs that are not typical in the training distribution but still solvable given that the generalizable structure was learned. Our results indicate that generalization is highly sensitive: even a change in the random seed can drastically affect the ability to generalize. This dependence on an element that is not (or ideally should not be) a hyperparameter suggests that the line between generalization and failure is quite fine, and may not be feasible to reach simply by hyperparameter tuning alone.
The task is simple enough that all models achieve near perfect accuracy on the standard test set, i.e., where the inputs are drawn from the same distribution as that of the training set.
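As a concrete, hypothetical instantiation of such a setup (the paper's exact task specification is not given here), a toy symbol-replacement generator might look like this:

```python
import random

# Hypothetical symbol-replacement task: each symbol maps deterministically to a
# replacement, so the underlying structure generalizes to inputs of any length.
SYMBOLS = list("abcde")
REPLACE = {s: s.upper() for s in SYMBOLS}  # assumed replacement rule

def make_example(length):
    src = [random.choice(SYMBOLS) for _ in range(length)]
    tgt = [REPLACE[s] for s in src]
    return src, tgt

train = [make_example(random.randint(5, 10)) for _ in range(1000)]  # typical lengths
ood_test = [make_example(100) for _ in range(100)]  # uncharacteristic length-100 inputs
```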
|
What is the performance of the models?
|
All models achieve near perfect accuracy on the standard test set, i.e., where the inputs are drawn from the same distribution as that of the training set.
|
null | false
| null |
In 1642 he travelled to Nagasaki as a trader in Chinese medicine, and in 1644 entered Kofukuji Temple, becoming its 3rd abbot in 1645. Established by monks who had immigrated from China, the temple was then a base of the Ōbaku school of Zen Buddhism in Japan. In 1654, after multiple requests, he succeeded in persuading Yinyuan Longqi (Ingen Ryuki), the 33rd abbot of Wanfu Temple (Mount Huangbo, Fujian), to emigrate to Japan, where he founded Ōbaku, the third and final major Japanese Zen sect.
|
Per this reference about Itsunen Shoyu, whom did he convince to bring Zen Buddhism to Japan?
|
Yinyuan Longqi (Ingen Ryuki), the 33rd abbot of Wanfu Temple
|
null | false
| null |
How do people in Pittsburgh say "you all"?
|
People that speak with a Pittsburgh accent typically pronounce "you all" as "yinz". This is a unique feature of the infamous Pittsburgh accent, and is a source of pride and unity in the city of Pittsburgh.
|
|
null | false
| null |
What is GraphQL?
|
GraphQL is an open-source data query and manipulation language for APIs, and a runtime engine for executing those queries.
|
|
null | false
| null |
What is the meaning of life?
|
The meaning of life, the complete answer to the ultimate question of life, the universe and everything, is...
42.
If you want to find out what the ultimate question actually is, you're going to need a bigger AI.
|
|
null | false
| null |
Name five NFL teams in the AFC.
|
Five NFL teams in the AFC are the Buffalo Bills, New York Jets, Tennessee Titans, Houston Texans, and Las Vegas Raiders.
|
|
null | false
| null |
What language is spoken in Brazil?
|
Portuguese is the official and national language of Brazil and is widely spoken by most of the population. Brazil is the most populous Portuguese-speaking country in the world, with its lands comprising the majority of Portugal’s former colonial holdings in the Americas.
|
|
null | false
| null |
Barbacoa is a form of cooking meat that originated in the Caribbean with the Taíno people, who called it by the Arawak word barbaca, from which the term "barbacoa" derives, and ultimately, the word 'barbecue". In contemporary Mexico, it generally refers to meats or whole sheep or whole goats slow-cooked over an open fire or, more traditionally, in a hole dug in the ground covered with agave (maguey) leaves, although the interpretation is loose, and in the present day (and in some cases) may refer to meat steamed until tender. This meat is known for its high fat content and strong flavor, often accompanied with onions and cilantro (coriander leaf).
|
What is Barbacoa?
|
Barbacoa is a style of slow-cooking meat that originated in the Caribbean. In Mexico, it generally refers to whole sheep or goats slow-cooked over an open fire or in a pit dug in the ground and covered with agave leaves. Typical seasonings include onions and cilantro.
|
null | false
| null |
Name some of the bravolebrities from The Real Housewives of Beverly Hills
|
Crystal Kung Minkoff, Dorit Kemsley, Erika Jayne, Lisa Rinna, Garcelle Beauvais, Kyle Richards, Sutton Stracke and Kathy Hilton
|
|
null | false
| 85
|
There have been many advances in machine learning methods which help machines understand human behavior better than ever. One of the most important aspects of human behavior is emotion. If machines could detect human emotional expressions, this capability could be used to improve a variety of applications such as marketing BIBREF0, human-computer interaction BIBREF1, political science BIBREF2, etc.
Emotion in humans is complex and hard to distinguish. There have been many emotional models in psychology which tried to classify and point out basic human emotions such as Ekman's 6 basic emotions BIBREF3 , Plutchik's wheel of emotions BIBREF4 , or Parrott's three-level categorization of emotions BIBREF5 . These varieties show that emotions are hard to define, distinguish, and categorize even for human experts.
By adding the complexity of language and the fact that emotion expressions are very complex and context-dependent BIBREF6, BIBREF7, BIBREF8, we can see why detecting emotions in textual data is a challenging task. This difficulty can be seen when human annotators try to assign emotional labels to the text, but using various techniques the annotation task can be accomplished with desirable agreement among the annotators BIBREF9.
In this work, we argue that creating a model that better captures the context and sequential nature of text can significantly improve the performance in the hard task of emotion detection. We show this by using a recurrent neural network-based classifier that can learn to create a more informative latent representation of the target text as a whole, which significantly improves the final performance.
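A minimal PyTorch sketch of the kind of classifier argued for here; the layer sizes and the use of the final hidden state as the latent representation are illustrative assumptions:

```python
import torch.nn as nn

class EmotionRNN(nn.Module):
    """LSTM encoder whose final hidden state serves as a latent
    representation of the whole text, fed to an emotion classifier."""
    def __init__(self, vocab_size, emb_dim=128, hidden=256, n_emotions=6):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_emotions)

    def forward(self, token_ids):        # token_ids: (batch, seq_len)
        x = self.emb(token_ids)
        _, (h_n, _) = self.lstm(x)       # h_n: (1, batch, hidden)
        return self.out(h_n.squeeze(0))  # emotion logits
```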
|
What do the authors propose in this paper?
|
A recurrent neural network-based classifier that can learn to create a more informative latent representation of the target text as a whole.
|
null | false
| null |
What kind of fungi are Ochrolechiaceae
|
The Ochrolechiaceae are a family of lichenized fungi in the order Pertusariales.
|
|
null | false
| null |
The official language of Tunisia is Modern Standard Arabic. The vast majority of Tunisia's population is Arab and Muslim. Vernacular Tunisian Arabic is the most spoken, and French also serves as an administrative and educational language in some contexts, but it has no official status.
Tunisia is well integrated into the international community. It is a member of the United Nations, La Francophonie, the Arab League, the OIC, the African Union, the COMESA, the Non-Aligned Movement, the International Criminal Court, and the Group of 77, among others. It maintains close economic and political relations with some European countries, particularly with France, and Italy, due to their geographical proximity. Tunisia also has an association agreement with the European Union and has attained the status of a major non-NATO ally of the United States.
|
What languages are spoken in Tunisia?
|
The official language of Tunisia is Modern Standard Arabic, though vernacular Tunisian Arabic is the most spoken. French also serves as an administrative and educational language in some contexts, but it has no official status.
|
null | false
| null |
Paul Lansky (born 1944) is an American composer. He was educated at Manhattan's High School of Music and Art, Queens College and Princeton University, studying with George Perle and Milton Babbitt, among others. He received his Ph.D. in music from Princeton in 1973. His doctoral dissertation consisted of an essay titled "Affine music" and a composition for string quartet. Originally intending to pursue a career in performance, during 1965–66 he played the French horn with the Dorian Wind Quintet. He left the group to attend graduate school. From 1969 until his retirement in 2014 he was on the faculty at Princeton University, where he retired as the William Shubael Conant Professor of Music.
|
Given this paragraph about composer Paul Lansky, did he have an advanced degree?
|
Yes. Paul Lansky graduated from Princeton University with a Ph.D in music.
|
null | false
| null |
Give me a list of the best true crime podcast to listen to
|
Morbid, Crime Junkie, My Favorite Murder, 20/20, Dateline
|
|
null | false
| null |
Identify which animal species is alive or extinct: Thylacosmilus, Swift Fox
|
Swift Fox is alive, Thylacosmilus is extinct
|
|
null | false
| null |
What is the biggest company in the world in terms of market capitalization?
|
In terms of market capitalization, Apple Inc. is the biggest company, valued at around 2.5 trillion dollars.
|
|
null | false
| 110
|
The Transformer architecture BIBREF0 for deep neural networks has quickly risen to prominence in NLP through its efficiency and performance, leading to improvements in the state of the art of Neural Machine Translation BIBREF1, BIBREF2, as well as inspiring other powerful general-purpose models like BERT BIBREF3 and GPT-2 BIBREF4. At the heart of the Transformer lie multi-head attention mechanisms: each word is represented by multiple different weighted averages of its relevant context. As suggested by recent works on interpreting attention head roles, separate attention heads may learn to look for various relationships between tokens BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9.
The attention distribution of each head is typically predicted using the softmax normalizing transform. As a result, all context words have non-zero attention weight. Recent work on single attention architectures suggests that using sparse normalizing transforms in attention mechanisms such as sparsemax – which can yield exactly zero probabilities for irrelevant words – may improve performance and interpretability BIBREF12, BIBREF13, BIBREF14. Qualitative analysis of attention heads BIBREF0 suggests that, depending on what phenomena they capture, heads tend to favor flatter or more peaked distributions.
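For reference, a NumPy sketch of the sparsemax transform mentioned above (following Martins & Astudillo, 2016), which projects scores onto the probability simplex and can assign exactly zero weight:

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of scores z onto the simplex; unlike softmax,
    irrelevant entries can receive exactly zero probability."""
    z_sorted = np.sort(z)[::-1]
    k = np.arange(1, len(z) + 1)
    cssv = np.cumsum(z_sorted)
    support = 1 + k * z_sorted > cssv        # entries kept in the support
    k_z = k[support][-1]
    tau = (cssv[support][-1] - 1) / k_z      # threshold
    return np.maximum(z - tau, 0.0)

print(sparsemax(np.array([2.0, 1.0, -1.0])))  # -> [1. 0. 0.], a sparse distribution
```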
Recent works have proposed sparse Transformers BIBREF10 and adaptive span Transformers BIBREF11. However, the “sparsity" of those models only limits the attention to a contiguous span of past tokens, while in this work we propose a highly adaptive Transformer model that is capable of attending to a sparse set of words that are not necessarily contiguous. Figure FIGREF1 shows the relationship of these methods with ours.
Our contributions are the following:
We introduce sparse attention into the Transformer architecture, showing that it eases interpretability and leads to slight accuracy gains.
We propose an adaptive version of sparse attention, where the shape of each attention head is learnable and can vary continuously and dynamically between the dense limit case of softmax and the sparse, piecewise-linear sparsemax case.
We make an extensive analysis of the added interpretability of these models, identifying both crisper examples of attention head behavior observed in previous work, as well as novel behaviors unraveled thanks to the sparsity and adaptivity of our proposed model.
We introduce sparse attention into the Transformer architecture, showing that it eases interpretability and leads to slight accuracy gains.
|
What is been introduced into the Transformer architecture?
|
Sparse attention.
|
null | false
| null |
Jawaharlal Nehru was born on 14 November 1889 in Allahabad in British India. His father, Motilal Nehru (1861–1931), a self-made wealthy barrister who belonged to the Kashmiri Pandit community, served twice as president of the Indian National Congress, in 1919 and 1928. His mother, Swarup Rani Thussu (1868–1938), who came from a well-known Kashmiri Brahmin family settled in Lahore, was Motilal's second wife, his first having died in childbirth. Jawaharlal was the eldest of three children. His elder sister, Vijaya Lakshmi, later became the first female president of the United Nations General Assembly. His youngest sister, Krishna Hutheesing, became a noted writer and authored several books on her brother.
|
From the passage provided, extract the names of Jawaharlal Nehru's parents. Separate them with a comma.
|
Motilal Nehru, Swarup Rani Thussu
|
null | false
| null |
How many Presidential Medals of Freedom recipients did John Fitzgerald Kennedy choose?
|
JFK chose 31 recipients to receive Presidential Medals of Freedom
|
|
null | false
| null |
What kind of animal is Diaea dorsata?
|
Diaea dorsata is one of the smaller crab spiders, with a Palearctic distribution. Females can grow up to 6 mm, males up to 4 mm. Prosoma and legs are green, the opisthosoma is yellowish with a brown mark.
|
|
null | false
| 260
|
Affect is a term that subsumes emotion and longer term constructs such as mood and personality and refers to the experience of feeling or emotion BIBREF0. BIBREF1 provides a detailed discussion of the importance of affect analysis in human communication and interaction. Within this context, the analysis of human affect from text is an important topic in natural language understanding, examples of which include sentiment analysis from Twitter BIBREF2, affect analysis from poetry BIBREF3, and studies of correlation between function words and social/psychological processes BIBREF4. People exchange verbal messages which not only contain syntactic information, but also information conveying their mental and emotional states. Examples include the use of emotionally colored words (such as furious and joy) and swear words. The automated processing of affect in human verbal communication is of great importance to understanding spoken language systems, particularly for emerging applications such as dialogue systems and conversational agents.
Statistical language modeling is an integral component of speech recognition systems, with other applications such as machine translation and information retrieval. There has been a resurgence of research effort in recurrent neural networks for language modeling BIBREF5 , which have yielded performances far superior to baseline language models based on n-gram approaches. However, there has not been much effort in building neural language models of text that leverage affective information. Current literature on deep learning for language understanding focuses mainly on representations based on word semantics BIBREF6 , encoder-decoder models for sentence representations BIBREF7 , language modeling integrated with symbolic knowledge BIBREF8 and neural caption generation BIBREF9 , but to the best of our knowledge there has been no work on augmenting neural language modeling with affective information, or on data-driven approaches to generate emotional text.
Motivated by these advances in neural language modeling and affective analysis of text, in this paper we propose a model for representation and generation of emotional text, which we call the Affect-LM. Our model is trained on conversational speech corpora, common in language modeling for speech recognition applications BIBREF10 . Figure 1 provides an overview of our Affect-LM and its ability to generate emotionally colored conversational text in a number of affect categories with varying affect strengths. While these parameters can be manually tuned to generate conversational text, the affect category can also be automatically inferred from preceding context words. Specifically for model training, the affect category is derived from features generated using keyword spotting from a dictionary of emotional words, such as the LIWC (Linguistic Inquiry and Word Count) tool BIBREF11 . Our primary research questions in this paper are:
Q1: Can Affect-LM be used to generate affective sentences for a target emotion with varying degrees of affect strength through a customizable model parameter?
Q2: Are these generated sentences rated as emotionally expressive as well as grammatically correct in an extensive crowd-sourced perception experiment?
Q3: Does the automatic inference of affect category from the context words improve language modeling performance of the proposed Affect-LM over the baseline as measured by perplexity?
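As background for Q3, a toy sketch of the keyword-spotting step used to derive the affect category; the mini-lexicon below is a hypothetical stand-in for LIWC's actual word lists:

```python
# Hypothetical mini-lexicon standing in for LIWC's emotion categories.
AFFECT_LEXICON = {
    "anger":   {"furious", "hate", "rage"},
    "joy":     {"joy", "happy", "delighted"},
    "sadness": {"sad", "grief", "miserable"},
}

def infer_affect_category(context_words):
    """Count lexicon hits in the preceding context words and return the
    dominant affect category (or None if no keyword is spotted)."""
    counts = {cat: sum(w in words for w in context_words)
              for cat, words in AFFECT_LEXICON.items()}
    best = max(counts, key=counts.get)
    return best if counts[best] > 0 else None

print(infer_affect_category("i was furious with rage".split()))  # -> anger
```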
The remainder of this paper is organized as follows. In Section "Related Work", we discuss prior work in the fields of neural language modeling and generation of affective conversational text. In Section "LSTM Language Model" we describe the baseline LSTM model and our proposed Affect-LM model. Section "Experimental Setup" details the experimental setup, and in Section "Results", we discuss results for customizable emotional text generation, perception studies for each affect category, and perplexity improvements over the baseline model before concluding the paper in Section "Conclusions and Future Work".
Affect is a term that subsumes emotion and longer term constructs such as mood and personality and refers to the experience of feeling or emotion.
|
What does "Affect" mean?
|
Affect is a term that subsumes emotion and longer term constructs such as mood and personality and refers to the experience of feeling or emotion.
|
null | false
| null |
National Beer Day is celebrated in the United States every year on April 7, marking the day that the Cullen–Harrison Act came into force after having been signed into law by President Franklin D. Roosevelt on March 22, 1933. This led to the Eighteenth Amendment being repealed on December 5, 1933, with the ratification of the Twenty-first Amendment to the U.S. Constitution. April 6, the day prior to National Beer Day, is known as New Beer's Eve.
|
From the passage provided, extract the date that National Beer Day is celebrated in the United States.
|
National Beer Day is celebrated in the United States on April 7.
|
1707.07048
| false
| null |
Two-talker overlapped speech is artificially generated by mixing these waveform segments. To maximize the speech overlap, we developed a procedure to mix similarly sized segments at around 0dB. First, we sort the speech segments by length. Then, we take segments in pairs, zero-padding the shorter segment so both have the same length. These pairs are then mixed together to create the overlapped speech data. The overlapping procedure is similar to BIBREF13 except that we make no modification to the signal levels before mixing. After overlapping, there are 150 hours of data in the training set, called the 150 hours dataset, and 915 utterances in the test set. After decoding, there are 1830 utterances for evaluation, and the shortest utterance in the hub5e-swb dataset is discarded. Additionally, we define a small training set, the 50 hours dataset, as a random 50-hour subset of the 150 hours dataset. Results are reported using both datasets.
Two-talker overlapped speech is artificially generated by mixing these waveform segments. To maximize the speech overlap, we developed a procedure to mix similarly sized segments at around 0dB. First, we sort the speech segments by length. Then, we take segments in pairs, zero-padding the shorter segment so both have the same length. These pairs are then mixed together to create the overlapped speech data.
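A NumPy sketch of the mixing procedure just described; waveform loading and segment bookkeeping are assumed to happen elsewhere:

```python
import numpy as np

def mix_overlapped(segments):
    """Sort 1-D waveform segments by length, take them in pairs, zero-pad
    the shorter one, and add the pair sample-wise (no level modification,
    i.e., mixing at around 0 dB as described above)."""
    segments = sorted(segments, key=len)
    mixtures = []
    for short, long_ in zip(segments[0::2], segments[1::2]):
        padded = np.pad(short, (0, len(long_) - len(short)))
        mixtures.append(long_ + padded)
    return mixtures
```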
|
How are the two datasets artificially overlapped?
|
The answers are shown as follows:
* we sort the speech segments by length
* we take segments in pairs, zero-padding the shorter segment so both have the same length
* These pairs are then mixed together
|
null | false
| null |
Why is it so hard now to name a favorite musician?
|
Compared to 10 years ago, this question has grown a lot harder to answer. In the old days, songs were indexed by song-writers. Prior to the digital age, the most popular medium was the physical cassette, and it was natural to group cassettes by artist. Moreover, artists used to be the primary mechanism for discovering new music. The cost of listening to new songs was usually high, since there were few ways to try new music, so fans tended to buy music from the same artists for quality (and taste) assurance.
With the evolution of music streaming and the decline of the recording industry, the granularity of “listening to music” has become much finer. Listeners no longer need to buy whole cassettes, but can pick songs individually. Moreover, recommendation algorithms allow fans to discover music by popularity, genre or even mood. As a result, music fans now sample from such a wide range of artists that it’s hard to name a single favorite. And due to the insane number of accessible songs, even if you listen to a specific artist most frequently among your collection, you may still only like a small percentage of all the songs he/she wrote, which leads to the shaky conviction that you actually have a “favorite musician”.
On the other hand, I believe the palate for music depends a lot on “environment”. It could be a physical environment (e.g., beach or gym), an action (e.g., working or cooking), or a mood (e.g., chill or psyched). The like or dislike towards a certain piece of music changes drastically when these variables change. To some extent, I view music more as a tool than as an art form, in that I seek certain utility from music at different times. However, music judgment for me is also not 100% a rational or utilitarian process. It still very much comes from intuition, though that intuition is heavily biased by environmental factors.
|
|
1709.05404
| false
| null |
Generic Sarcasm. We first examine the different patterns learned on the Gen dataset. Table TABREF29 shows examples of extracted patterns for each class. We observe that the not-sarcastic patterns appear to capture technical and scientific language, while the sarcastic patterns tend to capture subjective language that is not topic-specific. We observe an abundance of adjective and adverb patterns for the sarcastic class, although we do not use adjective and adverb patterns in our regex retrieval method. Instead, such cues co-occur with the cues we search for, expanding our pattern inventory as we show in Table TABREF31.
Many of our sarcastic questions focus specifically on attacks on the mental abilities of the addressee. This generalization is made clear when we extract and analyze the verb, subject, and object arguments using the Stanford dependency parser BIBREF32 for the questions in the RQ dataset. Table TABREF32 shows a few examples of the relations we extract.
Hyperbole. One common pattern for hyperbole involves adverbs and adjectives, as noted above. We did not use this pattern to retrieve hyperbole, but because each hyperbolic sarcastic utterance contains multiple cues, we learn an expanded class of patterns for hyperbole. Table TABREF33 illustrates some of the new adverb adjective patterns that are frequent, high-precision indicators of sarcasm.
We learn a number of verbal patterns that we had not previously associated with hyperbole, as shown in Table TABREF34. Interestingly, many of these instantiate the observations of Cano Mora (2009) on hyperbole and its related semantic fields: creating contrast by exclusion, e.g. no limit and no way, or by expanding a predicated class, e.g. everyone knows. Many of them are also contrastive. Table TABREF33 shows just a few examples, such as though it in no way and so much knowledge.
We observe an abundance of adjective and adverb patterns for the sarcastic class, although we do not use adjective and adverb patterns in our regex retrieval method.
Many of our sarcastic questions focus specifically on attacks on the mental abilities of the addressee. This generalization is made clear when we extract and analyze the verb, subject, and object arguments using the Stanford dependency parser BIBREF32 for the questions in the RQ dataset.
One common pattern for hyperbole involves adverbs and adjectives, as noted above. We did not use this pattern to retrieve hyperbole, but because each hyperbolic sarcastic utterance contains multiple cues, we learn an expanded class of patterns for hyperbole.
We learn a number of verbal patterns that we had not previously associated with hyperbole, as shown in Table TABREF34.
|
What lexico-syntactic cues are used to retrieve sarcastic utterances?
|
The answers are shown as follows:
* adjective and adverb patterns
* verb, subject, and object arguments
* verbal patterns
|
null | false
| 0
|
In this section, we will present a cross-lingual pre-training based transfer approach. This method is designed for a common zero-shot scenario where there is plenty of source$\leftrightarrow $pivot and pivot$\leftrightarrow $target bilingual data but no source$\leftrightarrow $target parallel data, and the whole training process can be summarized step by step as follows (a minimal code sketch of these steps appears after the list):
Pre-train a universal encoder with source/pivot monolingual or source$\leftrightarrow $pivot bilingual data.
Train a pivot$\rightarrow $target parent model built on the pre-trained universal encoder with the available parallel data. During the training process, we freeze several layers of the pre-trained universal encoder to avoid the degeneracy issue BIBREF27.
Directly translate source sentences into target sentences with the parent model, which benefits from the availability of the universal encoder.
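A PyTorch-style sketch of the layer freezing in step 2; the `encoder.layers` attribute and the number of frozen layers are assumptions, as the excerpt does not pin these down:

```python
import torch.nn as nn

def freeze_lower_layers(encoder: nn.Module, n_frozen: int):
    """Freeze the first n_frozen layers of the pre-trained universal encoder
    so that pivot->target training (step 2) does not destroy the
    language-invariant representations; assumes `encoder.layers` exists."""
    for layer in list(encoder.layers)[:n_frozen]:
        for p in layer.parameters():
            p.requires_grad = False

# Training then proceeds on pivot->target pairs only; at inference time the
# same encoder reads source sentences directly (step 3).
```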
The key difficulty of this method is to ensure the intermediate representations of the universal encoder are language invariant. In the rest of this section, we first present two existing methods yet to be explored in zero-shot translation, and then propose a straightforward but effective cross-lingual pre-training method. In the end, we present the whole training and inference protocol for transfer.
Directly translate source sentences into target sentences with the parent model, which benefits from the availability of the universal encoder.
|
What benefits from the availability of the universal encoder?
|
Directly translate source sentences into target sentences with the parent model.
|
null | false
| null |
Can you tell me which of the following are days and which are months? Monday, Tuesday, Wednesday, October, March, February.
|
Monday, Tuesday and Wednesday are days, while October, March and February are months.
|
|
null | false
| null |
Baumkuchen (German pronunciation: [ˈbaʊ̯mˌkuːxn̩] (listen)) is a kind of spit cake from German cuisine. It is also a popular dessert in Japan. The characteristic rings that appear in its slices resemble tree rings, and give the cake its German name, Baumkuchen, which literally translates to "tree cake" or "log cake".
History
It is disputed who made the first Baumkuchen and where it was first baked. One theory is that it was invented in the German town of Salzwedel, which is further popularized by the town itself. Another theory suggests it began as a Hungarian wedding cake. In Ein new Kochbuch (lit. "A New Cookbook"), the first cookbook written for professional chefs by Marx Rumpolt, there is a recipe for Baumkuchen. This publication puts the origin of Baumkuchen as far back as 1581, the year the cookbook was first published. Marx Rumpolt had previously worked as a chef in Hungary and Bohemia.
Characteristics
Baumkuchen baked on a spit
Traditionally, Baumkuchen is made on a spit by brushing on even layers of batter and then rotating the spit around a heat source. Each layer is allowed to brown before a new layer of batter is poured. When the cake is removed and sliced, each layer is divided from the next by a golden line, resembling the growth rings on a crosscut tree. A typical Baumkuchen is made up of 15 to 20 layers of batter. However, the layering process for making Baumkuchen can continue until the cakes are quite large. Skilled pastry chefs have been known to create cakes with 25 layers and weighing over 100 pounds (45 kg). When cooked on a spit, it is not uncommon for a finished Baumkuchen to be 3–4 feet (0.91–1.22 m) tall.
Baumkuchen ingredients are typically butter, eggs, sugar, vanilla, salt, and flour. Baking powder is not considered a traditional ingredient. The ratio of flour, butter and eggs is typically 1:1:2 respectively (i.e., 100 grams of flour, 100 grams of butter and 200 grams of eggs). The recipe can be varied by adding other ingredients, such as ground nuts, honey, marzipan, nougat and rum or brandy, to the batter or filling. Additionally, Baumkuchen may be covered with sugar or chocolate glaze. With some recipes, the fully baked and cooled Baumkuchen is first coated with marmalade or jam, and then covered with chocolate.
|
What is Baumkuchen made of?
|
Baumkuchen ingredients are typically butter, eggs, sugar, vanilla, salt, and flour. Baking powder is not considered a traditional ingredient. The ratio of flour, butter and eggs is typically 1:1:2 respectively (i.e., 100 grams of flour, 100 grams of butter and 200 grams of eggs).
|
null | false
| 165
|
Considerable progress has been made building end-to-end dialog systems for collaborative tasks in which users cooperate with the system to achieve a common goal. Examples of collaborative tasks include making restaurant reservations and retrieving bus time-table information. Since users typically have clear and explicit intentions in collaborative tasks, existing systems commonly classify user utterances into pre-defined intents. In contrast, non-collaborative tasks are those where the users and the system do not strive to achieve the same goal. Examples of such tasks include deceiving attackers, persuading users to donate to a cause BIBREF1, and negotiating a product price BIBREF2, BIBREF3. In these tasks, users often perform complex actions that are beyond a simple set of pre-defined intents. In order to reach a common state, the user and the system need to build rapport and trust which naturally involves off-task content. Previous work did not model off-task content BIBREF2, which may have led to less optimal results. For example, in the persuasion task BIBREF1, users would ask the system “How do you feel about war?" An example of an on-task system response that the system could have made is “Do you want to make a donation?", which sticks to the task but neglects users' question. However, a better response to such an off-task question is “War is destructive and pitiless, but you can donate to help child victims of war." This response is better, as it has been found that users are more likely to end the conversation if the system neglects their questions BIBREF4. Therefore, we need to design a system that handles both on-task and off-task information appropriately and in a way that leads back to the system's goal.
To tackle the issue of incoherent system responses to off-task content, previous studies have built hybrid systems to interleave off-task and on-task content. BIBREF4 used a rule-based dialog manager for on-task content and a neural model for off-task content, and trained a reinforcement learning model to select between these two models based on the dialog context. However, such a method is difficult to train and struggles to generalize beyond the movie promotion task they considered. To tackle these problems, we propose a hierarchical intent annotation scheme that separates on-task and off-task information in order to provide detailed supervision. For on-task information, we directly use task-related intents for representation. Off-task information, on the other hand, is too general to categorize into specific intents, so we choose dialog acts that convey syntax information. These acts, such as “open question" are general to all tasks.
Previous studies use template-based methods to maintain sentence coherence. However, rigid templates lead to limited diversity, causing the user to lose engagement. On the other hand, language generation models can generate diverse responses but struggle to remain coherent. We propose Multiple Intents and Semantic Slots Annotation Neural Network (MISSA) to combine the advantages of both template and generation models and to take advantage of the hierarchical annotation at the same time. MISSA follows the TransferTransfo framework BIBREF0 with three modifications: (i) We first concurrently predict the user's and system's intents and semantic slots; (ii) We then perform conditional generation to improve the generated response's coherence. Specifically, we generate responses conditioned on the above intermediate representation (intents and slots); (iii) Finally, we generate multiple responses with the nucleus sampling strategy BIBREF5 and then apply a response filter, which contains a set of pre-defined constraints to select coherent responses. The constraints in the filter can be defined according to specific task requirements or general conversational rules.
To enrich publicly available non-collaborative task datasets, we collect a new dataset, AntiScam, where users defend themselves against attackers trying to collect personal information. As non-collaborative tasks are still relatively new to the study of dialog systems, there are not yet many meaningful datasets for evaluation, and we hope this provides a valuable example. We evaluate MISSA on the newly collected AntiScam dataset and an existing PersuasionForGood dataset. Both automatic and human evaluations suggest that MISSA outperforms multiple competitive baselines.
In summary, our contributions include: (i) We design a hierarchical intent annotation scheme and a semantic slot annotation scheme to annotate the non-collaborative dialog dataset, we also propose a carefully-designed AntiScam dataset to facilitate the research of non-collaborative dialog systems. (ii) We propose a model that can be applied to all non-collaborative tasks, outperforming other baselines on two different non-collaborative tasks. (iii) We develop an anti-scam dialog system to occupy attacker's attention and elicit their private information for social good. Furthermore, we also build a persuasion dialog system to persuade people to donate to charities. We release the code and data.
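Not MISSA's actual implementation, but a toy sketch of the generate-then-filter step described above; the candidate generator and constraint set are illustrative stand-ins:

```python
import random

def respond(generate_candidate, constraints, n_candidates=10):
    """Sample several candidate responses (e.g., via nucleus sampling) and
    return the first one that passes every pre-defined constraint."""
    for _ in range(n_candidates):
        cand = generate_candidate()
        if all(ok(cand) for ok in constraints):
            return cand
    return "Could you say that again?"  # assumed fallback

constraints = [
    lambda s: len(s.split()) > 2,           # hypothetical general rule
    lambda s: "password" not in s.lower(),  # hypothetical anti-scam rule
]
print(respond(lambda: random.choice(["Hi", "I can help you with that today"]),
              constraints))
```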
(i) We first concurrently predict the user's and system's intents and semantic slots;
|
What information is concurrently predicted in their method?
|
The user's and system's intents and semantic slots.
|
null | false
| null |
The story of Oedipus is the subject of Sophocles' tragedy Oedipus Rex, which is followed in the narrative sequence by Oedipus at Colonus and then Antigone. Together, these plays make up Sophocles' three Theban plays. Oedipus represents two enduring themes of Greek myth and drama: the flawed nature of humanity and an individual's role in the course of destiny in a harsh universe.
In the best-known version of the myth, Oedipus was born to King Laius and Queen Jocasta of Thebes. Laius wished to thwart the prophecy, so he sent a shepherd-servant to leave Oedipus to die on a mountainside. However, the shepherd took pity on the baby and passed him to another shepherd who gave Oedipus to King Polybus and Queen Merope to raise as their own. Oedipus learned from the oracle at Delphi of the prophecy that he would end up killing his father and marrying his mother but, unaware of his true parentage, believed he was fated to murder Polybus and marry Merope, so left for Thebes. On his way, he met an older man and killed him in a quarrel. Continuing on to Thebes, he found that the king of the city (Laius) had recently been killed and that the city was at the mercy of the Sphinx. Oedipus answered the monster's riddle correctly, defeating it and winning the throne of the dead king – and the hand in marriage of the king's widow, who was also (unbeknownst to him) his mother Jocasta.
Detail of ancient fresco in which Oedipus solves the riddle of the Sphinx. Egyptian Museum, 2nd c. CE
Years later, to end a plague on Thebes, Oedipus searched to find who had killed Laius and discovered that he himself was responsible. Jocasta, upon realizing that she had married her own son, hanged herself. Oedipus then seized two pins from her dress and blinded himself with them.
The legend of Oedipus has been retold in many versions and was used by Sigmund Freud to name and give mythic precedent to the Oedipus complex.
|
What would be the summary of the story of Oedipus, which serves as the basis for the psychoanalytic theory of the Oedipus complex?
|
Oedipus killed his father and married his mother, as foretold by the prophecy of the Oracle of Delphi.
|
null | false
| null |
Where do we go when we die?
|
In some religions, it is believed we go to heaven after we die. There are passages that talk about the pearly gates and being greeted by St. Peter, who will check you into heaven and allow you to pass through the gates once vetted. The ancient Egyptians believed in three afterlife ideologies: the underworld, eternal life, and the rebirth of the soul. However, for the average person today who does not practice mummification like the ancient Egyptians, it is widely believed that once the body dies, the soul leaves and goes to a place of peace. In a perfect world, we would be reunited with our loved ones, both human and pet, where we would no longer be in pain and can run free with those we care about most. Alternatively, the more literal answer to this question would be that the physical body either goes into a coffin or some type of container or it is cremated into ashes after death.
|
|
null | false
| null |
Why is the Great Salt Lake salty?
|
The Great Salt Lake is salty because water only flows into the lake, not out. This is called a “terminal lake.” Over time, the water in the lake evaporates, leaving behind the minerals that flowed in with the water. Because the amount of water in the lake remains relatively constant but the amount of minerals increases over time, the lake becomes more salty over time.
|
|
1610.08597
| false
| null |
We obtain word vectors of size 300 from the learned word embeddings. To represent a Twitter profile, we retrieve word vectors for all the words that appear in a particular profile, including the words that appear in tweets, the profile description, words extracted from emoji, cover and profile images converted to textual formats, and words extracted from YouTube video comments and descriptions for all YouTube videos shared in the user's timeline. Those word vectors are combined to compute the final feature vector for the Twitter profile. To combine the word vectors, we consider five different methods. Letting the size of a word vector be $d$, for a Twitter profile $p$ with $n$ unique words and the vector of the $i$-th word in $p$ denoted by $\mathbf{w}_i$, we compute the feature vector for the Twitter profile, $\mathbf{v}_p$, by:
We obtain word vectors of size 300 from the learned word embeddings. To represent a Twitter profile, we retrieve word vectors for all the words that appear in a particular profile, including the words that appear in tweets, the profile description, words extracted from emoji, cover and profile images converted to textual formats, and words extracted from YouTube video comments and descriptions for all YouTube videos shared in the user's timeline. Those word vectors are combined to compute the final feature vector for the Twitter profile.
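The five combination methods themselves are not shown in this excerpt; as one common choice (an assumption here, not necessarily one of the paper's five), mean pooling looks like this:

```python
import numpy as np

def profile_vector(word_vectors):
    """Combine the 300-d vectors of all unique words in a profile into a
    single feature vector; mean pooling is shown purely for illustration."""
    V = np.stack(word_vectors)  # shape (n_words, 300)
    return V.mean(axis=0)       # shape (300,)
```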
|
How in YouTube content translated into a vector format?
|
The answers are shown as follows:
* words extracted from YouTube video comments and descriptions for all YouTube videos shared in the user's timeline
|
1606.08140
| false
| null |
Let $\mathcal {E}$ denote the set of entities and $\mathcal {R}$ the set of relation types. For each triple $(h, r, t)$, where $h, t \in \mathcal {E}$ and $r \in \mathcal {R}$, the STransE model defines a score function $f_r(h, t)$ of its implausibility. Our goal is to choose $f$ such that the score $f_r(h,t)$ of a plausible triple $(h,r,t)$ is smaller than the score $f_{r^{\prime }}(h^{\prime },t^{\prime })$ of an implausible triple $(h^{\prime }, r^{\prime }, t^{\prime })$. We define the STransE score function $f_r(h, t)$ as follows:
$f_r(h, t) = \Vert \textbf {W}_{r,1}\textbf {h} + \textbf {r} - \textbf {W}_{r,2}\textbf {t}\Vert _{\ell _{1/2}}$
using either the $\ell _1$ or the $\ell _2$-norm (the choice is made using validation data; in our experiments we found that the $\ell _1$ norm gave slightly better results). To learn the vectors and matrices we minimize the following margin-based objective function: $\mathcal {L} = \sum _{\begin{array}{c}(h,r,t) \in \mathcal {G} \\ (h^{\prime },r,t^{\prime }) \in \mathcal {G}^{\prime }_{(h, r, t)}\end{array}} [\gamma + f_r(h, t) - f_r(h^{\prime }, t^{\prime })]_+$
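A NumPy sketch of the score function above; the embedding size is illustrative:

```python
import numpy as np

def stranse_score(h, r, t, W_r1, W_r2, norm=1):
    """Implausibility score f_r(h, t) = ||W_{r,1} h + r - W_{r,2} t|| under
    the l1 (norm=1) or l2 (norm=2) norm; lower means more plausible."""
    return np.linalg.norm(W_r1 @ h + r - W_r2 @ t, ord=norm)

d = 4                                        # illustrative embedding size
h, r, t = (np.random.randn(d) for _ in range(3))
W_r1, W_r2 = np.random.randn(d, d), np.random.randn(d, d)
print(stranse_score(h, r, t, W_r1, W_r2))
```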
We define the STransE score function $f_r(h, t)$ as follows:
$f_r(h, t) = \Vert \textbf {W}_{r,1}\textbf {h} + \textbf {r} - \textbf {W}_{r,2}\textbf {t}\Vert _{\ell _{1/2}}$
using either the $\ell _1$ or the $\ell _2$-norm (the choice is made using validation data; in our experiments we found that the $\ell _1$ norm gave slightly better results).
|
What scoring function does the model use to score triples?
|
The answers are shown as follows:
* $f_r(h, t) = \Vert \textbf {W}_{r,1}\textbf {h} + \textbf {r} - \textbf {W}_{r,2}\textbf {t}\Vert _{\ell _{1/2}}$
|
null | false
| null |
Classify each of the following as a "fiat money" or "cryptocurrency": Bitcoin, US Dollar, British Pound, Ethereum, Dogecoin, Euro
|
Bitcoin: Cryptocurrency
US Dollar: Fiat Money
British Pound: Fiat Money
Ethereum: Cryptocurrency
Dogecoin: Cryptocurrency
Euro: Fiat Money
|
|
null | false
| 322
|
NLG is the process of automatically generating coherent NL text from non-linguistic data BIBREF0. Recently, the field has seen an increased interest in the development of NLG systems focusing on verbalizing resources from SW data BIBREF1. The SW aims to make information available on the Web easier to process for machines and humans. However, the languages underlying this vision, i.e., RDF, SPARQL and OWL, are rather difficult to understand for non-expert users. For example, while the meaning of the OWL class expression Class: Professor SubClassOf: worksAt SOME University is obvious to every SW expert, this expression (“Every professor works at a university”) is rather difficult to fathom for lay persons.
Previous works such as SPARQL2NL BIBREF2 and SPARTIQULATION BIBREF3 have already shown the usefulness of the verbalization of SPARQL and RDF in areas such as question answering BIBREF4 and the explanation of the output of systems based on SW technologies BIBREF5. However, other SW languages, such as OWL, are rarely investigated.
In this paper, we present an open-source holistic NLG framework for the SW, named LD2NL, which facilitates the verbalization of the three key languages of the SW, i.e., RDF, OWL, and SPARQL into NL. Our framework is based on a bottom-up paradigm for verbalizing SW data. Additionally, LD2NL builds upon SPARQL2NL as it is open-source and the paradigm it follows can be reused and ported to RDF and OWL. Thus, LD2NL is capable of generating either a single sentence or a summary of a given resource, rule, or query. To validate our framework, we evaluated LD2NL using 66 experts in NLP and SW as well as 20 non-experts who were lay users or non-users of SW. The results suggest that LD2NL generates texts which can be easily understood by humans. The version of LD2NL used in this paper and all experimental results will be publicly available.
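Not LD2NL's actual algorithm, but a toy illustration of template-based triple verbalization in the bottom-up spirit described; the label map and template are hypothetical:

```python
# Toy RDF-triple verbalizer; labels and template are hypothetical.
LABELS = {"worksAt": "works at", "Professor": "professor", "University": "a university"}

def verbalize(subj, pred, obj):
    return f"{LABELS.get(subj, subj).capitalize()} {LABELS.get(pred, pred)} {LABELS.get(obj, obj)}."

print(verbalize("Professor", "worksAt", "University"))
# -> "Professor works at a university."
```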
In this paper, we present an open-source holistic NLG framework for the SW, named LD2NL, which facilitates the verbalization of the three key languages of the SW, i.e., RDF, OWL, and SPARQL into NL.
|
What facilitates the verbalization of the three key languages of the SW, i.e., RDF, OWL, and SPARQL into NL?
|
An open-source holistic NLG framework for the SW, named LD2NL.
|
null | false
| null |
Jackson used the power of the federal government to enforce the separation of the Native American tribes and whites. In May 1830, Jackson passed the Indian Removal Act through Congress. It gave the president the right to negotiate treaties to buy tribal lands in the eastern part of the United States in exchange for lands set aside for Native Americans west of the Mississippi, as well as broad discretion on how to use the federal funds allocated to the negotiations. The law was supposed to be a voluntary relocation program, but it was not implemented as one. Jackson's administration often achieved agreement to relocate through bribes, fraud and intimidation, and the leaders who signed the treaties often did not represent the entire tribe. The relocations could be a source of misery too: the Choctaw relocation was rife with corruption, theft, and mismanagement that brought great suffering to that people.
|
Why is Andrew Jackson criticized for racial policies?
|
During Andrew Jackson's presidency the Indian Removal Act was passed by Congress. Even though this legislation was designed to give the President flexibility in negotiating with Native American tribes, the President chose a harsher course of action as a matter of course. Instead of offering voluntary relocation, there was a great deal of forced tribal relocation, as evidenced by the experience of the Cherokees and the "Trail of Tears." Other examples of unfair treatment by President Jackson include reaching agreements with leaders from tribes who often did not fully represent the tribe. A number of Native American leaders were corrupt in their dealings, benefiting individually at tremendous cost to their people. Ultimately, Native Americans in the eastern United States experienced remarkable hardships as a result of Andrew Jackson's policies and actions.
|
null | false
| null |
Give me a list of environment conscious clothing brands.
|
1. Patagonia
2. Harvest & Mill
3. Armedangels
4. Honest Basics
5. LangerChen
6. Vaude
7. Organic Basics
8. Eileen Fisher
9. Everlane
10. Nudie Jeans
|
|
null | false
| null |
Why should Disney acquire smaller movie studios like Pixar, Marvel, and Lucasfilm?
|
Combined business capabilities have valuable synergies
Before the acquisition of Pixar, Disney lagged in computer-generated films. This is a growing market, and Pixar has unique capabilities like its engineers and a culture where the engineers can innovate freely and creatively. The capabilities of Pixar are difficult to imitate as it is challenging to determine what resources are used and how they can be recreated. At the same time, Pixar lacks marketing and distribution muscle. This is an area where Disney can complement Pixar. Disney acquiring studios like Pixar, Marvel, and Lucasfilm will enable Disney to leverage the characters created by these studios.
Increased market power
With Marvel’s 5000+ characters, Disney can do what it does best. In total, Marvel movies have generated over 22 billion dollars for Disney.
Decrease the cost of new product development
The characters that Disney acquired through its acquisitions of Lucasfilm, Marvel, and Pixar will lower the cost of new product development and increase speed to market. Also, Disney’s unique portfolio of brands provides significant long-term opportunities for growth and value creation.
For these reasons, Disney must continue acquiring smaller studios like Pixar and Marvel.
|
|
null | false
| 98
|
Vietnamese, like many languages in continental East Asia, is an isolating language and a branch of the Mon-Khmer language group. The most basic linguistic unit in Vietnamese is the morpheme, similar to the syllable or token in English and “hình vị” (phoneme) or “tiếng” (syllable) in Vietnamese. According to its structural rules, Vietnamese can have about 20,000 different syllables (tokens); however, only about 8,000 syllables are used in Vietnamese dictionaries. There are three methods to identify morphemes in Vietnamese text BIBREF10.
Morpheme is the smallest meaningful unit of Vietnamese.
Morpheme is the basic unit of Vietnamese.
Morpheme is the smallest meaningful unit and is not used independently as a syntactic element.
In computational linguistics, the morpheme is the basic unit of language, as Leonard Bloomfield noted for English BIBREF11. In our research on Vietnamese, we treat the morpheme as the syllable, called “tiếng” in Vietnamese (following Nguyen’s definition BIBREF12).
The next concept in linguistics is the word, which has a full grammatical and semantic function in sentences. For Vietnamese, a word is a single morpheme or a group of morphemes which are fixed and have full meaning BIBREF12. According to Nguyen, Vietnamese words can be classified into two types: (1) 1-syllable words with full meaning, and (2) n-syllable words where the group of tokens is fixed. A Vietnamese syllable is not always fully meaningful on its own; however, it can be explained through meaning and structural characteristics. For example, consider the token “kỳ” in “quốc kỳ”, where “quốc” means national and “kỳ” means flag; therefore, “quốc kỳ” means national flag.
Regarding the dictionary used for evaluating the corpus, extracting features for models, and evaluating the systems: there are many Vietnamese dictionaries, but we recommend the Vietnamese dictionary of Hoang Phe, the so-called Hoang Phe Dictionary. This dictionary has been built by a group of linguists at the Linguistic Institute, Vietnam. It was first published in 1988, and reprinted and extended in 2000, 2005 and 2010. The dictionary currently has 45,757 word items with 15,901 Sino-Vietnamese word items (accounting for 34.75%) BIBREF13.
Regarding the dictionary used for evaluating the corpus, extracting features for models, and evaluating the systems: there are many Vietnamese dictionaries, but we recommend the Vietnamese dictionary of Hoang Phe, the so-called Hoang Phe Dictionary. This dictionary has been built by a group of linguists at the Linguistic Institute, Vietnam. It was first published in 1988, and reprinted and extended in 2000, 2005 and 2010. The dictionary currently has 45,757 word items with 15,901 Sino-Vietnamese word items (accounting for 34.75%) BIBREF13.
|
What material is suggested for the evaluation of a Vietnamese word corpus?
|
They recommend the Vietnamese dictionary of Hoang Phe, the so-called Hoang Phe Dictionary.
|
2003.08529
| false
| null |
In the first task, we use the SST-2 (Stanford Sentiment Treebank, version 2) dataset BIBREF25 to conduct sentiment analysis experiments. SST-2 is a sentence binary classification dataset with train/dev/test splits provided and two types of sentence labels, i.e., positive and negative.
The second task involves two essential problems in SLU, which are intent classification (IC) and slot labeling (SL). In IC, the model needs to detect the intention that a text input (i.e., an utterance) conveys. For example, for an input of I want to book a flight to Seattle, the intention is to book a flight ticket, hence the intent class is bookFlight. In SL, the model needs to extract the semantic entities that are related to the intent. From the same example, Seattle is a slot value related to booking the flight, i.e., the destination. Here we experiment with the Snips dataset BIBREF26, which is widely used in SLU research. This dataset contains spoken utterances (as text) classified into one of 7 intents.
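A hypothetical example of how such an utterance is annotated for the two tasks, with BIO tags for slot labeling (the slot schema here is illustrative, not Snips' exact one):

```python
# Hypothetical IC/SL annotation for the example utterance above.
example = {
    "utterance": ["I", "want", "to", "book", "a", "flight", "to", "Seattle"],
    "intent": "bookFlight",                                          # IC label
    "slots": ["O", "O", "O", "O", "O", "O", "O", "B-destination"],   # SL labels
}
```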
In the first task, we use the SST-2 (Stanford Sentiment Treebank, version 2) dataset BIBREF25 to conduct sentiment analysis experiments.
Here we experiment with the Snips dataset BIBREF26, which is widely used in SLU research.
|
Which real-world datasets did they use?
|
The answers are shown as follows:
* SST-2
* Snips
|
null | false
| null |
What city in Mexico is often referred to as San Pancho?
|
San Francisco
|
|
null | false
| null |
Give me a list of fantastic movies
|
Ocean's Eleven, Interstellar, The Imitation Game, and The Dark Knight
|
|
null | false
| null |
How do I get entry to Istanbul from the US?
|
To travel to Istanbul, Turkey, from the US, you will need an e-visa, even as a US citizen, since Turkey is not a Schengen country. To obtain the e-visa, visit https://www.evisa.gov.tr/en/ and fill out your personal information, including the expected entry date. You will then pay $50 per visa on the website. At the end of the process, you will get a download link for the e-visa PDF file, so you can print it out.
|
|
null | false
| null |
What should I eat for dinner?
|
It depends on your dietary preferences. If you are vegetarian, I suggest a meal with vegetables, such as grilled zucchini, onions, and bell peppers. You can add protein to this meal by sautéing tofu or tempeh in teriyaki sauce, or enjoy some red lentils on the side. To add carbohydrates to the dish, you can eat pasta, rice, or potatoes. If you are a carnivore, perhaps you might enjoy a steak or chicken dinner with a side of asparagus and a baked potato that has a knob of butter on top! Other common dinner options include pasta, burgers, quesadillas, poke bowls, sushi, Mediterranean kebabs, falafel sandwiches, soup, etc.
|
|
null | false
| null |
A three-fund portfolio is based on the fundamental asset classes, stocks and bonds. It is assumed that cash is not counted within the investment portfolio, so it is not included. On the other hand, it is assumed that every investor should hold both domestic and international stocks. The task, then, is to take these three basic non-cash assets — domestic stocks, international stocks, and bonds — decide how much of each to hold (your asset allocation); choose where to hold each of these asset classes, and finally choose a mutual fund to use for each asset class.
|
Suggest an allocation to implement a three-fund portfolio strategy as described in the text
|
From Vanguard's list of "core funds," the funds that are best for a three-fund portfolio are:
Vanguard Total Stock Market Index Fund (VTSAX)
Vanguard Total International Stock Index Fund (VTIAX)
Vanguard Total Bond Market Fund (VBTLX)
So, a "three-fund portfolio" might consist of 42% Total Stock Market Index, 18% Total International Stock Index, and 40% Total Bond Market fund.
|