Dataset schema (column: type, value summary):
- paper_id: string, length 10
- yes_no: bool, 2 classes
- paper_index: int64, range 0–519
- evidence: string, length 0–37.7k
- question: string, length 4–11.7k
- answer: string, length 1–26k
Each example below lists its field values in this order.
null
false
null
Shirley C. Guthrie Jr. (9 October 1927 – 23 October 2004) was a minister of the Presbyterian Church (USA) and J.B. Green Professor of Systematic Theology at Columbia Theological Seminary for nearly 40 years. He was well known for his book, Christian Doctrine, which was originally written for an Adult Sunday School Book in the old PCUS Covenant life curriculum.
Were the months that Shirley C. Guthrie Jr. was born and died in the same month or different months?
The same. Shirley C. Guthrie Jr. was born in October 1927 and died in October 2004.
null
false
29
Though Bengali is the seventh most spoken language in terms of number of native speakers BIBREF23, there is no standard corpus of questions available BIBREF0. We have collected total 3500 questions from the Internet and other sources such as books of general knowledge questions, history etc. The corpus contains the questions and the classes each question belongs to. The set of question categories is known as question taxonomy BIBREF0. We have used two layer taxonomy which was proposed by Xin Li, Dan Roth BIBREF24. This two layer taxonomy is made up of two classes which are Coarse Class and Finer Class. There are six coarse classes such as Numeric, Location, Entity, Description, Human and Abbreviation and fifty finer classes such as city, state, mountain, distance, count, definition, group, expression, substance, creative, vehicle etc as shown in the Table I BIBREF0. A coarse-grained description of a system denotes large components while a fine-grained description denotes smaller sub-components of which the larger ones are composed of. We have used two layer taxonomy which was proposed by Xin Li, Dan Roth [17]. This two layer taxonomy is made up of two classes which are Coarse Class and Finer Class.
How to categorize the questions in this corpus?
The authors used the two layer taxonomy which is made up of two classes, which are Coarse Class and Finer Class.
null
false
null
Name some of Sweden's most famous companies
Here are some examples of Sweden's most famous companies: Volvo, Hennes & Mauritz, Electrolux, IKEA, Spotify, and Klarna.
null
false
null
What is SAML?
SAML or Security Assertion Markup Language is an open standard and an XML-based framework for authentication and authorization between an identity provider, service provider, and a principal. SAML is used to enable web-browser based single sign-on (SSO).
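As a rough illustration of the SSO exchange described in the answer above, the sketch below shows how a service provider might read the authenticated subject (the NameID) out of a SAML 2.0 assertion using only Python's standard library. The assertion XML is a toy example, and a real deployment would also have to verify the signature, audience, and validity window.

```python
# Minimal sketch (not production code): pulling the authenticated subject out of a
# SAML 2.0 assertion with the standard library. The XML below is a toy assertion.
import xml.etree.ElementTree as ET

SAML_NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

toy_assertion = """
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Subject>
    <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">
      alice@example.com
    </saml:NameID>
  </saml:Subject>
</saml:Assertion>
"""

def subject_from_assertion(xml_text: str) -> str:
    root = ET.fromstring(xml_text)
    name_id = root.find(".//saml:NameID", SAML_NS)
    if name_id is None:
        raise ValueError("assertion has no NameID")
    return name_id.text.strip()

print(subject_from_assertion(toy_assertion))  # -> alice@example.com
```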
null
false
null
Which state in the United States is the best?
There are 50 states, a federal district (District of Columbia), and five unincorporated territories that make up The United States of America. It is vast in geography, political ideology, climate, and leisure interests. The best state is the one in which your own personal preferences are met. Some prefer mountains, which Colorado or Washington offer. Others may desire access to the ocean so Florida may top the list. Still others may desire all four seasons and Minnesota may be best suited. The best thing about the United States is the freedom to decide which state is best.
null
false
null
Justice League Dark is a 2017 American adult animated science fantasy superhero film produced by Warner Bros. Animation and distributed by Warner Home Video. Featuring the DC Comics team of the same name created by Peter Milligan and Mikel Janín, the film is the 27th of the DC Universe Animated Original Movies and the eighth of the DC Animated Movie Universe. Unlike most installments in the DC Animated Universe Movies series, Justice League Dark features an original premise. The film stars Matt Ryan as John Constantine and Jason O'Mara as Batman, reprising their respective roles from previous media. It was released on Digital HD on January 24, 2017, and on DVD and Blu-ray on February 7. A sequel, Justice League Dark: Apokolips War, was released in May 2020.
Given a reference text about Justice League Dark, tell me who the main actors are and what roles they played.
The main actors in Justice League Dark were Matt Ryan and Jason O'Mara. In the film, Matt Ryan is John Constantine and Jason O'Mara is Batman.
null
false
null
What is proteomics?
Proteomics is the large-scale study of proteins. Proteins are vital parts of living organisms, with many functions such as the formation of structural fibers of muscle tissue, enzymatic digestion of food, or synthesis and replication of DNA. In addition, other kinds of proteins include antibodies that protect an organism from infection, and hormones that send important signals throughout the body. The proteome is the entire set of proteins produced or modified by an organism or system. Proteomics enables the identification of ever-increasing numbers of proteins. This varies with time and distinct requirements, or stresses, that a cell or organism undergoes. Proteomics is an interdisciplinary domain that has benefited greatly from the genetic information of various genome projects, including the Human Genome Project. It covers the exploration of proteomes from the overall level of protein composition, structure, and activity, and is an important component of functional genomics. Proteomics generally denotes the large-scale experimental analysis of proteins and proteomes, but often refers specifically to protein purification and mass spectrometry. Indeed, mass spectrometry is the most powerful method for analysis of proteomes, both in large samples composed of millions of cells and in single cells.
null
false
null
Give me a list of date night ideas that I've never done.
For your next date night, you could: - Take a pottery class. - Find a paint and sip. - Go axe throwing. - Do a walking tour of the best taco joints in your city. - Go to your local grocery store, pick snacks out for each other, and put on a movie you've never seen.
null
false
null
Classify the series according to the OTT platform they are on. The Witcher, Jack Ryan, The Rings of Power, Sweet Tooth
Netflix - The Witcher, Sweet Tooth; Amazon Prime - Jack Ryan, The Rings of Power
null
false
388
The current coverage of the political landscape in the press and in social media has led to an unprecedented situation. Like never before, a statement in an interview, a press release, a blog note, or a tweet can spread almost instantaneously and reach the public in no time. This proliferation speed has left little time for double-checking claims against the facts, which has proven critical in politics, e.g., during the 2016 presidential campaign in the USA, which was arguably impacted by fake news in social media and by false claims. Investigative journalists and volunteers have been working hard trying to get to the root of a claim and to present solid evidence in favor or against it. Manual fact-checking has proven very time-consuming, and thus automatic methods have been proposed as a way to speed-up the process. For instance, there has been work on checking the factuality/credibility of a claim, of a news article, or of an information source BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7. However, less attention has been paid to other steps of the fact-checking pipeline, which is shown in Figure FIGREF1. The process starts when a document is made public. First, an intrinsic analysis is carried out in which check-worthy text fragments are identified. Then, other documents that might support or rebut a claim in the document are retrieved from various sources. Finally, by comparing a claim against the retrieved evidence, a system can determine whether the claim is likely true or likely false. For instance, BIBREF8 do this on the basis of a knowledge graph derived from Wikipedia. The outcome could then be presented to a human expert for final judgment. In this paper, we focus on the first step: predicting check-worthiness of claims. Our contributions can be summarized as follows: New dataset: We build a new dataset of manually-annotated claims, extracted from the 2016 US presidential and vice-presidential debates, which we gathered from nine reputable sources such as CNN, NPR, and PolitiFact, and which we release to the research community. Modeling the context: We develop a novel approach for automatically predicting which claims should be prioritized for fact-checking, based on a rich input representation. In particular, we model not only the textual content, but also the context: how the target claim relates to the current segment, to neighboring segments and sentences, and to the debate as a whole, and also how the opponents and the public react to it. State-of-the-art results: We achieve state-of-the-art results, outperforming a strong rivaling system by a margin, while also demonstrating that this improvement is due primarily to our modeling of the context. We model the problem as a ranking task, and we train both Support Vector Machines (SVM) and Feed-forward Neural Networks (FNN) obtaining state-of-the-art results. We also analyze the relevance of the specific feature groups and we show that modeling the context yields a significant boost in performance. Finally, we also analyze whether we can learn to predict which facts are check-worthy with respect to each of the individual media sources, thus capturing their biases. It is worth noting that while trained on political debates, many features of our model can be potentially applied to other kinds of information sources, e.g., interviews and news. The rest of the paper is organized as follows: Section SECREF2 discusses related work. 
Section SECREF3 describes the process of gathering and annotating our political debates dataset. Section SECREF4 presents our supervised approach to predicting fact-checking worthiness, including the explanation of the model and the information sources we use. Section SECREF5 presents the evaluation setup and discusses the results. Section SECREF6 provides further analysis. Finally, Section SECREF7 presents the conclusions and outlines some possible directions for future research. We develop a novel approach for automatically predicting which claims should be prioritized for factchecking, based on a rich input representation.
What did the novel approach for automatically predicting based on?
It is based on a rich input representation.
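The evidence passage above frames check-worthiness prediction as a ranking task trained with SVMs or feed-forward networks over content and context features. Below is a minimal, hedged sketch of that idea using scikit-learn; the toy sentences, labels, and the single position-in-debate "context" feature are stand-ins for the rich representation the paper describes, not the authors' actual features.

```python
# Hedged sketch, not the paper's system: rank debate sentences by check-worthiness using
# an SVM's decision scores. Toy data; one "context" feature (position in the debate).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from scipy.sparse import hstack, csr_matrix

sentences = [
    "Our trade deficit with China is 500 billion dollars a year.",
    "Thank you all for being here tonight.",
    "Crime has doubled in the last four years.",
    "Let's move on to the next question.",
]
labels = np.array([1, 0, 1, 0])                      # 1 = check-worthy claim
positions = np.array([[0.1], [0.2], [0.6], [0.9]])   # toy context feature

vec = TfidfVectorizer()
X = hstack([vec.fit_transform(sentences), csr_matrix(positions)])
clf = LinearSVC(C=1.0).fit(X, labels)

# Rank by decision score instead of thresholding: higher score = more check-worthy.
scores = clf.decision_function(X)
for idx in np.argsort(-scores):
    print(f"{scores[idx]:+.2f}  {sentences[idx]}")
```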
null
false
null
The Panama Canal is an artificial 82 km (51 mi) waterway in Panama that connects the Atlantic Ocean with the Pacific Ocean and divides North and South America. The canal cuts across the Isthmus of Panama and is a conduit for maritime trade. One of the largest and most difficult engineering projects ever undertaken, the Panama Canal shortcut greatly reduces the time for ships to travel between the Atlantic and Pacific oceans, enabling them to avoid the lengthy, hazardous Cape Horn route around the southernmost tip of South America via the Drake Passage or Strait of Magellan. Colombia, France, and later the United States controlled the territory surrounding the canal during construction. France began work on the canal in 1881, but stopped because of lack of investors' confidence due to engineering problems and a high worker mortality rate. The United States took over the project on May 4, 1904, and opened the canal on August 15, 1914. The US continued to control the canal and surrounding Panama Canal Zone until the 1977 Torrijos–Carter Treaties provided for its handover to Panama. After a period of joint American–Panamanian control, the canal was taken over by the Panamanian government in 1999. It is now managed and operated by the government-owned Panama Canal Authority. Canal locks at each end lift ships up to Gatun Lake, an artificial lake 26 meters (85 ft) above sea level, created to reduce the amount of excavation work required for the canal, and then lower the ships at the other end. The original locks are 33.5 meters (110 ft) wide. A third, wider lane of locks was constructed between September 2007 and May 2016. The expanded waterway began commercial operation on June 26, 2016. The new locks allow transit of larger, New Panamax ships. Annual traffic has risen from about 1,000 ships in 1914, when the canal opened, to 14,702 vessels in 2008, for a total of 333.7 million Panama Canal/Universal Measurement System (PC/UMS) tons. By 2012, more than 815,000 vessels had passed through the canal. In 2017 it took ships an average of 11.38 hours to pass between the canal's two locks. The American Society of Civil Engineers has ranked the Panama Canal one of the seven wonders of the modern world.
From this passage tell me why Panama Canal is called one of the seven wonders of the modern world
The American Society of Civil Engineers has ranked the Panama Canal one of the seven wonders of the modern world not only because it was one of the largest and most difficult engineering projects ever undertaken, but also because of its significant impact in maritime trade across the Atlantic and Pacific oceans near the Americas. It is a shortcut which greatly reduces the time for ships to travel between the Atlantic and Pacific oceans, enabling them to avoid the lengthy, hazardous Cape Horn route around the southernmost tip of South America via the Drake Passage or Strait of Magellan.
null
false
350
We evaluate our model on RACE dataset BIBREF6 , which consists of two subsets: RACE-M and RACE-H. RACE-M comes from middle school examinations while RACE-H comes from high school examinations. RACE is the combination of the two. We compare our model with the following baselines: MRU(Multi-range Reasoning) BIBREF12 , DFN(Dynamic Fusion Networks) BIBREF11 , HCM(Hierarchical Co-Matching) BIBREF8 , OFT(OpenAI Finetuned Transformer LM) BIBREF13 , RSM(Reading Strategies Model) BIBREF14 . We also compare our model with the BERT baseline and implement the method described in the original paper BIBREF7 , which uses the final hidden vector corresponding to the first input token ([CLS]) as the aggregate representation followed by a classification layer and finally a standard classification loss is computed. Results are shown in Table 2 . We can see that the performance of BERT $_{base}$ is very close to the previous state-of-the-art and BERT $_{large}$ even outperforms it for 3.7%. But experimental result shows that our model is more powerful and we further improve the result for 2.2% computed to BERT $_{base}$ and 2.2% computed to BERT $_{large}$ . We evaluate our models on Large-scale ReAding Comprehension Dataset From Examinations (RACE) dataset (Lai et al., 2017), which consists of two subsets: RACE-M and RACE-H corresponding to middle school and high school difficulty level. RACE contains 27,933 passages and 97,687 questions in total, which is recognized as one of the largest and most difficult datasets in multi-choice reading comprehension. Besides, we also evaluate our model on the ROCStories (Spring 2016) dataset which collects 50k fivesentence commonsense stories. One correct ending sentence needs to be selected from two options when given four sentences in ROCStories dataset.
What dataset do they use for evaluation?
Large-scale ReAding Comprehension Dataset From Examinations (RACE) dataset and ROCStories (Spring 2016) dataset.
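The evidence above describes the BERT baseline as taking the final hidden vector of the first input token ([CLS]) and feeding it to a classification layer, one score per answer option. Below is a hedged PyTorch sketch of such a multiple-choice head; ToyEncoder is a stand-in for BERT, and all sizes are illustrative.

```python
# Hedged sketch of a BERT-style multiple-choice baseline: encode each (passage, question,
# option) sequence, take the first-token ([CLS]) hidden state, score it with a linear layer,
# and softmax over the options. `ToyEncoder` stands in for BERT.
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    """Stand-in for BERT: embeds token ids and returns per-token hidden states."""
    def __init__(self, vocab_size=1000, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, hidden)
        self.proj = nn.Linear(hidden, hidden)

    def forward(self, token_ids):                          # (batch, seq_len)
        return torch.tanh(self.proj(self.emb(token_ids)))  # (batch, seq_len, hidden)

class MultipleChoiceHead(nn.Module):
    def __init__(self, encoder, hidden=64):
        super().__init__()
        self.encoder = encoder
        self.classifier = nn.Linear(hidden, 1)              # one score per option

    def forward(self, token_ids):                           # (batch, n_options, seq_len)
        b, n, l = token_ids.shape
        hidden = self.encoder(token_ids.view(b * n, l))     # (b*n, seq_len, hidden)
        cls_vec = hidden[:, 0, :]                            # first-token ([CLS]) vector
        return self.classifier(cls_vec).view(b, n)           # (batch, n_options) logits

model = MultipleChoiceHead(ToyEncoder())
batch = torch.randint(0, 1000, (2, 4, 32))                   # 2 questions, 4 options, 32 tokens
logits = model(batch)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([1, 3]))   # gold option indices
print(logits.shape, loss.item())
```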
null
false
160
Learning the distributed representation for long spans of text from its constituents has been a key step for various natural language processing (NLP) tasks, such as text classification BIBREF0 , BIBREF1 , semantic matching BIBREF2 , BIBREF3 , and machine translation BIBREF4 . Existing deep learning approaches take a compositional function with different forms to compose word vectors recursively until obtaining a sentential representation. Typically, these compositional functions involve recurrent neural networks BIBREF5 , BIBREF6 , convolutional neural networks BIBREF7 , BIBREF8 , and tree-structured neural networks BIBREF9 , BIBREF10 . Among these methods, tree-structured neural networks (Tree-NNs) show theirs superior performance in many NLP tasks BIBREF11 , BIBREF12 . Following the syntactic tree structure, Tree-NNs assign a fixed-length vector to each word at the leaves of the tree, and combine word and phrase pairs recursively to create intermediate node vectors, eventually obtaining one final vector to represent the whole sentence. However, these models have a major limitation in their inability to fully capture the richness of compositionality BIBREF13 . The same parameters are used for all kinds of semantic compositions, even though the compositions have different characteristics in nature. For example, the composition of the adjective and the noun differs significantly from the composition of the verb and the noun. Moreover, many semantic phenomena, such as semantic idiomaticity or transparency, call for more powerful compositional mechanisms BIBREF14 . Therefore, Tree-NNs suffer from the underfitting problem. To alleviate this problem, some researchers propose to use multiple compositional functions, which are arranged beforehand according to some partition criterion BIBREF11 , BIBREF13 , BIBREF15 . Intuitively, using different parameters for different types of compositions has the potential to greatly reduce underfitting. BIBREF13 [ BIBREF13 ] defined different compositional functions in terms of syntactic categories, and a suitable compositional function is selected based on the syntactic categories. BIBREF15 [ BIBREF15 ] introduced multiple compositional functions and during compositional phase, a proper one is selected based on the input information. Although these models accomplished their mission to a certain extent, they still suffer from the following three challenges. First, the predefined compositional functions cannot cover all the compositional rules; Second, they require more learnable parameters, suffering from the problem of overfitting; Third, it is difficult to determine a universal criterion for semantic composition based solely on syntactic categories. In this paper, we propose dynamic compositional neural networks over tree structure, in which a meta network is used to generate the context-specific parameters of a dynamic compositional network. Specifically, we construct our models based on two kinds of tree-structured neural networks: recursive neural network (Tree-RecNN) BIBREF11 and tree-structure long short-term memory neural network (Tree-LSTM) BIBREF9 . Our work is inspired by recent work on dynamic parameter prediction BIBREF16 , BIBREF17 , BIBREF18 . The meta network is used to extract the shared meta-knowledge across different compositional rules and to dynamically generate the context-specific compositional function. Thus, the compositional function of our models varies with positions, contexts and samples. 
The dynamic compositional network then applies those context-specific parameters to the current input information. Both meta and dynamic networks are differentiable such that the overall networks can be trained in an end-to-end fashion. Additional, to reduce the complexity of the whole networks, we define the dynamic weight matrix in a manner simulating low-rank matrix decomposition. We evaluate our models on two typical tasks: text classification and text semantic matching. The results show that our models are more expressive due to their learning to learn nature, yet without increasing the number of model's parameters. Moreover, we find certain composition operations can be learned implicitly by meta TreeNN, such as the composition of noun phrases and verb phrases. The contributions of the paper can be summed up as follows. The meta network is used to extract the shared meta-knowledge across different compositional rules and to dynamically generate the context-specific compositional function.
What is the meta network used for?
It is used to extract the shared meta-knowledge across different compositional rules and to dynamically generate the context-specific compositional function.
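To make the mechanism in the evidence more concrete, here is a hedged sketch of a meta network that generates context-specific composition parameters in a low-rank form, as the passage describes. It illustrates the idea only; the authors' actual architecture and dimensions may differ.

```python
# Hedged sketch of dynamic (context-generated) composition with a low-rank weight matrix.
# Illustration of the idea in the passage, not the authors' implementation.
import torch
import torch.nn as nn

class DynamicComposer(nn.Module):
    def __init__(self, dim=50, rank=8):
        super().__init__()
        self.dim, self.rank = dim, rank
        # Meta network: context vector -> low-rank factors U (dim x rank) and V (rank x 2*dim)
        self.meta = nn.Linear(dim, dim * rank + rank * 2 * dim)
        self.bias = nn.Parameter(torch.zeros(dim))

    def forward(self, left, right, context):
        # left, right, context: (dim,) vectors for the two children and their context
        params = self.meta(context)
        U = params[: self.dim * self.rank].view(self.dim, self.rank)
        V = params[self.dim * self.rank :].view(self.rank, 2 * self.dim)
        W = U @ V                                   # context-specific (dim x 2*dim) matrix
        children = torch.cat([left, right])          # (2*dim,)
        return torch.tanh(W @ children + self.bias)  # composed parent vector

composer = DynamicComposer()
d = 50
parent = composer(torch.randn(d), torch.randn(d), torch.randn(d))
print(parent.shape)   # torch.Size([50])
```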
null
false
null
Where is the baseball Hall of Fame?
The National Baseball Hall of Fame is a history museum and hall of fame in Cooperstown, New York.
null
false
null
What is the “yacht rock” genre of music?
Yacht rock refers to jazzy rock n’roll music from the late 1970s and early 1980s, although the term wasn’t coined until years later.
null
false
69
As discussed in Section "Introduction" , we find three observations that show the usefulness of linked entities for abstractive summarization. First, summaries are mainly composed of linked entities extracted from the original text. In the example, it can be seen that the summary contains four words that refer to different entities. In fact, all noun phrases in the summary mention at least one linked entity. In our experimental data, we extract linked entities from the original text and compare them to the noun phrases found in the summary. We report that $77.1\%$ and $75.1\%$ of the noun phrases on the Gigaword and CNN datasets, respectively, contain at least one linked entity, which confirms our observation. Second, linked entities can be used to represent the topic of the summary, defined as a multinomial distribution over entities, as graphically shown in the example, where the probabilities refer to the relevance of the entities. Entities have been previously used to represent topics BIBREF7 , as they can be utilized as a controlled vocabulary of the main topics in a document BIBREF8 . In the example, we see that the entity “Jae Seo” is the most relevant because it is the subject of the summary, while the entity “South Korean” is less relevant because it is less important when constructing the summary. Third, we can make use of the entity commonsense that can be learned as a continuous vector representation from a separate larger corpus BIBREF9 , BIBREF10 . In the example, if we know that the entities “Los Angeles Dodgers” and “New York Mets” are American baseball teams and “Jae Seo” is a baseball player associated with the teams, then we can use this information to generate more coherent summaries. We find that $76.0\%$ of the extracted linked entities are covered by the pre-trained vectors in our experimental data, proving our third observation. As discussed in Section 1, we find three observations that show the usefulness of linked entities for abstractive summarization.
How many observations show the usefulness of linked entities for abstractive summarization?
Three observations.
null
false
null
Tell me whether these Game of Thrones characters are dead or alive at the end of Season 8: Jon Snow, Cersei Lannister, Daenerys Targaryen, Bran Stark, Khal Drogo, Hodor, Ygritte, Oberyn Martell, Arya Stark, Olenna Tyrell, Tyrion Lannister
At the end of season 8 of TV show Game of Thrones, the only characters that are alive from the list you've provided are: Jon Snow, Bran Stark, Arya Stark, Tyrion Lannister. The remaining characters did not make it to the end. That is, Cersei Lannister, Daenerys Targaryen, Khal Drogo, Hodor, Ygritte, Oberyn Martell, and Olenna Tyrell are all dead by the time season 8 finale ends.
null
false
null
What is an index fund?
An index fund is a basket of individual stocks that is designed to match the performance of a specific group of stocks. Index funds allow investors to avoid investing in an individual stock and distribute the investment risk across a broader set of companies.
null
false
null
What is the difference between a guitar and bass?
The main difference between a guitar and a bass is the number of strings that they have. A guitar typically has six strings, while a bass typically has four. Additionally, basses are typically larger than guitars, with longer necks and thicker strings tuned an octave lower.
null
false
null
What city has the northern-most post office in Wisconsin?
Cornucopia, a small town located on the south shore of Lake Superior, is home to the northern-most post office in Wisconsin.
null
false
null
What are some of the most accessible jazz albums for someone new to jazz?
Jazz music encompasses a wide range of styles, genres and periods, so it can be difficult and confusing for the non-jazz listener to find a satisfying entry point. There have been many “greatest of all time” jazz album lists developed by jazz publications and critics over the years. The following is a brief sampling of some of the top-rated albums from some of these lists. - Miles Davis: Kind of Blue (1959) Miles Davis’ masterpiece. One of the most influential albums of all time, because of its widespread influence on jazz, rock and classical genres. It is the greatest selling jazz album of all time, and usually the number one pick on critics’ albums rankings. It features a cool, atmospheric modal approach that is immediately recognizable, tuneful and approachable for just about any listener. - John Coltrane: A Love Supreme (1964). Widely regarded as John Coltrane’s masterpiece, it is an intense modal jazz through-composed spiritual meditation in four parts. Every critic interprets John Coltrane’s intent and meaning of A Love Supreme differently. Whether you think he’s searching for some form of soulful purity, or cutting out the middle man to speak directly to God, your jazz-listening life won’t be the same after hearing this searing testament. - Sonny Rollins: Saxophone Collosus (1957). Sonny Rollins’ breakthrough and best-known album. Rollins emerges here as a true jazz giant belonging in the same pantheon as Coltrane, Ben Webster, Coleman Hawkins and Lester Young. His strong and highly rhythmic thematic improvisational voice is on brilliant display in ‘St. Thomas’ and ‘Blue 7’. Not to be missed is his powerful rendition of the jazz standard ‘You Don’t Know Love’ and his lovely solo during Kurt Weil’s ‘Moritat’ (also known as ‘Mack the Knife’). - Dave Brubeck: Time Out (1959) Featuring uncommon time signatures and mix of Cool and West Coast jazz, Time Out was the first jazz album to sell over a million copies and produced a single, ‘Take Five’ that became the first jazz single to sell over a million copies. Brubeck described his saxophone player Paul Desmond’s sound as “like a dry martini”. You’ll want to ice up a glass and pour yourself another with each listen to this enduring jazz mainstay. - John Coltrane and Johnny Hartman: John Coltrane and Johnny Hartman (1963) is a reluctant jazz ballad summit of giants. Johnny Hartman initially resisted the idea of recording an album with John Coltrane, believing they wouldn’t complement one another musically. Following an after-hours session with Coltrane and pianist McCoy Tyner, they agreed to work together and went in to the studio to record their self-title album that contained the definitive versions of ‘Lush Life’, ‘They Say It’s Wonderful’, and ‘My One and Only Love.’ This is an absolute essential album. - Billie Holiday: Lady in Satin (1958) Holiday’s album of Great American Songbook standards - lushly arranged around her abused, fragile, but powerfully rhythmic voice - is a testament to her professional powers of persuasion and artistic willpower in what turned out to be the last year of a 44-yr life. She told her producers she wanted “a pretty album, something delicate.” They paid her $150 a side, in advance. Well, she gave them their money’s worth…and then some. Essential.
null
false
null
At what age are children considered adults?
In most parts of the world, children are generally considered adults at 18.
null
false
null
Who was Bran the Builder?
Brandon Stark, known as Bran(don) the Builder, is purported to have lived during the Age of Heroes and was the founder of House Stark. Legend claims he built Winterfell and the magical Wall.
null
false
null
You are getting ready for your soccer game and look into your soccer bag. Which of the following items are part of your soccer equipment: cleats, boots, tights, jersey, swimsuit, milk, carrots, water, shin guards, socks, folder, racket, glove
soccer equipment: cleats, boots, jersey, water, shin guards, socks, ball
null
false
12
Bidirectional Encoder Representations from Transformers (BERT) is a novel Transformer BIBREF0 model, which recently achieved state-of-the-art performance in several language understanding tasks, such as question answering, natural language inference, semantic similarity, sentiment analysis, and others BIBREF1. While well-suited to dealing with relatively short sequences, Transformers suffer from a major issue that hinders their applicability in classification of long sequences, i.e. they are able to consume only a limited context of symbols as their input BIBREF2. There are several natural language (NLP) processing tasks that involve such long sequences. Of particular interest are topic identification of spoken conversations BIBREF3, BIBREF4, BIBREF5 and call center customer satisfaction prediction BIBREF6, BIBREF7, BIBREF8, BIBREF9. Call center conversations, while usually quite short and to the point, often involve agents trying to solve very complex issues that the customers experience, resulting in some calls taking even an hour or more. For speech analytics purposes, these calls are typically transcribed using an automatic speech recognition (ASR) system, and processed in textual representations further down the NLP pipeline. These transcripts sometimes exceed the length of 5000 words. Furthermore, temporal information might play an important role in tasks like CSAT. For example, a customer may be angry at the beginning of the call, but after her issue is resolved, she would be very satisfied with the way it was handled. Therefore, simple bag of words models, or any model that does not include temporal dependencies between the inputs, may not be well-suited to handle this category of tasks. This motivates us to employ model such as BERT in this task. In this paper, we propose a method that builds upon BERT's architecture. We split the input text sequence into shorter segments in order to obtain a representation for each of them using BERT. Then, we use either a recurrent LSTM BIBREF10 network, or another Transformer, to perform the actual classification. We call these techniques Recurrence over BERT (RoBERT) and Transformer over BERT (ToBERT). Given that these models introduce a hierarchy of representations (segment-wise and document-wise), we refer to them as Hierarchical Transformers. To the best of our knowledge, no attempt has been done before to use the Transformer architecture for classification of such long sequences. Our novel contributions are: Two extensions - RoBERT and ToBERT - to the BERT model, which enable its application in classification of long texts by performing segmentation and using another layer on top of the segment representations. State-of-the-art results on the Fisher topic classification task. Significant improvement on the CSAT prediction task over the MS-CNN model. In this paper, we propose a method that builds upon BERT’s architecture. We split the input text sequence into shorter segments in order to obtain a representation for each of them using BERT. Then, we use either a recurrent LSTM [11] network, or another Transformer, to perform the actual classification.
How does the team propose to carry out the study?
The authors propose a method that builds upon BERT’s architecture. They split the input text sequence into shorter segments in order to obtain a representation for each of them using BERT. Then, they use either a recurrent LSTM [11] network, or another Transformer, to perform the actual classification.
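The passage describes Recurrence over BERT (RoBERT): split a long transcript into fixed-length segments, obtain one vector per segment from BERT, then run an LSTM over the segment vectors to classify the document. Below is a hedged PyTorch sketch of that pipeline with a toy segment encoder standing in for BERT; segment length and dimensions are illustrative.

```python
# Hedged sketch of "Recurrence over BERT": segment a long token sequence, embed each segment
# with a (toy) encoder in place of BERT, then classify the document with an LSTM over the
# segment embeddings.
import torch
import torch.nn as nn

class ToySegmentEncoder(nn.Module):
    """Stand-in for BERT: mean-pools token embeddings into one vector per segment."""
    def __init__(self, vocab=1000, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, hidden)

    def forward(self, seg_tokens):                  # (n_segments, seg_len)
        return self.emb(seg_tokens).mean(dim=1)     # (n_segments, hidden)

class RecurrenceOverSegments(nn.Module):
    def __init__(self, hidden=64, n_classes=2):
        super().__init__()
        self.segment_encoder = ToySegmentEncoder(hidden=hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, tokens, seg_len=128):
        # Pad so the document splits evenly into fixed-length segments.
        pad = (-len(tokens)) % seg_len
        tokens = torch.cat([tokens, torch.zeros(pad, dtype=torch.long)])
        segments = tokens.view(-1, seg_len)                # (n_segments, seg_len)
        seg_vecs = self.segment_encoder(segments)           # (n_segments, hidden)
        _, (h_last, _) = self.lstm(seg_vecs.unsqueeze(0))   # batch of one document
        return self.out(h_last[-1])                          # (1, n_classes) logits

model = RecurrenceOverSegments()
long_transcript = torch.randint(1, 1000, (5000,))            # ~5000-token call transcript
print(model(long_transcript).shape)                          # torch.Size([1, 2])
```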
null
false
null
The Osborne effect is a social phenomenon of customers canceling or deferring orders for the current, soon-to-be-obsolete product as an unexpected drawback of a company's announcing a future product prematurely. It is an example of cannibalization. The term alludes to the Osborne Computer Corporation, whose second product did not become available until more than a year after it was announced. The company's subsequent bankruptcy was widely blamed on reduced sales after the announcement. The Osborne Effect states that prematurely discussing future, unavailable products damages sales of existing products. The name comes from the planned replacement of the Osborne 1, an early personal computer first sold by the Osborne Computer Corporation in 1981. In 1983, founder Adam Osborne pre-announced several next-generation computer models (the Osborne Executive and Osborne Vixen), which were only prototypes, highlighting the fact that they would outperform the existing model as the prototypes dramatically cut down assembly time. A widely held belief was that sales of the Osborne 1 fell sharply as customers anticipated those more advanced systems, leading to a sales decline from which Osborne Computer was unable to recover. This belief appeared in the media almost immediately after the company's September 1983 bankruptcy: To give the jazzy $2,495 Osborne Executive a running start, Adam began orchestrating publicity early in 1983. We, along with many other magazines, were shown the machine in locked hotel rooms. We were required not to have anything in print about it until the planned release date in mid-April. As far as we know, nothing did appear in print, but dealers heard about the plans and cancelled orders for the Osborne 1 in droves. In early April, Osborne told dealers he would be showing them the machine on a one-week tour the week of 17 April, and emphasized that the new machine was not a competitor for the Osborne 1. But dealers didn't react the way Osborne expected; said Osborne, "All of them just cancelled their orders for the Osborne 1." Osborne reacted by drastically cutting prices on the Osborne 1 in an effort to stimulate cash flow. But nothing seemed to work, and for several months sales were practically non-existent. Pre-announcement is done for several reasons: to reassure current customers that there is improvement or lower cost coming, to increase the interest of the media and investors in the company's future prospects, and to intimidate or confuse competitors. When done correctly, the sales or cash flow impact to the company is minimal, with the revenue drop for the current product being offset by orders or completed sales of the new product as it becomes available. However, when the Osborne effect occurs, the quantity of unsold goods increases and the company must react by discounting and/or lowering production of the current product, both of which depress cash flow.
Summarize the passage
The Osborne effect is a social phenomenon of customers canceling or deferring orders for a current but soon-to-be-obsolete product. It happens as an unexpected drawback of a company announcing a future product prematurely, and is an example of cannibalization. The term originates from the bankruptcy of the Osborne Computer Corporation, which announced more capable successors to its current product, the Osborne 1, well in advance. This led customers and dealers to cancel their Osborne 1 orders.
null
false
null
Tell me which ones are edible and which ones are not: Lemon, Table, Carrot, Carat, John
Edible: Lemon, Carrot Not Edible: Table, Carat, John
null
false
null
What is the name of the curvy street in San Francisco?
Lombard Street in San Francisco is often referred to as the "crookedest" or most curvy street in the world.
null
false
null
Can you potty train a bunny?
Just like cats, bunnies can be litter trained, which is why so many bunny owners are able to let their buns run free in the house. Most rabbits prefer to do their business in one spot, and so litter training comes naturally.
2003.05377
false
null
The bossa-nova and jovem-guarda genres, which have few instances in the dataset, are among the most difficult ones to classify using the model. The pop genre, by contrast, has a small distribution between the number of songs and the number of artists, and could not be well classified by our model. This may indicate that our model was unable to identify a pattern due to the low number of songs per artist, or that the song lyrics of this genre cover several subjects that are confused with other genres. The bossa-nova and jovem-guarda genres, which have few instances in the dataset, are among the most difficult ones to classify using the model.
what genre was the most difficult to classify?
The answers are shown as follows: * bossa-nova and jovem-guarda genres
null
false
null
What is a Series I-Bond?
An I-Bond is a savings bond you can purchase from treasury direct that helps protect you from inflation. An I-Bond earns interest at both a fixed rate and the inflation rate, which is determined by the treasury department and is adjusted twice a year. You can purchase up to $10,000 of electronic I-Bonds per calendar year.
null
false
null
Give me a list of herbs I can plant in my garden
Mint, Thyme, Basil, Cilantro, Parsley, Tarragon, Oregano
null
false
null
What does the idiom burning your bridges mean?
The idiom burning your bridges means doing something irreversible that you may regret later, such as destroying a relationship you might need again. It is commonly used when someone is quitting a job: someone else may say to that person, 'Quit your job professionally and respectfully! Don't burn your bridges!' Someone who burns their bridges risks that the people they worked with may never give them a positive professional reference or want to hire them for a future opportunity.
1709.00947
false
null
It is now clear that a key aspect for future work will be developing additional performance metrics based on topological properties. We are in line with recent work BIBREF16 , proposing to shift evaluation from absolute values to more exploratory evaluations focusing on weaknesses and strengths of the embeddings and not so much in generic scores. For example, one metric could consist in checking whether for any given word, all words that are known to belong to the same class are closer than any words belonging to different classes, independently of the actual cosine. Future work will necessarily include developing this type of metrics. We are in line with recent work BIBREF16 , proposing to shift evaluation from absolute values to more exploratory evaluations focusing on weaknesses and strengths of the embeddings and not so much in generic scores. For example, one metric could consist in checking whether for any given word, all words that are known to belong to the same class are closer than any words belonging to different classes, independently of the actual cosine.
What new metrics are suggested to track progress?
The answers are shown as follows: * For example, one metric could consist in checking whether for any given word, all words that are known to belong to the same class are closer than any words belonging to different classes, independently of the actual cosine
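The metric suggested in the evidence, checking whether for any given word all same-class words are closer than any different-class word regardless of absolute cosine values, is easy to state in code. The sketch below computes that check over a toy embedding dictionary; the words, classes, and vectors are made up for illustration and are not from the paper.

```python
# Hedged sketch of the proposed "class separation" check: for every word, are all words of
# its own class closer (by cosine) than every word of any other class?
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def class_separation_rate(embeddings, classes):
    """Fraction of words whose nearest out-of-class word is farther than the farthest
    in-class word."""
    ok = 0
    for w, vec in embeddings.items():
        same = [cosine(vec, embeddings[o]) for o, c in classes.items()
                if o != w and c == classes[w]]
        diff = [cosine(vec, embeddings[o]) for o, c in classes.items()
                if c != classes[w]]
        if same and diff and min(same) > max(diff):
            ok += 1
    return ok / len(embeddings)

rng = np.random.default_rng(0)
words = ["cat", "dog", "horse", "red", "green", "blue"]
embeddings = {w: rng.normal(size=8) for w in words}
classes = {"cat": "animal", "dog": "animal", "horse": "animal",
           "red": "color", "green": "color", "blue": "color"}
print(class_separation_rate(embeddings, classes))
```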
2003.00639
false
null
We perform experiments using the following state-of-the-art models: (1) SEQ2SEQ: a sequence-to-sequence model with attention mechanisms BIBREF21, (2) CVAE: a conditional variational auto-encoder model with KL-annealing and a BOW loss BIBREF2, (3) Transformer: an encoder-decoder architecture relying solely on attention mechanisms BIBREF22, (4) HRED: a generalized sequence-to-sequence model with the hierarchical RNN encoder BIBREF23, (5) DialogWAE: a conditional Wasserstein auto-encoder, which models the distribution of data by training a GAN within the latent variable space BIBREF6. We adopt several standard metrics widely used in existing works to measure the performance of dialogue generation models, including BLEU BIBREF24, embedding-based metrics (Average, Extrema, Greedy and Coherence) BIBREF25, BIBREF26, entropy-based metrics (Ent-{1,2}) BIBREF0 and distinct metrics (Dist-{1,2,3} and Intra-{1,2,3}) BIBREF1, BIBREF6. We adopt several standard metrics widely used in existing works to measure the performance of dialogue generation models, including BLEU BIBREF24, embedding-based metrics (Average, Extrema, Greedy and Coherence) BIBREF25, BIBREF26, entropy-based metrics (Ent-{1,2}) BIBREF0 and distinct metrics (Dist-{1,2,3} and Intra-{1,2,3}) BIBREF1, BIBREF6.
What automatic evaluation metrics are used?
The answers are shown as follows: * BLEU * embedding-based metrics (Average, Extrema, Greedy and Coherence) * , entropy-based metrics (Ent-{1,2}) * distinct metrics (Dist-{1,2,3} and Intra-{1,2,3})
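Of the metrics listed, the distinct metrics are the simplest to show concretely: Dist-n is the ratio of unique n-grams to total n-grams over the generated responses. The sketch below is an illustrative implementation with toy responses, not the evaluation code used in the paper.

```python
# Hedged illustration of the Dist-{1,2,3} diversity metrics named above.
def distinct_n(responses, n):
    ngrams = []
    for r in responses:
        tokens = r.split()
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

generated = [
    "i do not know",
    "i do not know",
    "that sounds like a great plan",
]
for n in (1, 2, 3):
    print(f"Dist-{n}: {distinct_n(generated, n):.3f}")
```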
null
false
null
What are the benefits of a high protein diet
High protein diets can have a lot of benefits. For one, if you are interested in muscle growth, protein helps with that, and a high protein diet can lead to increased muscle growth. Additionally, protein-rich meals can help people stay full longer, which can help decrease snacking, leading to weight loss.
null
false
52
Specifically, we conducted a feature ablation study to assess the informativeness of each feature group and a feature elimination study to determine the optimal feature sets for classifying Twitter tweets. We leveraged an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13 . The dataset contains 9,473 annotations for 9,300 tweets. Each tweet is annotated as no evidence of depression (e.g., “Citizens fear an economic depression") or evidence of depression (e.g., “depressed over disappointment"). If a tweet is annotated evidence of depression, then it is further annotated with one or more depressive symptoms, for example, depressed mood (e.g., “feeling down in the dumps"), disturbed sleep (e.g., “another restless night"), or fatigue or loss of energy (e.g., “the fatigue is unbearable") BIBREF10 . For each class, every annotation (9,473 tweets) is binarized as the positive class e.g., depressed mood=1 or negative class e.g., not depressed mood=0. Based on our prior initial experiments using these feature groups, we learned that support vector machines perform with the highest F1-score compared to other supervised approaches.
What's the performance of support vector machines compared to other supervised approaches?
Based on their initial experiments using these feature groups, the authors found that support vector machines perform with the highest F1-score compared to other supervised approaches.
null
false
null
Why do you love Biriyani?
Because it's flavorful
null
false
null
Provide for me, a number of famous Japanese animation characters that I could research for an assignment on anime.
Here are a number of famous Japanese animation characters that you could use for further research: Kaito Kuroba (Detective Conan), Giyu Tomioka (Demon Slayer), Muichiro Tokito (Demon Slayer), Toge Inumaki (Jujutsu Kaisen), Rimuru Tempest (That Time I Got Reincarnated As A Slime), Gintoki Sakata (Gintama), Doraemon (Doraemon), Serena (Pokemon), Satoru Gojo (Jujutsu Kaisen), and Levi Ackerman (Attack On Titan).
null
false
null
Classify these objects as edible or inedible: pizza, slipper, cake, banana, biscuit, shoe, pencil, chalk, soda, juice, television, computer, mirror, cereal, rice
These items are edible: pizza, cake, banana, biscuit, soda, cereal, rice. These items are inedible: slipper, shoe, pencil, chalk, television, computer, mirror.
null
false
211
Events are a kind of important objective information of the world. Structuralizing and representing such information as machine-readable knowledge are crucial to artificial intelligence BIBREF0, BIBREF1. The main idea is to learn distributed representations for structured events (i.e. event embeddings) from text, and use them as the basis to induce textual features for downstream applications, such as script event prediction and stock market prediction. Parameterized additive models are among the most widely used for learning distributed event representations in prior work BIBREF2, BIBREF3, which passes the concatenation or addition of event arguments' word embeddings to a parameterized function. The function maps the summed vectors into an event embedding space. Furthermore, BIBREF4 ding2015deep and BIBREF5 weber2018event propose using neural tensor networks to perform semantic composition of event arguments, which can better capture the interactions between event arguments. This line of work only captures shallow event semantics, which is not capable of distinguishing events with subtle differences. On the one hand, the obtained event embeddings cannot capture the relationship between events that are syntactically or semantically similar, if they do not share similar word vectors. For example, as shown in Figure FIGREF2 (a), “PersonX threw bomb” and “PersonZ attacked embassy”. On the other hand, two events with similar word embeddings may have similar embeddings despite that they are quite unrelated, for example, as shown in Figure FIGREF2 (b), “PersonX broke record” and “PersonY broke vase”. Note that in this paper, similar events generally refer to events with strong semantic relationships rather than just the same events. One important reason for the problem is the lack of the external commonsense knowledge about the mental state of event participants when learning the objective event representations. In Figure FIGREF2 (a), two event participants “PersonY” and “PersonZ” may carry out a terrorist attack, and hence, they have the same intent: “to bloodshed”, which can help representation learning model maps two events into the neighbor vector space. In Figure FIGREF2 (b), a change to a single argument leads to a large semantic shift in the event representations, as the change of an argument can result in different emotions of event participants. Who “broke the record” is likely to be happy, while, who “broke a vase” may be sad. Hence, intent and sentiment can be used to learn more fine-grained semantic features for event embeddings. Such commonsense knowledge is not explicitly expressed but can be found in a knowledge base such as Event2Mind BIBREF6 and ATOMIC BIBREF7. Thus, we aim to incorporate the external commonsense knowledge, i.e., intent and sentiment, into the learning process to generate better event representations. Specifically, we propose a simple and effective model to jointly embed events, intents and emotions into the same vector space. A neural tensor network is used to learn baseline event embeddings, and we define a corresponding loss function to incorporate intent and sentiment information. Extensive experiments show that incorporating external commonsense knowledge brings promising improvements to event embeddings, achieving 78% and 200% improvements on hard similarity small and big dataset, respectively. With better embeddings, we can achieve superior performances on script event prediction and stock market prediction compared to state-of-the-art baseline methods. 
Understanding events requires effective representations that contain commonsense knowledge. High-quality event representations are valuable for many NLP downstream applications. This paper proposed a simple and effective framework to incorporate commonsense knowledge into the learning process of event embeddings. Experimental results on event similarity, script event prediction and stock prediction showed that commonsense knowledge enhanced event embeddings can improve the quality of event representations and benefit the downstream applications. Events are a kind of important objective information of the world. Structuralizing and representing such information as machine-readable knowledge are crucial to artificial intelligence. The main idea is to learn distributed representations for structured events (i.e. event embeddings) from text, and use them as the basis to induce textual features for downstream applications, such as script event prediction and stock market prediction. Experimental results on event similarity, script event prediction and stock prediction showed that commonsense knowledge enhanced event embeddings can improve the quality of event representations and benefit the downstream applications.
Can their model induce textual features for downstream applications?
Yes.
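The evidence states that a neural tensor network learns the baseline event embeddings before the intent and sentiment loss terms are added. The sketch below illustrates only the bilinear tensor composition of two argument vectors; the dimensions are arbitrary and the commonsense loss terms are omitted, so this is not the authors' full model.

```python
# Hedged sketch of a neural tensor network composing two event-argument vectors, in the
# spirit of the baseline described above (intent/sentiment losses omitted).
import torch
import torch.nn as nn

class NeuralTensorLayer(nn.Module):
    def __init__(self, dim=100, k=4):
        super().__init__()
        self.T = nn.Parameter(torch.randn(k, dim, dim) * 0.01)  # bilinear tensor slices
        self.W = nn.Linear(2 * dim, k)                            # standard linear term
        self.out = nn.Linear(k, dim)

    def forward(self, a, b):                   # a, b: (batch, dim), e.g. subject & predicate
        bilinear = torch.einsum("bi,kij,bj->bk", a, self.T, b)    # (batch, k)
        linear = self.W(torch.cat([a, b], dim=-1))                # (batch, k)
        return self.out(torch.tanh(bilinear + linear))            # (batch, dim) event vector

ntn = NeuralTensorLayer()
subj, pred = torch.randn(3, 100), torch.randn(3, 100)
print(ntn(subj, pred).shape)   # torch.Size([3, 100])
```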
null
false
null
In the game Dungeons and Dragons, players can be spell casters or non-spell casters. Sort this list into those two categories: Wizard, Bard, Ranger, Barbarian, Cleric, Druid, Rogue, Sorcerer, Warlock, Fighter and Monk.
In D&D the casters are the Bard, Cleric, Druid, Sorcerer, Warlock, and Wizard. The Rogue, Fighter, Barbarian, Monk and Ranger are non-spell casters.
null
false
48
Table-to-text generation is an important and challenging task in natural language processing, which aims to produce the summarization of numerical table BIBREF0, BIBREF1. The related methods can be empirically divided into two categories, pipeline model and end-to-end model. The former consists of content selection, document planning and realisation, mainly for early industrial applications, such as weather forecasting and medical monitoring, etc. The latter generates text directly from the table through a standard neural encoder-decoder framework to avoid error propagation and has achieved remarkable progress. In this paper, we particularly focus on exploring how to improve the performance of neural methods on table-to-text generation. Recently, ROTOWIRE, which provides tables of NBA players' and teams' statistics with a descriptive summary, has drawn increasing attention from academic community. Figure FIGREF1 shows an example of parts of a game's statistics and its corresponding computer generated summary. We can see that the tables has a formal structure including table row header, table column header and table cells. “Al Jefferson” is a table row header that represents a player, “PTS” is a table column header indicating the column contains player's score and “18” is the value of the table cell, that is, Al Jefferson scored 18 points. Several related models have been proposed . They typically encode the table's records separately or as a long sequence and generate a long descriptive summary by a standard Seq2Seq decoder with some modifications. Wiseman explored two types of copy mechanism and found conditional copy model BIBREF3 perform better . Puduppully enhanced content selection ability by explicitly selecting and planning relevant records. Li improved the precision of describing data records in the generated texts by generating a template at first and filling in slots via copy mechanism. Nie utilized results from pre-executed operations to improve the fidelity of generated texts. However, we claim that their encoding of tables as sets of records or a long sequence is not suitable. Because (1) the table consists of multiple players and different types of information as shown in Figure FIGREF1. The earlier encoding approaches only considered the table as sets of records or one dimensional sequence, which would lose the information of other (column) dimension. (2) the table cell consists of time-series data which change over time. That is to say, sometimes historical data can help the model select content. Moreover, when a human writes a basketball report, he will not only focus on the players' outstanding performance in the current match, but also summarize players' performance in recent matches. Lets take Figure FIGREF1 again. Not only do the gold texts mention Al Jefferson's great performance in this match, it also states that “It was the second time in the last three games he's posted a double-double”. Also gold texts summarize John Wall's “double-double” performance in the similar way. Summarizing a player's performance in recent matches requires the modeling of table cell with respect to its historical data (time dimension) which is absent in baseline model. Although baseline model Conditional Copy (CC) tries to summarize it for Gerald Henderson, it clearly produce wrong statements since he didn't get “double-double” in this match. To address the aforementioned problems, we present a hierarchical encoder to simultaneously model row, column and time dimension information. 
In detail, our model is divided into three layers. The first layer is used to learn the representation of the table cell. Specifically, we employ three self-attention models to obtain three representations of the table cell in its row, column and time dimension. Then, in the second layer, we design a record fusion gate to identify the more important representation from those three dimension and combine them into a dense vector. In the third layer, we use mean pooling method to merge the previously obtained table cell representations in the same row into the representation of the table's row. Then, we use self-attention with content selection gate BIBREF4 to filter unimportant rows' information. To the best of our knowledge, this is the first work on neural table-to-text generation via modeling column and time dimension information so far. We conducted experiments on ROTOWIRE. Results show that our model outperforms existing systems, improving baseline BLEU from 14.19 to 16.85 ($+18.75\%$), P% of relation generation (RG) from 74.80 to 91.46 ($+22.27\%$), F1% of content selection (CS) from 32.49 to 41.21 ($+26.84\%$) and content ordering (CO) from 15.42 to 20.86 ($+35.28\%$) on test set. It also exceeds the state-of-the-art model in terms of those metrics. The first layer is used to learn the representation of the table cell. Specifically, we employ three self-attention models to obtain three representations of the table cell in its row, column and time dimension. Then, in the second layer, we design a record fusion gate to identify the more important representation from those three dimension and combine them into a dense vector. In the third layer, we use mean pooling method to merge the previously obtained table cell representations in the same row into the representation of the table's row.
What are the three layers of the authors' model?
In the first layer, the authors employ three self-attention models to obtain three representations of the table cell in its row, column and time dimensions. In the second layer, the authors design a record fusion gate to identify the most important representation among those three dimensions and combine them into a dense vector. In the third layer, the authors use a mean pooling method to merge the previously obtained table cell representations in the same row into the representation of the table's row, and then use self-attention with a content selection gate to filter out unimportant rows' information.
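As a rough illustration of the second layer described above, the sketch below implements one plausible record fusion gate: it scores the row, column, and time views of a cell and mixes them into a single vector. The exact gating formulation in the paper may differ; this only shows the idea.

```python
# Hedged sketch of a record fusion gate: weigh the row/column/time views of a table cell
# with a shared scorer and combine them. Dimensions are illustrative.
import torch
import torch.nn as nn

class RecordFusionGate(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, row_vec, col_vec, time_vec):                    # each: (batch, dim)
        views = torch.stack([row_vec, col_vec, time_vec], dim=1)       # (batch, 3, dim)
        weights = torch.softmax(self.score(views).squeeze(-1), dim=1)  # (batch, 3)
        return (weights.unsqueeze(-1) * views).sum(dim=1)              # (batch, dim)

gate = RecordFusionGate()
fused = gate(torch.randn(2, 64), torch.randn(2, 64), torch.randn(2, 64))
print(fused.shape)   # torch.Size([2, 64])
```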
1701.04653
true
null
As the table shows, terms extracted from Yahoo! Answers tend to be more related, in terms of the number of correlated terms, to attributes related to religion or ethnicity compared to terms from Twitter. However, for two particular attributes (i.e., Price and Buddhist), the number of correlated terms from Twitter is higher than the ones from Yahoo! Answers . These results collectively suggest that there is a wealth of terms, both in Yahoo! Answers and Twitter, which can be used to predict the population demographics. terms extracted from Yahoo! Answers tend to be more related, in terms of the number of correlated terms, to attributes related to religion or ethnicity compared to terms from Twitter. However, for two particular attributes (i.e., Price and Buddhist), the number of correlated terms from Twitter is higher than the ones from Yahoo! Answers .
On Twitter, do the demographic attributes and answers show more correlations than on Yahoo! Answers?
No.
null
false
119
Automatic classification of sentiment has mainly focused on categorizing tweets in either two (binary sentiment analysis) or three (ternary sentiment analysis) categories BIBREF0 . In this work we study the problem of fine-grained sentiment classification where tweets are classified according to a five-point scale ranging from VeryNegative to VeryPositive. To illustrate this, Table TABREF3 presents examples of tweets associated with each of these categories. Five-point scales are widely adopted in review sites like Amazon and TripAdvisor, where a user's sentiment is ordered with respect to its intensity. From a sentiment analysis perspective, this defines a classification problem with five categories. In particular, Sebastiani et al. BIBREF1 defined such classification problems whose categories are explicitly ordered to be ordinal classification problems. To account for the ordering of the categories, learners are penalized according to how far from the true class their predictions are. Although considering different scales, the various settings of sentiment classification are related. First, one may use the same feature extraction and engineering approaches to represent the text spans such as word membership in lexicons, morpho-syntactic statistics like punctuation or elongated word counts BIBREF2 , BIBREF3 . Second, one would expect that knowledge from one task can be transfered to the others and this would benefit the performance. Knowing that a tweet is “Positive” in the ternary setting narrows the classification decision between the VeryPositive and Positive categories in the fine-grained setting. From a research perspective this raises the question of whether and how one may benefit when tackling such related tasks and how one can transfer knowledge from one task to another during the training phase. Our focus in this work is to exploit the relation between the sentiment classification settings and demonstrate the benefits stemming from combining them. To this end, we propose to formulate the different classification problems as a multitask learning problem and jointly learn them. Multitask learning BIBREF4 has shown great potential in various domains and its benefits have been empirically validated BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 using different types of data and learning approaches. An important benefit of multitask learning is that it provides an elegant way to access resources developed for similar tasks. By jointly learning correlated tasks, the amount of usable data increases. For instance, while for ternary classification one can label data using distant supervision with emoticons BIBREF9 , there is no straightforward way to do so for the fine-grained problem. However, the latter can benefit indirectly, if the ternary and fine-grained tasks are learned jointly. The research question that the paper attempts to answer is the following: Can twitter sentiment classification problems, and fine-grained sentiment classification in particular, benefit from multitask learning? To answer the question, the paper brings the following two main contributions: (i) we show how jointly learning the ternary and fine-grained sentiment classification problems in a multitask setting improves the state-of-the-art performance, and (ii) we demonstrate that recurrent neural networks outperform models previously proposed without access to huge corpora while being flexible to incorporate different sources of data. 
An important benefit of multitask learning is that it provides an elegant way to access resources developed for similar tasks.
What are the benefits of multitasking?
It provides an elegant way to access resources developed for similar tasks.
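To make the multitask setup above concrete, here is a minimal PyTorch sketch of a shared encoder with one classification head per task (ternary and fine-grained). The layer sizes, vocabulary size and dummy batch are illustrative assumptions, not the authors' architecture.

import torch
import torch.nn as nn

class MultitaskSentiment(nn.Module):
    # Shared BiLSTM encoder with one classification head per task:
    # a 3-way ternary head and a 5-way fine-grained head.
    def __init__(self, vocab_size=10000, emb_dim=100, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.ternary_head = nn.Linear(2 * hidden, 3)
        self.fine_head = nn.Linear(2 * hidden, 5)

    def forward(self, token_ids):
        emb = self.embed(token_ids)
        _, (h, _) = self.encoder(emb)
        feat = torch.cat([h[0], h[1]], dim=-1)   # concatenate both directions
        return self.ternary_head(feat), self.fine_head(feat)

model = MultitaskSentiment()
tokens = torch.randint(0, 10000, (4, 20))        # a dummy batch of 4 tweets
ternary_logits, fine_logits = model(tokens)
loss = (nn.functional.cross_entropy(ternary_logits, torch.randint(0, 3, (4,)))
        + nn.functional.cross_entropy(fine_logits, torch.randint(0, 5, (4,))))
loss.backward()                                  # gradients flow into the shared encoder

Because both losses backpropagate into the same encoder, labels from the ternary task (e.g. obtained by distant supervision) also shape the representation used by the fine-grained head.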
null
false
null
Sony was the first to introduce a full-frame mirrorless camera, the α7, in 2013.
Which full-frame mirrorless camera was first available in the market?
Sony α7 in 2013
null
false
null
Give me a list of three historical piano composers.
Johann Sebastian Bach, Ludwig van Beethoven, and Frédéric Chopin.
null
false
null
Name four pork products I could expect to find in a delicatessen
Pork is a versatile meat. You can expect to find bacon, prosciutto, salami and pork belly in a delicatessen near you.
null
false
null
What constitutes the AWS Kinesis family?
The Kinesis Family of Services is a set of Amazon Web Services (AWS) products designed for processing and analyzing streaming data in real-time. The Kinesis Family of Services includes three main services: - Amazon Kinesis Data Streams is a scalable and durable real-time data streaming service that enables you to collect and process large amounts of data in real-time. You can use Kinesis Data Streams to build custom applications for real-time processing of streaming data, such as log processing, real-time analytics, and data ingestion from various sources. - Amazon Kinesis Data Firehose is a fully managed service that allows you to capture, transform, and load streaming data into Amazon S3, Amazon Redshift, and other data stores in near real-time. It simplifies the process of loading streaming data to these destinations without the need for any intermediate servers or data processing. - Amazon Kinesis Data Analytics is a fully managed service that enables you to analyze streaming data in real-time using Apache Flink stream processing framework.
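As a small illustration of the first service, here is a hedged boto3 sketch of writing one record to a Kinesis Data Stream. The stream name, region and payload are placeholder assumptions, and the call requires valid AWS credentials and an existing stream.

import json
import boto3

# Hypothetical stream name and region; replace with real values.
kinesis = boto3.client("kinesis", region_name="us-east-1")
event = {"user_id": 42, "action": "page_view"}

response = kinesis.put_record(
    StreamName="example-clickstream",        # assumed stream name
    Data=json.dumps(event).encode("utf-8"),  # payload must be bytes
    PartitionKey=str(event["user_id"]),      # controls which shard receives the record
)
print(response["SequenceNumber"])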
null
false
null
How do you give a good presentation?
There are several things you should think about if you'd like to give a good presentation. 1. Know your audience. If you use jargon or reference concepts that your audience doesn't understand, you may lose your audience's attention. 2. Pay attention to how your audience is reacting. Do they appear bored? Do they look confused? You can and should make adjustments to what you're presenting based on this real-time feedback. 3. Communicate clearly. You should make a deliberate attempt to speak clearly, and at a reasonable pace. You can think of an orator you admire, and channel their communication style. 4. Practice, practice, practice. Practicing the presentation beforehand can help your confidence. Even if you're extremely nervous, if you know the content of the presentation you're going to have more confidence than otherwise. Additionally, not having to spend as much conscious effort thinking about what content you need to say allows for more of your attention to be spent on some of the above suggestions, like ensuring you are communicating clearly and paying attention to your audience. Through knowing your audience, paying attention to their reactions, communicating clearly and practicing, you can significantly increase your chances of giving a good presentation.
1902.00672
false
null
Various classes of NP-hard problems involving a submodular and non-decreasing function can be solved approximately by polynomial time algorithms with provable approximation factors. Algorithms "Detection of hypergraph transversals for text summarization" and "Detection of hypergraph transversals for text summarization" are our core methods for the detection of approximations of maximal budgeted hypergraph transversals and minimal soft hypergraph transversals, respectively. In each case, a transversal is found and the summary is formed by extracting and aggregating the associated sentences. Algorithm "Detection of hypergraph transversals for text summarization" is based on an adaptation of an algorithm presented in BIBREF30 for the maximization of submodular functions under a Knaspack constraint. It is our primary transversal-based summarization model, and we refer to it as the method of Transversal Summarization with Target Length (TL-TranSum algorithm). Algorithm "Detection of hypergraph transversals for text summarization" is an application of the algorithm presented in BIBREF20 for solving the submodular set covering problem. We refer to it as Transversal Summarization with Target Coverage (TC-TranSum algorithm). Both algorithms produce transversals by iteratively appending the node inducing the largest increase in the total weight of the covered hyperedges relative to the node weight. While long sentences are expected to cover more themes and induce a larger increase in the total weight of covered hyperedges, the division by the node weights (i.e. the sentence lengths) balances this tendency and allows the inclusion of short sentences as well. In contrast, the methods of sentence selection based on a maximal relevance and a minimal redundancy such as, for instance, the maximal marginal relevance approach of BIBREF31 , tend to favor the selection of long sentences only. The main difference between algorithms "Detection of hypergraph transversals for text summarization" and "Detection of hypergraph transversals for text summarization" is the stopping criterion: in algorithm "Detection of hypergraph transversals for text summarization" , the approximate minimal soft transversal is obtained whenever the targeted hyperedge coverage is reached while algorithm "Detection of hypergraph transversals for text summarization" appends a given sentence to the approximate maximal budgeted transversal only if its addition does not make the summary length exceed the target length $L$ . FLOAT SELECTED: Table 2: Comparison with related graph- and hypergraph-based summarization systems. While long sentences are expected to cover more themes and induce a larger increase in the total weight of covered hyperedges, the division by the node weights (i.e. the sentence lengths) balances this tendency and allows the inclusion of short sentences as well. In contrast, the methods of sentence selection based on a maximal relevance and a minimal redundancy such as, for instance, the maximal marginal relevance approach of BIBREF31 , tend to favor the selection of long sentences only. FLOAT SELECTED: Table 2: Comparison with related graph- and hypergraph-based summarization systems.
How does the model compare with the MMR baseline?
The TL-TranSum method also outperforms other approaches such as MaxCover (by 5%) and MRMR (by 7%).
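For illustration, here is a simplified pure-Python sketch of the greedy step these methods rely on: repeatedly append the sentence with the largest gain in covered hyperedge weight per unit of length, subject to a length budget. The toy sentences, hyperedge weights and budget are made up, and this is not the authors' TL-TranSum implementation.

def greedy_budgeted_summary(sentences, hyperedges, budget):
    # sentences: list of (sentence_id, length)
    # hyperedges: list of (weight, set_of_sentence_ids) -- themes and the sentences covering them
    # budget: maximum total summary length
    selected, total_len = [], 0
    while True:
        best, best_ratio = None, 0.0
        for sid, length in sentences:
            if sid in selected or total_len + length > budget:
                continue
            # Gain = weight of hyperedges this sentence would newly cover.
            gain = sum(w for w, members in hyperedges
                       if sid in members and not (set(selected) & members))
            ratio = gain / length   # dividing by length lets short sentences compete
            if ratio > best_ratio:
                best, best_ratio = (sid, length), ratio
        if best is None:
            return selected
        selected.append(best[0])
        total_len += best[1]

# Tiny example: three sentences, two themes, budget of 12 words.
sents = [("s1", 10), ("s2", 5), ("s3", 6)]
edges = [(1.0, {"s1", "s2"}), (0.5, {"s3"})]
print(greedy_budgeted_summary(sents, edges, budget=12))   # -> ['s2', 's3']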
null
false
null
What is the best way to make Indian ginger tea?
To make two cups of Indian masala tea, first take 1.5 cups of water in a pot and add about 1 teaspoon of grated ginger, add 2 tablespoons of loose tea leaves (best is Tetley tea). Bring this to boil and add 3/4 cup of milk (whole milk tastes the best) and 3 tablespoons of sugar. Simmer and let the mixture boil for 5 minutes. Use a fine strainer to pour the tea into cups. Enjoy your cup of the best Indian Ginger tea ever.
2001.05493
false
null
The informal setting/environment of social media often encourage multilingual speakers to switch back and forth between languages when speaking or writing. These all resulted in code-mixing and code-switching. Code-mixing refers to the use of linguistic units from different languages in a single utterance or sentence, whereas code-switching refers to the co-occurrence of speech extracts belonging to two different grammatical systemsBIBREF3. This language interchange makes the grammar more complex and thus it becomes tough to handle it by traditional algorithms. Thus the presence of high percentage of code-mixed content in social media text has increased the complexity of the aggression detection task. For example, the dataset provided by the organizers of TRAC-2018 BIBREF0, BIBREF2 is actually a code-mixed dataset. The informal setting/environment of social media often encourage multilingual speakers to switch back and forth between languages when speaking or writing. These all resulted in code-mixing and code-switching. Code-mixing refers to the use of linguistic units from different languages in a single utterance or sentence, whereas code-switching refers to the co-occurrence of speech extracts belonging to two different grammatical systemsBIBREF3. This language interchange makes the grammar more complex and thus it becomes tough to handle it by traditional algorithms. Thus the presence of high percentage of code-mixed content in social media text has increased the complexity of the aggression detection task. For example, the dataset provided by the organizers of TRAC-2018 BIBREF0, BIBREF2 is actually a code-mixed dataset.
What data/studies do the authors provide to support the assertion that the majority of aggressive conversations contain code-mixed languages?
None
null
false
349
NIC - Show and Tell uses CNN model which is currently yielding the state-of-the-art results. The model achieved 0.628 when evaluating on BLEU-1 on COCO-2014 dataset. For CNN part, we utilize VGG-16 BIBREF20 architecture pre-trained on COCO-2014 image sets with all categories. In decoding part, LSTM is not only trained to predict sentence but also to compute probability for each word to be generated. As a result, output sentence will be chosen using search algorithms to find the one that have words yielding the maximum probabilities. In decoding part, LSTM is not only trained to predict sentence but also to compute probability for each word to be generated.
Is LSTM the only trained to predict sentences?
No.
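As a toy illustration of choosing the output sentence from per-step word probabilities, here is a small pure-Python greedy decoder. The probability table and vocabulary are invented for the example; the actual system would typically use beam search rather than a purely greedy choice.

def greedy_decode(step_probs, vocab):
    # step_probs: one probability distribution over the vocabulary per time step
    # (in practice these would come from the LSTM decoder at each step).
    sentence = []
    for probs in step_probs:
        best = max(range(len(vocab)), key=lambda i: probs[i])
        if vocab[best] == "<eos>":
            break
        sentence.append(vocab[best])
    return " ".join(sentence)

vocab = ["a", "dog", "runs", "<eos>"]
step_probs = [[0.7, 0.2, 0.05, 0.05],
              [0.1, 0.6, 0.2, 0.1],
              [0.05, 0.1, 0.7, 0.15],
              [0.1, 0.1, 0.1, 0.7]]
print(greedy_decode(step_probs, vocab))   # -> "a dog runs"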
null
false
null
Sir John Evelyn (1591–1664) was an English politician who sat in the House of Commons at various times between 1628 and 1660. He reluctantly supported the Parliamentary side in the English Civil War. Evelyn was the son of Sir John Evelyn of Kingston, Godstone, Surrey and Marden, MP and his wife Elizabeth Stever, daughter of William Stever of Kingston upon Thames. He was baptised at Kingston upon Thames on 20 October 1591. He was admitted at Emmanuel College, Cambridge on 13 March 1606. He was a member of the Virginia Company in 1612 and of the East India Company in 1624. He was a JP for Surrey from 1624.
Who did John Evelyn support during the English Civil War?
the Parliamentary side
null
false
null
The Tampa Bay Rays are an American professional baseball team based in St. Petersburg, Florida. The Rays compete in Major League Baseball (MLB) as a member club of the American League (AL) East division. Since its inception, the team's home venue has been Tropicana Field. Following nearly three decades of unsuccessfully trying to gain an expansion franchise or enticing existing teams to relocate to the Tampa Bay area, an ownership group led by Vince Naimoli was approved on March 9, 1995. The team began play as the Tampa Bay Devil Rays in the 1998 Major League Baseball season. The team's first decade of play was marked by futility; they finished in last place in the AL East in all but the 2004 season, when they finished second to last. Following the 2007 season, Stuart Sternberg, who had purchased controlling interest in the team from Vince Naimoli two years earlier, changed the team's name from "Devil Rays" to "Rays", now meaning both a manta ray and a ray of sunshine; a manta ray logo appears on the uniform sleeves while a sunburst appears on the uniform front. The 2008 season saw the Rays post their first winning season, their first AL East championship, and their first American League pennant (defeating the rival Boston Red Sox in the ALCS), though they lost to the Philadelphia Phillies in that year's World Series. Since then, the Rays have played in the postseason seven more times, winning the American League pennant again in 2020 and losing to the Los Angeles Dodgers in that year's World Series. The Tampa Bay Rays' chief rivals are the Boston Red Sox and the New York Yankees, which also play in the AL East. Regarding the former, there have been several notable on-field incidents. The Rays also have an in-state interleague rivalry with the National League (NL)'s Miami Marlins (originally the Florida Marlins), whom they play in the Citrus Series. Through 2022, the Rays' all-time record is 1,912–2,034 (.485)
Based on the reference text about the Tampa Bay Rays, what is the name of their stadium and who are their main rivals?
The name of the Tampa Bay Rays stadium is Tropicana Field and their main rivals are the Boston Red Sox and New York Yankees.
null
false
null
What does below SAS code do PROC SUMMARY NWAY MISSING DATA=named_data; CLASS col1 col2 col3 col4; VAR tot_amt tot_gst_amt total_settlement_amt; OUTPUT OUT=PYMT_SUM(DROP=_:) SUM=; QUIT;
The SAS code block you provided uses the PROC SUMMARY procedure to calculate summary statistics for a dataset called named_data. Here's what each line of the code does: NWAY - specifies that the summary statistics should be calculated for all possible combinations of the CLASS variables. MISSING - specifies that missing values in the named_data dataset should be included in the summary statistics. CLASS - specifies the variables that define the groups for which the summary statistics should be calculated. VAR - specifies the variables for which summary statistics should be calculated. OUTPUT - specifies the output dataset and the statistics to include in the output. OUT - specifies the name of the output dataset. DROP=_: - drops the automatic variables created by SAS. SUM= - calculates the sum of the specified variables. The resulting output dataset PYMT_SUM will contain the summary statistics for each combination of the CLASS variables specified
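For readers more familiar with Python, here is a rough pandas analogue of the same aggregation, assuming a DataFrame named_data with the same column names. It is only an approximation of the SAS step: dropna=False is used to mimic the MISSING option, and SAS-specific details such as the automatic _TYPE_ and _FREQ_ variables are not reproduced.

import pandas as pd

# Hypothetical data with the same column names as the SAS step.
named_data = pd.DataFrame({
    "col1": ["A", "A", "B"], "col2": ["X", "X", "Y"],
    "col3": [1, 1, 2], "col4": ["p", "p", "q"],
    "tot_amt": [100.0, 50.0, 30.0],
    "tot_gst_amt": [10.0, 5.0, 3.0],
    "total_settlement_amt": [110.0, 55.0, 33.0],
})

# Sum the three amount columns for every combination of the four class columns,
# keeping groups that contain missing values (dropna=False ~ the MISSING option).
pymt_sum = (named_data
            .groupby(["col1", "col2", "col3", "col4"], dropna=False)
            [["tot_amt", "tot_gst_amt", "total_settlement_amt"]]
            .sum()
            .reset_index())
print(pymt_sum)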
null
false
null
Who is the most powerful person in the world?
The answer is subjective, as it depends on the definition of power at a global scale. If the criterion is a person representing a country with significant military power, economic power and geopolitical influence, then the two most powerful individuals in the world are Joe Biden (current president of the United States) and Xi Jinping (leader of the Chinese Communist Party and of China).
null
false
null
Describe the governing arrangements provided by the Articles of Confederation. What were the main weaknesses of the document?
The Articles of Confederation (AoC) was the first written constitution of the United States. There were various governing arrangements provided by the AoC, but the most important was protecting the power or sovereignty of the states. The states would not have ratified the AoC if the federal government did not guarantee the powers of the states. The AoC defined a federal government with a unicameral legislature (one vote per state). This legislature had the following powers: declare war, conduct “foreign or commercial relations with other countries” (i.e. treaties), and executive functions that would be carried out by legislative committees. It also “suggested a federal court system”. This federal government was to be funded by the states. Though it provided these powers to the federal government, there were inherent weaknesses built into the federal government. First, there was no way for the legislature to enforce its decisions. Second, a declared war or commercial treaty required the consent of at least 9 states. Even if a war was approved, there was no federal army, only state militias. Third, the federal government relied on the states for funding. If the states didn’t like legislation passed, they could simply withhold funding. Fourth, amendments to the AoC required the unanimous agreement of all thirteen states. A realization came that these weaknesses inhibited the governing ability of the federal government. This set the stage for dropping the AoC, which then led to the current Constitution of the United States.
null
false
105
Since humans amass more and more generally available data in the form of unstructured text it would be very useful to teach machines to read and comprehend such data and then use this understanding to answer our questions. A significant amount of research has recently focused on answering one particular kind of questions the answer to which depends on understanding a context document. These are cloze-style questions BIBREF0 which require the reader to fill in a missing word in a sentence. An important advantage of such questions is that they can be generated automatically from a suitable text corpus which allows us to produce a practically unlimited amount of them. That opens the task to notoriously data-hungry deep-learning techniques which now seem to outperform all alternative approaches. Two such large-scale datasets have recently been proposed by researchers from Google DeepMind and Facebook AI: the CNN/Daily Mail dataset BIBREF1 and the Children's Book Test (CBT) BIBREF2 respectively. These have attracted a lot of attention from the research community BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 with a new state-of-the-art model coming out every few weeks. However if our goal is a production-level system actually capable of helping humans, we want the model to use all available resources as efficiently as possible. Given that we believe that if the community is striving to bring the performance as far as possible, it should move its work to larger data. This thinking goes in line with recent developments in the area of language modelling. For a long time models were being compared on several "standard" datasets with publications often presenting minuscule improvements in performance. Then the large-scale One Billion Word corpus dataset appeared BIBREF15 and it allowed Jozefowicz et al. to train much larger LSTM models BIBREF16 that almost halved the state-of-the-art perplexity on this dataset. We think it is time to make a similar step in the area of text comprehension. Hence we are introducing the BookTest, a new dataset very similar to the Children's Book test but more than 60 times larger to enable training larger models even in the domain of text comprehension. Furthermore the methodology used to create our data can later be used to create even larger datasets when the need arises thanks to further technological progress. We show that if we evaluate a model trained on the new dataset on the now standard Children's Book Test dataset, we see an improvement in accuracy much larger than other research groups achieved by enhancing the model architecture itself (while still using the original CBT training data). By training on the new dataset, we reduce the prediction error by almost one third. On the named-entity version of CBT this brings the ensemble of our models to the level of human baseline as reported by Facebook BIBREF2 . However in the final section we show in our own human study that there is still room for improvement on the CBT beyond the performance of our model. We think it is time to make a similar step in the area of text comprehension. Hence we are introducing the BookTest, a new dataset very similar to the Children’s Book test but more than 60 times larger1 to enable training larger models even in the domain of text comprehension. Furthermore the methodology used to create our data can later be used to create even larger datasets when the need arises thanks to further technological progress.
What area are they studying?
Text comprehension.
null
false
null
Isbank Museum is a museum in Ankara, Turkey. The official name of the museum is "Economic Independence Museum" (Türkiye İş Bankası İktisadi Bağımsızlık Müzesi) This is the second museum of the bank the first being the İşbank Museum in İstanbul. The museum is in Ulus Square facing the Victory Monument at 39°56′34″N 32°51′17″E.
Where is the economic independence museum located
The Isbank Museum is a museum in Ankara, Turkey. The official name of the museum is Economic Independence Museum. This is the second museum of the bank, the first being the Isbank Museum in Istanbul. The museum is in Ulus Square facing the Victory Monument at 39°56′34″N 32°51′17″E.
null
false
null
What are the 4 oceans on earth
Atlantic Ocean, Arctic Ocean, Pacific Ocean, Indian Ocean
null
false
null
Why do people like to travel?
Traveling has always been innate to human behavior for many reasons. The desire to discover, experience and answer simple curiosities drives many people to travel to places near and far. There are many benefits of traveling, such as improving mental health, gaining creativity and inspiration, finding an understanding of yourself, and reducing stress and anxiety.
null
false
null
Classify each of the following as creative or analytical: painting, math, physics, chemistry, drawing, filming
Painting: Creative Math: Analytical Physics: Analytical Chemistry: Analytical Drawing: Creative Filming: Creative
null
false
null
Basketball began with its invention in 1891 in Springfield, Massachusetts, by Canadian physical education instructor James Naismith as a less injury-prone sport than football. Naismith was a 31-year old graduate student when he created the indoor sport to keep athletes indoors during the winters.
How was basketball was invented?
Basketball began with its invention in 1891 in Springfield, Massachusetts, by Canadian physical education instructor James Naismith as a less injury-prone sport than football. Naismith was a 31-year old graduate student when he created the indoor sport to keep athletes indoors during the winters. The game became established fairly quickly and grew very popular as the 20th century progressed, first in America and then in other parts of the world. After basketball became established in American colleges, the professional game followed. The American National Basketball Association (NBA), established in 1946, grew to a multibillion-dollar enterprise by the end of the century, and basketball became an integral part of American culture.
null
false
null
What is a planet ?
The International Astronomical Union (IAU) defined in August 2006, that a planet in the Solar system is a celestial body that : - orbits the Sun (the host star of the Solar system), - has enough mass to assume a hydrostatic equilibrium (nearly round) shape, - is big enough that its gravity cleared away any other similarly-sized objects near its orbit around the Sun.
null
false
null
What is sake and how is it made?
Sake is an alcoholic beverage that originated in Japan hundreds of years ago and is now enjoyed around the world. Despite sometimes being called a “rice wine”, sake is actually a brewed beverage that is more similar to beer than it is to wine. Sake typically has an ABV of around 15%. Sake is made from 4 ingredients: rice, yeast, water and koji. First, rice is washed and then steamed. Then the rice goes into a special, tightly controlled room called the “Koji Room”, where a special mold called koji is sprinkled on the rice. The koji converts the starch in the rice to sugar. Then, yeast and water are added to the rice in a large container to start the fermentation process of converting the sugar into alcohol. Once the mixture is fermented to the right ABV, it is pressed, which separates the liquid from the residual rice. Typically, the sake is then bottled and pasteurized for a longer shelf life.
null
false
null
Give me a list of the most famous tech companies in the world
Amazon, Google, Microsoft, Twitter, Facebook, Apple, Oracle, Alibaba
null
false
null
What is an Incorporation?
This means forming a company. Some companies have limited liability. In other words, the members of the company are not personally liable for debts which the company runs up, as long as the company is run properly.
null
false
null
What is the difference between common law and civil law?
The primary difference between common law and civil law is that in common law jurisdictions, published court decisions are of primary importance to understanding legal rules. In civil law jurisdictions, published rules or statutes are of primary importance. Common law's origins are found in England, where court decisions filled in gaps in "writs" issued by the monarch. Civil law, by comparison, traces its origins to codes of laws compiled beginning around the time of the Roman Empire. While there are around 150 countries that largely follow a common law model, and 80 that follow a civil law model, the reality is that many jurisdictions incorporate elements of both common law and civil law. Even within a country, different models may apply.
null
false
null
What is the best club soccer team in Spain?
Real Madrid
null
false
null
Is crocodile a reptile or an amphibian?
Crocodiles are reptiles. While they live both on land and in water, that in itself doesn't make them amphibians. They lack other amphibian characteristics, such as a moist, glandular skin without scales. A crocodile, like other reptiles, has dry, scaly skin that helps prevent water loss and provides protection, and it lays its eggs on land.
null
false
130
We downloaded questions and answers via the Smartvote API. The downloaded data cover 175 communal, cantonal and national elections between 2011 and 2020. All candidates in an election who participate in Smartvote are asked the same set of questions, but depending on the locale they see translated versions of the questions. They can answer each question with either `yes', `rather yes', `rather no', or `no'. They can supplement each answer with a comment of at most 500 characters. The questions asked on Smartvote have been edited by a team of political scientists. They are intended to cover a broad range of political issues relevant at the time of the election. A detailed documentation of the design of Smartvote and the editing process of the questions is provided by BIBREF12. We merged the two labels on each pole into a single label: `yes' and `rather yes' were combined into `favor'; `rather no', or `no' into `against`. This improves the consistency of the data and the comparability to previous stance detection datasets. We did not further preprocess the text of the comments. As the API does not provide the language of comments, we employed a language identifier to automatically annotate this information. We used the langdetect library BIBREF13. For each responder we classified all the comments jointly, assuming that responders did not switch code during the answering of the questionnaire. We applied the identifier in a two-step approach. In the first run we allowed the identifier to output all 55 languages that it supports out of the box, plus Romansh, the fourth official language in Switzerland. We found that no Romansh comments were detected and that all unexpected outputs were misclassifications of German, French or Italian comments. We further concluded that little or no Swiss German comments are in the dataset: If they were, some of them would have manifested themselves in the form of misclassifications (e.g. as Dutch). In the second run, drawing from these conclusions, we restricted the identifier's output to English, French, German and Italian. We pre-filtered the questions and answers to improve the quality of the dataset. To keep the domain of the data surveyable, we set a focus on national-level questions. Therefore, all questions and corresponding answers pertaining to national elections were included. In the context of communal and cantonal elections, candidates have answered both local questions and a subset of the national questions. Of those elections, we only considered answers to the questions that also had been asked in a national election. Furthermore, they were only used to augment the training set while the validation and test sets were restricted to answers from national elections. We discarded the less than 20 comments classified as English. Furthermore, instances that met any of the following conditions were filtered from the dataset: Question is not a closed question or does not address a clearly defined political issue. No comment was submitted by the candidate or the comment is shorter than 50 characters. Comment starts with “but” or a similar indicator that the comment is not a self-contained statement. Comment contains a URL. In total, a fifth of the original comments were filtered out. The questions have been organized by the Smartvote editors into categories (such as “Economy”). We further consolidated the pre-defined categories into 12 broad topics (Table TABREF7). The dataset is shared under a CC BY-NC 4.0 license. Copyright remains with www.smartvote.ch. 
Given the sensitive nature of the data, we increase the anonymity of the data by hashing the respondents' IDs. No personal attributes of the respondents, such as their party affiliation, are included in the dataset. We provide a data statement BIBREF15 in Appendix SECREF8. The downloaded data cover 175 communal, cantonal and national elections between 2011 and 2020.
How many communal, cantonal and national elections do the downloaded data cover?
175.
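To make the preprocessing described above concrete, here is a hedged Python sketch of the label merging and a simplified version of the language-identification step using the langdetect library. Restricting the second pass to four languages is approximated here by filtering the detect_langs output, which is an assumption about how such a restriction could be implemented; the example answers and comments are invented.

from langdetect import DetectorFactory, detect_langs

DetectorFactory.seed = 0   # make langdetect deterministic

# Merge the four answer labels into two stance labels, as described above.
LABEL_MAP = {"yes": "favor", "rather yes": "favor",
             "rather no": "against", "no": "against"}

ALLOWED = {"de", "fr", "it", "en"}

def identify_language(comments):
    # Classify all of a respondent's comments jointly by concatenating them,
    # then keep only the allowed languages (the second-pass restriction).
    text = " ".join(comments)
    candidates = [c for c in detect_langs(text) if c.lang in ALLOWED]
    return max(candidates, key=lambda c: c.prob).lang if candidates else None

answers = [("rather yes", "Ich stimme der Vorlage grundsätzlich zu."),
           ("no", "Diese Massnahme geht deutlich zu weit.")]
stances = [LABEL_MAP[a] for a, _ in answers]
language = identify_language([c for _, c in answers])
print(stances, language)   # -> ['favor', 'against'] de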
null
false
null
What is the next big thing after cloud computing ?
Consensus suggests that edge computing will be the next big technology to complement cloud computing.
1810.11118
true
null
In this section, we propose new simple disentanglement models that perform better than prior methods, and re-examine prior work. The models we consider are: In this section, we propose new simple disentanglement models that perform better than prior methods, and re-examine prior work.
Did they experiment with the corpus?
Yes.
null
false
102
Rendering natural language descriptions from structured data is required in a wide variety of commercial applications such as generating descriptions of products, hotels, furniture, etc., from a corresponding table of facts about the entity. Such a table typically contains {field, value} pairs where the field is a property of the entity (e.g., color) and the value is a set of possible assignments to this property (e.g., color = red). Another example of this is the recently introduced task of generating one line biography descriptions from a given Wikipedia infobox BIBREF0 . The Wikipedia infobox serves as a table of facts about a person and the first sentence from the corresponding article serves as a one line description of the person. Figure FIGREF2 illustrates an example input infobox which contains fields such as Born, Residence, Nationality, Fields, Institutions and Alma Mater. Each field further contains some words (e.g., particle physics, many-body theory, etc.). The corresponding description is coherent with the information contained in the infobox. Note that the number of fields in the infobox and the ordering of the fields within the infobox varies from person to person. Given the large size (700K examples) and heterogeneous nature of the dataset which contains biographies of people from different backgrounds (sports, politics, arts, etc.), it is hard to come up with simple rule-based templates for generating natural language descriptions from infoboxes, thereby making a case for data-driven models. Based on the recent success of data-driven neural models for various other NLG tasks BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , one simple choice is to treat the infobox as a sequence of {field, value} pairs and use a standard seq2seq model for this task. However, such a model is too generic and does not exploit the specific characteristics of this task as explained below. First, note that while generating such descriptions from structured data, a human keeps track of information at two levels. Specifically, at a macro level, she would first decide which field to mention next and then at a micro level decide which of the values in the field needs to be mentioned next. For example, she first decides that at the current step, the field occupation needs attention and then decides which is the next appropriate occupation to attend to from the set of occupations (actor, director, producer, etc.). To enable this, we use a bifocal attention mechanism which computes an attention over fields at a macro level and over values at a micro level. We then fuse these attention weights such that the attention weight for a field also influences the attention over the values within it. Finally, we feed a fused context vector to the decoder which contains both field level and word level information. Note that such two-level attention mechanisms BIBREF6 , BIBREF7 , BIBREF8 have been used in the context of unstructured data (as opposed to structured data in our case), where at a macro level one needs to pay attention to sentences and at a micro level to words in the sentences. Next, we observe that while rendering the output, once the model pays attention to a field (say, occupation) it needs to stay on this field for a few timesteps (till all the occupations are produced in the output). We refer to this as the stay on behavior. Further, we note that once the tokens of a field are referred to, they are usually not referred to later. 
For example, once all the occupations have been listed in the output we will never visit the occupation field again because there is nothing left to say about it. We refer to this as the never look back behavior. To model the stay on behaviour, we introduce a forget (or remember) gate which acts as a signal to decide when to forget the current field (or equivalently to decide till when to remember the current field). To model the never look back behaviour we introduce a gated orthogonalization mechanism which ensures that once a field is forgotten, subsequent field context vectors fed to the decoder are orthogonal to (or different from) the previous field context vectors. We experiment with the WikiBio dataset BIBREF0 which contains around 700K {infobox, description} pairs and has a vocabulary of around 400K words. We show that the proposed model gives a relative improvement of 21% and 20% as compared to current state of the art models BIBREF0 , BIBREF9 on this dataset. The proposed model also gives a relative improvement of 10% as compared to the basic seq2seq model. Further, we introduce new datasets for French and German on the same lines as the English WikiBio dataset. Even on these two datasets, our model outperforms the state of the art methods mentioned above. We experiment with the WIKIBIO dataset (Lebret et al., 2016) which contains around 700K {infobox, description} pairs and has a vocabulary of around 400K words.
What dataset is used to train the model?
A dataset that contains data from Wikipedia infobox
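As a rough illustration of the fused two-level (bifocal) attention idea, here is a small numpy sketch in which the field-level weights scale the word-level weights of the values inside each field. The shapes, scores and example fields are illustrative assumptions rather than the authors' exact model.

import numpy as np

def fused_bifocal_attention(field_scores, word_scores, field_of_word, word_vectors):
    # field_scores: unnormalised macro-level scores, one per field
    # word_scores: unnormalised micro-level scores, one per value word
    # field_of_word: index of the field each value word belongs to
    # word_vectors: one hidden vector per value word
    alpha = np.exp(field_scores) / np.exp(field_scores).sum()   # attention over fields
    beta = np.exp(word_scores) / np.exp(word_scores).sum()      # attention over value words
    fused = beta * alpha[field_of_word]                          # field weight scales its own words
    fused = fused / fused.sum()
    return fused @ word_vectors                                  # fused context vector for the decoder

# Two fields (say, 'occupation' and 'nationality') and three value words.
field_scores = np.array([2.0, 0.5])
word_scores = np.array([1.0, 0.8, 1.2])
field_of_word = np.array([0, 0, 1])           # the first two words belong to field 0
word_vectors = np.random.rand(3, 4)
print(fused_bifocal_attention(field_scores, word_scores, field_of_word, word_vectors))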
null
false
null
Cyberpunk 2077 is a 2020 action role-playing video game developed by CD Projekt Red and published by CD Projekt. Set in Night City, an open world set in the Cyberpunk universe, players assume the role of a customisable mercenary known as V, who can acquire skills in hacking and machinery with options for melee and ranged combat. The main story follows V's struggle as they deal with a mysterious cybernetic implant that threatens to overwrite their body with the personality and memories of a deceased celebrity only perceived by V; the two must work together to be separated and save V's life.
Who is the protagonist in CyberPunk 2077?
The main story revolves around the protagonist V, as they deal with a mysterious cybernetic implant that threatens to overwrite their body with another person's memories.
null
false
null
Behind Delta tables is the data stored in Parquet file formats?
Yes, the underlying files are Parquet. Delta is a protocol/API, and the Delta layer manages the individual Parquet files as if they were a larger table. If BQ can consume from the Delta protocol it could work, but I would not suggest reading the files inside a Delta table as just a Parquet table - there would likely be too many stale files.
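A short PySpark sketch of the two read paths, assuming a Spark session with the delta-spark package configured; the table path is a placeholder.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-vs-parquet").getOrCreate()
path = "/tmp/example_delta_table"   # placeholder path

# Write a tiny Delta table (assumes the delta-spark package is on the classpath).
spark.range(5).write.format("delta").mode("overwrite").save(path)

# Reading through the Delta protocol: the transaction log decides which Parquet
# files belong to the current table version.
current = spark.read.format("delta").load(path)

# Reading the same directory as bare Parquet ignores the log, so removed or stale
# files would also be picked up -- which is why it is not recommended.
raw_files = spark.read.parquet(path)

print(current.count(), raw_files.count())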
null
false
null
In the series A Song of Ice and Fire, who is the founder of House Targaryen?
While the Targaryen family has years of historic ties to Valyria, it wasn't until Aegon's Conquest in the year 1 AC that the family was established as a Westerosi house; Aegon the Conqueror is the house's founder.
null
false
null
Who founded Linkedin?
Reid Hoffman is the founder of LinkedIn.
null
false
null
What are five popular songs by Jack Harlow?
Five popular songs by Jack Harlow are First Class, WHATS POPPIN, Dua Lipa, Tyler Herro, and Churchill Downs.
null
false
null
What is the evolved form of Pikachu?
The evolved form of Pikachu is Raichu.
null
false
253
As an essential part of a task-oriented dialogue system BIBREF0 , the task of natural language generation (NLG) is to produce a natural language utterance containing the desired information given a semantic representation consisting of dialogue act types with a set of slot-value pairs. Conventional methods using hand-crafted rules often generates monotonic utterances and it requires substantial amount of human engineering work. Recently, various neural approaches BIBREF1 , BIBREF2 , BIBREF3 have been proposed to generate accurate, natural and diverse utterances. However, these methods are typically developed for particular domains. Moreover, they are often data-intensive to train. The high annotation cost prevents developers to build their own NLG component from scratch. Therefore, it is extremely useful to train a NLG model that can be generalized to other NLG domains or tasks with a reasonable amount of annotated data. This is referred to low-resource NLG task in this paper. Recently, some methods have been proposed for low-resource NLG tasks. Apart from the simple data augmentation trick BIBREF4 , specialized model architectures, including conditional variational auto-encoders (CVAEs, BIBREF3 , BIBREF5 , BIBREF6 ) and adversarial domain adaptation critics BIBREF5 , have been proposed to learn domain-invariant representations. Although promising results were reported, we found that datasets used by these methods are simple which tend to enumerate many slots and values in an utterance without much linguistic variations. As a consequence, over-fitting the slots and values in the low-resource target domain could even outperform those versions trained with rich source domain examples BIBREF6 . Fortunately, there is a new large-scale dialog dataset (MultiWoz, BIBREF7 ) that contains a great variety of domains and linguistic patterns that allows us to conduct extensive and meaningful experimental analysis for low-resource NLG tasks. In this paper, instead of casting the problem as model-based approaches, we propose a generalized optimization-based meta-learning approach to directly enhance the optimization procedure for the low-resource NLG task. We start by arguing that a recently proposed model-agnostic meta-learning algorithm (MAML, BIBREF8 ) is a nice fit to the low-resource NLG task. Then, we proposed a generalized NLG algorithm called Meta-NLG based on MAML by viewing languages in different domains or dialog act types as separate Meta NLG tasks. Following the essence of MAML, the goal of Meta-NLG is to learn a better initialization of model parameters that facilitates fast adaptation to new low-resource NLG scenarios. As Meta-NLG is model-agnostic as long as the model can be optimized by gradient descent, we could apply it to any existing NLG models to optimize them in a way that adapt better and faster to new low-resource tasks. The main contribution of this paper is two-fold: Then, we proposed a generalized NLG algorithm called Meta-NLG based on MAML by viewing languages in different domains or dialog act types as separate Meta NLG tasks.
How to work as separate Meta NLG tasks?
It works by viewing languages in different domains or dialog act types as separate Meta NLG tasks.
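For intuition, here is a compact PyTorch sketch of the MAML-style update that this approach builds on: an inner adaptation step on a task's support data, followed by an outer update computed from the adapted parameters' loss on query data. The toy linear model and synthetic task data are placeholders, not the authors' Meta-NLG code.

import torch

torch.manual_seed(0)
model = torch.nn.Linear(4, 1)                  # toy stand-in for an NLG model
meta_opt = torch.optim.SGD(model.parameters(), lr=0.01)
inner_lr = 0.1

def task_batch():
    x = torch.randn(8, 4)
    return x, x.sum(dim=1, keepdim=True)       # synthetic "task" data

for step in range(100):
    x_support, y_support = task_batch()        # one meta-task per step, for brevity
    x_query, y_query = task_batch()

    # Inner step: adapt a copy of the parameters to the support set.
    support_loss = torch.nn.functional.mse_loss(model(x_support), y_support)
    grads = torch.autograd.grad(support_loss, list(model.parameters()), create_graph=True)
    adapted = [p - inner_lr * g for p, g in zip(model.parameters(), grads)]

    # Outer step: evaluate the adapted parameters on the query set and
    # backpropagate through the inner update into the initial parameters.
    pred = x_query @ adapted[0].t() + adapted[1]
    query_loss = torch.nn.functional.mse_loss(pred, y_query)
    meta_opt.zero_grad()
    query_loss.backward()
    meta_opt.step()

print(float(query_loss))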
null
false
508
In the quantized network, its gradient shows vanishing except for nondifferentiable points. The network thus cannot be learned by the standard backpropagation, so that an alternative approach called Straight Through Estimator (STE), which replaces the part of the gradient with a simple differentiable function, is used. While STE is known to work well for learning the quantized network empirically, it has not been established theoretically. A recent study by Yin et al. (2019) has provided theoretical support for STE. However, its justification is still limited to the model in the one-hidden layer network with the binary activation where Gaussian generates the input data, and the true labels are output from the teacher network with the same binary network architecture. In this paper, we discuss the effectiveness of STEs in more general situations without assuming the shape of the input distribution and the labels. By considering the scale symmetry of the network and specific properties of the STEs, we find that STE with clipped Relu is superior to STEs with identity function and vanilla Relu. The clipped Relu STE, which breaks the scale symmetry, may pick up one of the local minima degenerated in scales, while the identity STE and vanilla Relu STE, which keep the scale symmetry, may not pick it up. To confirm this observation, we further present an analysis of a simple misspecified model as an example. We find that all the stationary points are identical with the vanishing points of the cRelu STE gradient, while some of them are not identical with the vanishing points of the identity and Relu STE. Finally we have numerically confirmed the observation for the mixture Gaussian model with various teacher network. To confirm our conjecture in more general case, we have numerically studied the Gaussian mixture input with various mean values. As teacher networks, we have tested tanh-type and sin-type as well as Relu-type. For all the examined setups, cRelu STE behaves like the population gradient, while id/Relu STEs show qualitatively different behaviors as described below. To calculate the population loss and STE gradients, we have generated the mixture Gaussian samples and have taken their average. The population gradient has been obtained by calculating the finite difference of the population loss. We have demonstrated the back propagation with learning rate η = 0.01 given in Algorithm 1. Shown in Fig. are the results of ten mixture Gaussian input with random mean values for each components of Z. We employed m = 20, n = 25, and the tanh-type teacher network. We find that the population gradient and cRelu STE show similar results, while id/Relu STEs are completely different. This indicates cRelu STE is less biased than id/Relu STEs. In the case of id/Relu STEs, at early steps up to around 500 step, |w| decreases around the magnitude of the update quantity, so that it begins to oscillate. Then it escapes the oscillation, and the loss function shows the convergence to a point different from the local minimum achieved by population gradient, while |w| becomes larger and larger due to their scale invariance. Interestingly, the values of the loss function are small compared to the one obtained by population gradient, which implies that id/Relu STEs avoid being trapped in the local solution due to their large bias. However, note that even at that point, the magnitude of w continues to grow and eventually become numerically unstable. 
Figure: Numerical results of the back-propagation by population gradient and three STEs. We generate 10000 samples which follow ten mixture Gaussian input with random mean values. We employ the tanh-type teacher network. We find that all the stationary points are identical with the vanishing points of the cRelu STE gradient, while some of them are not identical with the vanishing points of the identity and Relu STE. Finally we have numerically confirmed the observation for the mixture Gaussian model with various teacher network.****We find that the population gradient and cRelu STE show similar results, while id/Relu STEs are completely different. This indicates cRelu STE is less biased than id/Relu
Are there no experimental results? Even not on MNIST or a toy example to show that the theoretical findings correlate with empirical observations
Thank you very much for pointing out the issue. As a first step towards understanding whether the theoretical findings hold in empirical situations, we have numerically tested the Gaussian mixture input with various mean values. We have found that cRelu STE behaves similarly to the population gradient, while id/Relu STEs show completely different behaviors, as shown in Fig. 1. The behaviors are the same as what we discussed theoretically.
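To make the STE discussion concrete, here is a small PyTorch sketch of a binary activation whose backward pass uses the clipped-ReLU surrogate gradient (the gradient is passed only where the pre-activation lies in (0, 1)). The surrounding computation is a placeholder, not the experimental setup from the paper.

import torch

class BinaryActClippedReluSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x > 0).float()                        # binary activation in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Clipped-ReLU surrogate: pass the gradient only where 0 < x < 1.
        return grad_output * ((x > 0) & (x < 1)).float()

x = torch.randn(5, requires_grad=True)
w = torch.randn(5, requires_grad=True)
out = BinaryActClippedReluSTE.apply(w * x).sum()
out.backward()
print(w.grad)   # nonzero only where the pre-activation fell inside (0, 1)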
null
false
null
In which medium does sound travel faster and why
Sound travels faster in solids because the molecules in a solid medium are much closer together than those in a liquid or gas, so vibrations pass from molecule to molecule more quickly. Sound waves travel over 17 times faster through steel than through air.
null
false
null
What equipment is required for scuba diving?
Scuba diving requires * An air (or enriched air) tank * Breathing device - typically an open circuit regulator or a closed circuit rebreather * Buoyancy Control Device (bcd) * Depth gauge * Submersible Pressure Gauge * Dive Computer or Watch * Mask * Fins Optional equipment * Wetsuit * Weights * Surface Marker Buoy * Dive Light
null
false
null
What Is The Tesla Model Y’s Charger Type?
The Tesla Model Y can charge up to 11 kW with AC charging and 210 kW with DC charging. They have used the standard European connector types ever since November 2018. Their AC connector is the Type 2 connector, often called Mennekes, after the German manufacturer that designed them. They are the most used AC connector in the world and can be found at home, work, and some public charging stations, although not all public charging networks supply AC charging. The Tesla Model Y uses the CCS connector for DC charging. This is the most popular DC charger globally and can be used mainly at public charging stations in the UK. The Type 2 and CCS connectors are combined into the CCS 2 connector, which you use to charge your Model Y.
null
false
null
Doleshwor Mahadeva (Nepali: डोलेश्वर महादेव) is a Hindu Temple of Lord Shiva located in Suryabinayak, south eastern part of Bhaktapur District, Nepal, and is believed to be the head part of Kedarnath temple located in Uttarakhand, India. History For 4000 years people have been searching for the head of the Panch Kedar temples, a bull who was Shiva in reality, who assumed the shape of a bull to avoid the five Pandava brothers, the heroes of the Mahabharat. The legend goes back to the fabled battle of Kurukshetra fought between the five Pandava brothers and their cousins, the 100 Kaurava brothers, which is the pivot of the Mahabharata. Many folk legends related to the Garhwal region, Lord Shiva and the creation of the Panch Kedar temples are narrated. A folk legend about Panch Kedar relates to the Pandavas, the heroes of the Hindu epic Mahabharata. The Pandavas defeated and slayed their cousins — the Kauravas in the epic Kurukshetra war. They wished to atone for the sins of committing fratricide (gotra hatya) and Brāhmanahatya (killing of Brahmins — the priest class) during the war. Thus, they handed over the reins of their kingdom to their kin and left in search of lord Shiva and to seek his blessings. First, they went to the holy city of Varanasi (Kashi), believed to be Shiva's favourite city and known for its Kashi Vishwanath Temple. But, Shiva wanted to avoid them as he was deeply incensed by the death and dishonesty at the Kurukshetra war and was, therefore, insensitive to Pandavas' prayers. Therefore, he assumed the form of a bull (Nandi) and hid in the Garhwal region. Not finding Shiva in Varanasi, the Pandavas went to Garhwal Himalayas. Bhima, the second of the five Pandava brothers, then standing astride two mountains started to look for Shiva. He saw a bull grazing near Guptakashi (“hidden Kashi” — the name derived from the hiding act of Shiva). Bhima immediately recognized the bull to be Shiva. Bhima caught hold of the bull by its tail and hind legs. But the bull-formed Shiva disappeared into the ground to later reappear in parts, with the hump raising in Kedarnath, the arms appearing in Tungnath, the face showing up at Rudranath, the nabhi (navel) and stomach surfacing in Madhyamaheshwar and the hair appearing in Kalpeshwar. The Pandavas pleased with this reappearance in five different forms, built temples at the five places for venerating and worshipping Shiva. The Pandavas were thus freed from their sins. It is also believed that the fore portions of Shiva appeared at Doleshwor Mahadeva Temple, Bhaktapur district Nepal. A variant of the tale credits Bhima of not only catching the bull, but also stopping it from disappearing. Consequently, the bull was torn asunder into five parts and appeared at five locations in the Kedar Khand of Garhwal region of the Himalayas. After building the Panch Kedar Temples, the Pandavas meditated at Kedarnath for salvation, performed yagna (fire sacrifice) and then through the heavenly path called the Mahapanth (also called Swargarohini), attained heaven or salvation.[citation needed]. The Panch Kedar Temples are constructed in the North-Indian Himalayan Temple architecture with the Kedarnath, Tungnath and Madhyamaheshwar temples looking similar. After completing the pilgrimage of Lord Shiva's darshan at the Panch Kedar Temples, it is an unwritten religious rite to visit Lord Vishnu at the Badrinath Temple, as a final affirmatory proof by the devotee that he has sought blessings of Lord Shiva. 
Research and findings The Hindu activist Bharat Jangam had been researching and claiming that Doleshwar Mahadev is the head part of Kedarnath based on the surprising links between Kedarnath and Doleshwor. The sculptures of Shiva found in both shrines are 4,000 years old. Even a stone inscription found in Doleshwor was written in Sanskrit and Old Nepalese. The priests in both shrines are selected from India's southern states of Karnataka, Andhra Pradesh, Kerala and Tamil Nadu. Both priests affix the word ‘ling’ after their names to convey their proximity to the god as his worshipper, and both temples have a cluster of five Shiva shrines. The main deity of both priests is Birbhadra, a companion of Shiva, according to Hindu scriptures. Recognition On August 22, 2009, the head priest of Kedarnath peeth Shree 1008 Jagat Guru Bheemashankarling Shivacharya unveiled the plaque claiming that Doleshwor Mahadev, located in Jangam math Bhaktapur, is the head part of Shri Kedarnath. He performed Rudra Abhishek, a special worship ritual, at the Doleshwor temple situated in Sipadol village of Bhaktapur district. This is also mentioned in the inscription (Sheela Lekh) at Doleshwar Mahadev. Thousands of devotees have been thronging to the Doleshwor Mahadev Temple.
Have you heard of Doleshwor Mahadeva temple?
Yes, Doleshwor Mahadeva (Nepali: डोलेश्वर महादेव) is a Hindu temple of Lord Shiva located in Suryabinayak, in the south-eastern part of Bhaktapur District, Nepal.
null
false
null
Who is the greatest basketball player of all time?
While this question continues to be debated by sports fans and pundits around the globe nearly every single day, usually the answer comes down to one of a handful of players: Michael Jordan, Kareem Abdul-Jabbar, LeBron James, or Bill Russell. Michael Jordan is the most common answer to this question. He won 6 championship rings, 5 MVP's and never lost an NBA Finals series. If he didn't take a two year hiatus from the NBA to play baseball, there is a good chance that he would have won 8 NBA championships in a row. Michael was a fantastic defender, and arguably the most relentless competitor that the NBA has ever seen. He was notorious for gambling on almost anything, and his iconic Air Jordan shoe is largely responsible for Nike's success as a global brand. Kareem Abdul-Jabbar was, until recently, the NBA's all time leading scorer. He played 20 seasons in the NBA, was a 19-time All-Star, 6-time MVP, and a 6-time NBA champion. He was incredibly consistent throughout his entire career, and transcended the NBA as a civil rights activist, author and even an actor. LeBron James is the greatest NBA player of this generation. He is a five-time NBA champion, four-time MVP, and recently surpassed Kareem Abdul-Jabbar as the NBA's all time leading scorer. No player came in to the NBA with greater expectations, and somehow, LeBron surpassed them all. LeBron first made headlines at age 15, and many of his high school games were nationally televised. Bill Russell was the ultimate teammate. In 13 seasons in the NBA he won 11 titles, all as a member of the Boston Celtics. During his last two seasons, he won the title as a Player/Coach, also becoming the first black coach to win an NBA title. Russell was a five-time MVP and 12-time NBA All Star, and is widely respected as the best defensive player of his era. He had a well-publicized rivalry with Wilt Chamberlain, but while Wilt chased stats, Bill made sure his team won. In addition to these four, there have been many other amazing basketball players over the years, including modern stars like Stephen Curry, Kevin Durant, the late Kobe Bryant, Magic Johnson, Larry Bird and more. Debating who is the best player ever is part of what makes basketball so entertaining. So while most people may say that Michael Jordan is the greatest player ever, this debate will rage on for years to come as more and more stars enter the league and build their careers.
null
false
68
Today, the Internet is one of the most widely available media worldwide. Recommendation systems are increasingly being used in various applications such as movie recommendation, mobile recommendation, article recommendation, etc. Collaborative Filtering (CF) and Content-Based (CB) are well-known techniques for building recommendation systems. Topic modeling based on LDA is a powerful technique for semantic mining and topic extraction. In the past few years, many articles have been published based on the LDA technique for building recommendation systems. In this paper, we present a taxonomy of recommendation systems and applications based on LDA. In addition, we utilize LDA and Gibbs sampling algorithms to evaluate ISWC and WWW conference publications in computer science. Our study suggests that recommendation systems based on LDA could be effective in building smart recommendation systems in online communities. Collaborative Filtering (CF) and Content-Based (CB) are well-known techniques for building recommendation systems.
What technique is most commonly used for building recommendation systems?
Collaborative Filtering (CF) and Content-Based (CB) are well-known techniques for the task.
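As a minimal illustration, here is a gensim sketch of extracting topics from a toy corpus. Note that gensim's LdaModel uses online variational Bayes rather than the Gibbs sampling mentioned above, and the documents are invented placeholders.

from gensim import corpora
from gensim.models import LdaModel

docs = [["semantic", "web", "ontology", "linked", "data"],
        ["neural", "network", "language", "model"],
        ["ontology", "reasoning", "linked", "data"],
        ["recommendation", "system", "collaborative", "filtering"]]

dictionary = corpora.Dictionary(docs)
bow_corpus = [dictionary.doc2bow(doc) for doc in docs]

# Fit a 2-topic model and print the top words per topic.
lda = LdaModel(corpus=bow_corpus, id2word=dictionary, num_topics=2,
               passes=10, random_state=0)
for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)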
null
false
192
Topic modeling approaches are unsupervised statistical algorithms that usually considers each document as a "bag of words". There were several attempts to enrich word-based topic models (=unigram topic models) with additional prior knowledge or multiword expressions. Andrzejewski et al. BIBREF5 incorporated knowledge by Must-Link and Cannot-Link primitives represented by a Dirichlet Forest prior. These primitives were then used in BIBREF6 , where similar words are encouraged to have similar topic distributions. However, all such methods incorporate knowledge in a hard and topic-independent way, which is a simplification since two words that are similar in one topic are not necessarily of equal importance for another topic. Xie et al. BIBREF7 proposed a Markov Random Field regularized LDA model (MRF-LDA), which utilizes the external knowledge to improve the coherence of topic modeling. Within a document, if two words are labeled as similar according to the external knowledge, their latent topic nodes are connected by an undirected edge and a binary potential function is defined to encourage them to share the same topic label. Distributional similarity of words is calculated beforehand on a large text corpus. In BIBREF8 , the authors gather so-called lexical relation sets (LR-sets) for word senses described in WordNet. The LR-sets include synonyms, antonyms and adjective-attribute related words. To adapt LR-sets to a specific domain corpus and to remove inappropriate lexical relations, the correlation matrix for word pairs in each LR-set is calculated. This matrix at the first step is used for filtrating inappropriate senses, then it is used to modify the initial LDA topic model according to the generalized Polya urn model described in BIBREF9 . The generalized Polya urn model boosts probabilities of related words in word-topic distributions. Gao and Wen BIBREF10 presented Semantic Similarity-Enhanced Topic Model that accounts for corpus-specific word co-occurrence and word semantic similarity calculated on WordNet paths between corresponding synsets using the generalized Polya urn model. They apply their topic model for categorizing short texts. All above-mentioned approaches on adding knowledge to topic models are limited to single words. Approaches using ngrams in topic models can be subdivided into two groups. The first group of methods tries to create a unified probabilistic model accounting unigrams and phrases. Bigram-based approaches include the Bigram Topic Model BIBREF11 and LDA Collocation Model BIBREF12 . In BIBREF13 the Topical N-Gram Model was proposed to allow the generation of ngrams based on the context. However, all these models are enough complex and hard to compute on real datasets. The second group of methods is based on preliminary extraction of ngrams and their further use in topics generation. Initial studies of this approach used only bigrams BIBREF14 , BIBREF15 . Nokel and Loukachevitch BIBREF16 proposed the LDA-SIM algorithm, which integrates top-ranked ngrams and terms of information-retrieval thesauri into topic models (thesaurus relations were not utilized). They create similarity sets of expressions having the same word components and sum up frequencies of similarity set members if they co-occur in the same text. In this paper we describe the approach to integrate whole manual thesauri into topic models together with multiword expressions. 
In this paper we describe the approach to integrate whole manual thesauri into topic models together with multiword expressions.
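As an informal illustration of the similarity-set idea attributed to LDA-SIM above (grouping expressions that share a word component and summing the frequencies of set members that co-occur in a text), here is a small hypothetical sketch; the expressions and document are made up and this is not the authors' implementation:

```python
# Illustrative sketch of LDA-SIM-style similarity sets (hypothetical data).
from collections import Counter, defaultdict

# Candidate multiword expressions and single terms, assumed extracted beforehand.
expressions = ["neural network", "network topology", "bayesian network", "network"]

# Build similarity sets: expressions sharing a common word component.
sim_sets = defaultdict(set)
for expr in expressions:
    for word in expr.split():
        sim_sets[word].add(expr)

def boosted_counts(doc_tokens, sim_sets):
    """Sum frequencies of similarity-set members that co-occur in the same text."""
    counts = Counter(doc_tokens)
    boosted = dict(counts)
    for _, members in sim_sets.items():
        present = [m for m in members if counts[m] > 0]
        if len(present) > 1:                 # members co-occur in this document
            total = sum(counts[m] for m in present)
            for m in present:
                boosted[m] = total           # each member gets the summed frequency
    return boosted

doc = ["neural network", "network", "bayesian network", "neural network"]
print(boosted_counts(doc, sim_sets))
```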
In their method, what is integrated into topic models?
Whole manual thesauri and multiword expressions.
null
false
98
Lexical analysis, syntactic analysis, semantic analysis, discourse analysis and pragmatic analysis are the five main steps in natural language processing BIBREF0 , BIBREF1 . While morphology is a basic task in the lexical analysis of English, word segmentation is considered a basic task in the lexical analysis of Vietnamese and other East Asian languages. This task is to determine the borders between words in a sentence; in other words, it segments a list of tokens into a list of words such that the words are meaningful. Word segmentation is the primary step prior to other natural language processing tasks, i.e., term extraction and linguistic analysis (as shown in Figure 1). It identifies the basic meaningful units in input texts, which will be processed in the next steps of several applications. For named entity recognition BIBREF2 , word segmentation chunks sentences in input documents into sequences of words before they are further classified into named entity classes. For the Vietnamese language, words and candidate terms can be extracted from Vietnamese corpora (such as books, novels, news, and so on) by using a word segmentation tool. Features and context of these words and terms are used to identify named entity tags, topics of documents, or function words. For linguistic analysis, several linguistic features from dictionaries can be used either for annotating POS tags or for identifying the answer sentences. Moreover, language models can be trained by using machine learning approaches and be used in tagging systems, like the named entity recognition system of Tran et al. BIBREF2 . Many studies focus on word segmentation for Asian languages, such as Chinese, Japanese, Burmese (Myanmar) and Thai BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . Approaches to the word segmentation task vary, from lexicon-based to machine learning-based methods. Recently, machine learning-based methods have been widely used to solve this issue, such as Support Vector Machines or Conditional Random Fields BIBREF7 , BIBREF8 . In general, Chinese is the language with the most studies on the word segmentation issue. However, there is a lack of surveys of word segmentation studies on Asian languages, and on Vietnamese in particular. This paper aims to review state-of-the-art word segmentation approaches and systems applied to Vietnamese. This study will be a foundation for studies on Vietnamese word segmentation and other downstream Vietnamese tasks as well, such as part-of-speech tagger, chunker, or parser systems. There have been several studies on the Vietnamese word segmentation task over the last decade. Dinh et al. started this task with a Weighted Finite State Transducer (WFST) approach and a Neural Network approach BIBREF9 . In addition, machine learning approaches have been studied and widely applied to natural language processing and word segmentation as well. In fact, several studies used support vector machines (SVM) and conditional random fields (CRF) for the word segmentation task BIBREF7 , BIBREF8 . Based on annotated corpora and token-based features, studies used machine learning approaches to build word segmentation systems with accuracy of about 94%-97%. According to our observation, there is a lack of a complete review of the approaches, datasets and toolkits recently used in Vietnamese word segmentation. An all-sided review of word segmentation will help subsequent studies on Vietnamese natural language processing tasks have an up-to-date guideline and choose the most suitable solution for the task.
The remaining part of the paper is organized as follows. Section II discusses building a corpus in Vietnamese, covering linguistic issues and the building process. Section III briefly mentions methods to model sentences and text in machine learning systems. Next, learning models and approaches for labeling and segmenting sequence data are presented in Section IV. Section V mainly addresses two existing toolkits, vnTokenizer and JVnSegmenter, for Vietnamese word segmentation. Several experiments based on the mentioned approaches and toolkits are described in Section VI. Finally, conclusions and future work are given in Section VII.
However, there is a lack of surveys of word segmentation studies on Asian languages, and on Vietnamese in particular. This paper aims to review state-of-the-art word segmentation approaches and systems applied to Vietnamese.****According to our observation, there is a lack of a complete review of the approaches, datasets and toolkits recently used in Vietnamese word segmentation.
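To make the machine-learning formulation of Vietnamese word segmentation more concrete, the sketch below shows the kind of syllable-level features and BIO-style labels typically fed to a CRF or SVM tagger; the feature set, example sentence, and labels are hypothetical and are not taken from vnTokenizer or JVnSegmenter:

```python
# Minimal sketch of syllable-level features for a BIO-style word segmenter
# (hypothetical features/labels; not from vnTokenizer or JVnSegmenter).
def syllable_features(syllables, i):
    """Context features for syllable i, as typically fed to a CRF or SVM tagger."""
    s = syllables[i]
    return {
        "syl": s.lower(),
        "is_capitalized": s[:1].isupper(),
        "is_digit": s.isdigit(),
        "prev_syl": syllables[i - 1].lower() if i > 0 else "<BOS>",
        "next_syl": syllables[i + 1].lower() if i < len(syllables) - 1 else "<EOS>",
        "bigram_prev": (syllables[i - 1] + "_" + s).lower() if i > 0 else "<BOS>",
    }

# Example sentence split into syllables; B = begins a word, I = continues the word.
syllables = ["Sinh", "viên", "học", "tiếng", "Việt"]
labels    = ["B",    "I",    "B",   "B",     "I"]   # "Sinh_viên", "học", "tiếng_Việt"

X = [syllable_features(syllables, i) for i in range(len(syllables))]
for feats, label in zip(X, labels):
    print(label, feats["syl"], feats["prev_syl"], feats["next_syl"])
```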
What gaps in the NLP domain does this study fill?
There is a lack of surveys of word segmentation studies on Asian languages, especially Vietnamese. This paper reviews state-of-the-art word segmentation approaches and systems applied to Vietnamese.
null
false
null
Who invented the computer?
Charles Babbage is widely credited with having conceived the idea for the first computer, and indeed he has been referred to as the "father of computing." His designs led to the creation of the world's first digital computing device, the Difference Engine. Babbage's ideas were picked up by Ada Lovelace, who described an algorithm to compute Bernoulli numbers using one of his proposed devices. In documenting this algorithm, Lovelace has been recognised as the world's first computer programmer, and in the notes that accompanied it she is also credited with making the first distinction between hardware and software. Alan Turing is often recognised as the father of modern computer science and as a developer of the first modern computers. During the Second World War, Turing's devices broke the Enigma code, which the Germans used to encode their military communications. In so doing, Turing is credited with saving millions of lives. After the war, Turing published a detailed design for an "Automatic Computing Engine" (ACE), which was very much a precursor of the modern computer, even storing its programs in memory. In 1950, Turing published a paper describing what became known as the "Turing Test" - a method to compare human and machine outputs. This paper is widely seen as a seminal moment in the development of Artificial Intelligence.
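As a modern worked example of what Lovelace's algorithm targeted (an illustration using the standard recurrence, not her actual Note G program), the Bernoulli numbers can be computed exactly as follows:

```python
# Compute Bernoulli numbers with the standard recurrence
# sum_{j=0}^{m} C(m+1, j) * B_j = 0 for m >= 1 (modern illustration only).
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return B_0 .. B_n as exact fractions (B_1 = -1/2 convention)."""
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        acc = sum(Fraction(comb(m + 1, j)) * B[j] for j in range(m))
        B[m] = -acc / (m + 1)
    return B

print(bernoulli(8))  # B_2 = 1/6, B_4 = -1/30, B_6 = 1/42, B_8 = -1/30; odd ones above B_1 are 0
```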
1807.03367
false
null
We crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk). We use the MTurk interface of ParlAI BIBREF6 to render 360-degree images via WebGL and dynamically display neighborhood maps with an HTML5 canvas. Detailed task instructions, which were also given to our workers before they started their task, are shown in Appendix SECREF15 . We paired Turkers at random and let them alternate between the tourist and guide roles across different HITs. We crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk).
How was the dataset collected?
The answers are shown as follows: * crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk)
null
false
350
Machine reading comprehension and question answering has become a crucial application problem in evaluating the progress of AI systems in the realm of natural language processing and understanding BIBREF0 . The computational linguistics communities have devoted significant attention to the general problem of machine reading comprehension and question answering. However, most existing reading comprehension tasks only focus on shallow QA tasks that can be tackled very effectively by existing retrieval-based techniques BIBREF1 . For example, recently we have seen increased interest in constructing extractive machine reading comprehension datasets such as SQuAD BIBREF2 and NewsQA BIBREF3 . Given a document and a question, the expected answer is a short span in the document. The question context usually contains sufficient information for identifying evidence sentences that entail question-answer pairs. For example, 90.2% of questions in SQuAD, as reported by Min BIBREF4 , are answerable from the content of a single sentence. Even in some multi-turn conversation tasks, the existing models BIBREF5 mostly focus on retrieval-based response matching. In this paper, we focus on multiple-choice reading comprehension datasets such as RACE BIBREF6 , in which each question comes with a set of answer options. The correct answer for most questions may not appear in the original passage, which makes the task more challenging and allows richer question types such as passage summarization and attitude analysis. This requires a more in-depth understanding of a single document and leveraging external world knowledge to answer these questions. Besides, compared to the traditional reading comprehension problem, we need to fully consider passage-question-answer triplets instead of passage-question pairwise matching. In this paper, we propose a new model, Dual Co-Matching Network, to match a question-answer pair to a given passage bidirectionally. Our network leverages the latest breakthrough in NLP: BERT BIBREF7 contextual embeddings. In the original BERT paper, the final hidden vector corresponding to the first input token ([CLS]) is used as the aggregate representation, and then a standard classification loss is computed with a classification layer. We think this method is too coarse to handle the passage-question-answer triplet because it only roughly concatenates the passage and question as the first sequence and uses the question as the second sequence, without considering the relationship between the question and the passage. So we propose a new method to model the relationship among the passage, the question and the candidate answer. First, we use BERT as our encoding layer to get the contextual representations of the passage, question, and answer options respectively. Then a matching layer is constructed to get the passage-question-answer triplet matching representation, which encodes the locational information of the question and the candidate answer matched to a specific context of the passage. Finally, we apply a hierarchical aggregation method over the matching representation from word level to sequence level and then from sequence level to document level. Our model improves over the state-of-the-art model by 2.6 percentage points on the RACE dataset with the BERT base model and further improves the result by 3 percentage points with the BERT large model. In this paper, we propose a new model, Dual Co-Matching Network, to match a question-answer pair to a given passage bidirectionally.
Our network leverages the latest breakthrough in NLP: BERT contextual embedding.
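A rough PyTorch-style sketch of the bidirectional matching and pooling idea described above is given below; the shapes, pooling choice, and toy encodings are assumptions for illustration and do not reproduce the authors' exact architecture:

```python
# Rough sketch of bidirectional (dual) matching between a passage and an answer
# option, followed by max-pooling aggregation. Shapes and layers are assumptions.
import torch
import torch.nn.functional as F

def dual_match(H_p, H_a):
    """H_p: (batch, P_len, d) passage encodings; H_a: (batch, A_len, d) answer encodings."""
    # Attention weights in both directions from the similarity matrix.
    sim = torch.bmm(H_p, H_a.transpose(1, 2))          # (batch, P_len, A_len)
    attn_p2a = F.softmax(sim, dim=2)                    # passage attends to answer
    attn_a2p = F.softmax(sim.transpose(1, 2), dim=2)    # answer attends to passage

    M_p = torch.bmm(attn_p2a, H_a)                      # answer-aware passage (batch, P_len, d)
    M_a = torch.bmm(attn_a2p, H_p)                      # passage-aware answer (batch, A_len, d)

    # Sequence-level aggregation by max-pooling over the length dimension.
    v_p, _ = torch.max(M_p, dim=1)                      # (batch, d)
    v_a, _ = torch.max(M_a, dim=1)                      # (batch, d)
    return torch.cat([v_p, v_a], dim=1)                 # (batch, 2d) matching vector

# Toy example with random "BERT-like" encodings.
H_p = torch.randn(2, 20, 8)   # batch of 2, passage length 20, hidden size 8
H_a = torch.randn(2, 5, 8)    # answer option length 5
print(dual_match(H_p, H_a).shape)   # torch.Size([2, 16])
```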
What model is proposed in the paper?
Dual Co-Matching Network.
null
false
null
The National Basketball League (NBL) was a professional basketball league in the United States established in 1937. After the 1948–49 season, its twelfth, it merged with the Basketball Association of America (BAA) to create the National Basketball Association (NBA). Five current NBA teams trace their history back to the NBL: the Atlanta Hawks, the Detroit Pistons, the Los Angeles Lakers, the Philadelphia 76ers, and the Sacramento Kings.
Write some important points about the National Basketball League from the given text
1. The National Basketball League (NBL) was a professional basketball league in the United States, founded in 1937. 2. After its twelfth season, 1948–49, it merged with the Basketball Association of America (BAA) to form the National Basketball Association (NBA). 3. The Atlanta Hawks, Detroit Pistons, Los Angeles Lakers, Philadelphia 76ers, and Sacramento Kings are the five current NBA teams whose roots can be traced to the NBL.
null
false
461
Consider the Gaussian mixtures corresponding to the source and target channel conditional densities, $p(\mathbf{y} \mid \mathbf{x}) = \sum_{i=1}^{k} \pi_i(\mathbf{x}) \, \mathcal{N}(\mathbf{y} \mid \boldsymbol{\mu}_i(\mathbf{x}), \boldsymbol{\Sigma}_i(\mathbf{x}))$ and $\widehat{p}(\mathbf{y} \mid \mathbf{x}) = \sum_{i=1}^{k} \widehat{\pi}_i(\mathbf{x}) \, \mathcal{N}(\mathbf{y} \mid \widehat{\boldsymbol{\mu}}_i(\mathbf{x}), \widehat{\boldsymbol{\Sigma}}_i(\mathbf{x}))$, where $\boldsymbol{\theta}_c$ and $\widehat{\boldsymbol{\theta}}_c$ are the parameter vectors of the original and adapted MDN. Here $\boldsymbol{\mu}_i(\mathbf{x}) \in \mathbb{R}^d$ is the mean vector, $\boldsymbol{\sigma}^2_i(\mathbf{x}) \in \mathbb{R}^d_+$ is the variance vector, and $\pi_i(\mathbf{x}) \in [0, 1]$ is the prior probability (weight) of component $i$ for the original mixture. We have assumed that the Gaussian components have a diagonal covariance matrix, with $\boldsymbol{\sigma}^2_i(\mathbf{x})$ being the diagonal elements. The mixture weights are parameterized using the softmax function as $\pi_i(\mathbf{x}) = e^{\alpha_i(\mathbf{x})} / \sum_{j=1}^{k} e^{\alpha_j(\mathbf{x})}, \ \forall i$. The MDN simply predicts the un-normalized weights $\alpha_i(\mathbf{x}) \in \mathbb{R}$, or prior logits. The parameter vector of component $i$ is defined as $\boldsymbol{\phi}_i(\mathbf{x}) = [\boldsymbol{\mu}_i(\mathbf{x})^T, \boldsymbol{\sigma}^2_i(\mathbf{x})^T, \alpha_i(\mathbf{x})]^T$. Affine and Inverse-Affine Feature Transformations. Applying the above result to our MDN with $k$ components, we define the affine feature transformation for a given symbol $\mathbf{x}$ and component $i$ as $\widehat{\mathbf{y}} = \mathbf{C}_i^{1/2} (\mathbf{y} - \boldsymbol{\mu}_i(\mathbf{x})) + \mathbf{A}_i \boldsymbol{\mu}_i(\mathbf{x}) + \mathbf{b}_i$ (3). It is straightforward to also define the inverse-affine transformation from $\widehat{\mathbf{y}}$ to $\mathbf{y}$ as $\mathbf{y} = \mathbf{C}_i^{-1/2} (\widehat{\mathbf{y}} - \mathbf{A}_i \boldsymbol{\mu}_i(\mathbf{x}) - \mathbf{b}_i) + \boldsymbol{\mu}_i(\mathbf{x})$. For the case of diagonal covariances, we constrain $\mathbf{C}_i$ to be diagonal. These feature transformations will be used for aligning the target and source class-conditional distributions of the decoder input. Parameter Transformations. The corresponding transformations between the source and target Gaussian mixture parameters for any symbol $\mathbf{x} \in \mathcal{X}$ and component $i \in [k]$ are given by $\widehat{\boldsymbol{\mu}}_i(\mathbf{x}) = \mathbf{A}_i \boldsymbol{\mu}_i(\mathbf{x}) + \mathbf{b}_i$, $\widehat{\boldsymbol{\sigma}}^2_i(\mathbf{x}) = \mathbf{C}_i \boldsymbol{\sigma}^2_i(\mathbf{x})$, and $\widehat{\alpha}_i(\mathbf{x}) = \beta_i \alpha_i(\mathbf{x}) + \gamma_i$, where $\mathbf{C}_i$ is a diagonal scale matrix for the variances, and $\beta_i \in \mathbb{R}$ and $\gamma_i \in \mathbb{R}$ are the scale and offset for the prior logits. The vector of all adaptation parameters to be optimized is defined as $\boldsymbol{\psi} = [\boldsymbol{\psi}_1^T, \ldots, \boldsymbol{\psi}_k^T]^T$, where $\boldsymbol{\psi}_i$ contains the affine-transformation parameters for component $i$. The number of adaptation parameters (dimension of $\boldsymbol{\psi}$) is given by $k \, (d^2 + 2 d + 2)$. This is typically much smaller than the number of MDN parameters (weights and biases from all layers), even for shallow fully-connected NNs. In the figure, the adaptation layer mapping $\boldsymbol{\phi}(\mathbf{x})$ to $\widehat{\boldsymbol{\phi}}(\mathbf{x})$ basically implements the parameter transformations defined above. Assumptions and Key Insight. The proposed adaptation of the MDN is based on the affine-transformation property of multivariate Gaussians, i.e., one can transform between any two multivariate Gaussians through an affine transformation. Assumption 1: The source and target Gaussian mixtures have the same number of components. This is a practical assumption we make in order not to have to change the architecture of the MDN. Adding or removing components would require a change to the output layer of the MDN. Also, this assumption can be practically justified when $k$ is chosen to be sufficiently large. Assumption 2: The two mixtures have a one-to-one correspondence between the components. This assumption makes it mathematically convenient to derive a closed-form expression for the KL-divergence between two Gaussian mixtures, which would not be possible in the general case. Based on the above assumptions, we can formulate the MDN adaptation as an equivalent problem of finding the optimal set of affine transformations (one per component) from the source to the target Gaussian mixture. This is a much smaller problem compared to optimizing the weights of all the MDN layers. Moreover, the affine transformations are bijective, allowing the feature and parameter mapping to be applied in the inverse direction. To reduce the possibility of the adapted MDN finding bad solutions due to the small-sample regime, we include a regularization term based on the KL-divergence (KLD) in the adaptation objective. Assumptions and Key Insight.
The proposed adaptation of the MDN is based on the affine-transformation property of multivariate Gaussians, i.e., one can transform between any two multivariate Gaussians through an affine transformation. Assumption 1: The source and target Gaussian mixtures have the same number of components. This is a practical assumption we make in order not to have to change the architecture of the MDN. Adding or removing components would require a change to the output layer of the MDN. Also, this assumption can be practically justified when k is chosen to be sufficiently large. Assumption 2: The two mixtures have a one-to-one correspondence between the components. This assumption makes it mathematically convenient to derive a closed-form expression for the KL-divergence between two Gaussian mixtures, which would not be possible in the general case. Based on the above assumptions, we can formulate the MDN adaptation as an equivalent problem of finding the optimal set of affine transformations (one per component) from the source to the target Gaussian mixture. This is a much smaller problem compared to optimizing the weights of all the MDN layers. Moreover, the affine transformations are bijective, allowing the feature and parameter mapping to be applied in the inverse direction. To reduce the possibility of the adapted MDN finding bad solutions due to the small-sample regime, we include a regularization term based on the KL-divergence (KLD) in the adaptation objective.
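The per-component parameter transformations described above can be sketched in a few lines of NumPy; the values below are hypothetical and the code is only an illustration of the reconstructed equations, not the authors' implementation:

```python
# NumPy sketch of the per-component parameter transformations (hypothetical values).
import numpy as np

d, k = 2, 3   # feature dimension and number of mixture components

rng = np.random.default_rng(0)
# Source Gaussian-mixture parameters predicted by the MDN for one symbol x.
mu     = rng.normal(size=(k, d))               # component means
sigma2 = rng.uniform(0.1, 1.0, size=(k, d))    # diagonal variances
alpha  = rng.normal(size=k)                    # prior logits (pre-softmax weights)

# Adaptation parameters psi_i = (A_i, b_i, diag(C_i), beta_i, gamma_i) per component.
A     = np.stack([np.eye(d) for _ in range(k)])   # (k, d, d) affine matrices
b     = np.zeros((k, d))                          # offsets
c     = np.ones((k, d))                           # diagonal variance scales
beta  = np.ones(k)
gamma = np.zeros(k)

# Target (adapted) parameters: mu_hat = A mu + b, sigma2_hat = c * sigma2,
# alpha_hat = beta * alpha + gamma.
mu_hat     = np.einsum("kij,kj->ki", A, mu) + b
sigma2_hat = c * sigma2
alpha_hat  = beta * alpha + gamma

n_adapt_params = k * (d * d + 2 * d + 2)   # matches k (d^2 + 2d + 2)
print(mu_hat.shape, sigma2_hat.shape, alpha_hat.shape, n_adapt_params)
```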
The authors assume that the source and target distributions have the same number of components with a one-to-one correspondence. How realistic is this working hypothesis?
In the revised paper, we state these two assumptions and provide a brief justification for them. The assumption of the same number of components is a practical one that we make in order not to have to change the architecture of the MDN. Adding or removing components would require a change to the output layer of the MDN. Also, this assumption can be practically justified when the number of components is chosen to be sufficiently large. With a large number of components, the appearance or disappearance of a few clusters of data from the true data distribution can be compensated for by other components in the Gaussian mixture.
null
false
null
Stanley B. Goldenberg is a meteorologist with NOAA/AOML's Hurricane Research Division in Miami (Virginia Key), Florida. Goldenberg has specialized in climate studies and hurricanes. Stan's hurricane-related research has included developing and implementing significant improvements to one of the earlier numerical hurricane-track prediction models used by the National Hurricane Center and more recently, examining the various climatic factors which influence the variability of hurricane activity in the Atlantic from intraseasonal to multidecadal time scales. He has done extensive research into the physical mechanisms responsible for the connection between El Niño and Atlantic hurricane activity. He was the first author of the research report published in Science establishing the fact that the Atlantic hurricane basin has entered a multidecadal-scale era of greatly increased hurricane activity. The paper concluded that the increase in hurricane activity was due to natural climate fluctuations rather than from any long-term temperature trends (which some attribute to anthropogenic global warming). (This paper was recognized with the Office of Oceanic and Atmospheric Research Outstanding Scientific Paper Award.) He is one of the lead authors of NOAA's Seasonal Hurricane Outlooks for the Atlantic basin and was a co-recipient of NOAA's Bronze Medal for that work. Goldenberg has participated in numerous research flights into and around hurricanes on NOAA's WP-3D and Gulfstream IV aircraft, including flights into Hurricane Katrina (2005) as it made landfall on the Louisiana/Mississippi coast.
What is Stanley B. Goldenberg's profession?
Stanley B. Goldenberg is a meteorologist with NOAA/AOML's Hurricane Research Division, specializing in climate studies and hurricanes.
null
false
73
Two sets of corpora are used in this study, both are publicly available. The first set consists of English reviews and the second set contains restaurant reviews from four different languages (Spanish, Turkish, Dutch, Russian). We focus on polarity detection in reviews; therefore all datasets in this study have two class values (positive, negative). With the goal of building a generalizable sentiment analysis model, we used three different training sets, as provided in Table TABREF5 . One of these three datasets (Amazon reviews BIBREF23 , BIBREF24 ) is larger and has product reviews from several different categories, including book reviews, electronics product reviews, and application reviews. The other two datasets are used to make the model more specialized in the domain. In this paper we focus on restaurant reviews as our domain and use the Yelp restaurant reviews dataset extracted from the Yelp Dataset Challenge BIBREF25 and a restaurant reviews dataset that is part of a Kaggle competition BIBREF26 . For evaluation of the multilingual approach, we use four languages. These datasets are part of SemEval-2016 Challenge Task 5 BIBREF27 , BIBREF28 . Table TABREF7 shows the number of observations in each test corpus. Two sets of corpora are used in this study, both are publicly available. The first set consists of English reviews and the second set contains restaurant reviews from four different languages (Spanish, Turkish, Dutch, Russian).
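For a concrete (but purely illustrative) picture of this setup, the sketch below pools several training corpora into one polarity classifier and evaluates it per test set; the placeholder data, pipeline choices, and the handling of non-English text are assumptions, not the authors' actual pipeline:

```python
# Sketch: train one polarity classifier on pooled review corpora, evaluate per test set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Placeholder corpora: each is a list of (review_text, polarity) pairs.
train_corpora = {
    "amazon": [("great product", 1), ("broke after a day", 0)],
    "yelp":   [("delicious food", 1), ("terrible service", 0)],
    "kaggle": [("loved the pasta", 1), ("would not return", 0)],
}
test_sets = {
    "restaurants_test": [("excellent meal", 1), ("awful experience", 0)],
}

# Pool the training corpora into one training set.
train_texts, train_labels = zip(*[ex for corpus in train_corpora.values() for ex in corpus])

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(train_texts), train_labels)

for name, examples in test_sets.items():
    texts, labels = zip(*examples)
    preds = clf.predict(vectorizer.transform(texts))
    print(name, accuracy_score(labels, preds))
```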
Which two corpora are used in this study?
The first set consists of English reviews and the second set contains restaurant reviews from four different languages (Spanish, Turkish, Dutch, Russian).
null
false
null
Name three dishes that were developed from conditions of scarcity or having to "make do" with available materials or ingredients.
1. Bouillabaisse - a French seafood stew that originated with fishermen making do with whatever catch they couldn't sell. 2. Congee - an Asian dish in which a small amount of rice is cooked in water into a large volume of rice porridge; the idea is to stretch rice and feed more people with less food; it was often served at times of famine in the old days. 3. Budae Jjigae - a Korean stew also known as Army Stew; it came about after the Korean War, when resourceful Koreans were starving and salvaged leftover American canned food (e.g. spam, beans, hotdogs, etc.) to make stew with kimchi and instant noodles.