| paper_id (string, length 10, nullable) | yes_no (bool, 2 classes) | paper_index (int64, 0–519, nullable) | evidence (string, lengths 0–37.7k) | question (string, lengths 4–11.7k) | answer (string, lengths 1–26k) |
|---|---|---|---|---|---|
null | false
| null |
Is GitHub Actions free?
|
GitHub Actions usage is free for standard GitHub-hosted runners in public repositories, and for self-hosted runners. For private repositories, each GitHub account receives a certain amount of free minutes and storage for use with GitHub-hosted runners, depending on the product used with the account.
|
|
null | false
| null |
Augusta National Golf Club was founded by Bobby Jones and Clifford Roberts; the course was designed by Jones and Alister MacKenzie and opened for play in 1932. Since 1934, the club has played host to the annual Masters Tournament, one of the four men's major championships in professional golf, and the only major played each year at the same course. It was the top-ranked course in Golf Digest's 2009 list of America's 100 greatest courses and was the number ten-ranked course based on course architecture on Golfweek Magazine's 2011 list of best classic courses in the United States.
|
What is Augusta National Golf Club?
|
Augusta National Golf Club, sometimes referred to as Augusta, is a private golf club in Augusta, Georgia, that opened in 1932. Augusta is famously known for hosting the Masters Tournament, which is one of the four men's major championships in professional golf and the only major played each year at the same course.
|
null | false
| 145
|
Language modeling is a probabilistic description of language phenomena. It provides essential context to distinguish words which sound similar and therefore has some of the most useful applications in Natural Language Processing (NLP), especially in downstream tasks like Automatic Speech Recognition (ASR). Recurrent Neural Networks (RNNs), especially Long Short-Term Memory (LSTM) networks BIBREF0, have been the typical solution to language modeling, and they do achieve strong results. In spite of these results, their fundamentally sequential computation has restricted their use in modeling long-term dependencies in sequential data. To address these issues, the Transformer architecture was introduced. Transformers rely completely on an attention mechanism to form global dependencies between input and output. They also offer more parallelization and have achieved SOTA results in language modeling, outperforming LSTM models BIBREF1.
In recent years, we have seen a lot of development based on this standard Transformer model, particularly in unsupervised pre-training (BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7), which has set state-of-the-art results on multiple NLP benchmarks. One such model architecture has been the Bidirectional Encoder Representations from Transformers (BERT) model, which uses a deep bidirectional transformer architecture.
Another architecture of interest would be the Transformer-XL, which introduces the notion of recurrence in a self-attention model.
The primary research focus, though, has been mostly on the English language, for which abundant data is available. It is interesting to see the performance of these models for an agglutinative language like Finnish, which is morphologically richer than English.
In this project, we explore the implementation of Transformer-based models (BERT and Transformer-XL) in language modeling for Finnish. We will use the same training data as in BIBREF8 so that we can make fair comparisons with the performance of the LSTM models. Also, as the BERT model is a bi-directional transformer, we will have to approximate the conditional probabilities given a sequence of words. We also experiment with using sub-word units with Transformer-XL to cope with the large vocabulary problems associated with the Finnish Language. With smaller units, the modeled sequences are longer, and we hope that the recursive XL architecture still allows us to model long-term effects. To the best of our knowledge, this is the first work with the Finnish language to use the following:
Approximation of perplexity using a BERT architecture
Using Transformer-XL architecture with sub-word units.
Comparison of Transformer and LSTM models as language models in the same comparable settings with an agglutinative language.
We also experiment with using sub-word units with Transformer-XL to cope with the large vocabulary problems associated with the Finnish Language.
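To make the BERT perplexity approximation listed above concrete, here is a minimal sketch (ours, not the paper's code) of the pseudo-perplexity idea for a bi-directional masked LM: each position is masked in turn and the true token is scored. The checkpoint name is a stand-in, not the model the authors trained.

```python
# Hypothetical sketch: pseudo-perplexity for a masked LM (not the authors' code).
import math
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL = "bert-base-multilingual-cased"  # stand-in checkpoint, not the paper's Finnish BERT
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL).eval()

def pseudo_perplexity(sentence: str) -> float:
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    nll, count = 0.0, 0
    for i in range(1, len(ids) - 1):           # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id    # hide the i-th token
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        log_probs = torch.log_softmax(logits, dim=-1)
        nll -= log_probs[ids[i]].item()        # negative log-prob of the true token
        count += 1
    return math.exp(nll / count)
```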
|
What problem does the experiment with using sub-word units with Transformer-XL cope with?
|
To cope with the large vocabulary problems associated with the Finnish Language.
|
null | false
| null |
Octavia Estelle Butler (June 22, 1947 – February 24, 2006) was an American science fiction author and a multiple recipient of the Hugo and Nebula awards. In 1995, Butler became the first science-fiction writer to receive a MacArthur Fellowship. Born in Pasadena, California, Butler was raised by her widowed mother. Extremely shy as a child, Butler found an outlet at the library reading fantasy, and in writing. She began writing science fiction as a teenager. She attended community college during the Black Power movement. While participating in a local writer's workshop, she was encouraged to attend the Clarion Workshop, then held in Pennsylvania, which focused on science fiction.
|
How old was Octavia Butler when she died?
|
At the time of her death in 2006, Octavia Butler was 58 years old.
|
1611.03382
| false
| null |
Very recently, the success of deep neural networks in many natural language processing tasks (BIBREF20) has inspired new work in abstractive summarization. BIBREF2 propose a neural attention model with a convolutional encoder to solve this task. BIBREF3 build a large dataset for Chinese text summarization and propose to feed all hidden states from the encoder into the decoder. More recently, BIBREF4 extended BIBREF2's work with an RNN decoder, and BIBREF8 proposed an RNN encoder-decoder architecture for summarization. Both techniques are currently the state-of-the-art on the DUC competition. However, the encoders exploited in these methods lack the ability to encode each word conditioned on the whole text, as an RNN encodes a word into a hidden vector by taking into account only the words up to that time step. In contrast, in this work we propose a `Read-Again' encoder-decoder architecture, which enables the encoder to understand each input word after reading the whole sentence. Our encoder first reads the text, and the results from the first read help represent the text in the second pass over the source text. Our second contribution is a simple copy mechanism that allows us to significantly reduce the decoder vocabulary size, resulting in much faster inference times. Furthermore, our copy mechanism allows us to handle out-of-vocabulary words in a principled manner. Finally, our experiments show state-of-the-art performance on the DUC competition.
Very recently, the success of deep neural networks in many natural language processing tasks (BIBREF20) has inspired new work in abstractive summarization. BIBREF2 propose a neural attention model with a convolutional encoder to solve this task.
More recently, BIBREF4 extended BIBREF2's work with an RNN decoder, and BIBREF8 proposed an RNN encoder-decoder architecture for summarization. Both techniques are currently the state-of-the-art on the DUC competition.
|
What is the state-of-the art?
|
A neural attention model with a convolutional encoder extended with an RNN decoder, and an RNN encoder-decoder architecture.
|
null | false
| null |
The Indian Institutes of Technology (IITs) are prestigious central government owned public technical institutes located across India. Known for their excellence in education, they are under the ownership of the Ministry of Education of the Government of India. They are governed by the Institutes of Technology Act, 1961, declaring them as Institutes of National Importance and laying down their powers, duties, and framework for governance as the country's premier institutions in the field of technology. The act currently lists twenty-three IITs. Each IIT has autonomy and is linked to others through a common council called the IIT Council, which oversees their administration. The Minister of Education of India is the ex officio Chairperson of the IIT Council. Currently, there are 23 IITs across the country.
|
What does IIT stand for in India?
|
IIT stands for Indian Institutes of Technology. They are government owned technical institutes known for their excellence in education. Currently, there are 23 institutes across India.
|
null | false
| null |
What is XML?
|
Extensible Markup Language (XML) is a markup language and file format for storing, transmitting, and reconstructing arbitrary data. It defines a set of rules for encoding documents in a format that is both human-readable and machine-readable.
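A minimal illustration (ours, not part of the original answer) of the machine-readable side, using Python's standard library:

```python
# Parse a tiny XML document with the standard library.
import xml.etree.ElementTree as ET

doc = "<note><to>Alice</to><from>Bob</from><body>Meeting at noon.</body></note>"
root = ET.fromstring(doc)
print(root.find("to").text)    # -> Alice
print(root.find("body").text)  # -> Meeting at noon.
```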
|
|
null | false
| null |
The CAC 40 index composition is reviewed quarterly by an independent Index Steering Committee (French: Conseil Scientifique). If any changes are made, they are effected a minimum of two weeks after the review meeting. At each review date, the companies listed on Euronext Paris are ranked according to free float market capitalization and share turnover over the prior 12 months. From the top 100 companies in this ranking, forty are chosen to enter the CAC 40 such that it is "a relevant benchmark for portfolio management" and "a suitable underlying asset for derivatives products". If a company has more than one class of shares traded on the exchange, only the most actively traded of these will be accepted into the index (generally this will be the ordinary share).
|
How is CAC 40 selected?
|
The CAC 40 is selected by an independent Index Steering Committee (Conseil Scientifique), which reviews the index quarterly. Companies listed on Euronext Paris are ranked by free float market capitalization and share turnover over the prior 12 months, and forty companies are chosen from the top 100 of this ranking. If a company has more than one class of shares traded on the exchange, only the most actively traded of these is accepted into the index.
|
null | false
| null |
Sweet Tree Holdings is a Vermont, United States, company that produces maple products from a sugarbush and processing facility in Island Pond.
|
Where does this company produce its products?
|
Island Pond, Vermont
|
null | false
| null |
Avul Pakir Jainulabdeen Abdul Kalam BR (/ˈɑːbdəl kəˈlɑːm/ (listen); 15 October 1931 – 27 July 2015) was an Indian aerospace scientist and statesman who served as the 11th president of India from 2002 to 2007. He was born and raised in Rameswaram, Tamil Nadu and studied physics and aerospace engineering. He spent the next four decades as a scientist and science administrator, mainly at the Defence Research and Development Organisation (DRDO) and Indian Space Research Organisation (ISRO) and was intimately involved in India's civilian space programme and military missile development efforts. He thus came to be known as the Missile Man of India for his work on the development of ballistic missile and launch vehicle technology. He also played a pivotal organisational, technical, and political role in India's Pokhran-II nuclear tests in 1998, the first since the original nuclear test by India in 1974.
Kalam was elected as the 11th president of India in 2002 with the support of both the ruling Bharatiya Janata Party and the then-opposition Indian National Congress. Widely referred to as the "People's President", he returned to his civilian life of education, writing and public service after a single term. He was a recipient of several prestigious awards, including the Bharat Ratna, India's highest civilian honour.
|
Given a reference text about a former President of India, Dr. A. P. J. Abdul Kalam, tell me when he was born, where he was raised and the subjects he specialised in studies. Why was Abdul Kalam known as the Missile Man of India?
|
Avul Pakir Jainulabdeen Abdul Kalam BR, referred to as APJ Abdul Kalam in short, was born on 15th October 1931. He was born and raised in Rameswaram in Tamil Nadu, South India. APJ Abdul Kalam specialised in Physics and Aerospace Engineering as part of his studies.
He had a career of around 4 decades as a scientist as well as an administrator with organisations such as the Defence Research and Development Organisation (DRDO) and the Indian Space Research Organisation (ISRO). In these roles, APJ Abdul Kalam played a key role in India's civilian space program and military missile development activities. He also played a vital role in the development of ballistic missile and launch vehicle technology. As a result of these contributions, Abdul Kalam came to be known as the Missile Man of India.
|
null | false
| null |
Who was Henry Ford?
|
Henry Ford (July 30, 1863 – April 7, 1947) was an American industrialist and business magnate. He was the founder of Ford Motor Company, and chief developer of the assembly line technique of mass production. Ford created the first automobile that middle-class Americans could afford, and his conversion of the automobile from an expensive luxury into an accessible conveyance profoundly impacted the landscape of the 20th century.
Ford was born on a farm in Michigan's Springwells Township to a Belgian family, leaving home at age 16 to work in Detroit. It was a few years before this time that Ford first encountered automobiles, and throughout the latter half of the 1880s, Ford began repairing and later constructing engines, and through the 1890s worked with a division of Edison Electric. He officially founded Ford Motor Company in 1903, after prior failures in business but success in constructing automobiles.
Ford's 1908 introduction of the Model T automobile revolutionized both transportation and American industry. As the Ford Motor Company's sole owner, he became one of the richest and best-known people in the world. He is credited with "Fordism", the mass production of inexpensive goods coupled with high wages for workers. Ford was also among the pioneers of the five-day work week. Ford had a global vision, with consumerism as the key to peace. His intense commitment to systematically lowering costs resulted in many technical and business innovations, including a franchise system that put dealerships throughout North America and in major cities on six continents.
Ford was widely known for his pacifism during the first years of World War I. In the 1920s Ford promoted antisemitic content through his newspaper The Dearborn Independent, and the book, The International Jew. After his son Edsel died in 1943, Ford resumed control of the company but was too frail to make decisions and quickly came under the control of subordinates. He turned over the company to his grandson Henry Ford II in 1945. He died in 1947 after leaving most of his wealth to the Ford Foundation, and control of the company to his family.
|
|
null | false
| null |
What is Prometheus?
|
Prometheus is an open-source system for monitoring services and alerts based on a time series data model. Prometheus collects data and metrics from different services and stores them according to a unique identifier—the metric name—and a time stamp.
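As a small illustration of that data model, here is a hedged example using the official prometheus_client Python library; the port and metric name are our own choices.

```python
# Expose a counter that Prometheus can scrape as a time series
# identified by its metric name ("requests_total") plus a timestamp.
import random
import time

from prometheus_client import Counter, start_http_server

REQUESTS = Counter("requests_total", "Total requests handled")

if __name__ == "__main__":
    start_http_server(8000)        # metrics served at http://localhost:8000/metrics
    while True:
        REQUESTS.inc()             # each scrape captures the current value
        time.sleep(random.random())
```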
|
|
null | false
| null |
Classify the following foods as Mexican or Italian: Burrito, Pizza, Pasta, Quesadilla, Taco, Gelato
|
Burrito: Mexican
Pizza: Italian
Pasta: Italian
Quesadilla: Mexican
Taco: Mexican
Gelato: Italian
|
|
null | false
| null |
What factors should I considering when selecting a tennis racquet?
|
There are several factors to consider when purchasing a tennis racquet. Primary factors include head size, length, weight, stiffness, and swingweight.
Head size - Power is closely correlated with head size - the larger the head size, the more power can be generated, all other factors being equal. A larger head size also provides a larger hitting area and 'sweetspot'. Smaller head sizes tend to equate to more control and maneuverability.
Length - The standard length is 27 inches, but lengths can range from 26.5 inches to 29 inches. A longer racquet provides more reach and often results in more power. Similar to head size, longer racquets may limit maneuverability and control.
Weight - The heavier the racquet, the more power you will generate (increasing the 'swingweight' of the racquet). The lighter the racquet, the more maneuverable it is and the easier it will be for players to get the racquet to the right swing position. The balance of the racquet is also a critical part of weight. A more head-light racquet (more weight in the handle) provides increased maneuverability, especially in a heavier racquet. A more balanced or head-heavy configuration is often reserved for lighter racquets to maintain the mass behind the ball.
Stiffness - Generally speaking, the stiffer the racquet, the less energy is lost as the ball is redirected, thus creating more power. More flexible racquets allow for more feel and control. Flexible racquets also provide more comfort, as less shock and impact is transferred to the arm and elbow.
Swingweight - Swingweight measures how heavy a racquet feels when you swing it and the amount of power you can transfer to the ball. Higher swingweight will also provide more stability in the racquet but will sacrifice maneuverability.
|
|
1911.03058
| false
| null |
Given the mapped embedding spaces, the translations are retrieved using a distance metric, with Cross-Lingual Similarity Scaling (CSLS) BIBREF12 as the most common and best performing in the literature. Intuitively, CSLS decreases the scores of pairs that lie in dense areas, increasing the scores of rarer words (which are harder to align). The retrieved pairs are compared to the gold standard and evaluated using precision at $k$ (P@$k$, evaluating how often the correct translation is within the $k$ retrieved nearest neighbours of the query). Throughout this work we report P@1, which is equivalent to accuracy, but we also provide results with P@5 and P@10 in the Appendix.
The retrieved pairs are compared to the gold standard and evaluated using precision at $k$ (P@$k$, evaluating how often the correct translation is within the $k$ retrieved nearest neighbours of the query). Throughout this work we report P@1, which is equivalent to accuracy, but we also provide results with P@5 and P@10 in the Appendix.
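For illustration, a minimal NumPy sketch (ours, not the paper's implementation) of CSLS retrieval and P@k over mapped, row-normalized embedding matrices:

```python
import numpy as np

def csls_scores(X, Y, k=10):
    """X: (n, d) source embeddings, Y: (m, d) target embeddings (L2-normalized rows)."""
    sims = X @ Y.T                                    # cosine similarities
    r_x = np.sort(sims, axis=1)[:, -k:].mean(axis=1)  # mean sim to k nearest targets
    r_y = np.sort(sims, axis=0)[-k:, :].mean(axis=0)  # mean sim to k nearest sources
    return 2 * sims - r_x[:, None] - r_y[None, :]     # penalize pairs in dense areas

def precision_at_k(scores, gold, k=1):
    """gold[i] is the index of the correct translation for query i."""
    topk = np.argsort(-scores, axis=1)[:, :k]
    return np.mean([gold[i] in topk[i] for i in range(len(gold))])
```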
|
What evaluation metrics did they use?
|
The answers are shown as follows:
* we report P@1, which is equivalent to accuracy
* we also provide results with P@5 and P@10 in the Appendix
|
null | false
| null |
Tulip mania (Dutch: tulpenmanie) was a period during the Dutch Golden Age when contract prices for some bulbs of the recently introduced and fashionable tulip reached extraordinarily high levels. The major acceleration started in 1634 and then dramatically collapsed in February 1637. It is generally considered to have been the first recorded speculative bubble or asset bubble in history. In many ways, the tulip mania was more of a then-unknown socio-economic phenomenon than a significant economic crisis. It had no critical influence on the prosperity of the Dutch Republic, which was one of the world's leading economic and financial powers in the 17th century, with the highest per capita income in the world from about 1600 to about 1720. The term "tulip mania" is now often used metaphorically to refer to any large economic bubble when asset prices deviate from intrinsic values.
|
What does the term "tulip mania" mean?
|
The term "tulip mania" is now often used metaphorically to refer to any large economic bubble when asset prices deviate from intrinsic values.
|
null | false
| 46
|
The main focus of this paper is to apply domain adaptation for SQA. In this approach, we have two SQA models (QANets), one trained on target domain data (ASR hypotheses) and another trained on source domain data (reference transcriptions). Because the two domains share common information, some layers in these two models can be tied in order to model the shared features. Hence, we can choose whether each layer in the QA model should be shared. Tying the weights between the source layer and the target layer to learn a symmetric mapping projects both source and target domain data into a shared common space. Different combinations will be investigated in our experiments.
More specifically, we incorporate a domain discriminator into the SQA model shown in Figure FIGREF2, which forces the embedding encoder to project the sentences from both source and target domains into a shared common space and consequently to be robust to ASR errors. Although the embedding encoder for both domains may implicitly learn some common latent representations, adversarial learning provides a more direct training signal for aligning the output distributions of the embedding encoder from both domains. The embedding encoder takes in a sequence of word vectors and generates a sequence of hidden vectors of the same length. We use INLINEFORM0 and INLINEFORM1 (INLINEFORM2 and INLINEFORM3) to represent the hidden vector sequences given the question INLINEFORM4 and the document INLINEFORM5 in the target (source) domain, respectively.
The domain discriminator INLINEFORM0 focuses on identifying which domain a vector sequence comes from, given INLINEFORM1 or INLINEFORM2, where the objective is to minimize INLINEFORM3. DISPLAYFORM0
Given a training example from the target domain ( INLINEFORM0 ), INLINEFORM1 learns to assign a lower score to INLINEFORM2 and INLINEFORM3 in that example, that is, to minimize INLINEFORM4 and INLINEFORM5 . On the other hand, given a training example from the source domain ( INLINEFORM6 ), INLINEFORM7 learns to assign a larger value to INLINEFORM8 and INLINEFORM9 .
Furthermore, we update the parameters of the embedding encoders to maximize the domain classification loss INLINEFORM0 , which works adversarially towards the domain discriminator. We thus expect the model to learn features and structures that can generalize across domains when the outputs of INLINEFORM1 are indistinguishable from the outputs of INLINEFORM2 . The loss function for embedding encoder, INLINEFORM3 , is formulated as DISPLAYFORM0
where INLINEFORM0 is a hyperparameter. The two embedding encoders in the QA model are learned to maximize INLINEFORM1 while minimizing the loss for QA, INLINEFORM2. Because the parameters of the other layers in the QA model are independent of the domain discriminator's loss, the loss function of the other layers, INLINEFORM3, is equivalent to INLINEFORM4, that is, INLINEFORM5.
Although the discriminator is applied to the output of the embedding encoder in Figure FIGREF2, it can also be applied to other layers. Considering that almost all QA models contain such embedding encoders, the proposed approach is expected to generalize to other QA models in addition to QANet.
In this approach, we have two SQA models (QANets), one trained on target domain data (ASR hypotheses) and another trained on source domain data (reference transcriptions).
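A rough PyTorch sketch of the adversarial objective described above (the hidden size, discriminator shape, and pooling are our assumptions; the INLINEFORM placeholders stand for the paper's actual symbols):

```python
import torch
import torch.nn as nn

HIDDEN = 128  # assumed encoder hidden size
disc = nn.Sequential(nn.Linear(HIDDEN, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()

def discriminator_loss(h_src, h_tgt):
    # D learns: source-domain vectors -> 1, target-domain vectors -> 0
    return (bce(disc(h_src), torch.ones(len(h_src), 1)) +
            bce(disc(h_tgt), torch.zeros(len(h_tgt), 1)))

def encoder_adversarial_loss(h_src, h_tgt, lam=0.1):
    # the encoders maximize D's loss (scaled by the hyperparameter lam),
    # pushing both domains toward an indistinguishable shared space
    return -lam * discriminator_loss(h_src, h_tgt)
```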
|
How many SQA models (QANets) does the proposed approach have?
|
Two SQA models (QANets).
|
2001.00582
| false
| null |
The goal of this part is to highlight the differences present in the excitation when a given speaker produces different voice qualities. The De7 database used for this study was designed by Marc Schroeder as one of the first attempts of creating diphone databases for expressive speech synthesis BIBREF2. The database contains three voice qualities (modal, soft and loud) uttered by a German female speaker, with about 50 minutes of speech available for each voice quality. In Section SECREF1, the glottal flow estimation method and glottal flow parametrization used in this work are briefly presented. The harmonicity of speech is studied via the maximum voiced frequency in Section SECREF3. As an important perceptual characteristic, spectral tilt is analyzed in Section SECREF4. Section SECREF6 compares the so-called eigenresiduals BIBREF3 of the different voice qualities. Finally, Section SECREF8 quantifies the separability between the three voice qualities for the extracted excitation features.
The De7 database used for this study was designed by Marc Schroeder as one of the first attempts of creating diphone databases for expressive speech synthesis BIBREF2. The database contains three voice qualities (modal, soft and loud) uttered by a German female speaker, with about 50 minutes of speech available for each voice quality.
|
What large corpus is used for experiments?
|
The answers are shown as follows:
* The De7 database
|
null | false
| 322
|
The same procedure for generating a single triple can be applied for the generation of each triple in a set of triples. However, the NL output would then contain redundant information and consequently sound very artificial. Thus, the goal is to transform the generated description to sound more natural. To this end, we focus on two types of transformation rules (cf. BIBREF21): clustering and ordering, and grouping. In the following, we describe the transformation rules we employ in more detail. Note that clustering and ordering (SECREF18) is applied before grouping (SECREF26).
We process the input trees in descending order with respect to the frequency of the variables they contain, starting with the projection variables and only after that turning to other variables. As an example, consider the following triples about two of the most known people in the world:
The five triples are verbalized as given in UNKREF20–UNKREF24. Clustering and ordering first takes all sentences containing the subject :Albert_Einstein, i.e. UNKREF21–UNKREF23, which are ordered such that copulative sentences (such as Albert Einstein is a scientist) come before other sentences, and then takes all sentences containing the remaining subject :William_Shakespeare in UNKREF20 and UNKREF24, resulting in a sequence of sentences as in UNKREF25.
William Shakespeare is a writer.
Albert Einstein's birth place is Ulm.
Albert Einstein's death place is Princeton.
Albert Einstein is a scientist.
William Shakespeare's death date is 23 April 1616.
Albert Einstein is a scientist. Albert Einstein's birth place is Ulm. Albert Einstein's death place is Princeton. William Shakespeare is a writer. William Shakespeare's death date is 23 April 1616.
Dalianis and Hovy describe grouping as a process “collecting clauses with common elements and then collapsing the common elements”. The common elements are usually subject noun phrases and verb phrases (verbs together with object noun phrases), leading to subject grouping and object grouping. To maximize the grouping effects, we collapse common prefixes and suffixes of sentences, irrespective of whether they are full subject noun phrases or complete verb phrases. In the following we use $X_1,X_2,$...$X_N$ as variables for the root nodes of the input sentences and $Y$ as variable for the root node of the output sentence. Furthermore, we abbreviate a subject $\texttt {subj}(X_i,s_i)$ as $\texttt {s}_i$, an object $\texttt {dobj}(X_i,o_i)$ as $\texttt {o}_i$, and a verb $\texttt {root}(\text{\em ROOT}_i,v_i)$ as $\texttt {v}_i$.
Subject grouping collapses the predicates (i.e. verb and object) of two sentences if their subjects are the same, as specified in UNKREF27 (abbreviations as above).
$\rho (\texttt {s}_1)=\rho (\texttt {s}_2)\wedge \texttt {cc}(\texttt {v}_1,\text{\em coord})$
$\Rightarrow \texttt {root}(Y,\texttt {coord}(\texttt {v}_1,\texttt {v}_2))\wedge \, \texttt {subj}(\texttt {v}_1,\texttt {s}_1)\wedge \texttt {dobj}(\texttt {v}_1,\texttt {o}_1)\wedge \texttt {subj}(\texttt {v}_2,\texttt {s}_1)\wedge \texttt {dobj}(\texttt {v}_1,\texttt {o}_2) $
An example are the sentences given in UNKREF28, which share the subject Albert Einstein and thus can be collaped into a single sentence.
Albert Einstein is a scientist and Albert Einstein is known for general relativity.
$\Rightarrow $ Albert Einstein is a scientist and known for general relativity.
Object grouping collapses the subjects of two sentences if the realizations of the verbs and objects of the sentences are the same, where the $\text{\em coord}\in \lbrace \text{\sf and},\text{\sf or}\rbrace $ is the coordination combining the input sentences $X_1$ and $X_2$, and $\texttt {coord}\in \lbrace \texttt {conj},\texttt {disj}\rbrace $ is the corresponding coordination combining the subjects.
$\rho (\texttt {o}_1)=\rho (\texttt {o}_2)\wedge \rho (\texttt {v}_1)=\rho (\texttt {v}_2)\wedge \texttt {cc}(\texttt {v}_1,\text{\em coord})$
$\Rightarrow \texttt {root}(Y,\texttt {PLURAL}(\texttt {v}_1))\wedge \, \texttt {subj}(\texttt {v}_1,\texttt {coord}(\texttt {s}_1,\texttt {s}_2))\wedge \texttt {dobj}(\texttt {v}_1,\texttt {o}_1) $
For example, the sentences in UNKREF30 share their verb and object, thus they can be collapsed into a single sentence. Note that to this end the singular auxiliary was needs to be transformed into its plural form were.
Benjamin Franklin was born in Boston. Leonard Nimoy was born in Boston. $\Rightarrow $ Benjamin Franklin and Leonard Nimoy were born in Boston.
To this end, we focus on two types of transformation rules (cf. (Dalianis and Hovy, 1996)): clustering and ordering, and grouping.
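A toy sketch of subject grouping (our own simplification over plain strings; the paper's rule operates on dependency trees and also collapses the shared verb):

```python
from collections import OrderedDict

def subject_grouping(sentences):
    """sentences: list of (subject, predicate) pairs."""
    grouped = OrderedDict()
    for subj, pred in sentences:
        grouped.setdefault(subj, []).append(pred)  # collect predicates per subject
    return [f"{subj} " + " and ".join(preds) + "." for subj, preds in grouped.items()]

print(subject_grouping([
    ("Albert Einstein", "is a scientist"),
    ("Albert Einstein", "is known for general relativity"),
]))
# -> ['Albert Einstein is a scientist and is known for general relativity.']
```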
|
What are the two types of transformation rules?
|
Clustering and ordering, and grouping.
|
1712.09127
| false
| null |
Suppose we have a number of different corpora INLINEFORM0 , which for example can be based on different categories or sentiments of text documents. We suppose that INLINEFORM1 , INLINEFORM2 , where each INLINEFORM3 represents a document. The words in all corpora are collected in a dictionary, and indexed from 1 to INLINEFORM4 . We name the GAN model to train cross-corpus word embeddings as “weGAN,” where “we” stands for “word embeddings,” and the GAN model to generate document embeddings for multiple corpora as “deGAN,” where “de” stands for “document embeddings.”
We name the GAN model to train cross-corpus word embeddings as “weGAN,” where “we” stands for “word embeddings,” and the GAN model to generate document embeddings for multiple corpora as “deGAN,” where “de” stands for “document embeddings.”
|
Which GAN do they use?
|
The answers are shown as follows:
* weGAN
* deGAN
|
null | false
| null |
What is the name of the first ten amendments to the Constitution?
|
The Bill of Rights.
|
|
1909.00430
| false
| null |
Since INLINEFORM0 is constant, we only need to minimize INLINEFORM1 , therefore the loss function becomes: DISPLAYFORM0
Since INLINEFORM0 is constant, we only need to minimize INLINEFORM1 , therefore the loss function becomes: DISPLAYFORM0
|
How is the expectation regularization loss defined?
|
The answers are shown as follows:
* DISPLAYFORM0
|
null | false
| null |
Mariano Rivera (born November 29, 1969) is a Panamanian-American former professional baseball pitcher who played 19 seasons in Major League Baseball (MLB) for the New York Yankees, from 1995 to 2013. Nicknamed "Mo" and "Sandman", he spent most of his career as a relief pitcher and served as the Yankees' closer for 17 seasons. A thirteen-time All-Star and five-time World Series champion, he is MLB's career leader in saves (652) and games finished (952). Rivera won five American League (AL) Rolaids Relief Man Awards and three Delivery Man of the Year Awards, and he finished in the top three in voting for the AL Cy Young Award four times. He was inducted into the Baseball Hall of Fame as part of its class of 2019 in his first year of eligibility, and was the first and to date only player ever to be elected unanimously by the Baseball Writers' Association of America (BBWAA).
Raised in the modest Panamanian fishing village of Puerto Caimito, Rivera was an amateur player until he was signed by the Yankees organization in 1990. He debuted in the major leagues in 1995 as a starting pitcher, before permanently converting to a relief pitcher late in his rookie year. After a breakthrough season in 1996 as a setup man, he became the Yankees' closer in 1997. In the following seasons, he established himself as one of baseball's top relievers, leading the major leagues in saves in 1999, 2001, and 2004. Rivera primarily threw a sharp-moving, mid-90s mile-per-hour cut fastball that frequently broke hitters' bats and earned a reputation as one of the league's toughest pitches to hit. With his presence at the end of games, signaled by his foreboding entrance song "Enter Sandman", Rivera was a key contributor to the Yankees' success in the late 1990s and early 2000s. An accomplished postseason performer, he was named the 1999 World Series Most Valuable Player (MVP) and the 2003 AL Championship Series MVP, and he holds several postseason records, including lowest earned run average (ERA) (0.70) and most saves (42).
|
From the given text, list some of the accomplishments of pitcher Mariano Rivera in his baseball career, in a comma-separated list.
|
All-Star, World Series champion, MLB's career leader in saves, MLB's career leader in games finished, AL Rolaids Relief Man Award, Delivery Man of the Year Award, Baseball Hall of Fame, World Series Most Valuable Player, AL Championship Series MVP, postseason lowest earned run average, postseason most saves
|
null | false
| 283
|
Scripts were developed as a means of representing stereotypical event sequences and interactions in narratives. The benefits of scripts for encoding common sense knowledge, filling in gaps in a story, resolving ambiguous references, and answering comprehension questions have been amply demonstrated in the early work in natural language understanding BIBREF0 . The earliest attempts to learn scripts were based on explanation-based learning, which can be characterized as example-guided deduction from first principles BIBREF1 , BIBREF2 . While this approach is successful in generalizing from a small number of examples, it requires a strong domain theory, which limits its applicability.
More recently, some new graph-based algorithms for inducing script-like structures from text have emerged. “Narrative Chains” is a narrative model similar to Scripts BIBREF3 . Each Narrative Chain is a directed graph indicating the most frequent temporal relationship between the events in the chain. Narrative Chains are learned by a novel application of pairwise mutual information and temporal relation learning. Another graph learning approach employs Multiple Sequence Alignment in conjunction with a semantic similarity function to cluster sequences of event descriptions into a directed graph BIBREF4 . More recently still, graphical models have been proposed for representing script-like knowledge, but these lack the temporal component that is central to this paper and to the early script work. These models instead focus on learning bags of related events BIBREF5 , BIBREF6 .
While the above approaches demonstrate the learnability of script-like knowledge, they do not offer a probabilistic framework to reason robustly under uncertainty, taking into account the temporal order of events. In this paper we present the first formal representation of scripts as Hidden Markov Models (HMMs), which support robust inference and effective learning algorithms. The states of the HMM correspond to event types in scripts, such as entering a restaurant or opening a door. Observations correspond to natural language sentences that describe the event instances that occur in the story, e.g., “John went to Starbucks. He came back after ten minutes.” The standard inference algorithms, such as the Forward-Backward algorithm, are able to answer questions about the hidden states given the observed sentences, for example, “What did John do in Starbucks?”
There are two complications that need to be dealt with to adapt HMMs to model narrative scripts. First, both the set of states, i.e., event types, and the set of observations are not pre-specified but are to be learned from data. We assume the set of possible observations and the set of event types to be bounded but unknown. We employ the clustering algorithm proposed in BIBREF4 to reduce the natural language sentences, i.e., event descriptions, to a small set of observations and states based on their WordNet similarity.
The second complication of narrative texts is that many events may be omitted, either in the narration or by the event extraction process. More importantly, there is no indication of a time lapse or a gap in the story, so the standard forward-backward algorithm does not apply. To account for this, we allow the states to skip generating observations with some probability. This kind of HMM, with insertions and gaps, has been considered previously in speech processing BIBREF7 and in computational biology BIBREF8. We refine these models by allowing state-dependent missingness, without introducing additional “insert states” or “delete states” as in BIBREF8. In this paper, we restrict our attention to the so-called “Left-to-Right HMMs”, which have an acyclic graphical structure with possible self-loops, as they support more efficient inference algorithms than general HMMs and suffice to model most natural scripts. We consider the problem of learning the structure and parameters of scripts in the form of HMMs from sequences of natural language sentences. Our solution to script learning is a novel bottom-up method for structure learning, called SEM-HMM, which is inspired by Bayesian Model Merging (BMM) BIBREF9 and Structural Expectation Maximization (SEM) BIBREF10. It starts with a fully enumerated HMM representation of the event sequences and incrementally merges states and deletes edges to improve the posterior probability of the structure and the parameters given the data. We compare our approach to several informed baselines on many natural datasets and show its superior performance. We believe our work represents the first formalization of scripts that supports probabilistic inference, and paves the way for robust understanding of natural language texts.
We consider the problem of learning the structure and parameters of scripts in the form of HMMs from sequences of natural language sentences.
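For reference, a minimal sketch (ours) of the standard Forward pass for a left-to-right HMM; the paper's SEM-HMM extends this to let states skip emitting an observation:

```python
import numpy as np

def forward_likelihood(pi, A, B, obs):
    """pi: (S,) initial state probs; A: (S, S) transition matrix
    (upper-triangular with self-loops for left-to-right HMMs);
    B: (S, V) emission probs; obs: list of observation symbol ids."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()  # likelihood of the observed event sequence
```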
|
What problems do the authors consider?
|
The authors consider the problem of learning the structure and parameters of scripts in the form of HMMs from sequences of natural language sentences.
|
null | false
| null |
Prehistory to early 19th century
The following table lists estimates for the population of India (including what are now Pakistan and Bangladesh) from prehistory up until 1820. It includes estimates and growth rates according to five economic historians, along with interpolated estimates and overall aggregate averages derived from their estimates.
Estimates of historical world population

| Year | Population (aggregate average) | % of world population | Period | Average % growth / century |
|---|---|---|---|---|
| 10,000 BC | 1,000 | 0.83% | Stone Age | 30.28 |
| 4000 BC | 1,000,000 | 30.83% | | |
| 2000 BC | 13,000,000 | 37.143% | Bronze Age | 26.25 |
| 500 BC | 25,000,000 | 41.70% | Iron Age | |
| 400 BC | 26,000,000 | 43.96% | | |
| 200 BC | 31,000,000 | 47.63% | Maurya era | |
| 1 AD | 35,000,000 | 35.56% | Classical era | |
| 200 | 41,000,000 | 36.15% | | |
| 400 | 47,000,000 | 40% | | |
| 500 | 50,000,000 | 43.58% | | |
| 600 | 53,000,000 | 48.83% | Early medieval era | |
| 700 | 60,000,000 | 56.67% | | |
| 800 | 64,000,000 | 55% | | |
| 900 | 70,000,000 | 53.34% | | |
| 1000 | 79,000,000 | 30% | | |
| 1100 | 83,000,000 | 35% | Late medieval era | |
| 1200 | 86,000,000 | 36.67% | | |
| 1300 | 91,000,000 | 38.34% | | |
| 1400 | 97,000,000 | 30% | | |
| 1500 | 105,000,000 | 21.67% | | |
| 1600 | 140,000,000 | 23.33% | Early modern era | |
| 1650 | 170,000,000 | 26.15% | | |
| 1700 | 140,000,000 | 20% | | |
| 1750 | 183,000,000 | 21.53% | | |
| 1800 | 200,000,000 | 20% | | |
| 1820 | 210,000,000 | 19.09% | | |
The population grew from the South Asian Stone Age in 10,000 BC to the Maurya Empire in 200 BC at a steadily increasing growth rate, before population growth slowed down in the Early Medieval Era up to 700 AD, and the growth rate then decreased up to 1500 AD.
Under the Mughal Empire, India experienced a high economic and demographic upsurge, due to Mughal agrarian reforms that intensified agricultural production. 15% of the population lived in urban centres, higher than the percentage of the population in 19th-century British India and contemporary Europe up until the 19th century. Those estimates were criticised by Tim Dyson, who considers them exaggerations. According to Dyson, urbanisation of the Mughal empire was less than 9%.
Under the reign of Akbar (reigned 1556–1605) in 1600, the Mughal Empire's urban population was up to 17 million people, larger than the urban population in Europe. By 1700, Mughal India had an urban population of 23 million people, larger than British India's urban population of 22.3 million in 1871. Nizamuddin Ahmad (1551–1621) reported that, under Akbar's reign, Mughal India had 120 large cities and 3,200 townships. A number of cities in India had a population between a quarter-million and half-million people, with larger cities including Agra (in Agra Subah) with up to 800,000 people and Dhaka (in Bengal Subah) with over 1 million people. Mughal India also had a large number of villages, with 455,698 villages by the time of Aurangzeb (reigned 1658–1707).
|
what was population of India during Mauryan Era?
|
It was around 31,000,000 during the Maurya era (200 BC).
|
null | false
| null |
National Beer Day is celebrated in the United States every year on April 7, marking the day that the Cullen–Harrison Act came into force after having been signed into law by President Franklin D. Roosevelt on March 22, 1933. This led to the Eighteenth Amendment being repealed on December 5, 1933, with the ratification of the Twenty-first Amendment. April 6, the day prior to National Beer Day, is known as New Beer's Eve.
|
Given this paragraph about National Beer Day in the United States, which president signed National Beer Day into law?
|
Franklin D. Roosevelt.
|
null | false
| null |
I am looking into buying a new monitor, how do I know which one I should buy?
|
It depends a lot on the use case and how much money you want to spend. First, decide on a budget. After that, decide what size the monitor should be. Then, within that size and budget range, take a look at different features such as resolution, color accuracy, refresh rate, and connectivity options. A high resolution is good for work and gaming but also requires a good graphics card. Color accuracy is important if you plan to use the monitor for artistic work, such as editing movies and images. Refresh rate is important if you plan to do gaming, especially online. Connectivity options such as power delivery over USB-C are important if you want to simply connect the monitor to a laptop.
|
|
null | false
| null |
Which countries produces the most wine?
|
As of 2021, the following countries have produced the most wine (decreasing order):
- Italy
- France
- Spain
- United States of America
- Australia
- Chile
- Argentina
- South Africa
- Germany
- Portugal
|
|
1909.09779
| false
| null |
Table TABREF48 shows the BLEU scores of all three models on CFILT's English-Hindi and Hindi-English test sets, respectively. From these results, it is evident that the transformer model achieves a higher BLEU score than both the attention encoder-decoder and the sequence-to-sequence model. The attention encoder-decoder achieves a better BLEU score, and the sequence-to-sequence model performs the worst of the three, which further consolidates the point that if we are dealing with long source and target sentences, then an attention mechanism is very much required to capture long-term dependencies, and we can rely solely on the attention mechanism, discarding recurrent cells completely for the machine translation task.
From these results, it is evident that the transformer model achieves a higher BLEU score than both the attention encoder-decoder and the sequence-to-sequence model.
|
How were their results compared to state-of-the-art?
|
The answers are shown as follows:
* the transformer model achieves a higher BLEU score than both the attention encoder-decoder and the sequence-to-sequence model
|
null | false
| null |
What do people usually do at a birthday party?
|
It is traditional for people to celebrate their birthdays with a party by inviting friends and family. You usually see a birthday cake with candles. People like to sing the "happy birthday" song together. Often there are games designed by the host; for example, some people may prepare trivia questions related to their personal hobbies for attendees to answer. Balloons, flowers, and banners are common decorations at a birthday party.
|
|
null | false
| null |
how to fight a speeding ticket?
|
Three common ways to resolve a speeding ticket are: 1. Take a defensive driving course: it takes about 20 hours to complete the course content, but you can speed it up by playing the videos at 2x speed with a Chrome plug-in. Some people just let the video play like background noise and do something else on the side. 2. Hire an attorney to take care of it: the cost depends on your ticket details and varies by attorney, but it is the most care-free way, as the attorney deals with the court on your behalf. There is no guarantee it always succeeds, though. 3. Go to court yourself and fight it: this takes a lot of guts, because you will need to face the judge and the prosecuting attorney or the traffic police officer in court to defend yourself.
|
|
null | false
| null |
What are the different air routes for traveling between Genova (GOA) and San Francisco (SFO) from fastest to longest?
|
1 - GOA-MUC-SFO (United Airlines)
2 - GOA-FRA-SFO (United Airlines)
3 - GOA-AMS-SFO (KLM)
4 - GOA-FCO-SFO (ITA Airways)
5 - GOA-FCO-JFK-SFO (ITA Airways + Delta Airlines)
6 - GOA-FCO-LAX-SFO (ITA Airways + Delta Airlines)
7 - GOA-AMS-JFK-SFO (KLM + Delta Airlines)
8 - GOA-AMS-ATL-SFO (KLM + Delta Airlines)
9 - GOA-AMS-MSP-SFO (KLM + Delta Airlines)
|
|
null | false
| null |
In what century was the airplane commercialized?
|
Commercial airplanes became available in the 20th century.
|
|
null | false
| null |
Give me a list of popular news podcast to listen to
|
The Daily, Up First, Pod Save America, Global News Podcast, Morning Wire
|
|
null | false
| null |
Why do my fingers hurt when playing guitar?
|
While learning guitar, it is very common for fingertips to hurt. Usually what occurs with practice is the development of tougher skin, called calluses, that makes the guitar less painful to play.
|
|
null | false
| null |
The team was founded in 1972 in Atlanta as the Atlanta Flames before relocating to Calgary in 1980. The Flames played their first three seasons in Calgary at the Stampede Corral before moving into the Scotiabank Saddledome (originally the Olympic Saddledome) in 1983. In 1985–86, the Flames became the first Calgary team since the 1923–24 Tigers to compete for the Stanley Cup. In 1988–89, the Flames won their first and only Stanley Cup title. The Flames' unexpected run to the 2004 Stanley Cup Finals gave rise to the Red Mile, and in 2011, the team hosted and won the second Heritage Classic outdoor game.
|
Given this paragraph about the history of the Calgary Flames hockey team, what year did the Calgary Flames win their only Stanley Cup
|
The Calgary Flames won their only Stanley Cup in 1989.
|
null | false
| null |
Are humans animals?
|
Yes, humans are animals; humans are mammals related to the great apes. Other mammals include bears, dogs, and cats. Other non-mammalian animals include penguins, lobsters, and alligators. Even insects such as bees and flies are animals, though they are not mammals.
|
|
null | false
| null |
What are the options to get from South Shields to North Shields by public transport and which is the quickest option
|
There are 3 options to get from South Shields to North Shields. 1) Take the Shields Ferry from South Shields to North Shields, 2) take the Metro Yellow Line from South Shields to Jarrow and at Jarrow transfer to the number 10 bus to North Shields, 3) take the Metro Yellow Line from South Shields towards St James via Whitley Bay, getting off at the North Shields stop. The quickest option is the Shields Ferry, which takes approximately 35 minutes.
|
|
null | false
| null |
Classify each of the following exercises as cardio or weights: bench press, running, cycling, squat, jump roping, curl
|
Bench Press: Weights
Running: Cardio
Cycling: Cardio
Squat: Weights
Jump Roping: Cardio
Curl: Weights
|
|
2002.08307
| false
| null |
Model compression BIBREF7, which attempts to shrink a model without losing accuracy, is a viable approach to decreasing GPU usage. It might also be used to trade accuracy for memory in some low-resource cases, such as deploying to smartphones for real-time prediction. The main questions this paper attempts to answer are: Does compressing BERT impede its ability to transfer to new tasks? And does fine-tuning make BERT more or less compressible?
To explore these questions, we compressed English BERT using magnitude weight pruning BIBREF8 and observed the results on transfer learning to the General Language Understanding Evaluation (GLUE) benchmark BIBREF9, a diverse set of natural language understanding tasks including sentiment analysis, NLI, and textual similarity evaluation. We chose magnitude weight pruning, which compresses models by removing weights close to 0, because it is one of the most fine-grained and effective compression methods and because there are many interesting ways to view pruning, which we explore in the next section.
Our findings are as follows: Low levels of pruning (30-40%) do not increase pre-training loss or affect transfer to downstream tasks at all. Medium levels of pruning increase the pre-training loss and prevent useful pre-training information from being transferred to downstream tasks. This information is not equally useful to each task; tasks degrade linearly with pre-train loss, but at different rates. High levels of pruning, depending on the size of the downstream dataset, may additionally degrade performance by preventing models from fitting downstream datasets. Finally, we observe that fine-tuning BERT on a specific task does not improve its prunability or change the order of pruning by a meaningful amount.
The main questions this paper attempts to answer are: Does compressing BERT impede its ability to transfer to new tasks? And does fine-tuning make BERT more or less compressible?
To explore these questions, we compressed English BERT using magnitude weight pruning BIBREF8 and observed the results on transfer learning to the General Language Understanding Evaluation (GLUE) benchmark BIBREF9, a diverse set of natural language understanding tasks including sentiment analysis, NLI, and textual similarity evaluation. We chose magnitude weight pruning, which compresses models by removing weights close to 0, because it is one of the most fine-grained and effective compression methods and because there are many interesting ways to view pruning, which we explore in the next section.
Our findings are as follows: Low levels of pruning (30-40%) do not increase pre-training loss or affect transfer to downstream tasks at all. Medium levels of pruning increase the pre-training loss and prevent useful pre-training information from being transferred to downstream tasks.
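For concreteness, a rough sketch (ours, not the paper's code) of global magnitude weight pruning in PyTorch: weights with the smallest absolute values are zeroed until the requested sparsity is reached.

```python
import torch

def magnitude_prune(model, sparsity=0.3):
    """Zero out the `sparsity` fraction of weight entries closest to 0."""
    weights = [p for name, p in model.named_parameters() if "weight" in name]
    flat = torch.cat([w.detach().abs().flatten() for w in weights])
    k = max(1, int(sparsity * flat.numel()))
    threshold = flat.kthvalue(k).values            # k-th smallest magnitude
    with torch.no_grad():
        for w in weights:
            w.mul_((w.abs() > threshold).float())  # apply the binary mask
```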
|
How they observe that fine-tuning BERT on a specific task does not improve its prunability?
|
The answers are shown as follows:
* we compressed English BERT using magnitude weight pruning BIBREF8 and observed the results on transfer learning to the General Language Understanding Evaluation (GLUE) benchmark BIBREF9, a diverse set of natural language understanding tasks including sentiment analysis, NLI, and textual similarity evaluation.
|
null | false
| 136
|
We build and test our MMT models on the Multi30K dataset BIBREF21 . Each image in Multi30K contains one English (EN) description taken from Flickr30K BIBREF22 and human translations into German (DE), French (FR) and Czech BIBREF23 , BIBREF24 , BIBREF25 . The dataset contains 29,000 instances for training, 1,014 for development, and 1,000 for test. We only experiment with German and French, which are languages for which we have in-house expertise for the type of analysis we present. In addition to the official Multi30K test set (test 2016), we also use the test set from the latest WMT evaluation competition, test 2018 BIBREF25 .
The dataset contains 29,000 instances for training, 1,014 for development, and 1,000 for test.
|
What instances does this dataset contain?
|
The dataset contains 29,000 instances for training, 1,014 for development, and 1,000 for test.
|
null | false
| 49
|
We participated in the WMT 2016 shared news translation task by building neural translation systems for four language pairs: English INLINEFORM0 Czech, English INLINEFORM1 German, English INLINEFORM2 Romanian and English INLINEFORM3 Russian. Our systems are based on an attentional encoder-decoder BIBREF0 , using BPE subword segmentation for open-vocabulary translation with a fixed vocabulary BIBREF1 . We experimented with using automatic back-translations of the monolingual News corpus as additional training data BIBREF2 , pervasive dropout BIBREF3 , and target-bidirectional models.
We perform early stopping for single models, and use the 4 last saved models (with models saved every 30000 minibatches) for the ensemble results.
|
What do the authors use the 4 last saved models (with models saved every 30000 minibatches) for?
|
Ensemble results.
|
null | false
| 187
|
Knowledge graphs (KGs) containing relationship triples (subject, relation, object), denoted as (s, r, o), are useful resources for many NLP and especially information retrieval applications, such as semantic search and question answering BIBREF0. However, large knowledge graphs, even those containing billions of triples, are still incomplete, i.e., missing a lot of valid triples BIBREF1. Therefore, much research effort has focused on the knowledge graph completion task, which aims to predict missing triples in KGs, i.e., predicting whether a triple not in a KG is likely to be valid or not BIBREF2, BIBREF3, BIBREF4. To this end, many embedding models have been proposed to learn vector representations for entities (i.e., subject/head entity and object/tail entity) and relations in KGs, and have obtained state-of-the-art results as summarized by BIBREF5 and BIBREF6. These embedding models score triples (s, r, o) such that valid triples have higher plausibility scores than invalid ones BIBREF2, BIBREF3, BIBREF4. For example, in the context of KGs, the score for (Melbourne, cityOf, Australia) is higher than the score for (Melbourne, cityOf, United Kingdom).
Triple modeling is applied not only to KG completion but also to other tasks which can be formulated as a triple-based prediction problem. An example is search personalization, where one would aim to tailor search results to each specific user based on the user's personal interests and preferences BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12. Here the triples can be formulated as (submitted query, user profile, returned document) and used to re-rank documents returned to a user given an input query, by employing an existing KG embedding method such as TransE BIBREF3, as proposed by BIBREF12. Previous studies have shown the effectiveness of triple modeling for either KG completion or search personalization. However, there has been no single study investigating the performance on both tasks.
Conventional embedding models, such as TransE BIBREF3, DISTMULT BIBREF13 and ComplEx BIBREF14, use addition, subtraction or simple multiplication operators, and thus only capture linear relationships between entities. Recent research has raised interest in applying deep neural networks to triple-based prediction problems. For example, BIBREF15 proposed ConvKB—a convolutional neural network (CNN)-based model for KG completion—and achieved state-of-the-art results. Most KG embedding models are constructed to model the entries at the same dimension of the given triple, where presumably each dimension captures some relation-specific attribute of entities. To the best of our knowledge, however, none of the existing models has a “deep” architecture for modeling the entries in a triple at the same dimension.
BIBREF16 introduced capsule networks (CapsNet) that employ capsules (i.e., each capsule is a group of neurons) to capture entities in images and then uses a routing process to specify connections from capsules in a layer to those in the next layer. Hence CapsNet could encode the intrinsic spatial relationship between a part and a whole constituting viewpoint invariant knowledge that automatically generalizes to novel viewpoints. Each capsule accounts for capturing variations of an object or object part in the image, which can be efficiently visualized. Our high-level hypothesis is that embedding entries at the same dimension of the triple also have these variations, although it is not straightforward to be visually examined.
To that end, we introduce CapsE to explore a novel application of CapsNet to triple-based data for two problems: KG completion and search personalization. Different from the traditional design of CapsNet, where capsules are constructed by splitting feature maps, we use capsules to model the entries at the same dimension in the entity and relation embeddings. In our CapsE, $v_s$, $v_r$ and $v_o$ are unique $k$-dimensional embeddings of $s$, $r$ and $o$, respectively. The embedding triple [$v_s$, $v_r$, $v_o$] of (s, r, o) is fed to the convolution layer, where multiple filters of the same $1\times 3$ shape are repeatedly operated over every row of the matrix to produce $k$-dimensional feature maps. Entries at the same dimension from all feature maps are then encapsulated into a capsule. Thus, each capsule can encode many characteristics in the embedding triple to represent the entries at the corresponding dimension. These capsules are then routed to another capsule, which outputs a continuous vector whose length is used as a score for the triple. Finally, this score is used to predict whether the triple (s, r, o) is valid or not.
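For illustration, the following is a highly simplified PyTorch sketch of this scoring pipeline. The hyper-parameters are arbitrary and the iterative routing is collapsed into a single uniform step, so this is a reading aid rather than the paper's implementation.

```python
import torch
import torch.nn as nn

def squash(x, dim=-1, eps=1e-9):
    # Capsule non-linearity: short vectors shrink toward 0, long ones toward unit length.
    sq = (x ** 2).sum(dim=dim, keepdim=True)
    return (sq / (1.0 + sq)) * x / torch.sqrt(sq + eps)

class CapsETripleScorer(nn.Module):
    """Simplified CapsE-style scorer: conv over the [v_s, v_r, v_o] matrix,
    one capsule per embedding dimension, single uniform routing step."""

    def __init__(self, k=50, num_filters=8, out_caps_dim=10):
        super().__init__()
        # Each 1x3 filter slides over the k rows of the k-by-3 embedding matrix,
        # yielding a k-dimensional feature map per filter.
        self.conv = nn.Conv2d(1, num_filters, kernel_size=(1, 3))
        # Transformations from each of the k first-layer capsules to the output capsule.
        self.W = nn.Parameter(0.1 * torch.randn(k, num_filters, out_caps_dim))

    def forward(self, v_s, v_r, v_o):
        # Stack the three embeddings into a (batch, 1, k, 3) "image".
        x = torch.stack([v_s, v_r, v_o], dim=-1).unsqueeze(1)
        maps = torch.relu(self.conv(x)).squeeze(-1)         # (batch, num_filters, k)
        caps = squash(maps.permute(0, 2, 1))                # k capsules of size num_filters
        u_hat = torch.einsum('bkf,kfd->bkd', caps, self.W)  # per-capsule predictions
        out = squash(u_hat.mean(dim=1))                     # uniform routing + squash
        return out.norm(dim=-1)                             # vector length = triple score

scorer = CapsETripleScorer()
v = torch.randn(3, 2, 50)  # a toy batch of 2 triples with k = 50
print(scorer(v[0], v[1], v[2]))
```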
In summary, our main contributions from this paper are as follows:
• We propose an embedding model, CapsE, that uses the capsule network BIBREF16 to model relationship triples. To the best of our knowledge, our work is the first to explore capsule networks for knowledge graph completion and search personalization.
• We evaluate our CapsE for knowledge graph completion on two benchmark datasets, WN18RR BIBREF17 and FB15k-237 BIBREF18 . CapsE obtains the best mean rank on WN18RR and the highest mean reciprocal rank and highest Hits@10 on FB15k-237.
• We restate the prospective strategy of extending triple embedding models to improve the ranking quality of search personalization systems. We adapt our model to search personalization and evaluate it on SEARCH17 BIBREF12 , a dataset of web search query logs. Experimental results show that our CapsE achieves new state-of-the-art results with significant improvements over strong baselines.
Therefore, much research effort has focused on the knowledge graph completion task, which aims to predict missing triples in KGs, i.e., to predict whether a triple not in a KG is likely to be valid or not.
|
Is a triple not in the KGs likely to be valid or not?
|
Yes, it is.
|
null | false
| null |
How can you identify and suss out misinformation seen in so-called news stories and on social media?
|
Be sure to inspect the source of the content - look for known news organizations that are domestically or internationally recognized. Be leery of sources that you've never heard of (even if they sound legitimate) and, when in doubt, search for information about the publication's ownership. It may be a government-backed entity or some other group regurgitating propaganda to further a specific cause. It is also important to go to the primary source wherever possible. This means if the article quotes a person/company/government/other entity, check that entity's website or social media feed to see if you can independently verify the information. Sometimes propaganda can manufacture quotes or attribution, and simply corroborating on your own can uncover lies and misinformation.
|
|
2002.00876
| false
| null |
Optimizations ::: a) Parallel Scan Inference
The commutative properties of semiring algorithms allow flexibility in the order in which we compute $A(\ell )$. Typical implementations of dynamic programming algorithms are serial in the length of the sequence. On parallel hardware, an appealing alternative is a parallel scan ordering BIBREF35, typically used for computing prefix sums. To compute $A(\ell )$ in this manner, we first pad the sequence length $T$ out to the nearest power of two and then compute a balanced parallel tree over the parts, shown in Figure FIGREF21. Concretely, each node layer computes a semiring matrix multiplication, e.g. $ \bigoplus _c \ell _{t, \cdot , c} \otimes \ell _{t^{\prime }, c, \cdot }$. Under this approach, we only need $O(\log N)$ steps in Python and can use parallel GPU operations for the rest. A similar parallel approach can also be used for computing sequence alignment and semi-Markov models.
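As an illustration, the sketch below performs this balanced-tree reduction in the log semiring with PyTorch; the padding scheme and tensor shapes are our own simplifications, not the library's implementation.

```python
import torch

def log_matmul(a, b):
    # Semiring matmul with (logsumexp, +): a is (..., n, m), b is (..., m, p).
    return torch.logsumexp(a.unsqueeze(-1) + b.unsqueeze(-3), dim=-2)

def parallel_scan(parts):
    """Reduce T transition matrices of log-potentials with a balanced tree:
    O(log T) sequential steps, each a batched semiring matrix multiplication."""
    T, C = parts.shape[0], parts.shape[1]
    pow2 = 1 << (T - 1).bit_length()  # pad T to the next power of two
    if pow2 != T:
        pad = torch.full((pow2 - T, C, C), float('-inf'))
        pad[:, torch.arange(C), torch.arange(C)] = 0.0  # semiring identity matrices
        parts = torch.cat([parts, pad], dim=0)
    while parts.shape[0] > 1:
        # Combine adjacent pairs in parallel, one tree level at a time.
        parts = log_matmul(parts[0::2], parts[1::2])
    return parts[0]

potentials = torch.randn(6, 4, 4)  # T = 6 steps, C = 4 states (toy values)
print(parallel_scan(potentials).shape)
```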
Optimizations ::: b) Vectorized Parsing
Computational complexity is even more of an issue for parsing algorithms, which cannot be as easily parallelized. The log-partition for parsing is computed with the Inside algorithm. This algorithm must compute each width from 1 through $T$ in serial; however, it is important to parallelize each inner step. Assuming we have computed all inside spans of width less than $d$, computing the inside span of width $d$ requires computing, for all $i$,

$[C]_{i, i+d} = \bigoplus _{j=1}^{d-1} [C]_{i, i+j} \otimes [C]_{i+j, i+d},$

where $C$ is the chart of inside scores.
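A minimal sketch of this width-serial, position-parallel computation in the log semiring is shown below; grammar rules are abstracted away (a single chart cell per span), so it illustrates the vectorization over $i$ rather than a full parser.

```python
import torch

def inside_log_partition(potentials):
    """Inside algorithm in the log semiring with a single chart cell per span.
    potentials[i, j] is the log-potential of the span from i to j (i < j <= T)."""
    T = potentials.shape[0] - 1
    C = torch.full_like(potentials, float('-inf'))
    idx = torch.arange(T)
    C[idx, idx + 1] = potentials[idx, idx + 1]      # width-1 spans
    for d in range(2, T + 1):                       # widths are necessarily serial
        i = torch.arange(T - d + 1)                 # all start positions, in parallel
        # Combine [i, i+j] with [i+j, i+d] for every split point j.
        left = torch.stack([C[i, i + j] for j in range(1, d)])
        right = torch.stack([C[i + j, i + d] for j in range(1, d)])
        C[i, i + d] = torch.logsumexp(left + right, dim=0) + potentials[i, i + d]
    return C[0, T]

pot = torch.randn(6, 6)  # toy chart potentials for a length-5 sentence
print(inside_log_partition(pot))
```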
Optimizations ::: c) Semiring Matrix Operations
The two previous optimizations reduce most of the cost to semiring matrix multiplication. In the specific case of the $(\sum , \times )$ semiring, these can be computed very efficiently using standard matrix multiplication, which is highly tuned on GPU hardware. Unfortunately for other semirings, such as log and max, these operations are either slow or very memory inefficient. For instance, for matrices $T$ and $U$ of sizes $N \times M$ and $M \times O$, we can broadcast with $\otimes $ to a tensor of size $N \times M \times O$ and then reduce dim $M$ by $\bigoplus $ at a huge memory cost. To avoid this issue, we implement custom CUDA kernels targeting fast and memory-efficient tensor operations. For log, this corresponds to computing

$[V]_{n, o} = \log \sum _m \exp ([T]_{n, m} + [U]_{m, o}).$
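The memory trade-off can be sketched in plain PyTorch, where a chunked reduction stands in for the custom CUDA kernel (which avoids the intermediate tensor entirely):

```python
import torch

def log_matmul_naive(T, U):
    # Broadcast to N x M x O, then reduce M: correct but O(N*M*O) peak memory.
    return torch.logsumexp(T.unsqueeze(-1) + U.unsqueeze(-3), dim=-2)

def log_matmul_chunked(T, U, chunk=128):
    # Loop over output columns to cap peak memory at O(N*M*chunk); a fused
    # CUDA kernel avoids even this intermediate tensor.
    outs = [torch.logsumexp(T.unsqueeze(-1) + U[:, o:o + chunk].unsqueeze(-3), dim=-2)
            for o in range(0, U.shape[1], chunk)]
    return torch.cat(outs, dim=-1)

T_mat, U_mat = torch.randn(64, 32), torch.randn(32, 256)
print(torch.allclose(log_matmul_naive(T_mat, U_mat), log_matmul_chunked(T_mat, U_mat)))
```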
Parallel Scan Inference
The commutative properties of semiring algorithms allow flexibility in the order in which we compute $A(\ell )$. Typical implementations of dynamic programming algorithms are serial in the length of the sequence.
Vectorized Parsing
Computational complexity is even more of an issue for parsing algorithms, which cannot be as easily parallelized.
Semiring Matrix Operations
The two previous optimizations reduce most of the cost to semiring matrix multiplication. In the specific case of the $(\sum , \times )$ semiring, these can be computed very efficiently using standard matrix multiplication, which is highly tuned on GPU hardware. Unfortunately for other semirings, such as log and max, these operations are either slow or very memory inefficient. For instance, for matrices $T$ and $U$ of sizes $N \times M$ and $M \times O$, we can broadcast with $\otimes $ to a tensor of size $N \times M \times O$ and then reduce dim $M$ by $\bigoplus $ at a huge memory cost.
|
What baselines are used in experiments?
|
The answers are shown as follows:
* Typical implementations of dynamic programming algorithms are serial in the length of the sequence
* Computational complexity is even more of an issue for parsing algorithms, which cannot be as easily parallelized
* Unfortunately for other semirings, such as log and max, these operations are either slow or very memory inefficient
|
null | false
| null |
What are some ways to reuse smoked brisket?
|
Smoked brisket is delicious and can be reused in many ways, including:
Brisket ragu
Brisket tacos
Brisket nachos
Chopped brisket sandwiches
Brisket stroganoff
Brisket with broccoli and rice
Brisket melt sandwiches
Brisket hash
Brisket omelets
Brisket scramble
Chopped brisket salad
|
|
null | false
| 118
|
One major challenge of building a QA dataset on tweets is the sparsity of informative tweets. Many users write tweets to express their feelings or emotions about their personal lives. These tweets are generally uninformative and also very difficult to ask questions about. Given the linguistic variance of tweets, it is generally hard to directly distinguish those tweets from informative ones. Because of this, rather than starting from the Twitter Search API, we look into the archived snapshots of two major news websites (CNN, NBC) and extract the tweet blocks that are embedded in the news articles. In order to get enough data, we first extract the URLs of all section pages (e.g. World, Politics, Money, Tech) from the snapshot of each home page and then crawl all articles with tweets from these section pages. Note that another possible way to collect informative tweets is to download the tweets posted by the official Twitter accounts of news media. However, these tweets are often just summaries of news articles, written in formal text. As our focus is to develop a dataset for QA on informal social media text, we do not consider this approach.
After extracting tweets from archived news articles, we observed that a portion of tweets still have very simple semantic structures, about which it is very difficult to raise meaningful questions. An example of such a tweet is: “Wanted to share this today - @IAmSteveHarvey". This tweet is actually talking about an image attached to it. Other tweets with simple text structures may talk about an inserted link or even a video. To filter out tweets that rely heavily on attached media to convey information, we utilize a state-of-the-art semantic role labeling model trained on CoNLL-2005 BIBREF15 to analyze the predicate-argument structure of the tweets collected from news articles, and we keep only the tweets with more than two labeled arguments. This filtering process also automatically filters out most of the short tweets. For the tweets collected from CNN, INLINEFORM0 of them were filtered via semantic role labeling. For tweets from NBC, INLINEFORM1 of the tweets were filtered.
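A sketch of this predicate-argument filter using an AllenNLP-style SRL predictor is given below; the model path is a placeholder, and the exact import and output format depend on the installed library version.

```python
# The model path is a placeholder; the exact import and output format depend
# on the installed AllenNLP version.
from allennlp.predictors.predictor import Predictor

predictor = Predictor.from_path("path/to/srl-model.tar.gz")  # hypothetical path

def has_rich_structure(tweet_text: str, min_args: int = 3) -> bool:
    """Keep a tweet only if some predicate has more than two labeled arguments."""
    result = predictor.predict(sentence=tweet_text)
    for verb in result.get("verbs", []):
        # Arguments appear as BIO tags such as B-ARG0, B-ARG1, B-ARGM-TMP.
        n_args = sum(1 for tag in verb["tags"] if tag.startswith("B-ARG"))
        if n_args >= min_args:
            return True
    return False

tweets = ["Wanted to share this today - @IAmSteveHarvey"]
informative = [t for t in tweets if has_rich_structure(t)]
```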
We then use Amazon Mechanical Turk to collect question-answer pairs for the filtered tweets. For each Human Intelligence Task (HIT), we ask the worker to read three tweets and write two question-answer pairs for each tweet. To ensure quality, we require the workers to be located in major English-speaking countries (i.e. Canada, US, and UK) and to have an acceptance rate larger than INLINEFORM0 . Since we use tweets as context, much important information is contained in hashtags or even emojis. Instead of only showing the text to the workers, we use JavaScript to directly embed the whole tweet into each HIT. This gives workers the same experience as reading tweets via web browsers and helps them to better compose questions.
To avoid both trivial questions that can simply be answered by superficial text-matching methods and overly challenging questions that require background knowledge, we explicitly state the following items in the HIT instructions for question writing:
No Yes-no questions should be asked.
The question should have at least five words.
Videos, images or inserted links should not be considered.
No background knowledge should be required to answer the question.
To help the workers better follow the instructions, we also include a representative example showing both good and bad questions or answers in our instructions. Figure FIGREF14 shows the example we use to guide the workers.
As for the answers, since the context we consider is relatively shorter than the contexts of previous datasets, we do not restrict the answers to spans in the tweet; otherwise, the task could potentially be simplified into a classification problem. The workers are allowed to write their answers in their own words. We only require the answers to be brief and directly inferable from the tweets.
After we retrieve the QA pairs from all HITs, we conduct further post-filtering to remove the pairs from workers who obviously did not follow the instructions. We remove QA pairs with yes/no answers. Questions with fewer than five words are also filtered out. This process filtered INLINEFORM0 of the QA pairs. The dataset now includes 10,898 articles, 17,794 tweets, and 13,757 crowdsourced question-answer pairs. The collected QA pairs will be made directly available to the public, and we will provide a script to download the original tweets along with detailed documentation on how we built the dataset. Also note that since we keep the original news article and news title for each tweet, our dataset can also be used to explore more challenging generation tasks. Table TABREF19 shows the statistics of our current collection, and the frequency of different question types is shown in Table TABREF21 . All QA pairs were written by 492 individual workers.
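The post-filtering rules described above amount to a simple predicate; the example pairs below are illustrative.

```python
def keep_pair(question: str, answer: str) -> bool:
    """Post-filtering rules: drop yes/no answers and questions under five words."""
    if answer.strip().lower() in {"yes", "no"}:
        return False
    if len(question.split()) < 5:
        return False
    return True

pairs = [("Who did the host thank in the tweet?", "his fans"),
         ("Is this real?", "yes")]
print([(q, a) for q, a in pairs if keep_pair(q, a)])
```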
For the purposes of human performance evaluation and inter-annotator agreement checking, we launch a different set of HITs asking workers to answer questions in the test and development sets. The workers are shown the tweet blocks as well as the questions collected in the previous step. At this step, workers are allowed to label a question as “NA" if they think it is not answerable. We find that INLINEFORM0 of the questions are labeled as unanswerable by the workers (for SQuAD, the ratio is INLINEFORM1 ). Since the answers collected at this step and the previous step are written by different workers, the answers can be written in different text forms even when they are semantically equal. For example, one answer can be “Hillary Clinton” while the other is “@HillaryClinton”. As it is not straightforward to automatically calculate the overall agreement, we manually check the agreement on a subset of 200 random samples from the development set and ask an independent human moderator to verify the result. It turns out that INLINEFORM2 of the answer pairs are semantically equivalent, INLINEFORM3 of them are partially equivalent (one of them is incomplete) and INLINEFORM4 are totally inconsistent. The answers collected at this step are also used to measure human performance. A total of 59 individual workers participated in this process.
One major challenge of building a QA dataset on tweets is the sparsity of informative tweets.
|
What are the challenges?
|
One major challenge of building a QA dataset on tweets is the sparsity of informative tweets.
|
1911.00547
| false
| null |
We obtained 9,892 stories of sexual harassment incidents that were reported on Safecity. These stories include a text description along with tags of the forms of harassment, e.g. commenting, ogling and groping. A dataset of these stories was published by Karlekar and Bansal karlekar2018safecity. In addition to the forms of harassment, we manually annotated each story with the key elements (i.e. “harasser", “time", “location", “trigger"), because they are essential to uncovering harassment patterns. An example is shown in Figure FIGREF3. Furthermore, we also assigned each story classification labels in five dimensions (Table TABREF4). The detailed definitions of the classifications in all dimensions are explained below.
We obtained 9,892 stories of sexual harassment incidents that were reported on Safecity. These stories include a text description along with tags of the forms of harassment, e.g. commenting, ogling and groping. A dataset of these stories was published by Karlekar and Bansal karlekar2018safecity. In addition to the forms of harassment, we manually annotated each story with the key elements (i.e. “harasser", “time", “location", “trigger"), because they are essential to uncovering harassment patterns. An example is shown in Figure FIGREF3. Furthermore, we also assigned each story classification labels in five dimensions (Table TABREF4). The detailed definitions of the classifications in all dimensions are explained below.
|
What is the size of the dataset?
|
The answers are shown as follows:
* 9,892 stories of sexual harassment incidents
|
null | false
| null |
What is the name of Baltimore Maryland's professional football team?
|
The Baltimore Ravens
|
|
null | false
| null |
What are the main types of Thai curries?
|
Green curry
Red curry
Yellow curry
Massaman curry
Panang curry
|
|
null | false
| null |
What are the options for you to use as an egg substitute when you are vegan?
|
When you are vegan, you don’t eat any animal products or by-products. Depending on what you are making and what you are looking for from a taste perspective, there are several options for substituting eggs.
Silken tofu - If you are making a vegan scramble, quiche, or homemade vegan mayonnaise, silken tofu's creamy texture can help mimic the egg’s soft texture in your breakfast meals and desserts.
Unsweetened applesauce - You can use a quarter cup of unsweetened applesauce for one egg. It adds moisture and sweetness to what you are making. The natural fruit pectin also helps bind the ingredients. It can be denser than an egg in your baking, so you may need to add a teaspoon of baking powder to encourage fluffiness.
Mashed banana - This has a very distinct flavor, so only use it in recipes where you think the banana flavor will complement what you are making. This works well in pancakes, brownies, and banana bread. You can use one ripe, mashed medium banana for every large egg. The starches and sugars in the banana help develop a beautiful golden-brown crust.
Aquafaba - This is an ideal substitute for egg whites, giving your dessert structure. Take three tablespoons of liquid from inside your can of chickpeas and whip it to a fluffy consistency.
Flaxseeds or chia seeds - These are best for recipes that don’t require too many eggs. Combine one tablespoon of seeds with three tablespoons of water for every large egg. Let it sit for 20 minutes to get a slimy mixture that mimics an egg.
Baking soda and vinegar - Mix a teaspoon of baking soda and a tablespoon of white vinegar together to help create the air bubbles needed in things like pancakes, cookies, and quick breads. This is ideal where two or fewer eggs are needed.
|
|
2002.11402
| false
| null |
FLOAT SELECTED: Figure 1. BERT + Bi-GRU + CRF, Final Architecture Chosen For Topic Detection Task.
FLOAT SELECTED: Figure 1. BERT + Bi-GRU + CRF, Final Architecture Chosen For Topic Detection Task.
|
How many layers does their system have?
|
4 layers
|
null | false
| null |
What type of weather is San Francisco famous for?
|
San Francisco is a famously foggy city
|
|
null | false
| null |
Who will win the F1 world championship in 2023?
|
Max Verstappen
|
|
null | false
| null |
Classify these items as likely to be found in a bedroom vs likely to be found in a park: laptop, water glass, grass, trail, water, rocks, pillow, curtains.
|
Laptop, water glass, pillow and curtains are all more likely to be found in a bedroom.
Grass, trail, and rocks are more likely to be found in a park.
Water is likely to be found in a bedroom in a glass for drinking, but could also be found in a park in the form of a lake or stream.
|
|
null | false
| null |
Give me a list of common German car manufacturers
|
Here are some common German car manufacturers:
Audi
BMW
Mercedes Benz
Opel
Porsche
Volkswagen
|
|
null | false
| null |
A funeral is a ceremony connected with the final disposition of a corpse, such as a burial or cremation, with the attendant observances. Funerary customs comprise the complex of beliefs and practices used by a culture to remember and respect the dead, from interment, to various monuments, prayers, and rituals undertaken in their honor. Customs vary between cultures and religious groups. Funerals have both normative and legal components. Common secular motivations for funerals include mourning the deceased, celebrating their life, and offering support and sympathy to the bereaved; additionally, funerals may have religious aspects that are intended to help the soul of the deceased reach the afterlife, resurrection or reincarnation.
The funeral usually includes a ritual through which the corpse receives a final disposition. Depending on culture and religion, these can involve either the destruction of the body (for example, by cremation or sky burial) or its preservation (for example, by mummification or interment). Differing beliefs about cleanliness and the relationship between body and soul are reflected in funerary practices. A memorial service (or celebration of life) is a funerary ceremony that is performed without the remains of the deceased person.
The word funeral comes from the Latin funus, which had a variety of meanings, including the corpse and the funerary rites themselves. Funerary art is art produced in connection with burials, including many kinds of tombs, and objects specially made for burial like flowers with a corpse.
|
From the passage list down the religious aspects of a funeral. List the results in comma separated format.
|
afterlife, resurrection, reincarnation
|
null | false
| null |
What is non dual philosophy?
|
The term non-dual refers to things or experiences that have the characteristic of uniformity. As an example, in daily life we see or experience highs and lows, haves and have-nots, which result in emotions or feelings of happiness or sadness. The concept of non-duality is to go deep within and understand that everything is temporary, and to experience things before human thought labels each experience as good or bad.
|
|
null | false
| null |
Give me a list of the five most recent NBA champions.
|
1. Golden State Warriors (2022)
2. Milwaukee Bucks (2021)
3. Los Angeles Lakers (2020)
4. Toronto Raptors (2019)
5. Golden State Warriors (2018)
|
|
null | false
| null |
Which is a species of fish? Grass carp or Grass
|
Grass carp
|
|
2004.03034
| false
| null |
Claims and impact votes. We collected 47,219 claims from kialo.com for 741 controversial topics and their corresponding impact votes. Impact votes are provided by the users of the platform to evaluate how impactful a particular claim is. Users can pick one of 5 possible impact labels for a particular claim: no impact, low impact, medium impact, high impact and very high impact. While evaluating the impact of a claim, users have access to the full argument context and therefore, they can assess how impactful a claim is in the given context of an argument. An interesting observation is that, in this dataset, the same claim can have different impact labels depending on the context in which it is presented.
Impact votes are provided by the users of the platform to evaluate how impactful a particular claim is. Users can pick one of 5 possible impact labels for a particular claim: no impact, low impact, medium impact, high impact and very high impact. While evaluating the impact of a claim, users have access to the full argument context and therefore, they can assess how impactful a claim is in the given context of an argument.
|
What annotations are available in the dataset?
|
The answers are shown as follows:
* 5 possible impact labels for a particular claim: no impact, low impact, medium impact, high impact and very high impact
|
null | false
| null |
What are the various methods to offer support for the improvement of the traditionally underprivileged sections of the society?
|
In many societies there are sections of people who are underprivileged. These people could be underprivileged because of several factors: ethnicity, religion, colour, language, nationality, gender, etc. The people who do not fall under the underprivileged section are referred to here as the normal people.
The underprivileged suffer major struggles in their lives over decades or sometimes over centuries. On social development indices such as literacy, life expectancy, incarceration, and health conditions, the underprivileged lag behind the normal people significantly. Examples of underprivileged people include the indigenous people in countries such as Australia, New Zealand, Canada, and the USA, and also people of African descent in the USA.
The people from the underprivileged communities have to be brought into the mainstream, and in order to achieve this objective, the required job and career opportunities have to be created. Education is a powerful tool to equip people with the skills required to perform these roles. These people have been deprived of education for a long time, so governments and social leaders have to take the initiative to extend education to the underprivileged. There has to be a concerted effort over a long time, sometimes even 50 to 100 years, to improve the skill levels of the underprivileged by focusing on education and training.
The underprivileged have to become empowered citizens of the society. In order to enable these people to overcome the handicap of the suppression they have suffered for a long time, leaders from these communities have to be involved in the decision-making and governance forums of the society. As an example, in the Indian Parliament around 24% of the seats are reserved for people from the underprivileged sections to contest and get elected as members of parliament. Using this model of reserving seats for the underprivileged in governance bodies, we can provide opportunities for the underprivileged to participate in governance and enact laws that protect their interests.
A combination of education and empowerment would go a long way toward improving the lives of the underprivileged in the society.
|
|
null | false
| null |
Bush lost the 1992 presidential election to Democrat Bill Clinton following an economic recession, his turnaround on his tax promise, and the decreased emphasis of foreign policy in a post–Cold War political climate.
|
Why did George HW Bush lose the presidency?
|
George HW Bush lost the presidency due to an economic recession and his turnaround on a tax promise
|
null | false
| null |
Baseball is a bat-and-ball sport played between two teams of nine players each, taking turns batting and fielding. The game occurs over the course of several plays, with each play generally beginning when a player on the fielding team, called the pitcher, throws a ball that a player on the batting team, called the batter, tries to hit with a bat. The objective of the offensive team (batting team) is to hit the ball into the field of play, away from the other team's players, allowing its players to run the bases, having them advance counter-clockwise around four bases to score what are called "runs". The objective of the defensive team (referred to as the fielding team) is to prevent batters from becoming runners, and to prevent runners' advance around the bases. A run is scored when a runner legally advances around the bases in order and touches home plate (the place where the player started as a batter).
The principal objective of the batting team is to have a player reach first base safely; this generally occurs either when the batter hits the ball and reaches first base before an opponent retrieves the ball and touches the base, or when the pitcher persists in throwing the ball out of the batter's reach. Players on the batting team who reach first base without being called "out" can attempt to advance to subsequent bases as a runner, either immediately or during teammates' turns batting. The fielding team tries to prevent runs by getting batters or runners "out", which forces them out of the field of play. The pitcher can get the batter out by throwing three pitches which result in strikes, while fielders can get the batter out by catching a batted ball before it touches the ground, and can get a runner out by tagging them with the ball while the runner is not touching a base.
The opposing teams switch back and forth between batting and fielding; the batting team's turn to bat is over once the fielding team records three outs. One turn batting for each team constitutes an inning. A game is usually composed of nine innings, and the team with the greater number of runs at the end of the game wins. Most games end after the ninth inning, but if scores are tied at that point, extra innings are usually played. Baseball has no game clock, though some competitions feature pace-of-play regulations such as the pitch clock to shorten game time.
Baseball evolved from older bat-and-ball games already being played in England by the mid-18th century. This game was brought by immigrants to North America, where the modern version developed. Baseball's American origins, as well as its reputation as a source of escapism during troubled points in American history such as the American Civil War and the Great Depression, have led the sport to receive the moniker of "America's Pastime"; since the late 19th century, it has been unofficially recognized as the national sport of the United States, though in modern times is considered less popular than other sports, such as American football. In addition to North America, baseball is considered the most popular sport in parts of Central and South America, the Caribbean, and East Asia, particularly in Japan, South Korea, and Taiwan.
|
What is the objective of a batter in the sport of baseball?
|
The principal objective of the batting team in the sport of baseball is to have a player reach first base safely; this generally occurs either when the batter hits the ball and reaches first base before an opponent retrieves the ball and touches the base, or when the pitcher persists in throwing the ball out of the batter's reach. Players on the batting team who reach first base without being called "out" can attempt to advance to subsequent bases as a runner, either immediately or during teammates' turns batting. The fielding team tries to prevent runs by getting batters or runners "out", which forces them out of the field of play. The pitcher can get the batter out by throwing three pitches which result in strikes, while fielders can get the batter out by catching a batted ball before it touches the ground, and can get a runner out by tagging them with the ball while the runner is not touching a base.
|
null | false
| null |
Tell me whether each company is based inside or outside of the USA: Uber, Grab, Samsung, Meituan, Exxon, Amazon, Roche
|
Uber: Inside USA
Grab: Outside USA
Samsung: Outside USA
Meituan: Outside USA
Exxon: Inside USA
Amazon: Inside USA
Roche: Outside USA
|
|
null | false
| null |
Peggy Antonio (2 June 1917 – 11 January 2002, Melbourne, Australia) was an Australian women's Test cricketer, known as the "Girl Grimmett".
Antonio was raised in Port Melbourne, Victoria, a working-class suburb of Melbourne. Her father, a Chilean docker of French and Spanish descent, died when she was 15 months old. With the encouragement of her uncle she learnt her cricket from the boys in her neighbourhood streets. As a young girl during the Great Depression, she was lucky enough to find work at a shoe factory in the industrial suburb of Collingwood. The factory was home to a women's cricket team, where Antonio came to the attention of Eddie Conlon, a club cricketer with an encyclopaedic knowledge of the game. With the assistance of Conlon, Antonio developed a rare mix of leg spin and off spin, including a top spinner and a wrong'un.
|
List down some points about life of cricketer Peggy Antonio from the given passage
|
1. Peggy Antonio, also referred to as the "Girl Grimmett," played women's Test cricket for Australia and died on January 11, 2002, in Melbourne, Australia.
2. Antonio grew up in the working-class Melbourne district of Port Melbourne, Victoria.
3. When she was 15 months old, her father, a Chilean docker of French and Spanish ancestry, passed away.
4. With her uncle's encouragement, she picked up cricket from the boys on the streets of her neighbourhood.
5. She was fortunate enough to work at a shoe factory in the Collingwood industrial area during the Great Depression as a young child.
6. Eddie Conlon, a club cricketer with an encyclopaedic knowledge of the game, noticed Antonio when she was playing for the women's cricket team at the factory.
7. With Conlon's help, Antonio created an uncommon combination of leg spin and off spin, including a top spinner and a wrong'un.
|
null | false
| 60
|
Informal speech differs from formal speech, especially in Vietnamese, because of the many conjunctive words in the language. Building an ASR model to handle this kind of speech is particularly difficult due to the lack of training data and the cost of data collection. Two components of an ASR system contribute the most to its accuracy: the acoustic model and the language model. While collecting data for the acoustic model is time-consuming and costly, language model data is much easier to collect.
Language model training for an Automatic Speech Recognition (ASR) system is usually based on corpora crawled from formal text, so some conjunctive words that are often used in conversation are missed, and the system becomes biased toward written-style speech.
In this paper, we present our attempt to mitigate these problems using a large-scale dataset and a language model combination technique that requires only a small amount of conversational data but can still handle conversational speech very well.
In this paper, we present our attempt to mitigate these problems using a large-scale dataset and a language model combination technique that requires only a small amount of conversational data but can still handle conversational speech very well.
|
What technique does the author use?
|
A language model combination technique.
|
1809.08510
| false
| null |
To test this hypothesis we first trained UG-WGAN in English, Chinese and German following the procedure described in Section "UG-WGAN" . The embedding table size was 300 and the internal LSTM hidden size was 512. A dropout rate of $0.1$ was used, and the model was trained with the ADAM optimization method BIBREF23 . Since we are interested in the zero-shot capabilities of our representation, we trained our sentiment analysis model only on the English IMDB Large Movie Review dataset and tested it on the Chinese ChnSentiCorp dataset and the German SB-10K BIBREF24 , BIBREF25 . We binarize the labels for all the datasets.
For this experiment we trained UG-WGAN on the English and Russian languages following the procedure described in Section "UG-WGAN" . We kept the hyper-parameters equivalent to those of the sentiment analysis experiment. All of the NLI models tested were run over the fixed UG embeddings. We trained two different models from the literature, the Densely-Connected Recurrent and Co-Attentive Network by BIBREF30 and the Multiway Attention Network by BIBREF31 . Please refer to these papers for further implementation details.
One way to measure universality is to study the perplexity of our multi-lingual language model as we increase the number of languages. To do so we trained 6 UG-WGAN models on the following languages: English, Russian, Arabic, Chinese, German, Spanish, French. We maintained the same procedure as described above. The hidden size of the language model was increased to 1024, with 16K BPE tokens being used. The first model was trained on English and Russian, the second on English, Russian and Arabic, and so on. For Arabic we still trained from left to right even though the language is naturally read from right to left. We report the results in Figure 5 . As the number of languages increases, the gap between a UG-WGAN without any distribution matching and one with it diminishes. This implies that the efficiency and representative power of UG-WGAN grows as we increase the number of languages it has to model.
To test this hypothesis we first trained UG-WGAN in English, Chinese and German following the procedure described in Section "UG-WGAN" .
For this experiment we trained UG-WGAN on the English and Russian languages following the procedure described in Section "UG-WGAN" .
To do so we trained 6 UG-WGAN models on the following languages: English, Russian, Arabic, Chinese, German, Spanish, French.
|
What are the languages they consider in this paper?
|
The languages considered were English, Chinese, German, Russian, Arabic, Spanish, French
|
null | false
| 141
|
Three datasets are used in the experiments as described in the following subsections. We use two datasets for Thai sentence segmentation, and the third dataset is used for English punctuation restoration. The statistics of the preprocessed data are shown in Table TABREF31 , including the number of sequences and the number of vocabulary words in each dataset. We also calculate the average number of words per passage in the unlabeled data that do not appear in the labeled data, as shown in Table TABREF32 .
This dataset is a Thai part-of-speech (POS)-tagged corpus containing 10,864 sentences. In the corpus, text was separated into paragraphs, sentences, and words hierarchically by linguists. Each word was also manually assigned a POS tag by linguists. The corpus includes no unlabeled data with the same word segmentation and POS tag set; hence, we do not execute CVT on this dataset.
Our data preprocessing on the ORCHID corpus is similar to that in BIBREF8 : all comments are removed, and the data are partitioned into 10 parts containing equal numbers of sentences to support 10-fold cross-validation. Each training set is split into one part used for validation, with the rest used for model training. Subsequently, all the words in each dataset are concatenated and then separated into sequences of 200 words per instance. Each sequence always begins with the first word of a sentence. If a sequence ends with an unfinished sentence, the next sequence starts with that complete sentence.
This Thai dataset includes many types of labeled data useful in sentence segmentation tasks. The raw text was generated by users having conversations in the financial domain and were acquired mainly by crawling social sites. The labeled data for sentence segmentation were manually annotated by linguists using the definitions in BIBREF21 .
At the time of this study, the dataset had been extended from that in BIBREF21 ; the data were collected from January 2017 to December 2017. The labeled dataset includes 48,374 passages. To support semi-supervised learning, the first 3 months of data (96,777 passages) are unlabeled.
Because the data stem from social media, some text exists that cannot be considered part of any sentence, such as product links, symbols unrelated to sentences, and spaces between sentences. These portions were not originally annotated as sentences by the linguists. However, in this work, we treat these portions as individual sentences and tag the first word of each fragment as a sentence boundary.
For evaluation purposes, the collection of passages in this dataset is based on 5-fold cross-validation, similar to the previous work BIBREF21 . The passages are treated as input sequences for the model. For each passage, word segmentation and POS tagging are processed by the custom models from this dataset.
We adopted this English-language dataset to enable comparisons with models intended for other languages. The dataset is composed of TED talk transcripts. To compare our model with those of previous works, we selected the training dataset for the machine translation track in IWSLT2012 and separated it into training and validation sets containing 2.1 million and 295 thousand words, respectively. The testing dataset is the IWSLT2011 reference set, which contains 13 thousand words. To acquire unlabeled data for semi-supervised learning, we adopted the IWSLT2016 machine translation track training data; duplicate talks that also appear in IWSLT2012 are discarded.
The data preprocessing follows the process in BIBREF36 . Each sequence is generated from 200 words, and its beginning is always the first word of a sentence. If a sentence is cut at the end of a sequence, that sentence is copied in full to the beginning of the next sequence.
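A sketch of this sequence construction is shown below; the sentence-copying rule at sequence boundaries follows the description above, and the toy input is illustrative.

```python
def make_sequences(sentences, seq_len=200):
    """Pack tokenized sentences into fixed-length sequences; a sentence cut at
    the end of one sequence is copied in full to the start of the next."""
    sequences, current = [], []
    for sent in sentences:
        if len(current) + len(sent) <= seq_len:
            current.extend(sent)
        else:
            # Fill out this sequence with the head of the sentence, then restart
            # the next sequence with the same sentence copied in full.
            sequences.append((current + sent)[:seq_len])
            current = list(sent)
        if len(current) == seq_len:
            sequences.append(current)
            current = []
    if current:
        sequences.append(current)
    return sequences

toy = [["i", "like", "it", "."], ["so", "do", "we", "."]]
print(make_sequences(toy, seq_len=6))
```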
To use our model, the POS of each word is required. However, the IWSLT dataset contains only the raw text of the transcripts and does not include POS tags. Thus, we perform POS tagging with an external library BIBREF38 to predict the POS of each word.
Three datasets are used in the experiments as described in the following subsections. We use two datasets for Thai sentence segmentation, and the third dataset is used for English punctuation restoration.
|
What did they do with their three data sets in experiments?
|
They use two datasets for Thai sentence segmentation, and the third dataset is used for English punctuation restoration.
|
null | false
| null |
Corporate logo since 2021; headquarters on 42nd Street in Midtown Manhattan.
Type: Public
Traded as: NYSE: PFE; S&P 100 component; S&P 500 component
Industry: Pharmaceutical; Biotechnology
Founded: 1849 (174 years ago) in New York City
Founders: Charles Pfizer, Charles F. Erhart
Headquarters: New York City, U.S.
Area served: Worldwide
Key people: Albert Bourla (CEO)
Products: Pharmaceutical drugs, vaccines
Revenue: US$100.3 billion (2022, up)
Operating income: US$31.40 billion (2022, up)
Net income: US$31.37 billion (2022, up)
Total assets: US$197.2 billion (2022, up)
Total equity: US$95.66 billion (2022, up)
Number of employees: c. 83,000 (2022)
|
Extract the stock ticker from the following text
|
NYSE: PFE
|
1810.12885
| false
| null |
FLOAT SELECTED: Table 4: Performance of various methods and human.
FLOAT SELECTED: Table 4: Performance of various methods and human.
|
Which models do they try out?
|
DocQA, SAN, QANet, ASReader, LM, Random Guess
|
null | false
| null |
A vaccine is a biological preparation that provides active acquired immunity to a particular infectious or malignant disease. The safety and effectiveness of vaccines have been widely studied and verified. A vaccine typically contains an agent that resembles a disease-causing microorganism and is often made from weakened or killed forms of the microbe, its toxins, or one of its surface proteins. The agent stimulates the body's immune system to recognize the agent as a threat, destroy it, and further recognize and destroy any of the microorganisms associated with that agent that it may encounter in the future.
|
What is the use of a vaccine?
|
Vaccines can be used to provide immunity to a specific disease or infection. It works by stimulating the body's immune system to be able to identify and destroy threat agents.
|
null | false
| null |
Describe a situation in which scraping data from the web may be justified. Illustrate this situation with a specific example.
|
I envisioned a business that tracks all the B2B digital partnerships (data connectors between apps). The B2B partnership data can be easily monetized with ads and can be a data source for content-generation activities like blogs, interactive charts, etc. An example of B2B partnership data is an app store or digital marketplace. (Example - https://ecosystem.hubspot.com/marketplace/apps/).
The HubSpot ecosystem has B2B partnerships with 600+ companies. A rich dataset of B2B partnerships must track the app stores or marketplaces of thousands of companies. For example, a company's marketplace data connectors might include the HubSpot ecosystem, Salesforce AppExchange, Google Cloud Marketplace, Azure, and Dropbox connectors. The B2B data connectors for all these companies must be scraped from each company's website and are not available via an API or other methods.
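A minimal sketch of such a scraper is shown below; the CSS selector is hypothetical and would need to be adapted per marketplace, and any real use should respect each site's terms of service and robots.txt.

```python
import requests
from bs4 import BeautifulSoup

def list_partner_apps(marketplace_url: str, selector: str) -> list:
    """Fetch a marketplace page and return the text of elements matching a
    CSS selector (the selector must be found per site, e.g. via dev tools)."""
    html = requests.get(marketplace_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [tag.get_text(strip=True) for tag in soup.select(selector)]

apps = list_partner_apps("https://ecosystem.hubspot.com/marketplace/apps/",
                         ".app-card-title")  # hypothetical selector
```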
|
|
null | false
| null |
How would you describe the taste of coffee?
|
A hot coffee has an essence of oats, earth, and a bitter chocolate or cacao. It is commonly served with some form of dairy to cut through the natural bitter taste and to add sweetness. When iced, coffee can transform into a refreshing beverage with more of a spring-morning tone to the aftertaste.
|
|
null | false
| null |
What are microservices?
|
Microservices are an architectural and organizational approach to software development where software is composed of small independent services that communicate over well-defined APIs. These services are owned by small, self-contained teams.
|
|
null | false
| null |
What is the mascot of Stanford University?
|
The mascot of Stanford is a redwood tree.
|
|
null | false
| null |
A laser is a device that emits light through a process of optical amplification based on the stimulated emission of electromagnetic radiation. The word laser is an anacronym that originated as an acronym for “light amplification by stimulated emission of radiation”. The first laser was built in 1960 by Theodore Maiman at Hughes Research Laboratories, based on theoretical work by Charles H. Townes and Arthur Leonard Schawlow.
A laser differs from other sources of light in that it emits light that is coherent. Spatial coherence allows a laser to be focused to a tight spot, enabling applications such as laser cutting and lithography. Spatial coherence also allows a laser beam to stay narrow over great distances (collimation), enabling applications such as laser pointers and lidar (light detection and ranging). Lasers can also have high temporal coherence, which allows them to emit light with a very narrow spectrum. Alternatively, temporal coherence can be used to produce ultrashort pulses of light with a broad spectrum but durations as short as a femtosecond.
Lasers are used in optical disc drives, laser printers, barcode scanners, DNA sequencing instruments, fiber-optic, and free-space optical communication, semiconducting chip manufacturing (photolithography), laser surgery and skin treatments, cutting and welding materials, military and law enforcement devices for marking targets and measuring range and speed, and in laser lighting displays for entertainment. Semiconductor lasers in the blue to near-UV have also been used in place of light-emitting diodes (LEDs) to excite fluorescence as a white light source. This permits a much smaller emitting area due to the much greater radiance of a laser and avoids the droop suffered by LEDs; such devices are already used in some car headlamps.
|
What is a laser and who created it?
|
A laser is a device that emits light through a process of optical amplification based on the stimulated emission of electromagnetic radiation. The first laser was built in 1960 by Theodore Maiman at Hughes Research Laboratories, based on theoretical work by Charles H. Townes and Arthur Leonard Schawlow.
Lasers emit a particular kind of light: coherent light. Coherence can be measured spatially or temporally. Spatial coherence allows laser beams to stay narrow over great distances. Temporal coherence can be used to produce ultrashort pulses of light.
Some examples of the uses of lasers are optical disc drives, laser printers, barcode scanners, fiber optics and photolithography.
|
null | false
| null |
A serve (or, more formally, a service) in tennis is a shot to begin the point. The most common serve is the overhead serve. It is initiated by tossing the ball into the air over the server's head and hitting it when the arm is fully stretched out (usually near the apex of its trajectory) into the diagonally opposite service box without touching the net. The server may employ different types of serve: a flat, a top-spin, an American twist (or kick), or a slice serve. A severely sliced serve is sometimes called a sidespin. Some servers are content to use the serve simply to initiate the point; advanced players often try to hit a winning shot with their serve. A winning serve that is not touched by the opponent is called an ace.
|
How many types of serves exist in tennis?
|
There are different serve types in tennis. Overhead serve, flat serve, top-spin serve, kick serve and slice serve are some of the serve types.
|
1603.04513
| false
| null |
The block “filters” indicates the contribution of each filter size. The system benefits from filters of each size. Sizes 5 and 7 are most important for high performance, especially 7 (rows 25 and 26).
This work presented MVCNN, a novel CNN architecture for sentence classification. It combines multichannel initialization – diverse versions of pretrained word embeddings are used – and variable-size filters – features of multigranular phrases are extracted with variable-size convolution filters. We demonstrated that multichannel initialization and variable-size filters enhance system performance on sentiment classification and subjectivity classification tasks.
FLOAT SELECTED: Table 3: Test set results of our CNN model against other methods. RAE: Recursive Autoencoders with pretrained word embeddings from Wikipedia (Socher et al., 2011b). MV-RNN: Matrix-Vector Recursive Neural Network with parse trees (Socher et al., 2012). RNTN: Recursive Neural Tensor Network with tensor-based feature function and parse trees (Socher et al., 2013). DCNN, MAX-TDNN, NBOW: Dynamic Convolution Neural Network with k-max pooling, Time-Delay Neural Networks with Max-pooling (Collobert and Weston, 2008), Neural Bag-of-Words Models (Kalchbrenner et al., 2014). Paragraph-Vec: Logistic regression on top of paragraph vectors (Le and Mikolov, 2014). SVM, BINB, MAXENT: Support Vector Machines, Naive Bayes with unigram features and bigram features, Maximum Entropy (Go et al., 2009). NBSVM, MNB: Naive Bayes SVM and Multinomial Naive Bayes with uni-bigrams from Wang and Manning (2012). CNN-rand/static/multichannel/nonstatic: CNN with word embeddings randomly initialized / initialized by pretrained vectors and kept static during training / initialized with two copies (each is a “channel”) of pretrained embeddings / initialized with pretrained embeddings while fine-tuned during training (Kim, 2014). G-Dropout, F-Dropout: Gaussian Dropout and Fast Dropout from Wang and Manning (2013). Minus sign “-” in MVCNN (-Huang) etc. means “Huang” is not used. “versions / filters / tricks / layers” denote the MVCNN variants with different setups: discard certain embedding version / discard certain filter size / discard mutual-learning or pretraining / different numbers of convolution layer.
The block “filters” indicates the contribution of each filter size. The system benefits from filters of each size. Sizes 5 and 7 are most important for high performance, especially 7 (rows 25 and 26).
This work presented MVCNN, a novel CNN architecture for sentence classification. It combines multichannel initialization – diverse versions of pretrained word embeddings are used – and variable-size filters – features of multigranular phrases are extracted with variable-size convolution filters.
FLOAT SELECTED: Table 3: Test set results of our CNN model against other methods. RAE: Recursive Autoencoders with pretrained word embeddings from Wikipedia (Socher et al., 2011b). MV-RNN: Matrix-Vector Recursive Neural Network with parse trees (Socher et al., 2012). RNTN: Recursive Neural Tensor Network with tensor-based feature function and parse trees (Socher et al., 2013). DCNN, MAX-TDNN, NBOW: Dynamic Convolution Neural Network with k-max pooling, Time-Delay Neural Networks with Max-pooling (Collobert and Weston, 2008), Neural Bag-of-Words Models (Kalchbrenner et al., 2014). Paragraph-Vec: Logistic regression on top of paragraph vectors (Le and Mikolov, 2014). SVM, BINB, MAXENT: Support Vector Machines, Naive Bayes with unigram features and bigram features, Maximum Entropy (Go et al., 2009). NBSVM, MNB: Naive Bayes SVM and Multinomial Naive Bayes with uni-bigrams from Wang and Manning (2012). CNN-rand/static/multichannel/nonstatic: CNN with word embeddings randomly initialized / initialized by pretrained vectors and kept static during training / initialized with two copies (each is a “channel”) of pretrained embeddings / initialized with pretrained embeddings while fine-tuned during training (Kim, 2014). G-Dropout, F-Dropout: Gaussian Dropout and Fast Dropout from Wang and Manning (2013). Minus sign “-” in MVCNN (-Huang) etc. means “Huang” is not used. “versions / filters / tricks / layers” denote the MVCNN variants with different setups: discard certain embedding version / discard certain filter size / discard mutual-learning or pretraining / different numbers of convolution layer.
|
What are the effects of extracting features of multigranular phrases?
|
The answers are shown as follows:
* The system benefits from filters of each size.
* features of multigranular phrases are extracted with variable-size convolution filters.
|
null | false
| null |
What is the difference between a bass guitar and a normal guitar?`
|
A bass guitar typically has four strings which are tuned to G, D, A and E in the standard tuning. A standard guitar, on the other hand, has six strings which are tuned to E, B, G, D, A and E in the standard tuning.
|
|
null | false
| null |
Write about the potential ethical concerns of de-identification techniques.
|
One of the ethical issues associated with the technique of suppression is the further underrepresentation of historically underrepresented groups. For example, within the MOOC dataset there exists only one row affiliated with a user from Antarctica (continent). Similarly, the following countries have also appeared only once or twice in the entire dataset: San Marino, the Falkland Islands, and Brunei. Geographically underrepresented, these pieces of information would be entirely eliminated under the suppression technique. This presents an ethical paradox: on one hand, we wish to preserve k-anonymity and protect the privacy of users from underrepresented regions; on the other hand, to reach that goal we would have to erase their existence in the dataset and further reinforce the pre-existing underrepresentation. Another example is the year-of-birth column. The MOOC user population is dominated by young people, whereas users over 80 years old make up less than 1% of the entire dataset. In particular, if we were to use the suppression technique to achieve k-anonymity, almost every piece of information associated with users over 80 years old would have to be discarded. Again, the elderly have been historically underrepresented in the education industry, yet by using suppression, we protect their privacy at the expense of representation.
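A minimal pandas sketch of suppression toward k-anonymity illustrates the mechanism; the column names and toy rows are illustrative, not the actual MOOC data.

```python
import pandas as pd

def suppress_to_k_anonymity(df: pd.DataFrame, quasi_ids: list, k: int = 5) -> pd.DataFrame:
    """Drop every row whose combination of quasi-identifiers occurs fewer than
    k times; this is the mechanism that erases rare groups such as the single
    Antarctica-affiliated user discussed above."""
    group_sizes = df.groupby(quasi_ids)[quasi_ids[0]].transform("size")
    return df[group_sizes >= k]

# Toy rows standing in for the MOOC dataset (values illustrative).
df = pd.DataFrame({"country": ["US", "US", "US", "US", "US", "Antarctica"],
                   "year_of_birth": [1990, 1990, 1990, 1990, 1990, 1941]})
print(suppress_to_k_anonymity(df, ["country", "year_of_birth"], k=5))
```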
This loss of representation as a result of suppression is detrimental in multiple respects and therefore deserves attention. As the MOOC platform is deprived of relevant information on these users, the company will lose sight of the potential demand that these underrepresented groups have for online education, thereby failing to reach them through marketing and advertising. Besides straining the platform's user base and profitability, this would worsen the situation in which underrepresented groups have limited access to online education. Moreover, inferences on the suppressed dataset could reinforce the societal bias that certain groups of people, such as the elderly, are not open to education, whereas the reality is simply that we erased the information on the elderly who actively engage in online education platforms. As a result, the elderly could feel further distanced from mainstream media and technology. Therefore, using suppression alone to achieve k-anonymity within a dataset, while effective, is complicated by the ethical issue of underrepresentation.
|
|
null | false
| null |
The Bichon Frisé is often depicted as a French dog. Although the Bichon breed type is originally Spanish, used as sailing dogs and sometimes as herding dogs, the French developed them into a gentle lap-dog variety. The Bichon type arose from the water dogs and is descended from the poodle-type dogs and either the Barbet or one of the water spaniel class of breeds.
|
Given this paragraph about Bichon Frisé, where did this dog breed originate?
|
The Bichon breed type is originally Spanish. The French developed them into a lap-dog variety, and thus the Bichon Frisé is often depicted as a French dog.
|
null | false
| null |
Identify which animal is domesticated or wild: Donkey, Leopard cat
|
Leopard cat is wild, Donkey is domesticated
|
|
null | false
| null |
Identify which of the following airlines are now defunct: Eastern Air Lines, Alaska Airlines, Virgin America Airlines, Virgin Atlantic Airlines, Pan American World Airways, Tower Air, Braniff International Airways, jetBlue, Spirit Airlines, Flybe
|
Eastern Air Lines, Virgin America Airlines, Pan American World Airways, Tower Air, Braniff International Airways, Flybe
|
|
null | false
| null |
Tell me which one of these companies is a popular manufacturer of golf clubs: Ping, Cobra, Nike, Taylormade
|
Ping, Cobra and Taylormade are all well-known manufacturers of golf clubs and have been for decades. Nike stopped manufacturing golf clubs in 2016 but continues to market golf clothing and similar accessories.
|
|
null | false
| 82
|
We evaluate the performance of text simplification using neural machine translation on available parallel sentences and additional simplified sentences.
Dataset. We use two simplification datasets (WikiSmall and WikiLarge). WikiSmall consists of ordinary and simplified sentences from the ordinary and simple English Wikipedias and has been used as a benchmark for evaluating text simplification BIBREF17 , BIBREF18 , BIBREF8 . The training set has 89,042 sentence pairs, and the test set has 100 pairs. WikiLarge is also drawn from the Wikipedia corpus; its training set contains 296,402 sentence pairs BIBREF19 , BIBREF20 . WikiLarge includes 8 (reference) simplifications for 2,359 sentences, split into 2,000 for development and 359 for testing.
Metrics. Three text simplification metrics are used in this paper. BLEU BIBREF5 is a traditional machine translation metric that assesses the degree to which translated simplifications differ from reference simplifications. FKGL measures the readability of the output BIBREF23; a smaller FKGL indicates simpler output. SARI is a recent text-simplification metric that compares the output against both the source and the reference simplifications BIBREF20.
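As a rough sketch of how the automatic metrics above can be computed, the snippet below scores BLEU with the sacrebleu library; the hypothesis and reference strings are hypothetical, and SARI and FKGL require dedicated implementations (e.g., a simplification toolkit such as EASSE) not shown here.

import sacrebleu

hypotheses = ["the cat sat on the mat"]    # hypothetical system output
references = [["the cat is on the mat"]]   # one reference stream, aligned with the hypotheses
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU: {bleu.score:.2f}")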
We also evaluate the output of all systems using human evaluation, with a metric denoted Simplicity BIBREF8. Three non-native fluent English speakers are shown reference sentences and output sentences, and are asked whether the output sentence is much simpler (+2), somewhat simpler (+1), equally simple (0), somewhat more difficult (-1), or much more difficult (-2) than the reference sentence.
Methods. We use OpenNMT BIBREF24 as the implementation of the NMT system for all experiments BIBREF5. We generally follow the default settings and training procedure described by Klein et al. (2017). We replace out-of-vocabulary words with a special UNK symbol. At prediction time, we replace UNK words with the source word given the highest probability score by the attention layer. The OpenNMT system trained on parallel data alone is the baseline system. To obtain a synthetic parallel training set, we back-translate a random sample of 100K sentences from the collected simplified corpora. OpenNMT trained on the parallel data plus the synthetic data is our model. The benchmarks are run on an Intel(R) Core(TM) i7-5930K CPU @ 3.50GHz with 32GB memory, trained on one GeForce GTX 1080 (Pascal) GPU with CUDA v8.0.
We choose three statistical text simplification systems. PBMT-R is a phrase-based method with a reranking post-processing step BIBREF18. Hybrid performs sentence splitting and deletion operations based on discourse representation structures, and then simplifies sentences with PBMT-R BIBREF25. SBMT-SARI BIBREF19 is a syntax-based translation model that uses the PPDB paraphrase database BIBREF26 and a modified tuning function (using SARI). We choose two neural text simplification systems. NMT is a basic attention-based encoder-decoder model that uses the OpenNMT framework, trained with two LSTM layers, hidden states of size 500, 500 hidden units, the SGD optimizer, and a dropout rate of 0.3 BIBREF8. Dress is an encoder-decoder model coupled with a deep reinforcement learning framework, with parameters chosen according to the original paper BIBREF20. For the experiments with synthetic parallel data, we back-translate a random sample of 60,000 sentences from the collected simplified sentences into ordinary sentences. Our model is trained on the synthetic data plus the available parallel data, denoted NMT+synthetic.
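A minimal sketch of the back-translation step described above follows, assuming a hypothetical backtranslate function that wraps a trained simple-to-ordinary NMT model; the actual experiments use OpenNMT, but this is an illustration rather than the authors' script.

import random

def build_synthetic_pairs(simple_sentences, backtranslate, sample_size=60000):
    # Sample simplified sentences and back-translate them into synthetic ordinary ones.
    sample = random.sample(simple_sentences, min(sample_size, len(simple_sentences)))
    synthetic_ordinary = [backtranslate(s) for s in sample]
    # Each (synthetic ordinary, genuine simple) pair augments the real parallel data.
    return list(zip(synthetic_ordinary, sample))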
Results. Table 1 shows the results of all models on the WikiLarge dataset. We can see that our method (NMT+synthetic) obtains higher BLEU, lower FKGL, and higher SARI than the other models, except Dress on FKGL and SBMT-SARI on SARI. This verifies that including synthetic data during training is very effective, yielding an improvement over our baseline NMT of 2.11 BLEU, 1.7 FKGL, and 1.07 SARI. We also substantially outperform Dress, which previously reported the state-of-the-art result. The results of our human evaluation using Simplicity are also presented in Table 1. NMT with synthetic data is significantly better than PBMT-R, Dress, and SBMT-SARI on Simplicity, indicating that our method with simplified data is effective at creating simpler output.
Results on the WikiSmall dataset are shown in Table 2. We see substantial improvements (6.37 BLEU) over NMT from adding simplified training data with synthetic ordinary sentences. Compared with the statistical machine translation models (PBMT-R, Hybrid, SBMT-SARI), our method (NMT+synthetic) still achieves better BLEU results, but slightly worse FKGL and SARI. Similar to the results on WikiLarge, our human evaluation using Simplicity outperforms the other models. In conclusion, our method produces better results than the baselines, which demonstrates the effectiveness of adding simplified training data.
This verifies that including synthetic data during training is very effective, yielding an improvement over our baseline NMT of 2.11 BLEU, 1.7 FKGL, and 1.07 SARI.
|
What can be verified from the results of all models on the WikiLarge dataset?
|
It verifies that including synthetic data during training is very effective, yielding an improvement over their baseline NMT of 2.11 BLEU, 1.7 FKGL, and 1.07 SARI.
|
null | false
| 77
|
The annotation was performed using WebAnno BIBREF7, a web-based tool for linguistic annotation. The annotators could choose between a pre-annotated version and a blank version of each text. The pre-annotated versions contained suggested entity spans based on string matches from lists of condition and finding synonyms; their quality varied widely throughout the corpus. The blank version was preferred by the annotators. We distribute the corpus in BioC JSON format. BioC was chosen as it allows us to capture the complexities of annotations in the biomedical domain: it represents each document's properties, ranging from the full text to individual passages/sentences, along with the captured annotations and relationships, in an organized manner. BioC is based on character offsets of annotations and allows the stacking of different layers.
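To make the character-offset layering concrete, here is a minimal BioC-style record built as a Python dictionary; the document text, labels, and offsets are hypothetical, and the real corpus may use additional BioC fields.

import json

record = {
    "documents": [{
        "id": "doc-1",
        "passages": [{
            "offset": 0,
            "text": "Patient reports persistent cough.",
            "annotations": [{
                "id": "a1",
                "text": "persistent cough",
                "infons": {"type": "Finding"},                # hypothetical label
                "locations": [{"offset": 16, "length": 16}],  # character offsets into the passage
            }],
        }],
    }]
}
print(json.dumps(record, indent=2))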
The annotation was performed using WebAnno (https://webanno.github.io/webanno/, accessed 08/04/2019) (Eckart de Castilho et al., 2016), a web-based tool for linguistic annotation.
|
How is the annotation performed?
|
The annotation was performed using WebAnno (Eckart de Castilho et al., 2016), a web-based tool for linguistic annotation.
|
null | false
| null |
Give me a list of horror/thriller books to read
|
The Books of Blood by Clive Barker
A Lush and Seething Hell by John Hornor Jacobs
The Method by James Patterson
Dracula by Bram Stoker
Tender is the Flesh by Agustina Bazterrica
Salem's Lot by Stephen King
|
|
null | false
| null |
According to Russian government figures, USSR losses within postwar borders now stand at 26.6 million, including 8 to 9 million due to famine and disease. In August 2009 the Polish Institute of National Remembrance (IPN) researchers estimated Poland's dead at between 5.6 and 5.8 million. Historian Rüdiger Overmans of the Military History Research Office (Germany) published a study in 2000 that estimated the German military dead and missing at 5.3 million, including 900,000 men conscripted from outside of Germany's 1937 borders, in Austria, and in east-central Europe. The Red Army claimed responsibility for the majority of Wehrmacht casualties during World War II. The People's Republic of China puts its war dead at 20 million, while the Japanese government puts its casualties due to the war at 3.1 million.
|
From the passage provided, extract the number of casualties by nation during World War II
|
USSR - 26.6 million
Poland - 5.6 to 5.8 million
Germany - 5.3 million
China - 20 million
Japan - 3.1 million
|
null | false
| null |
Who is the current prime minister of India?
|
Shri Narendra Modi is the current prime minister of India.
|
|
null | false
| 78
|
Social media platforms have made the spreading of fake news easier and faster, and able to reach a wider audience. Social media also offer anonymity to authors, which opens the door for many suspicious individuals or organizations to exploit these platforms. Recently, there has been an increase in the spread of fake news and rumors over the web and social media BIBREF0. Fake news in social media varies in its intention to mislead. Some of it is spread with the intention to be ironic or to deliver the news in an ironic way (satirical news). Other kinds, such as propaganda, hoaxes, and clickbait, are spread to mislead the audience or to manipulate their opinions. In the case of Twitter, suspicious-news annotations should be done at the tweet rather than the account level, since some accounts mix fake with real news. However, these annotations are extremely costly and time consuming, e.g., due to the high volume of available tweets. Consequently, a first step in this direction, e.g., as a pre-filtering step, can be viewed as the task of detecting fake news at the account level.
The main obstacle to detecting suspicious Twitter accounts is their behavior of mixing some real news with the misleading ones. Consequently, we investigate ways to detect suspicious accounts by considering their tweets in groups (chunks). Our hypothesis is that suspicious accounts have a unique pattern in posting tweet sequences: since their intention is to mislead, the way they transition from one set of tweets to the next carries a hidden signature, biased by their intentions. Therefore, reading these tweets in chunks has the potential to improve the detection of fake news accounts.
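As a minimal sketch of the chunking idea, assuming a hypothetical list of tweets already sorted chronologically, each chunk would then become one step for a sequential account-level classifier:

def chunk_tweets(tweets, chunk_size=20):
    # Group a chronological tweet stream into fixed-size chunks; the last
    # chunk may be shorter. chunk_size is a hypothetical hyperparameter.
    return [tweets[i:i + chunk_size] for i in range(0, len(tweets), chunk_size)]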
In this work, we investigate the problem of discriminating between factual and non-factual accounts in Twitter. To this end, we collect a large dataset of tweets using a list of propaganda, hoax and clickbait accounts and compare different versions of sequential chunk-based approaches using a variety of feature sets against several baselines. Several approaches have been proposed for news verification, whether in social media (rumors detection) BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, or in news claims BIBREF5, BIBREF6, BIBREF7, BIBREF8. The main orientation in the previous works is to verify the textual claims/tweets but not their sources. To the best of our knowledge, this is the first work aiming to detect factuality at the account level, and especially from a textual perspective. Our contributions are:
We propose an approach to detect non-factual Twitter accounts by treating post streams as a sequence of tweets' chunks. We test several semantic and dictionary-based features together with a neural sequential approach, and apply an ablation test to investigate their contribution.
We benchmark our approach against other approaches that discard the chronological order of the tweets or read the tweets individually. The results show that our approach produces superior results at detecting non-factual accounts.
We benchmark our approach against other approaches that discard the chronological order of the tweets or read the tweets individually. The results show that our approach produces superior results at detecting non-factual accounts.
|
How does the authors' approach perform at detecting non-factual accounts?
|
The approach produces superior results at detecting non-factual accounts.
|
null | false
| 216
|
Named-entity recognition (NER) is the task of tagging relevant entities such as person, location and organization in unstructured text. Modern NER has been dominated by neural models BIBREF0, BIBREF1 combined with contextual embeddings from language models (LMs) BIBREF2, BIBREF3, BIBREF4. The LMs are pre-trained on large amounts of unlabeled text, which allows the NER model to use the syntactic and semantic information captured by the LM embeddings. On the popular benchmark datasets CoNLL-03 BIBREF5 and Ontonotes 5 BIBREF6, neural models with LMs achieved state-of-the-art results without gazetteer features, unlike earlier approaches that heavily relied on them BIBREF7. Gazetteers are lists that contain entities such as cities, countries, and person names. The gazetteers are matched against unstructured text to provide additional features to the model. Data for building gazetteers is available for multiple languages from structured data resources such as Wikipedia, DBpedia BIBREF8 and Wikidata BIBREF9.
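As a minimal sketch of the dictionary-style matching described above, the function below tags every token span found in a hypothetical gazetteer; real systems, including the approach proposed here, use richer multi-token span matching and encodings.

def gazetteer_match(tokens, gazetteer, max_len=3):
    # Tag every token span (up to max_len tokens) whose lowercase form is in the gazetteer.
    matches = []
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + max_len, len(tokens)) + 1):
            span = " ".join(tokens[i:j]).lower()
            if span in gazetteer:
                matches.append((i, j, gazetteer[span]))
    return matches

# Hypothetical usage:
# gazetteer = {"new york": "LOC", "obama": "PER"}
# gazetteer_match("Barack Obama visited New York".split(), gazetteer)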
In this paper, we propose GazSelfAttn, a novel gazetteer embedding approach that uses self-attention and match span encoding to build enhanced gazetteer representation. GazSelfAttn embeddings are concatenated with the input to a LSTM BIBREF10 or CNN BIBREF11 sequence layer and are trained end-to-end with the model. In addition, we show how to extract general gazetteers from the Wikidata, a structured knowledge-base which is part of the Wikipedia project.
Our contributions are the following:
We propose novel gazetteer embeddings that use self-attention combined with match span encoding.
We enhance gazetteer matching with multi-token and single-token matches in the same representation.
We demonstrate how to use Wikidata with entity popularity filtering as a resource for building gazetteers.
GazSelfAttn evaluations on CoNLL-03 and Ontonotes 5 datasets show F$_1$ score improvement over baseline model from 92.34 to 92.86 and from 89.11 to 89.32 respectively. Moreover, we perform ablation experiments to study the contribution of the different model components.
In addition, we show how to extract general gazetteers from the Wikidata, a structured knowledge-base which is part of the Wikipedia project.
|
What is Wikidata?
|
A structured knowledge-base which is part of the Wikipedia project.
|
null | false
| null |
Pampaneira is a village located in the province of Granada, Spain. According to the 2005 census (INE), the village has a population of 355 inhabitants.
It is one of three mountain villages of the Barranco de Poqueira in the Alpujarras region. The other two villages, Bubión and Capileira, are located higher in the Poqueira gorge.
The Poqueira gorge starts below the mountain peak of Mulhacén, where the river Poqueira rises. There are many opportunities for hiking in the area. Guided walks are arranged by the local tourist offices, which also sell mountain hiking equipment. Other companies can plan self-guided treks through the area. The nearest major town is Órgiva. Trevélez, one of the highest year-round inhabited mountain villages in Spain, is located nearby in the mountains to the east of the three villages.
Accommodation is available in all of the three villages but Capileira has the biggest range of facilities.
To the north rises the Sierra Nevada range, which contains some of the highest peaks in Spain: Mulhacén and Alcazaba.
The village is located at about 1060 metres above sea level.
There are many craft shops selling ceramics, leather and clothes. Also on offer are the locally produced specialities of cheese and ham.
During the winter the Sierra Nevada and the Alpujarra mountains get covered in snow but the villages are rarely affected by snow for more than a few days a time.
There are bus connections running every day from the town of Granada, Spain.
|
Given a reference text about Pampaneira, tell me how many people live there, where it's located, and its elevation above sea level.
|
Pampaneira is located in the province of Granada, Spain and is 1060 metres above sea level. 355 people live in Pampaneira according to the 2005 census.
|
1904.07342
| true
| null |
We henceforth refer to a tweet affirming climate change as a “positive" sample (labeled as 1 in the data), and a tweet denying climate change as a “negative" sample (labeled as -1 in the data). All data were downloaded from Twitter in two separate batches using the “twint" scraping tool BIBREF5 to sample historical tweets for several different search terms; queries always included either “climate change" or “global warming", and further included disaster-specific search terms (e.g., “bomb cyclone," “blizzard," “snowstorm," etc.). We refer to the first data batch as “influential" tweets, and the second data batch as “event-related" tweets.
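A sketch of the kind of twint query described above is shown below; the configuration options vary across twint versions and the exact search strings here are illustrative, so treat this as an assumption-laden example rather than the authors' actual collection script.

import twint

c = twint.Config()
c.Search = '"climate change" blizzard'  # topic term plus a disaster-specific term
c.Lang = "en"
c.Store_csv = True
c.Output = "event_related_tweets.csv"   # hypothetical output file
twint.run.Search(c)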
All data were downloaded from Twitter in two separate batches using the “twint" scraping tool BIBREF5 to sample historical tweets for several different search terms; queries always included either “climate change" or “global warming", and further included disaster-specific search terms (e.g., “bomb cyclone," “blizzard," “snowstorm," etc.).
|
Do they report results only on English data?
|
Yes.
|
null | false
| 409
|
The D and W selected by the assistant module in AdaGCCN, compared to the original depth and width in CapsGNN on different datasets, are displayed in Table 5. Note that the optimal D and W determined by AM differ across the 10 splittings. We take the D and W on the splitting with the best test accuracy for the comparison against CapsGNN.
|
In the discussion the authors claim the proposed approach improves the explainability of the model, but I don't see how that is the case.
|
We agree with your suggestion. We added the comparison between the D and W selected by the assistant module and the original depth and width in CapsGNN on each dataset in Table 5 in the appendix.
|
null | false
| null |
Classify each of the following animal as a carnivore, omnivore or herbivore: tiger, bear, giraffe, wolverine, elephant, monkey
|
Carnivores: Tiger, wolverine
Omnivore: Bear, monkey
Herbivore: Elephant, giraffe
|
|
null | false
| 478
|
Time series data in the real world are high dimensional, unstructured, and complex, with unique properties that lead to challenges for data modeling. In addition, without human-recognizable patterns, it is much harder to label time series data than images and language in real-world applications. These labeling limitations hinder deep learning methods, which typically require a huge amount of labeled data for training, from being applied to time series data. Representation learning learns a fixed-dimension embedding from the original time series that keeps its inherent features. Compared to the raw time series data, these representations offer better transferability and generalization capacity. To deal with labeling limitations, contrastive learning methods have been widely adopted in various domains for their soaring performance on representation learning, including vision, language, and graph-structured data. In a nutshell, contrastive learning methods typically train an encoder to map instances to an embedding space where dissimilar (negative) instances are easily distinguishable from similar (positive) ones, and model predictions are invariant to small noise applied to either input examples or hidden states.
Despite being effective and prevalent, contrastive learning has been less explored in the time series domain. Existing contrastive learning approaches often involve a specific data augmentation strategy that creates novel and realistic-looking training data without changing its label to construct positive alternatives for any input sample. Their success relies on carefully designed rules of thumb guided by domain expertise. Routinely used data augmentations for contrastive learning are mainly designed for image and language data, such as color distortion, flip, word replacement, and back-translation. These augmentation techniques generally do not apply to time series data.
Figure: InfoTS is composed of three parts: (1) candidate transformation that generates different augmentations of the original inputs, (2) a meta-network that selects the optimal augmentations, (3) an encoder that learns representations of time series instances. The meta-network is learned in tandem with contrastive encoder learning.
Recently, some researchers have proposed augmentations for time series to enhance the size and quality of the training data. For example, some works adopt jittering, scaling, and permutation strategies to generate augmented instances, while others extract subsequences for data augmentation. In spite of this progress, existing methods have two main limitations. First, unlike images with human-recognizable features, time series data are often associated with inexplicable underlying patterns. Strong augmentation such as permutation may ruin such patterns and, consequently, the model will mistake the negative handcrafts for positive ones, while weak augmentation methods such as jittering may generate augmented instances that are too similar to the raw inputs to be informative enough for contrastive learning. Second, time series datasets from different domains may have a diverse nature. Adopting a universal data augmentation method, such as subsequence extraction, for all datasets and tasks leads to sub-optimal performance. Other works follow empirical rules to select suitable augmentations via expensive trial-and-error. Akin to hand-crafting features, hand-picking choices of data augmentations is undesirable from the learning perspective. The diversity and heterogeneity of real-life time series data further keep these methods from wide applicability.
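For concreteness, here is a minimal sketch of the jittering and scaling augmentations mentioned above, assuming a NumPy array of shape (timesteps, channels); the noise strengths are hypothetical and would normally be tuned or, as proposed here, learned.

import numpy as np

def jitter(x: np.ndarray, sigma: float = 0.03) -> np.ndarray:
    # Add small Gaussian noise to every point (a weak augmentation).
    return x + np.random.normal(0.0, sigma, size=x.shape)

def scale(x: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    # Multiply each channel by a random factor (a stronger augmentation).
    factors = np.random.normal(1.0, sigma, size=(1, x.shape[1]))
    return x * factors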
To address the challenges, we first introduce the criteria for selecting good data augmentations in contrastive learning. Data augmentation benefits generalizable, transferable, and robust representation learning by correctly extrapolating the input training space to a larger region. The positive instances enclose a discriminative zone in which all the data points should be similar to the original instance. The desired data augmentations for contrastive representation learning should have both high fidelity and high variety. High fidelity encourages the augmented data to maintain the semantic identity that is invariant to transformations. For example, if the downstream task is classification, then the generated augmentations of inputs should be class-preserving. Meanwhile, generating augmented samples with high variety benefits representation learning by increasing the generalization capacity. From the motivation, we theoretically analyze the information flows in data augmentations based upon information theory and derive the criteria for selecting desired time series augmentations. Due to the inexplicability in practical time series data, we assume that the semantic identity is presented by the target in the downstream task. Thus, high fidelity can be achieved by maximizing the mutual information between the downstream label and the augmented data. A one-hot pseudo label is assigned to each instance in the unsupervised setting when downstream labels are unavailable. These pseudo labels encourage augmentations of different instances to be distinguishable from each other. We show that data augmentations preserving these pseudo labels can add new information without decreasing the fidelity. Concurrently, we maximize the entropy of augmented data conditional on the original instances to increase the variety of data augmentations.
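Read literally, the two criteria admit a compact formalization; the trade-off weight \lambda below is an assumption for illustration, as the exact objective may be weighted differently. With x the original series, v = t_\theta(x) its augmentation, and y the (possibly pseudo) label:

\max_{\theta}\; \underbrace{I(v;\, y)}_{\text{fidelity}} \;+\; \lambda\, \underbrace{H(v \mid x)}_{\text{variety}}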
Based on the derived criteria, we propose an adaptive data augmentation method, InfoTS (as shown in the figure above), by employing a meta-learning mechanism to avoid ad-hoc choices or painstaking trial-and-error tuning. Specifically, we utilize a meta-network to learn the augmentation prior in tandem with contrastive learning. The meta-learner automatically selects optimal augmentations from candidate augmentations to generate feasible positive samples. Along with randomly sampled negative instances, augmented instances are then fed into a time series encoder to learn representations in a contrastive manner. With a reparameterization trick (a sketch follows the contribution list below), the meta-network can be efficiently optimized with back-propagation based upon the proposed criteria. Therefore, the meta-network can automatically select data augmentations in a per-dataset and per-task manner without resorting to expert knowledge or tedious downstream validation. Our main contributions include:
• We propose criteria to guide the selection of data augmentations for contrastive time series representation learning without prefabricated knowledge.
• We propose a meta-learning based method to automatically select feasible data augmentations for different time series datasets, which can be efficiently optimized with backpropagation.
• We empirically verify the effectiveness of the proposed criteria for finding optimal data augmentations. Extensive experiments demonstrate that InfoTS can achieve highly competitive performance with up to 11.4% reduction in MSE on the forecasting task and up to 2.8% relative improvement in accuracy on the classification task over the leading baselines.
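As a minimal sketch of the reparameterization trick referenced above, a meta-network could emit logits over candidate augmentations and draw soft, differentiable selection weights via Gumbel-softmax; the function name and the use of PyTorch here are assumptions, and the actual meta-network is richer.

import torch
import torch.nn.functional as F

def sample_augmentation_weights(logits: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    # Soft, differentiable weights over candidate augmentations; gradients
    # flow through the sample, enabling end-to-end optimization.
    return F.gumbel_softmax(logits, tau=tau, hard=False)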
We propose criteria to guide the selection of data augmentations for contrastive time series representation learning without prefabricated knowledge. We propose a meta-learning based method to automatically select feasible data augmentations for different time series datasets, which can be efficiently optimized with backpropagation. **** In Figure 9, we show the relationship between forecasting performance and our proposed criteria. The results are consistent with the conclusion drawn from the classification results: forecasting performance, evaluated with both MSE and MAE, is positively related to the proposed criteria. With the proposed criteria, we empirically show the advantage of the developed meta-network at learning optimal augmentations. The overall observations are consistent with those from Section 4.4. Here, we provide more details. 1) "Random" is a variant that randomly selects a candidate augmentation method to generate augmented instances. "All" combines all candidate augmentations evenly to generate augmented instances. These two variants are routinely utilized in previous works (Chen et al., 2020; You et al., 2020). They work well when candidate augmentations are manually verified; otherwise, their performance is heavily affected by low-quality candidate augmentations, as shown in Table 8. On the other hand, by adaptively selecting optimal augmentations, InfoTS significantly improves the forecasting performance even with low-quality candidate augmentations. 2) By comparing InfoTS to its variants w/o Fidelity and w/o Variety, we further empirically demonstrate that both variety and fidelity contribute to the superior performance of InfoTS.
|
Regarding the naming of the meta-learner network, what strong connection does the proposed approach have to the meta-learning paradigm?
|
In the previous review, the reviewer pointed out that the improvement is not significant compared to "Random" and our two variants. We analyzed the phenomena and added more experiments in Appendix 3.2 to further demonstrate the effectiveness of the proposed method in practice. We thank the reviewer for the insightful comments, which let us make our paper better qualified. The reviewer still believes that our ablation studies are unsupportive. To address the reviewer's concern, we summarize our major and minor contributions as follows. The two main contributions of our paper are 1) proposing criteria for data augmentations on time series, based on information theory, stating that good augmentations should have high variety and fidelity, and 2) proposing a meta-network to adaptively select augmentations for each dataset based on the proposed criteria. The first contribution is verified with Figure 3 in Section 3 and Figure 9 in Appendix 3.2. The second is verified with Table 3 and Table 8, where we outperform "Random," which is used in previous works. The margins become more significant when there is no prior knowledge for the selection of augmentations (Appendix 3.2), which shows that InfoTS is more practical, because the quality of augmentations on time series data is hard for humans to check. Besides, the comparison of InfoTS to its variants also demonstrates that both fidelity and variety are important for the selection of augmentations. Another minor contribution of this paper is the contrastive loss, whose effectiveness we have also verified with ablation studies. Thus, we believe that we have adequately verified our claims with extensive experiments. As analyzed in the experimental part, the results also support our claims, and we respectfully disagree with the reviewer's comment on the "unsupportive ablation study".
|
null | false
| null |
Anne Francis, née Gittins (1738 – 7 November 1800) was an English classical scholar and poet. She is notable for a poetic translation of the Bible's Song of Songs that focuses on the dramatic action of the song, rather than its Christian allegorical interpretation.
Anne Gittins was born in 1738 as the daughter of the Rev. Daniel Gittins, rector of South Stoke, near Arundel, Sussex. She was educated by her father in the classics and Hebrew. She married the Rev. Robert Bransby Francis, rector of Edgefield, near Holt, Norfolk.
Though she lacked formal classical training, Anne Francis was in contact with the leading Old Testament scholars of the time: her Poetical Translation of the Song of Songs was dedicated to John Parkhurst, and other subscribers included Robert Lowth and Benjamin Kennicott. The Poetical Translation focused on what she saw as the direct dramatic action of the song, rather than its Christian allegorical interpretation. Taking a cue from Thomas Harmer, Francis distinguished the voice of an "Egyptian Spouse" from that of a "Jewish Queen": in this love triangle, Solomon's marriage to the daughter of Pharaoh being resented by an earlier wife.
Francis's translation has not received much continuing attention. Yet in the verdict of one recent commentator, Francis's "insistence on bringing together learning and feeling, what she knew as male and female worlds, makes her Song of Songs an outstanding feminist monument."
In 1787, Francis changed publishers to raise her voice in response to Goethe's "Charlotte to Werther. A Poetical Epistle". Throughout the 1790s, she further strengthened the reactionary side of the political divide by periodically publishing work.
Despite Francis's lack of formal education, she aided in efforts to adapt French and German playscripts, as did various other female writers.
Anne Francis died on 7 November 1800, at the age of 62.
|
Who is Anne Francis?
|
Anne Francis, née Gittins (1738 – 7 November 1800) was an English classical scholar and poet. She is notable for a poetic translation of the Bible's Song of Songs that focuses on the dramatic action of the song, rather than its Christian allegorical interpretation.
Anne Gittins was born in 1738 as the daughter of the Rev. Daniel Gittins, rector of South Stoke, near Arundel, Sussex. She was educated by her father in the classics and Hebrew. She married the Rev. Robert Bransby Francis, rector of Edgefield, near Holt, Norfolk.
Though she lacked formal classical training, Anne Francis was in contact with the leading Old Testament scholars of the time: her Poetical Translation of the Song of Songs was dedicated to John Parkhurst, and other subscribers included Robert Lowth and Benjamin Kennicott. The Poetical Translation focused on what she saw as the direct dramatic action of the song, rather than its Christian allegorical interpretation. Taking a cue from Thomas Harmer, Francis distinguished the voice of an "Egyptian Spouse" from that of a "Jewish Queen": in this love triangle, Solomon's marriage to the daughter of Pharaoh being resented by an earlier wife.
Francis's translation has not received much continuing attention. Yet in the verdict of one recent commentator, Francis's "insistence on bringing together learning and feeling, what she knew as male and female worlds, makes her Song of Songs an outstanding feminist monument."
In 1787, Francis changed publishers to raise her voice in response to Goethe's "Charlotte to Werther. A Poetical Epistle". Throughout the 1790s, she further strengthened the reactionary side of the political divide by periodically publishing work.
Despite Francis's lack of formal education, she aided in efforts to adapt French and German playscripts, as did various other female writers.
Anne Francis died on 7 November 1800, at the age of 62.
|
null | false
| null |
Boom 1980–1987
In the 1980s a form of AI program called "expert systems" was adopted by corporations around the world and knowledge became the focus of mainstream AI research. In those same years, the Japanese government aggressively funded AI with its fifth generation computer project. Another encouraging event in the early 1980s was the revival of connectionism in the work of John Hopfield and David Rumelhart. Once again, AI had achieved success.
The rise of expert systems
An expert system is a program that answers questions or solves problems about a specific domain of knowledge, using logical rules that are derived from the knowledge of experts. The earliest examples were developed by Edward Feigenbaum and his students. Dendral, begun in 1965, identified compounds from spectrometer readings. MYCIN, developed in 1972, diagnosed infectious blood diseases. They demonstrated the feasibility of the approach.
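A toy sketch of the rule-based idea follows; the rules here are invented for illustration, whereas real systems like MYCIN used hundreds of expert-derived rules plus certainty factors.

RULES = [
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"fever", "cough"}, "suspect_flu"),
]

def infer(facts: set) -> set:
    # Forward-chain: fire every rule whose conditions are all known facts.
    return {conclusion for conditions, conclusion in RULES if conditions <= facts}

# infer({"fever", "cough"}) -> {"suspect_flu"}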
Expert systems restricted themselves to a small domain of specific knowledge (thus avoiding the commonsense knowledge problem) and their simple design made it relatively easy for programs to be built and then modified once they were in place. All in all, the programs proved to be useful: something that AI had not been able to achieve up to this point.
In 1980, an expert system called XCON was completed at CMU for the Digital Equipment Corporation. It was an enormous success: by 1986 it was saving the company 40 million dollars annually. Corporations around the world began to develop and deploy expert systems, and by 1985 they were spending over a billion dollars on AI, most of it going to in-house AI departments. An industry grew up to support them, including hardware companies like Symbolics and Lisp Machines and software companies such as IntelliCorp and Aion.
|
What is an expert system, and when did they first emerge?
|
An expert system is a program that uses logical rules derived from expert knowledge to answer questions or solve problems in a specific domain of knowledge. The first expert system, called Dendral, was developed in 1965 by Edward Feigenbaum and his students and identified compounds from spectrometer readings.
|
null | false
| null |
What is the capital of Spain and what is its population?
|
The capital of Spain is Madrid, and it has approximately 3.4 million people.
|